-
I'm fairly sure that the future is …
-
Hi Kevin, thank you for making the CompressibleAgent. It's performing remarkably well. When used solely to compress the current conversation, without relying on an external database, I found it quite similar to the native MemGPT agent. One suggestion: it would be great to have several compression modes to choose from. The current default operates at the message level, which can trigger compression every time I input a new message. The MemGPT approach summarizes at the conversation level, so compression fires less often and the summaries are shorter. The Autogen + MemGPT group chat is a bit messy, particularly because not all agents are equipped with MemGPT's memory mechanism; they get confused about which agents have archival memory capabilities lol…
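To make the suggestion concrete, here is a purely illustrative sketch of what selectable compression modes could look like. The mode names ("message_level", "conversation_level") and the trigger policy are hypothetical, not an existing AutoGen API; the 2x slack factor is an arbitrary placeholder.

```python
# Hypothetical sketch of selectable compression modes -- NOT an AutoGen API.

def should_compress(mode: str, token_count: int, limit: int) -> bool:
    if mode == "message_level":
        # The default behavior described above: the budget is checked on
        # every new message, so compression can fire on each user turn.
        return token_count > limit
    if mode == "conversation_level":
        # MemGPT-style: summarize only once the whole conversation is well
        # over budget, so compression fires less often.
        return token_count > 2 * limit
    raise ValueError(f"unknown compression mode: {mode!r}")
```

The difference in trigger frequency is the whole point: with the same limit, conversation-level mode tolerates a longer history before summarizing.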
-
Seeking Collaborators to Tackle Token Limit Issues in AutoGen
Are you encountering token count limit issues with AutoGen? We need your insights and experiences! Our goal is to refine solutions that effectively address these challenges, and your contribution could be pivotal.
Join Our Experimentation Effort
We are actively experimenting with different strategies to manage the token count overflow. To that end, we’ve developed some promising tools, and we invite you to test and provide feedback:
CompressibleAgent: Replace AssistantAgent with CompressibleAgent to enable compression once a pre-set token limit is reached. Dive into the example here: agentchat_compression.ipynb

CompressibleGroupManager ([Core] Compression in GroupChat #497): Replace GroupManager with CompressibleGroupManager so that the group manager can compress messages and broadcast them to all agents in the group. Explore how it works: agentchat_groupchat_compression.ipynb

We Want to Hear From You!
Have you tried CompressibleAgent or CompressibleGroupManager? Share your outcomes!
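For readers wondering what "compression when a pre-set token limit is reached" amounts to, here is a minimal, self-contained sketch of the idea. It is not AutoGen's actual implementation: the token estimate is a crude character-count heuristic, and the summary step is a placeholder for what would really be an LLM call.

```python
# Illustrative sketch of token-limit-triggered history compression.
# Not AutoGen's implementation -- see agentchat_compression.ipynb for the
# real CompressibleAgent.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compress_history(messages, trigger_count=50, leave_last_n=2):
    """Once the estimated token count exceeds `trigger_count`, replace all
    but the last `leave_last_n` messages with a single summary message."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= trigger_count or len(messages) <= leave_last_n:
        return messages  # under budget: nothing to compress
    old, recent = messages[:-leave_last_n], messages[-leave_last_n:]
    # A real agent would ask the LLM to summarize `old`; we fake it here.
    summary = "Summary of %d earlier messages." % len(old)
    return [{"role": "system", "content": summary}] + recent

history = [{"role": "user", "content": "x" * 120},
           {"role": "assistant", "content": "y" * 120},
           {"role": "user", "content": "latest question"}]
compressed = compress_history(history, trigger_count=50, leave_last_n=1)
```

Keeping the last few messages verbatim matters: the agent still sees the immediate context exactly as written, while older turns are collapsed into one cheap summary.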