-
My research has surfaced the following; can it be confirmed, and are there any workarounds? Based on my research into how AutoGen handles streaming and WebSockets, `user_proxy.initiate_chat` blocks until the conversation completes, so nothing reaches the client incrementally. This behavior occurs despite setting `llm_config["stream"] = True`, which usually enables incremental output. The areas I have already dug into are the streaming configuration, IOStream usage, and WebSocket integration, without luck. What's interesting is that I even tried registering a reply hook on all agents via `agent.register_reply()`, along the lines of the sketch below:
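A minimal sketch of that registration, assuming pyautogen's `register_reply` API; the agent setup and the `queue_messages` name are illustrative stand-ins for my actual code:

```python
import asyncio

import autogen

# Stand-in for utils.queue_messages: an asyncio.Queue shared with the
# WebSocket handler that streams messages to the client.
queue_messages: asyncio.Queue = asyncio.Queue()

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "sk-..."}],  # placeholder
    "stream": True,  # the setting discussed above
}

assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)


def queue_reply(recipient, messages=None, sender=None, config=None):
    # register_reply hooks must return (final, reply); returning (False, None)
    # copies the latest message onto the queue without short-circuiting the
    # normal reply chain.
    if messages:
        queue_messages.put_nowait(messages[-1])
    return False, None


# Register the hook on every agent so each turn lands on the queue.
for agent in (assistant, user_proxy):
    agent.register_reply([autogen.Agent, None], reply_func=queue_reply)
```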
where `utils.queue_messages` is an asyncio `Queue` with async write and read operations. Even so, `user_proxy.initiate_chat` still blocks: the queued messages are read and streamed to the client successfully, but only after the conversation has completed. Can anyone help me here?
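For reference, the consumer side looks roughly like this, reusing the names from the sketch above (FastAPI is only my example transport, and the message text is a placeholder):

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()


@app.websocket("/chat")
async def chat_endpoint(websocket: WebSocket) -> None:
    await websocket.accept()

    # initiate_chat is synchronous: it holds the event loop until the whole
    # conversation finishes, so no other coroutine (including anything
    # awaiting queue_messages) can run in the meantime.
    user_proxy.initiate_chat(assistant, message="hello")

    # Only now does the backlog stream out, all at once, which matches the
    # behavior described above.
    while not queue_messages.empty():
        await websocket.send_json(queue_messages.get_nowait())
```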
-
Hello all,
I'm seeking some assistance and have posted a pretty thorough description in the AutoGenAI group on Reddit, but I have not had any good suggestions yet.
This is my first post in this forum, so if it is preferred, I can repost my problem here rather than reference Reddit; just let me know. I'd appreciate some help in this space; I've been fighting through this for a week now.
For context, I'm using pyautogen.
Thanks in advance for any help that you can provide.
link: Detailed Description of my problem.