Replies: 4 comments 1 reply
-
I'm stuck on the exact same problem (well, given that I have a …
-
So as far as I can tell, the problem is that you can't return something from an async function and have that returned object in turn receive data from a running function unless that function is declared async all the way down to the callback. That means I'd have to make the whole pipeline run function (and the functions it calls, down to the component run functions) async. At that point you probably might as well just rewrite your whole pipeline as a generator (see the sketch below).
I'm still kind of surprised that this use case is neither documented nor, apparently, really expected.
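A minimal sketch of that bridge, assuming Haystack 2.x with an `OpenAIGenerator` and FastAPI: the blocking `pipeline.run` happens in a worker thread, its `streaming_callback` pushes each token into an `asyncio.Queue`, and an async generator drains the queue into a `StreamingResponse`. The `/chat` route, the `question` parameter, and the component name `"llm"` are placeholders for illustration, not anything from the original code.

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.dataclasses import StreamingChunk

app = FastAPI()


@app.get("/chat")
async def chat(question: str):
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()  # tokens travel from the callback to the response through here

    def on_chunk(chunk: StreamingChunk) -> None:
        # Called from the worker thread, so hand the token to the event loop thread-safely.
        loop.call_soon_threadsafe(queue.put_nowait, chunk.content)

    # Build a per-request pipeline so the streaming callback is bound to this request's queue.
    # (Assumes OPENAI_API_KEY is set; swap in your own generator or full pipeline here.)
    pipeline = Pipeline()
    pipeline.add_component("llm", OpenAIGenerator(streaming_callback=on_chunk))

    def run_pipeline() -> None:
        try:
            pipeline.run({"llm": {"prompt": question}})
        finally:
            loop.call_soon_threadsafe(queue.put_nowait, None)  # sentinel: generation finished

    async def token_stream():
        # Run the blocking pipeline in a thread and drain the queue concurrently.
        runner = asyncio.create_task(asyncio.to_thread(run_pipeline))
        while True:
            token = await queue.get()
            if token is None:
                break
            yield token
        await runner

    return StreamingResponse(token_stream(), media_type="text/plain")
```

Served with e.g. `uvicorn main:app`, the client should see tokens as they are produced; note that an intermediate reverse proxy can still buffer the response.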
-
There's a Discord discussion where they propose to use the callback to write directly to the …
-
https://dev.to/arya_minus/async-haystack-streaming-over-fastapi-endpoint-2kj0 if anyone is following this thread
-
Hi, I'm wondering what the best way is to stream LLM responses over a custom FastAPI server. I'm running into a bunch of asyncio-related issues. My current code is the following, but it doesn't work:
where `assistant` is my Haystack pipeline. I'm honestly kind of surprised that I can't find sample code for this online. I also looked at the hayhooks repo, but it doesn't seem they've implemented streaming. Any help greatly appreciated!
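For a quick end-to-end check that the response really arrives incrementally, a streaming `httpx` client works; this assumes the hypothetical `/chat` endpoint sketched earlier in the thread, served locally on port 8000.

```python
# Quick check that the endpoint streams token by token.
# Assumes the hypothetical /chat endpoint from the sketch above,
# served locally, e.g. `uvicorn main:app --port 8000`.
import httpx

with httpx.stream(
    "GET",
    "http://localhost:8000/chat",
    params={"question": "Tell me a joke"},
    timeout=None,  # long generations can exceed the default read timeout
) as response:
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
```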