From my understanding (please correct me if I'm wrong), when using LangChain's `batch` method we are essentially just running `Runnable.batch`, which appears to run the `invoke` method in parallel using thread pools or async tasks.
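In other words, my mental model is roughly the following. This is just a minimal sketch of what I understand the default behavior to be, not LangChain's actual implementation; `FakeRunnable` and its methods are my own stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

class FakeRunnable:
    """Stand-in for a LangChain Runnable (hypothetical, for illustration)."""

    def invoke(self, prompt: str) -> str:
        # Stand-in for a single online predict call to the model endpoint.
        return f"response to {prompt}"

    def batch(self, prompts: list[str], max_concurrency: int = 4) -> list[str]:
        # My understanding: batch() just fans invoke() out across a
        # thread pool, one online call per input -- there is no single
        # server-side batch request.
        with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
            return list(pool.map(self.invoke, prompts))

print(FakeRunnable().batch(["a", "b"]))
```

If that picture is right, `batch` gives client-side concurrency but not a true server-side batch job.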
So my questions are:
Is the invocation path for the LangChain Vertex AI integration the same regardless of which method we choose (`batch` vs. `invoke`)?
Under the hood, is LangChain using the Vertex `predict` endpoint for either invocation method?
If the above is true, are there plans to implement the Vertex batch prediction API?
I would like to use GCP's batch prediction with Vertex AI while leveraging the functionality and features/tools from LangChain. Is there a way to achieve this?
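Concretely, something like the following is what I'm after: use LangChain's tooling to produce the prompts, then hand them to Vertex batch prediction as a JSONL input file rather than making one online call per prompt. This is only a sketch of the input-preparation step; the request shape below is my assumption about the Gemini batch format, and `to_batch_jsonl` is a hypothetical helper, not an existing API:

```python
import json

def to_batch_jsonl(prompts: list[str]) -> str:
    """Serialize prompts into one JSON request per line for a batch job.

    Assumed request shape (my guess at the Gemini batch input format):
    {"request": {"contents": [{"role": "user", "parts": [{"text": ...}]}]}}
    """
    lines = []
    for p in prompts:
        lines.append(json.dumps(
            {"request": {"contents": [{"role": "user",
                                       "parts": [{"text": p}]}]}}
        ))
    return "\n".join(lines)

# The resulting file would then be uploaded to GCS and submitted as a
# Vertex batch prediction job, with results collected from the output
# location once the job completes.
print(to_batch_jsonl(["Summarize X", "Summarize Y"]))
```

The missing piece for me is how to plug the job's output back into a LangChain pipeline (output parsers, chains, etc.) instead of going through `invoke`/`batch`.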