Use client generated run_id for tracing and feedback, langserve + langsmith #705
jccarles started this conversation in Show and tell
Hello, in case someone finds themselves in the same situation as I did, I will share here how we managed to use the LangServe client library to interact with a chain and give feedback for a given chain run.
Our setup is the following: we have a chain served with LangServe, and we use the `add_routes` method to generate our application endpoints. We use LangSmith for tracing and history, as well as to annotate our chain outputs. In the front end we use the LangServe client.

Our issue was that we could not generate a `run_id` in the front end and pass it to LangSmith, in order to give feedback later on using this `run_id`. No matter what we tried, LangSmith ended up using an auto-generated `trace_id`, which was not forwarded in the response the LangServe client receives when calling `stream`.

Here is how we managed to do it; I'd like to know if there is a better or different solution.
This might be unnecessary, but we added `"metadata"` to the list of supported `config_keys` when adding the routes to our application.

In the front end, we passed an additional option when calling the `stream` method of the LangServe client.
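A minimal sketch of what that client call can look like, using the Python LangServe client for illustration (the endpoint URL, path, and input are assumptions, as is the `make_stream_config` helper name): the client generates the `run_id` itself and tucks it into the `"metadata"` config key, which the server has been told to accept.

```python
import uuid


def make_stream_config() -> dict:
    # Hypothetical helper: build the extra config passed along with the
    # stream call. The run_id travels inside "metadata" because that is
    # the config key exposed via config_keys on the server.
    run_id = str(uuid.uuid4())
    return {"metadata": {"run_id": run_id}}


config = make_stream_config()

# e.g. with the Python LangServe client (assumed to be installed):
#   from langserve import RemoteRunnable
#   chain = RemoteRunnable("http://localhost:8000/my-chain")
#   for chunk in chain.stream({"question": "..."}, config=config):
#       ...
```

Keeping the `run_id` client-side is the whole point: the same value is reused later when posting feedback.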
Finally, in the `per_req_config_modifier` function we fetch the `run_id` passed in the `metadata` and add it under the key `run_id` to the config.

Then LangChain does its magic and propagates the `run_id` to all of the chain elements, and LangSmith uses it as the `trace_id`. This allows us to give feedback for this run later on by re-using the generated `run_id`.

I would be curious to see if someone had the same issue and solved it differently. It was mainly a struggle because, for some reason, you can set a `run_id` in the LangServe client `stream` call, but it is not forwarded in the config...
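The server side described above can be sketched as follows. This is an illustration, not our exact code: `my_chain` and the endpoint path are placeholders, and the `add_routes` wiring is shown in comments since it needs a running FastAPI app.

```python
from typing import Any


def per_req_config_modifier(config: dict, request: Any) -> dict:
    # Promote the client-supplied run_id from "metadata" to the
    # top-level "run_id" config key, which LangChain propagates to
    # every element of the chain and LangSmith uses as the trace_id.
    run_id = config.get("metadata", {}).get("run_id")
    if run_id is not None:
        config["run_id"] = run_id
    return config


# Wiring it up (assumed setup, requires langserve and fastapi):
#   from fastapi import FastAPI
#   from langserve import add_routes
#
#   app = FastAPI()
#   add_routes(
#       app,
#       my_chain,
#       path="/my-chain",
#       config_keys=["metadata"],  # accept client-sent metadata
#       per_req_config_modifier=per_req_config_modifier,
#   )
```

Later on, feedback can be recorded against the same id, for example with the LangSmith client: `Client().create_feedback(run_id, "user_score", score=1)`.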