Replies: 2 comments
- If you return it as a "comment" (instead of "reasoning") it'll be passed through.
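A minimal sketch of what that could look like, using a stdlib `json.loads` check in place of `JsonValidityEvaluator` (the function name and the run shape here are hypothetical):

```python
import json

def json_validity_evaluator(run, example=None):
    # Hypothetical custom evaluator: performs the same validity check as
    # JsonValidityEvaluator, but returns the parse error under "comment"
    # instead of "reasoning", so LangSmith passes it through to the
    # feedback API (readable later as feedback.comment).
    prediction = run["outputs"]["output"]
    try:
        json.loads(prediction)
        return {"key": "json_validity", "score": 1}
    except json.JSONDecodeError as exc:
        return {"key": "json_validity", "score": 0, "comment": str(exc)}
```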
- Thanks! I'll give that a go. Out of curiosity, I also tried using …
Question:
I have an evaluation function that returns metadata with a `score` and a `reasoning` field. I can get the `score` using the `list_feedback` API call, but is there any way I can pull the `reasoning` field (namely, the error) in the API? I can see this in the LangSmith web app, but not in the API.
More Context
I have written code that evaluates the validity of JSON strings using the LangChain evaluator `JsonValidityEvaluator`. Running this on a test example produces the expected output, except that when the string is invalid JSON an extra piece of metadata, `reasoning`, is returned. This works fine, and in LangSmith I can see the score in the experiment.
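For reference, the output shape described above can be approximated with the standard library alone (this mimics, rather than calls, `JsonValidityEvaluator.evaluate_strings`):

```python
import json

def evaluate_json_validity(prediction):
    # Approximates JsonValidityEvaluator.evaluate_strings: a valid string
    # yields {"score": 1}; an invalid one yields {"score": 0} plus a
    # "reasoning" entry carrying the parse error message.
    try:
        json.loads(prediction)
        return {"score": 1}
    except json.JSONDecodeError as exc:
        return {"score": 0, "reasoning": str(exc)}

print(evaluate_json_validity('{"a": 1}'))  # {'score': 1}
print(evaluate_json_validity('{"a": 1'))  # score 0 with a "reasoning" message
```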
To get the metadata from the evaluator I can see it in the “Rendered Output” section when clicking the Evaluator button above. My question is that I'd now like to collect this metadata programmatically; however, I can't find how to get this data in the API.
I can load the project with stats, but these are only aggregates.
I can also load all feedback associated with the project.
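A sketch of that lookup, assuming the `langsmith` SDK is installed and an API key is set in the environment; the project name is hypothetical, and the pure helper below just reshapes the SDK's `Feedback` objects:

```python
def summarize_feedback(feedback_items):
    # Reshape Feedback objects into plain dicts; `comment` is where any
    # passed-through evaluator text would appear alongside the score.
    return [
        {"key": fb.key, "score": fb.score, "comment": fb.comment}
        for fb in feedback_items
    ]

def main():
    from langsmith import Client  # requires LANGCHAIN_API_KEY to be set

    client = Client()
    runs = client.list_runs(project_name="my-project")  # hypothetical name
    feedback = client.list_feedback(run_ids=[run.id for run in runs])
    for row in summarize_feedback(feedback):
        print(row)

if __name__ == "__main__":
    main()
```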
But this only contains the score, and I can't find the `reasoning` metadata anywhere. Is there any way I can pull out the metadata from the evaluation (namely, the error) in the API?