I'm trying to write some tests for agent-ish bits of guidance code - specifically some flow control that is executed depending on the output of the LLM doing a `select`:
```python
with user():
    lm += (f"Here is a statement about the gene {gene_symbol}: \n{answer}\n"
           "Does this statement make an association between the gene and a disease? "
           "Answer only Yes or No\n")

with assistant():
    lm += select(["Yes", "No"], name="disease_mention") + '\n'

if lm["disease_mention"] == "Yes":
    ...  # things happen
```
When I use the mock LLM in https://github.com/guidance-ai/guidance/blob/855ce5bba90ad73d2f687e67a5abb1d4f00bee29/guidance/models/_mock.py I don't think I ever get the LLM to say 'Yes'. How do others approach this? I'd managed to mock/monkeypatch bits of the mock LLM to test other things like context exhaustion, but I can't see an obvious way to force something like this. Any help would be very much appreciated!
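
For context, this is roughly the kind of thing I was hoping would work. It's only a sketch: I'm assuming (possibly wrongly) that the `byte_patterns` argument to `models.Mock` prefix-matches the full byte string the model sees and then continues with the remainder of the pattern, and the `prompt`/gene values here are just placeholders for my real data:

```python
from guidance import models, user, assistant, select

# Placeholder statement standing in for {gene_symbol}/{answer} in my real code.
prompt = (
    "Here is a statement about the gene BRCA1: \n"
    "BRCA1 variants are associated with breast cancer.\n"
    "Does this statement make an association between the gene and a disease? "
    "Answer only Yes or No\n"
)

# My (unverified) assumption: if a byte pattern starts with the prompt the mock
# actually sees and ends in "Yes", the mock should prefer "Yes" in the select().
# In practice the chat template presumably wraps the prompt in role tags, so
# this exact pattern is probably not what the model sees - which may be
# precisely where I'm going wrong.
lm = models.Mock(f"<s>{prompt}Yes".encode())

with user():
    lm += prompt
with assistant():
    lm += select(["Yes", "No"], name="disease_mention") + "\n"

assert lm["disease_mention"] == "Yes"  # the outcome I can't manage to force
```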