Replies: 4 comments 1 reply
- For me, Llama 3 had the agent hanging.
  - Any progress on this front?
- If coding is supposed to happen, use CodeActAgent in the settings. A monologue agent does not have the skills for file editing, code execution, and so on.
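For reference, a sketch of switching agents outside the settings UI. The flag names below are from memory of the OpenDevin CLI at the time and may have changed between versions; the web UI settings pane is the documented path, so treat this as an assumption and check the repo's README.

```shell
# Hypothetical CLI invocation: select the agent class explicitly
# (-c agent class, -t task; verify the exact flags against the current README).
python ./opendevin/main.py -d ./workspace -c CodeActAgent -t "write a script that prints hello"
```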
- I was trying to test local llama3-70b (also dbrx, command-r+, and mixtral-8x22b). So far, Llama 3 is showing some promise: it is able to process the monologue prompt, produce actions, compress internal state, etc.
  I mostly tested with MonologueAgent so far, and I see it often slides into handwaving ("I will do this, I will do that") without generating any actual code, just lots of think actions. I'm thinking of bending the prompts a little; I wonder what experience others have had?
  My plan is to compare this with gpt4/claude3.
  Bending prompts per LLM is fun only once :) I wonder whether anyone has tried tuning Llama 3 on the action sequences of the smarter models? I've seen the idea come up in #687; has anyone tried anything? My recollection is that we'd need at least about 1M tokens for a meaningful tuning. Considering the size of the prompts, that shouldn't be that much: 10-15 sessions?
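The session estimate above can be checked with quick arithmetic. The ~1M-token target comes from the discussion; the per-session token counts are assumptions, chosen because agent sessions carry large prompts and easily run to tens of thousands of tokens each.

```python
import math

# Rough minimum token budget for meaningful tuning, per the discussion.
TARGET_TOKENS = 1_000_000

# Hypothetical per-session sizes (assumptions, not measured values).
for tokens_per_session in (70_000, 100_000):
    sessions = math.ceil(TARGET_TOKENS / tokens_per_session)
    print(f"{tokens_per_session:,} tokens/session -> ~{sessions} sessions")
# -> 70,000 tokens/session -> ~15 sessions
# -> 100,000 tokens/session -> ~10 sessions
```

So sessions averaging 70k-100k tokens would put the budget at roughly 10-15 sessions, consistent with the estimate above.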