Input formatting fix for Llama3 with Bedrock #44
base: main
Conversation
I noticed that with 0.1.4, Llama 3 was added with the referenced PR #32. I have been experimenting with ReAct agents, and I get the error about
I can't say I'm an expert in poetry, but I tried using this PR with
Hi @ToyVo, what kind of error did you get? Can you paste the log of the error?
The error I get upon attempting to add the git repository as a dependency is
I have since looked at the GitHub Actions workflows and seen how a release is made; I have not yet tried to build the branch in that way: https://github.com/langchain-ai/langchain-aws/blob/main/.github/workflows/_release.yml#L33-L54
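For anyone else trying the dependency route, here is a hedged sketch of what the poetry git dependency could look like; the branch name is a placeholder, and the subdirectory is an assumption based on the package living under libs/aws in this monorepo:

    # hypothetical pyproject.toml fragment; branch name is a placeholder
    [tool.poetry.dependencies]
    langchain-aws = { git = "https://github.com/langchain-ai/langchain-aws.git", branch = "some-branch", subdirectory = "libs/aws" }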
I have now built this branch and manually moved the files into my .venv. Here is an error I get this time (with escaped newlines replaced by actual newlines for prettier output, and ANSI color codes removed):
So it's mostly working; I'm not sure what's up with the output parser, but the turns work this time.
Doing as suggested and adding handle_parsing_errors=True to the agent executor doesn't exactly work: the error no longer gets thrown, but the final answer just keeps being given.
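For reference, a minimal sketch of the kind of setup being discussed, assuming langchain-aws built from this branch; the model id, tool list, and prompt below are placeholders, not the exact ones from these runs:

    # ReAct agent sketch with handle_parsing_errors enabled (placeholders, not the thread's exact setup)
    from langchain import hub
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain_aws import ChatBedrock

    llm = ChatBedrock(model_id="meta.llama3-8b-instruct-v1:0")  # model id is an assumption
    tools = []  # placeholder: whatever tools the agent actually uses
    prompt = hub.pull("hwchase17/react")  # stock ReAct prompt from the LangChain hub

    agent = create_react_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
    # executor.invoke({"input": "..."})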
Hi @ToyVo, I had the same error "Parsing LLM output produced both a final answer and a parse-able action". In my case, it was due to the prompt for the output format.
elif provider in ("ai21", "cohere", "mistral"): | ||
input_body["prompt"] = prompt | ||
elif provider == "meta": | ||
input_body = dict() |
This excludes all model arguments from the input body; is that an expected change? Should those be passed in a separate key?
In AWS Bedrock, the documentation says that the body of an API request to Llama 3 should be in this format:

    {"prompt": "this is where you place your input text", "max_gen_len": 512, "temperature": 0.5, "top_p": 0.9}

(it is mandatory to pass the field "prompt").
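A quick way to confirm that body format outside of LangChain is to call the model directly with boto3; this is just a sketch, and the model id and region are assumptions:

    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption
    body = json.dumps({
        "prompt": "this is where you place your input text",
        "max_gen_len": 512,
        "temperature": 0.5,
        "top_p": 0.9,
    })
    response = client.invoke_model(modelId="meta.llama3-70b-instruct-v1:0", body=body)
    print(json.loads(response["body"].read())["generation"])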
Without "input_body = dict()" I had the error:
"required key [prompt] not found#: extraneous key [stop_sequences] is not permitted"
This is because the field "stop_sequences" remains in the input body, and it is not an expected key for Llama 3.
So yes, all model arguments (only "stop_sequences" in this case) are to be excluded from the input body and not passed.
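To make the same point in code, here is a minimal sketch (not the PR's exact code) of how the meta branch could build the body so that unsupported keys like "stop_sequences" never reach Llama 3; build_meta_input_body and ALLOWED_META_KEYS are hypothetical names, and prompt / model_kwargs stand in for the variables available in the surrounding prepare-input logic:

    # Hypothetical helper, not the actual bedrock.py code.
    ALLOWED_META_KEYS = {"max_gen_len", "temperature", "top_p"}  # keys Llama 3 accepts per the body format above

    def build_meta_input_body(prompt: str, model_kwargs: dict) -> dict:
        input_body = {"prompt": prompt}  # "prompt" is the one mandatory field
        # Forward only supported parameters, silently dropping e.g. stop_sequences.
        input_body.update({k: v for k, v in model_kwargs.items() if k in ALLOWED_META_KEYS})
        return input_body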
I was trying to implement Llama 3 70B with Bedrock. I imported the @fedor-intercom fixes, but I had some troubles, such as:
"ValueError: Stop sequence key name for meta is not supported." or "required key [prompt] not found#: extraneous key [stopSequences] is not permitted".
So I inspected the bedrock.py file and found out that the provider "meta" was not handled correctly. I modified the handling of the provider "meta", and with these modifications everything seems to work fine.
I hope this could be useful.
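For completeness, a hedged example of how Llama 3 70B can then be used through langchain_aws once the "meta" handling works; the model id and region are assumptions, not taken from the thread:

    from langchain_aws import BedrockLLM

    llm = BedrockLLM(
        model_id="meta.llama3-70b-instruct-v1:0",
        region_name="us-east-1",  # assumption
    )
    print(llm.invoke("Hello, Llama 3!"))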