Error using custom imported Llama3.1 model on Bedrock #228

Open
mgaionWalit opened this issue Oct 4, 2024 · 5 comments
Labels
bedrock · bug (Something isn't working) · custom model

Comments

@mgaionWalit

  • LangChain v0.3.2
  • LangChain AWS v0.2.2

We are using a fine-tuned version of Llama 3.1-instruct, uploaded to Bedrock. Since we are using an ARN model ID (which does not contain any information about the specific Foundation Model used), we encountered an issue.

In chat_models/bedrock.py, at line 349, there is an if statement that evaluates the model string to choose between the Llama 2 and Llama 3 prompt conversion.

In our case, we need to use convert_messages_to_prompt_llama3, but the logic falls into the else statement, which uses convert_messages_to_prompt_llama.
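
For reference, the dispatch described above is roughly of the following form (a paraphrased sketch, not the exact source). Because an imported-model ARN never contains the substring "llama3", the check always falls through to the Llama 2 converter:

# Paraphrased sketch of the model-string check described above (not verbatim source).
from langchain_aws.chat_models.bedrock import (
    convert_messages_to_prompt_llama,
    convert_messages_to_prompt_llama3,
)

def _convert_messages_to_prompt(provider, model, messages):
    if provider == "meta":
        # An ARN such as "arn:aws:bedrock:...:imported-model/XXXXXX" carries no
        # model-family hint, so it never matches "llama3" and the Llama 2
        # converter is chosen instead.
        if "llama3" in model:
            return convert_messages_to_prompt_llama3(messages)
        return convert_messages_to_prompt_llama(messages)
    # other providers handled elsewhere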

Is there any solution to ensure the correct conversion function is used?

Thank you!

@langcarl langcarl bot added the investigate label Oct 4, 2024
@mgaionWalit mgaionWalit changed the title Error - custom imported model on Bedrock Error prompt Llama convert - custom imported model on Bedrock Oct 4, 2024
@3coins
Collaborator

3coins commented Oct 7, 2024

@mgaionWalit
The best solution here would be to add a new attribute to supply the base model used to fine-tune the custom model.
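
For illustration only, the idea would be something along these lines (base_model is a hypothetical parameter name, not an existing attribute):

# Hypothetical usage if a base-model attribute were added (not currently available):
from langchain_aws import ChatBedrock

model = ChatBedrock(
    model_id="arn:aws:bedrock:us-east-1:XXXXXX:imported-model/XXXXXX",
    base_model="meta.llama3-1-70b-instruct-v1:0",  # hypothetical: tells the wrapper which prompt format to use
)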

@3coins 3coins added the bedrock label Oct 9, 2024
@mgaionWalit
Author

Thanks for the reply.

Is there a temporary workaround?

@3coins 3coins added bug Something isn't working and removed investigate labels Oct 10, 2024
@3coins 3coins changed the title Error prompt Llama convert - custom imported model on Bedrock Error using custom imported Llama3.1 model on Bedrock Oct 13, 2024
@3coins
Collaborator

3coins commented Oct 13, 2024

@mgaionWalit
I don't think there is a way to handle this without a code change. I don't have a custom model set up to verify, but does the Bedrock API provide any info regarding the model that can help us infer the base provider/model?
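
For context, the Bedrock control-plane API does expose some metadata for imported models; a minimal sketch of that kind of lookup (the modelArchitecture field is assumed from the get_imported_model response and may differ by boto3 version):

import boto3

# Control-plane client ("bedrock", not "bedrock-runtime").
bedrock = boto3.client(service_name="bedrock")

# Look up the imported model by its ARN; the response is assumed to include
# a modelArchitecture value (e.g. "llama3") that could help infer the base model.
info = bedrock.get_imported_model(
    modelIdentifier="arn:aws:bedrock:us-east-1:XXXXXX:imported-model/XXXXXX"
)
print(info.get("modelArchitecture"))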

@3coins
Collaborator

3coins commented Oct 29, 2024

@mgaionWalit
The Converse API now supports imported models for Meta and Mistral. Can you verify whether this works as expected with ChatBedrockConverse by passing in a provider value?

@mgaionWalit
Author

Hi @3coins

thanks for your support.

I've updated langchain-aws to the latest available version (0.2.4) and tried to initialize the custom model with ChatBedrockConverse:

import boto3
from langchain_aws import ChatBedrockConverse

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime"
)
model = ChatBedrockConverse(
    client=bedrock_runtime,
    model_id='arn:aws:bedrock:us-east-1:XXXXXX:imported-model/XXXXXX',  # ARN of the imported model
    provider='meta',
    temperature=0.15,
    max_tokens=100
)

but I still get an error when trying to start a streaming chat:
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: This action doesn't support the model that you provided. Try again with a supported text or chat model

Am I doing something wrong? Do you have some suggestions on how to make it work?
