Any way to use OpenLLM endpoints with OpenAI and LangChain? #768
newparad1gm asked this question in Q&A
-
Alright, I figured out a way to do this through OpenAILike on llama-index, so like this:
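(A minimal sketch of that setup; the endpoint URL here is a placeholder and the OpenAILike import path varies by llama-index version.)

```python
# Sketch: point llama-index's OpenAILike wrapper at an OpenLLM server's
# OpenAI-compatible endpoint. URL and model name are placeholders.
from llama_index.llms import OpenAILike  # import path differs in newer llama-index releases

llm = OpenAILike(
    model="meta-llama--Llama-2-7b-chat-hf",     # whatever model the OpenLLM server is serving
    api_base="http://my-openllm-host:3000/v1",  # placeholder OpenLLM endpoint
    api_key="na",                               # dummy value if the server doesn't check keys
    is_chat_model=True,
)
print(llm.complete("Hello"))
```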
The only issue is that the
Although it is working locally for me on a self-deployed facebook/opt-1.3 model, and on another remotely hosted model with Falcon.
-
I see now there is a way to use OpenLLM with OpenAI's API by passing the OpenLLM domain in as base_url to openai.OpenAI.
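For reference, a minimal sketch of that approach (the endpoint URL is a placeholder, and the key can be a dummy value if the server doesn't check it):

```python
# Sketch: talk to an OpenLLM server through the OpenAI client by overriding base_url.
from openai import OpenAI

client = OpenAI(
    base_url="http://my-openllm-host:3000/v1",  # placeholder OpenLLM endpoint
    api_key="na",
)
resp = client.chat.completions.create(
    model="meta-llama--Llama-2-7b-chat-hf",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```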
My question is: can this also be used with LangChain's llms and chat_models, such as ChatOpenAI? My main issue is that I cannot currently use the OpenLLM class from langchain.llms, because it requires installing the openllm package, and starting it up in Django in a Celery worker locally on my Mac breaks due to kqueue problems with trio. The OpenLLM model is remotely hosted, so I was wondering if I could instead use the ChatOpenAI class with the OpenLLM domain in the service context for vector generation and for loading that index. I am currently trying to use it with LangChain and llama-index like this:
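(A sketch of what I mean; the endpoint URL and data directory are placeholders, and the ChatOpenAI import path depends on the LangChain version.)

```python
# Sketch: point LangChain's ChatOpenAI at the OpenLLM endpoint and hand it to
# llama-index's service context. Endpoint URL and data directory are placeholders.
from langchain.chat_models import ChatOpenAI
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

llm = ChatOpenAI(
    model_name="meta-llama--Llama-2-7b-chat-hf",
    openai_api_base="http://my-openllm-host:3000/v1",
    openai_api_key="na",
)
# Note: embeddings still default to OpenAI unless embed_model is overridden here.
service_context = ServiceContext.from_defaults(llm=llm)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```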
But I am getting the error:
Unknown model 'meta-llama--Llama-2-7b-chat-hf'. Please provide a valid OpenAI model name in: gpt-4, etc
I am running LLaMa 2 on the server and trying to use that model, and this trips the openai_utils.py check for valid OpenAI model names. I guess I'm asking whether there is a way to get around this check, or to fake the model name on the OpenLLM server hosting the model, or whether I need to go to LangChain to see what they can do.
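The error seems to come from llama-index's model-name-to-context-size lookup; a small sketch of the failing check, assuming the legacy llama_index.llms.openai_utils module path:

```python
# Sketch: llama-index maps OpenAI model names to context window sizes, and
# unknown names raise a ValueError. The module path is an assumption and
# varies across llama-index versions.
from llama_index.llms.openai_utils import openai_modelname_to_contextsize

try:
    openai_modelname_to_contextsize("meta-llama--Llama-2-7b-chat-hf")
except ValueError as err:
    print(err)  # Unknown model 'meta-llama--Llama-2-7b-chat-hf'. Please provide a valid OpenAI model name ...
```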