How to use with local models like llama3.1 70b running on ollama #175
Comments
I found that there are too many steps for running with local LLMs, and it still doesn't work. This is what I did:
1. Install gptme
2. Install litellm
3. Follow the steps in the documentation
4. Try to launch gptme in the terminal
And I keep getting errors:
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1014, in _request
The exact same issue here. Looks like the support for local/ollama is not implemented.
Looks like some default ports got changed somewhere since I last tried this. Here are the exact steps I just ran to get it working:

# install litellm with proxy
pipx install litellm[proxy]

# pull the model and start the ollama server (leave it running)
MODEL=llama3.2:1b
ollama pull $MODEL
ollama serve

# start the litellm proxy in front of ollama (in another terminal)
litellm --model ollama/$MODEL

# point gptme at the litellm proxy (default port 4000)
OPENAI_API_BASE="http://127.0.0.1:4000" gptme 'hello' -m local/ollama/$MODEL

I will update the documentation and code to make this easier.
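If gptme still cannot connect, it can help to first check that the litellm proxy is actually answering on its OpenAI-compatible endpoint before launching gptme. This is just a sketch, assuming the proxy is running on its default port 4000; the model string here is a placeholder and should match the model the proxy was started with:

curl http://127.0.0.1:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/llama3.2:1b", "messages": [{"role": "user", "content": "hello"}]}'

If this returns a chat completion, the proxy and ollama are fine and the remaining problem is on the gptme side (for example, OPENAI_API_BASE not being picked up); if it errors, the proxy or ollama setup needs fixing first.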
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Hi,
I am trying to install it, but I want to be able to run it with ollama as the backend for my LLM on my machine.
How can we do that?