
How to use with local models like llama3.1 70b running on ollama #175

Closed
Gimel12 opened this issue Oct 8, 2024 · 3 comments

Comments


Gimel12 commented Oct 8, 2024

Hi,

I am trying to install it but I want to be able to run it with ollama running as the backend for my LLM on my machine.

How can we do that?

Gimel12 (Author) commented Oct 8, 2024

I found that there are too many steps for running with local LLMs, and it's still not working.

This is what I did:

1- Install gptme
pip install gptme

2- Install litellm
pip install litellm

3- Follow the steps in the documentation

ollama pull mistral
ollama serve
litellm --model ollama/mistral
export OPENAI_API_BASE="http://localhost:8000"

4- Try to launch gptme in the terminal
gptme --model local/ollama/llama3.2:1b

And I keep getting errors:

File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1014, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1092, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1014, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1092, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1024, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
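
For anyone hitting the same trace: openai.APIConnectionError means the OpenAI client never reached anything at OPENAI_API_BASE; recent litellm proxy releases appear to listen on port 4000 by default rather than 8000 (see the working steps below). A quick sanity check, assuming litellm's OpenAI-compatible /v1/models route, is:

# see which port the proxy actually answers on (both ports are assumptions; adjust to your setup)
curl http://localhost:8000/v1/models
curl http://localhost:4000/v1/models

Whichever call returns a model list is the address to put in OPENAI_API_BASE. Note also that the model pulled in step 3 (mistral) differs from the one passed to gptme (llama3.2:1b); the names likely need to match once the connection works.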

ProfiGMan commented

The exact same issue here. Looks like support for local/ollama is not implemented.

ErikBjare (Owner) commented Oct 9, 2024

Looks like some default ports got changed somewhere since I last tried this.

Here are the exact steps I just ran to get it working:

# install litellm with proxy
pipx install litellm[proxy]

MODEL=llama3.2:1b
ollama pull $MODEL
ollama serve
litellm --model ollama/$MODEL
OPENAI_API_BASE="http://127.0.0.1:4000" gptme 'hello' -m local/ollama/$MODEL

I will update the documentation and code to make this easier.
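
If you want gptme to pick up the proxy address without prefixing every command, one option (a sketch assuming a bash shell; the port matches the litellm default used above) is to export it from your shell profile:

# persist the proxy address for future sessions (profile path and port are assumptions)
echo 'export OPENAI_API_BASE="http://127.0.0.1:4000"' >> ~/.bashrc
source ~/.bashrc
gptme 'hello' -m local/ollama/llama3.2:1b

After that, plain gptme invocations with -m local/ollama/&lt;model&gt; should reach the proxy without the inline variable.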

Repository owner locked and limited conversation to collaborators on Oct 9, 2024
ErikBjare converted this issue into discussion #178 on Oct 9, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
