
Feature request: support for llama.cpp #2209

Open
kolinfluence opened this issue Oct 31, 2024 · 2 comments
Labels
question Further information is requested

Comments

@kolinfluence

I have llama.cpp running in server mode. How do I use it with aider? Is there any documentation on usage?

@paul-gauthier
Collaborator

Thanks for trying aider and filing this issue. This doc may be helpful:

https://aider.chat/docs/llms/openai-compat.html
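For readers landing here later, the doc above boils down to pointing aider's OpenAI-compatible client at the llama.cpp server endpoint. A minimal sketch (the model file name, context size, and port are placeholders; llama.cpp ignores the API key value, but aider expects one to be set):

```shell
# Start llama.cpp in server mode; it exposes an OpenAI-compatible API under /v1.
# "model.gguf" is a placeholder for whatever GGUF model you have locally.
llama-server -m model.gguf -c 4096 --port 8080

# In another terminal, point aider at that endpoint.
export OPENAI_API_BASE=http://127.0.0.1:8080/v1
export OPENAI_API_KEY=dummy-key   # any non-empty value; llama.cpp does not check it

# The openai/ prefix tells aider to treat it as a generic OpenAI-compatible model.
aider --model openai/model
```

This is a sketch based on the openai-compat doc, not an official recipe; flag names and defaults may differ across llama.cpp versions.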

@paul-gauthier paul-gauthier added the question Further information is requested label Oct 31, 2024
@kolinfluence
Author

@paul-gauthier Thanks for making aider possible.

I saw the doc, but it seemed limited. Does anyone have a tutorial for setting this up from scratch? A YouTube link, maybe?
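In the absence of a video tutorial, a from-scratch walkthrough might look like the following. This is a hedged sketch: the repository name, model repo, and file names are illustrative examples, not recommendations, and build steps vary by platform.

```shell
# 1. Build llama.cpp from source (CMake is the currently documented route).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# 2. Obtain a GGUF model. The repo and file here are placeholders; substitute
#    any GGUF model you have access to on the Hugging Face Hub.
huggingface-cli download some-org/some-model-GGUF some-model.Q4_K_M.gguf \
  --local-dir ./models

# 3. Serve it with the OpenAI-compatible HTTP API.
./build/bin/llama-server -m ./models/some-model.Q4_K_M.gguf --port 8080

# 4. Connect aider, as described in the openai-compat doc.
export OPENAI_API_BASE=http://127.0.0.1:8080/v1
export OPENAI_API_KEY=dummy-key
aider --model openai/some-model
```

Step 3 can be smoke-tested independently of aider with `curl http://127.0.0.1:8080/v1/models` before wiring up step 4.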
