Documentation of LLM and Embedding Configurations #436
-
@donbr I'm not sure if you noticed, but there are API keys in your comments. If they're active keys, you may want to rotate them now! If they're not active, ignore me, just looking out for fellow contributors!
-
Having an issue getting it to connect to my Ollama server. Is this correct?

`WORKSPACE_DIR="./workspace"`
`# ollama - https://docs.litellm.ai/docs/providers/ollama`
`LLM_API_KEY="sk-111111111111111111111111111111111111111111111111"`

I've been fighting issues just getting it up and running; npm gave me trouble for some reason, since I'm running Linux on the Windows Subsystem for Linux. Any help would be awesome! Right now the server is up and running, but a big red warning banner at the top says "ERROR: Failed connection to server. Please ensure the server is reachable at ws://localhost:3000/ws." Would Oobabooga be an option? Could I run an LLM in Oobabooga instead of using Ollama?
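For what it's worth, here is roughly how I would expect the Ollama wiring to look when it goes through a LiteLLM-style provider string. Only `WORKSPACE_DIR` and `LLM_API_KEY` appear in the snippet above; the other key names (`LLM_MODEL`, `LLM_BASE_URL`) and the example values are assumptions on my part, so check them against the project's config docs:

```toml
# Sketch of an Ollama setup; key names other than WORKSPACE_DIR / LLM_API_KEY are assumptions.
WORKSPACE_DIR = "./workspace"

# LiteLLM's Ollama provider (https://docs.litellm.ai/docs/providers/ollama) takes a model
# string prefixed with "ollama/" plus the base URL of the local Ollama server.
LLM_MODEL = "ollama/llama3"              # assumption: use whatever model you pulled with `ollama pull`
LLM_BASE_URL = "http://localhost:11434"  # Ollama's default port
LLM_API_KEY = "ollama"                   # Ollama doesn't check API keys; a placeholder is fine
```

Separately from the config, you can confirm the Ollama server itself is reachable with `curl http://localhost:11434/api/tags`, which lists the locally pulled models. The `ws://localhost:3000/ws` error looks like it concerns the app's own frontend-to-backend websocket rather than Ollama, so it's worth ruling out one before the other.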
-
Thought I should share the documentation below with the team. I learned a lot going through the clear and concise architecture models, which made it easier to get the configuration working.
I documented all of the LLM and embedding configurations in the same config.toml file, since I wanted to do comparisons and standardization between OpenAI and the other models.
The team has built an incredible agentic system. I'm most impressed with the architecture documentation. In an open source solution, making sure people have a common vision of what is being built is priceless.
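For anyone attempting the same comparison, here is a sketch of what a combined config.toml might look like with an OpenAI baseline alongside a local alternative. The key names (`LLM_MODEL`, `LLM_EMBEDDING_MODEL`, `LLM_BASE_URL`) are my assumptions for illustration, not the project's documented schema:

```toml
# Sketch only: key names are assumptions, check the project's configuration reference.

# OpenAI baseline
LLM_MODEL = "gpt-4o"
LLM_API_KEY = "<your-openai-key>"        # keep real keys out of shared threads
LLM_EMBEDDING_MODEL = "openai"

# Local comparison run via Ollama (uncomment to switch providers)
# LLM_MODEL = "ollama/llama3"
# LLM_BASE_URL = "http://localhost:11434"
# LLM_EMBEDDING_MODEL = "local"
```

Keeping both providers in one file and toggling between them made the standardization comparison easy to reproduce.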