[TUTORIAL] Any examples/tutorials with OLLAMA models would be useful #5685
Hey @ajayarunachalam - we don't have a dedicated tutorial for Ollama other than evals, but you can swap Ollama in for any of the LLMs in LangChain, LlamaIndex, etc. You can also use LiteLLM with Ollama and get tracing that way. Let us know if you need any help.
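For anyone landing here: the swap works because Ollama exposes a plain local HTTP API, so any client can talk to it. A minimal stdlib-only sketch against Ollama's native `/api/generate` endpoint (the default `localhost:11434` URL and the `llama3` model name are assumptions; use whatever model you have pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama endpoint (assumption)

def build_generate_payload(model: str, prompt: str) -> dict:
    # Ollama's native /api/generate endpoint; stream=False returns a single JSON body
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama server and return the completion."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running locally):
# print(generate("llama3", "Why is the sky blue?"))
```

LangChain, LlamaIndex, and LiteLLM all wrap this same API, which is why the LLM object can be swapped out without touching the rest of the pipeline.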
Hi @mikeldking Thanks for your response. Yes, it would be helpful if you could show an illustration or point me in the right direction with snippets. It would also be worth guiding through tracing for this example - https://github.com/Arize-ai/phoenix/blob/main/tutorials/evals/local_llm.ipynb Specifically, I am trying to reproduce this example with Ollama's LLM and embedding model https://github.com/Arize-ai/phoenix/blob/main/tutorials/llm_ops_overview.ipynb with evals & tracing
Hey @mikeldking Just to give you a bit of background: the POC that I wish to trial on this platform is evaluating and tracing the distractor responses generated by the LLM for a set of MCQ questions. The options include a "Key" (correct answer) and a few "Distractors" (incorrect answers). The generated distractors have to be PLAUSIBLE, and we would like to run evals and tracing on this. The data being used is like this - "qno":1,
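Not from the thread, but a sketch of how such a distractor-plausibility eval could be framed as an LLM-as-judge pass. The record shape below is hypothetical, inferred from the fields mentioned above ("qno", Key, Distractors); the judge template wording is an assumption, not Phoenix's built-in eval prompt:

```python
# Hypothetical record shape based on the fields described in the comment
sample = {
    "qno": 1,
    "question": "Which planet is closest to the Sun?",
    "key": "Mercury",
    "distractors": ["Venus", "Mars", "Pluto"],
}

# Illustrative judge prompt (assumption, not a Phoenix-provided template)
JUDGE_TEMPLATE = (
    "You are grading a multiple-choice distractor.\n"
    "Question: {question}\n"
    "Correct answer: {key}\n"
    "Distractor: {distractor}\n"
    "Is the distractor plausible but incorrect? Answer 'plausible' or 'implausible'."
)

def judge_prompts(item: dict) -> list[str]:
    """Build one LLM-as-judge prompt per distractor in an MCQ record."""
    return [
        JUDGE_TEMPLATE.format(
            question=item["question"], key=item["key"], distractor=d
        )
        for d in item["distractors"]
    ]
```

Each prompt would then be sent to the judge LLM (an Ollama model works here too), and the per-distractor labels aggregated into an eval score.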
Hi @ajayarunachalam - I added a couple of tutorials that may be helpful in the linked PR here. One is an update to the existing local LLM tutorial that adds tracing, and the new one is an end-to-end example of tracing and evaluating a RAG pipeline with Ollama and LlamaIndex. Let us know if you have any questions on either of those, hope they help!
Hi @Jgilhuly Thanks for your response. I went through the tutorials and they are indeed helpful. Just a suggestion: if you could also supplement the latter one with visualizing/analyzing the embeddings, it would be useful.
Hi @Jgilhuly For the new example you provided of tracing and evaluating a RAG pipeline with Ollama and LlamaIndex, it would be more comprehensive & useful to supplement it with a UMAP projection and clustering to inspect the embeddings. Thanks
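In the meantime, the projection-plus-clustering step the comment asks for can be sketched outside Phoenix with `umap-learn` and `scikit-learn` (both are assumptions about the environment; Phoenix's own embedding view does this internally):

```python
def project_and_cluster(embeddings, n_clusters: int = 5):
    """Reduce high-dimensional embeddings to 2D with UMAP, then cluster them.

    `embeddings` is an (n_samples, n_dims) array-like. Requires umap-learn
    and scikit-learn when actually run; imports are deferred for that reason.
    """
    import umap
    from sklearn.cluster import KMeans

    # Cosine metric is a common choice for text embeddings (assumption)
    coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(embeddings)
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(coords)
    return coords, labels

# coords, labels = project_and_cluster(embedding_matrix)
# A scatter plot of `coords` colored by `labels` then surfaces clusters of
# semantically similar chunks, and outliers worth inspecting.
```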
Hi @mikeldking, @Jgilhuly I am unable to reproduce the tutorial local_llm_evals.ipynb. Python version: 3.10. The error occurs at the line seen below: LlamaIndexInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
Due to the RATE LIMIT constraints of the LLMs I tried (OpenAI, Mistral, etc.), I couldn't proceed with trialing any of the provided examples. I was trying the Ollama models, but wasn't successful due to a connection error. It would be helpful if you could demonstrate tutorials with an Ollama LLM and embedding model.