
[TUTORIAL] Any examples/tutorials with OLLAMA models would be useful #5685

Open
ajayarunachalam opened this issue Dec 10, 2024 · 7 comments · May be fixed by #5720
Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@ajayarunachalam

Due to the rate-limit constraints of the LLMs I tried (OpenAI, Mistral, etc.), I couldn't proceed with trialing any of your provided examples. I then tried the Ollama models, but wasn't successful due to a connection error. It would be helpful if you could demonstrate a tutorial with an Ollama LLM and embedding model.
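For what it's worth, a connection error at this stage usually means no Ollama server is listening on the expected port. A minimal stdlib check (port 11434 is Ollama's documented default; `ollama_reachable` is a hypothetical helper name, not part of any library):

```python
import urllib.request
import urllib.error


def ollama_reachable(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at base_url.

    Ollama's root endpoint replies with a short status string when the
    server is up, so a 2xx response is a good liveness signal.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, starting the server (e.g. `ollama serve`) before re-running the notebook is the first thing to try.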

@ajayarunachalam ajayarunachalam added enhancement New feature or request triage issues that need triage labels Dec 10, 2024
@github-project-automation github-project-automation bot moved this to 📘 Todo in phoenix Dec 10, 2024
@dosubot dosubot bot added the documentation Improvements or additions to documentation label Dec 10, 2024
@mikeldking mikeldking removed the triage issues that need triage label Dec 10, 2024
@mikeldking
Contributor

Hey @ajayarunachalam - we don't have a dedicated tutorial for Ollama other than evals, but you can swap out any of the LLMs in LangChain, LlamaIndex, etc. with Ollama. You can also use LiteLLM with Ollama and get tracing that way as well.

Let us know if you need any help.
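As a sketch of the LiteLLM route mentioned above: LiteLLM addresses local Ollama models with an `ollama/` model-name prefix. The helper below is a hypothetical convenience (not a LiteLLM API), and the commented usage assumes `pip install litellm` plus a running Ollama server:

```python
def ollama_litellm_params(model: str, host: str = "http://localhost:11434") -> dict:
    """Build the kwargs LiteLLM needs to route a completion call to a
    local Ollama server. Hypothetical helper; the 'ollama/' prefix is
    LiteLLM's documented routing convention for the Ollama provider.
    """
    return {"model": f"ollama/{model}", "api_base": host}


# Usage sketch (requires `pip install litellm` and `ollama serve` running):
# from litellm import completion
# resp = completion(
#     messages=[{"role": "user", "content": "Say hello"}],
#     **ollama_litellm_params("llama3"),
# )
```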

@mikeldking mikeldking added the help wanted Extra attention is needed label Dec 10, 2024
@ajayarunachalam
Author

ajayarunachalam commented Dec 11, 2024

Hi @mikeldking Thanks for your response. Yes, it would be helpful if you could show an illustration or point me in the right direction with snippets. It would also be worthwhile if you could walk through tracing for this example - https://github.com/Arize-ai/phoenix/blob/main/tutorials/evals/local_llm.ipynb

Specifically, I am trying to reproduce this example with Ollama's LLM and embedding model, with evals and tracing: https://github.com/Arize-ai/phoenix/blob/main/tutorials/llm_ops_overview.ipynb
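For reference, the tracing side can be wired up independently of the model choice. A hedged sketch, assuming the `arize-phoenix` and `openinference-instrumentation-llama-index` packages and Phoenix's default port 6006 (`phoenix_collector_endpoint` is a hypothetical helper, not a Phoenix API):

```python
def phoenix_collector_endpoint(host: str = "localhost", port: int = 6006) -> str:
    """Build the OTLP/HTTP traces endpoint for a locally launched Phoenix
    instance. Port 6006 is Phoenix's default; adjust if yours differs.
    """
    return f"http://{host}:{port}/v1/traces"


# Wiring sketch (assumes `pip install arize-phoenix openinference-instrumentation-llama-index`):
# from phoenix.otel import register
# from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
# tracer_provider = register(endpoint=phoenix_collector_endpoint())
# LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```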

@ajayarunachalam
Author

Hey @mikeldking Just to give you a bit of background: the POC I wish to trial on this platform evaluates and traces the distractor responses generated by the LLM for a set of MCQ questions. The options include a "key" (the correct answer) and a few "distractors" (incorrect answers). The generated distractors have to be plausible, and we would like to run evals and tracing on this. The data being used looks like this:

```json
{
  "qno": 1,
  "stem": "Which feature of vertebrates is present in fish and reptiles?",
  "options": [
    { "letter": "A", "option": "fins" },
    { "letter": "B", "option": "hair" },
    { "letter": "C", "option": "legs" },
    { "letter": "D", "option": "scales" }
  ],
  "key": "D"
}
```
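For records in this shape, the eval scaffolding mostly needs to separate the key from the distractors before scoring. A small pure-Python helper (hypothetical naming, not from any library) that does that split:

```python
def split_key_and_distractors(record: dict) -> tuple[str, list[str]]:
    """Given an MCQ record with 'options' (letter/option pairs) and a
    'key' letter, return (correct answer text, list of distractor texts).
    """
    key_letter = record["key"]
    correct = None
    distractors = []
    for opt in record["options"]:
        if opt["letter"] == key_letter:
            correct = opt["option"]
        else:
            distractors.append(opt["option"])
    if correct is None:
        raise ValueError(f"key {key_letter!r} not among options")
    return correct, distractors
```

Each distractor text can then be sent to an LLM judge (or an eval template) to score its plausibility against the stem and the key.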

@Jgilhuly Jgilhuly moved this from 📘 Todo to 👨‍💻 In progress in phoenix Dec 12, 2024
@Jgilhuly Jgilhuly linked a pull request Dec 12, 2024 that will close this issue
@Jgilhuly
Contributor

Hi @ajayarunachalam - I added a couple of tutorials that may be helpful in the linked PR. One is an update to the existing local LLM tutorial that adds tracing, and the other is a new end-to-end example of tracing and evaluating a RAG pipeline with Ollama and LlamaIndex.

Let us know if you have any questions on either of those, hope they help!

@Jgilhuly Jgilhuly moved this from 👨‍💻 In progress to 🔍. Needs Review in phoenix Dec 12, 2024
@ajayarunachalam
Author

Hi @Jgilhuly Thanks for your response. I went through the tutorials and they are indeed helpful. Just a suggestion: it would be useful if you could also supplement the latter one with visualizing/analyzing the embeddings.

@ajayarunachalam
Author

ajayarunachalam commented Dec 16, 2024

Hi @Jgilhuly For the new example you provided of tracing and evaluating a RAG pipeline with Ollama and LlamaIndex, it would be more comprehensive and useful to supplement it with a UMAP projection and clustering to inspect the embeddings. Thanks
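For reference, the projection step being requested is short. A hedged sketch: the collection helper below is pure Python (hypothetical name), and the UMAP/KMeans calls, which require `umap-learn` and `scikit-learn`, are left as commented usage:

```python
def embeddings_matrix(records: list[dict], field: str = "embedding") -> list[list[float]]:
    """Collect per-document embedding vectors into a row list suitable
    for fitting a projector. Records missing the field are skipped.
    """
    return [r[field] for r in records if field in r]


# Projection + clustering sketch (requires `pip install umap-learn scikit-learn`):
# import umap
# from sklearn.cluster import KMeans
# rows = embeddings_matrix(docs)
# coords = umap.UMAP(n_components=2).fit_transform(rows)   # 2-D projection
# labels = KMeans(n_clusters=5).fit_predict(coords)        # cluster the projection
```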

@ajayarunachalam
Author

Hi @mikeldking, @Jgilhuly I am unable to reproduce the tutorial local_llm_evals.ipynb (Python version: 3.10). The error is below:

```
LlamaIndexInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\opentelemetry\instrumentation\instrumentor.py", line 114, in instrument
    result = self._instrument(  # pylint: disable=assignment-from-no-return
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\openinference\instrumentation\llama_index\__init__.py", line 69, in _instrument
    from llama_index.core.instrumentation import get_dispatcher
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\__init__.py", line 25, in <module>
    from llama_index.core.indices import (
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\__init__.py", line 32, in <module>
    from llama_index.core.indices.loading import (
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\loading.py", line 6, in <module>
    from llama_index.core.indices.registry import INDEX_STRUCT_TYPE_TO_INDEX_CLASS
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\registry.py", line 13, in <module>
    from llama_index.core.indices.property_graph import PropertyGraphIndex
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\property_graph\__init__.py", line 1, in <module>
    from llama_index.core.indices.property_graph.base import PropertyGraphIndex
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\property_graph\base.py", line 17, in <module>
    from llama_index.core.indices.property_graph.transformations import (
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\property_graph\transformations\__init__.py", line 4, in <module>
    from llama_index.core.indices.property_graph.transformations.schema_llm import (
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\property_graph\transformations\schema_llm.py", line 95, in <module>
    class SchemaLLMPathExtractor(TransformComponent):
  File "C:\llm_embedding_model_demo\llm_embedding_model_final_venv\lib\site-packages\llama_index\core\indices\property_graph\transformations\schema_llm.py", line 141, in SchemaLLMPathExtractor
    possible_entities: Optional[TypeAlias] = None,
  File "C:\Program Files\Python310\lib\typing.py", line 311, in inner
    return func(*args, **kwds)
  File "C:\Program Files\Python310\lib\typing.py", line 402, in __getitem__
    return self._getitem(self, parameters)
  File "C:\Program Files\Python310\lib\typing.py", line 528, in Optional
    arg = _type_check(parameters, f"{self} requires a single type.")
  File "C:\Program Files\Python310\lib\typing.py", line 171, in _type_check
    raise TypeError(f"Plain {arg} is not valid as type argument")
TypeError: Plain typing.TypeAlias is not valid as type argument
```
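For anyone hitting the same traceback: the failing line annotates a field as `Optional[TypeAlias]`, which some Python interpreters reject as a type argument. Upgrading `llama-index-core` or moving to a newer Python release is commonly suggested for this class of error (an assumption here, not confirmed in this thread). A stdlib probe to check whether your interpreter is affected:

```python
import typing


def typealias_arg_supported() -> bool:
    """Probe whether this interpreter accepts Optional[TypeAlias], the
    annotation that raises the TypeError above on affected versions.
    Returns False on interpreters that reject it (or lack TypeAlias).
    """
    ta = getattr(typing, "TypeAlias", None)  # TypeAlias exists on 3.10+
    if ta is None:
        return False
    try:
        typing.Optional[ta]
        return True
    except TypeError:
        return False
```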

@mikeldking mikeldking moved this from 🔍. Needs Review to 👍 Approved in phoenix Dec 19, 2024