From 03ab8abfb87d685ff6649d0a99d1276a622d08e2 Mon Sep 17 00:00:00 2001
From: Kinar R <42828719+kinarr@users.noreply.github.com>
Date: Mon, 23 Sep 2024 19:44:53 +0530
Subject: [PATCH] Updated README.md to reposition notebook order in the list

Moved this up to below the Ollama notebook
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 24e64d7..48efebb 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,7 @@ You can find the Gemma models on GitHub, Hugging Face models, Kaggle, Google Clo
 | [Prompt_chaining.ipynb](Gemma/Prompt_chaining.ipynb) | Illustrate prompt chaining and iterative generation with Gemma. |
 | [Advanced_Prompting_Techniques.ipynb](Gemma/Advanced_Prompting_Techniques.ipynb) | Illustrate advanced prompting techniques with Gemma. |
 | [Run_with_Ollama.ipynb](Gemma/Run_with_Ollama.ipynb) | Run Gemma models using [Ollama](https://www.ollama.com/). |
+| [Using_Gemma_with_Llamafile.ipynb](Gemma/Using_Gemma_with_Llamafile.ipynb) | Run Gemma models using [Llamafile](https://github.com/Mozilla-Ocho/llamafile/). |
 | [Aligning_DPO_Gemma_2b_it.ipynb](Gemma/Aligning_DPO_Gemma_2b_it.ipynb) | Demonstrate how to align a Gemma model using DPO (Direct Preference Optimization) with [Hugging Face TRL](https://huggingface.co/docs/trl/en/index). |
 | [Deploy_with_vLLM.ipynb](Gemma/Deploy_with_vLLM.ipynb) | Deploy a Gemma model using [vLLM](https://github.com/vllm-project/vllm). |
 | [Deploy_Gemma_in_Vertex_AI.ipynb](Gemma/Deploy_Gemma_in_Vertex_AI.ipynb) | Deploy a Gemma model using [Vertex AI](https://cloud.google.com/vertex-ai). |
@@ -43,7 +44,6 @@ You can find the Gemma models on GitHub, Hugging Face models, Kaggle, Google Clo
 | [Minimal_RAG.ipynb](Gemma/Minimal_RAG.ipynb) | Minimal example of building a RAG system with Gemma using [Google UniSim](https://github.com/google/unisim) and [Hugging Face](https://huggingface.co/). |
 | [RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb](Gemma/RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb) | RAG PDF Search in multiple documents using Gemma 2 2B on Google Colab. |
 | [Using_Gemma_with_LangChain.ipynb](Gemma/Using_Gemma_with_LangChain.ipynb) | Examples to demonstrate using Gemma with [LangChain](https://www.langchain.com/). |
-| [Using_Gemma_with_Llamafile.ipynb](Gemma/Using_Gemma_with_Llamafile.ipynb) | An example to demonstrate using Gemma with [Llamafile](https://github.com/Mozilla-Ocho/llamafile/). |
 | [Gemma_RAG_LlamaIndex.ipynb](Gemma/Gemma_RAG_LlamaIndex.ipynb) | RAG example with [LlamaIndex](https://www.llamaindex.ai/) using Gemma. |
 | [Integrate_with_Mesop.ipynb](Gemma/Integrate_with_Mesop.ipynb) | Integrate Gemma with [Google Mesop](https://google.github.io/mesop/). |
 | [Integrate_with_OneTwo.ipynb](Gemma/Integrate_with_OneTwo.ipynb) | Integrate Gemma with [Google OneTwo](https://github.com/google-deepmind/onetwo). |