diff --git a/Gemma/Using_Gemini_and_Gemma_with_RouteLLM.ipynb b/Gemma/Using_Gemini_and_Gemma_with_RouteLLM.ipynb
new file mode 100644
index 0000000..fb02380
--- /dev/null
+++ b/Gemma/Using_Gemini_and_Gemma_with_RouteLLM.ipynb
@@ -0,0 +1,426 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d4ES2HG8TMYT"
+ },
+ "source": [
+ "##### Copyright 2024 Google LLC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Sagji9qaTPlK"
+ },
+ "outputs": [],
+ "source": [
+ "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QmJNavKnTOei"
+ },
+ "source": [
+ "# Getting Started with Gemma 2, Gemini, and RouteLLM\n",
+ "\n",
+ "[Gemma](https://ai.google.dev/gemma) is a family of lightweight, state-of-the-art open-source language models from Google. Built from the same research and technology used to create the Gemini models, Gemma models are text-to-text, decoder-only large language models (LLMs), available in English, with open weights, pre-trained variants, and instruction-tuned variants.\n",
+ "Gemma models are well-suited for various text-generation tasks, including question-answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.\n",
+ "\n",
+ "[Gemini](https://blog.google/technology/ai/google-gemini-ai/) is a large language model developed by Google AI. It is a multimodal model, meaning it can process and generate text, code, images, and audio. Gemini is considered one of the most advanced language models available, and it has been shown to outperform other models on various tasks, such as question answering, summarization, and translation.\n",
+ "\n",
+ "[RouteLLM](https://github.com/lm-sys/RouteLLM) is a framework that helps you optimize your LLM usage by routing queries to the most appropriate model based on their complexity. It can significantly reduce costs without sacrificing performance. You can easily integrate it into your existing applications and experiment with different routing strategies.\n",
+ "\n",
+ "In this notebook, you will learn how to route between the Gemini and the Gemma 2 models using **RouteLLM** in a Google Colab environment. You'll install the necessary packages, set up the models, and run a sample prompt.\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IAvbefp1fWUk"
+ },
+ "source": [
+ "## Setup\n",
+ "\n",
+ "### Select the Colab runtime\n",
+ "To complete this tutorial, you must have a Colab runtime with sufficient resources to run the Gemma model. In this case, you can use a T4 GPU:\n",
+ "\n",
+ "1. In the upper-right of the Colab window, select **▾ (Additional connection options)**.\n",
+ "2. Select **Change runtime type**.\n",
+ "3. Under **Hardware accelerator**, select **T4 GPU**.\n",
+ "\n",
+ "### Setup Hugging Face and Gemini\n",
+ "\n",
+ "**Before you dive into the tutorial, let's get you set up with Hugging Face and Gemma:**\n",
+ "\n",
+ "#### Hugging Face setup\n",
+ "\n",
+ "1. **Hugging Face Account:** If you don't already have one, you can create a free one by clicking [here](https://huggingface.co/join).\n",
+ "\n",
+ "2. **Hugging Face Token:** Generate a Hugging Face access (preferably `write` permission) token by clicking [here](https://huggingface.co/settings/tokens). You'll need this token later in the tutorial.\n",
+ "\n",
+ "#### Gemini setup\n",
+ "\n",
+ "1. **Gemini Token:** To use the Gemini API, you need an API key. You can create a key with a few clicks in [Google AI Studio](https://aistudio.google.com/app/apikey).\n",
+ "\n",
+ "**Once you've completed these steps, you're ready to move on to the next section where you'll set up environment variables in your Colab environment.**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QA1CAiSGlOHf"
+ },
+ "source": [
+ "### Configure your HF token and Gemini token\n",
+ "\n",
+ "Add your Hugging Face token and Gemini token to the Colab Secrets manager to securely store it.\n",
+ "\n",
+ "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. \n",
+ "2. Create a new secret with the name `HF_TOKEN`.\n",
+ "3. Copy/paste your HF token key into the Value input box of `HF_TOKEN`.\n",
+ "4. Toggle the button on the left to allow notebook access to the secret.\n",
+ "5. Create a new secret with the name `GOOGLE_API_KEY`.\n",
+ "6. Copy/paste your Gemini token key into the Value input box of `GOOGLE_API_KEY`.\n",
+ "7. Toggle the button on the left to allow notebook access to the secret."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "pFNuO5RsmWfn"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from google.colab import userdata\n",
+ "\n",
+ "# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n",
+ "# vars as appropriate for your system.\n",
+ "os.environ[\"HF_TOKEN\"] = userdata.get(\"HF_TOKEN\")\n",
+ "os.environ[\"GEMINI_API_KEY\"] = userdata.get(\"GOOGLE_API_KEY\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gly2y32DwXH_"
+ },
+ "source": [
+ "Currently, RouteLLM checks for `OPENAI_API_KEY` before starting. You can set the `OPENAI_API_KEY` to a dummy value as a temporary workaround. The collaborators for RouteLLM are currently working on to fix this [issue](https://github.com/lm-sys/RouteLLM/issues/19)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "eZCUFnSlzTup"
+ },
+ "outputs": [],
+ "source": [
+ "os.environ[\"OPENAI_API_KEY\"] = \"dummy\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TjqCf1R6pJtY"
+ },
+ "source": [
+ "### Install dependencies\n",
+ "\n",
+ "You'll need to install a few Python packages and dependencies for RouteLLM and Gemini API.\n",
+ "\n",
+ "Run the following cell to install or upgrade it:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BsQey4_-SunS"
+ },
+ "outputs": [],
+ "source": [
+ "# Install RouteLLM package.\n",
+ "! pip install \"routellm[serve,eval]\"\n",
+ "\n",
+ "# Install the Python SDK for the Gemini API, `google-generativeai`.\n",
+ "! pip install -q google-generativeai"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dp2HxAxzqvSX"
+ },
+ "source": [
+ "### Setup Gemma2 with Ollama\n",
+ "\n",
+ "To use a local model like Gemma 2 with RouteLLM you need [Ollama](https://ollama.com/).\n",
+ "\n",
+ "Ollama is an open-source framework for building and running large language models (LLMs). It's designed to be flexible and customizable, allowing developers to train and deploy their models or fine-tune existing ones. Ollama is a popular choice for those who want to experiment with LLMs or build custom models without relying on proprietary platforms."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "V9xBkvRcyBvj"
+ },
+ "source": [
+ "#### Install Ollama"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BRFue2F5YtS0"
+ },
+ "outputs": [],
+ "source": [
+ "!curl https://ollama.ai/install.sh | sh"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "AID-4tDpqOTE"
+ },
+ "source": [
+ "#### Start Ollama as a background subprocess"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ZR3TgObHdvLz"
+ },
+ "outputs": [],
+ "source": [
+ "import subprocess\n",
+ "import time\n",
+ "\n",
+ "process = subprocess.Popen(\"ollama serve\", shell=True)\n",
+ "time.sleep(5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gzp2EVqrywFG"
+ },
+ "source": [
+ "#### Run Gemma 2 in Ollama as a background subprocess"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KBF9QIady2nN"
+ },
+ "outputs": [],
+ "source": [
+ "process = subprocess.Popen(\"ollama run gemma2\", shell=True)\n",
+ "time.sleep(5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2NeDPsUXyKB_"
+ },
+ "source": [
+ "#### Check if Ollama is running\n",
+ "\n",
+ "Run the following command to see whether Ollama is up and running. Continue to the next cell when the output of the following command says \"**Ollama is running**\"."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "HdkLSYwSO3tp"
+ },
+ "outputs": [],
+ "source": [
+ "!curl localhost:11434"
+ ]
+ },
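+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vLm9ListChkMd"
+ },
+ "source": [
+ "#### Verify that the Gemma 2 model is available\n",
+ "\n",
+ "`ollama run gemma2` pulls the model the first time it runs, which can take a few minutes. The cell below uses `ollama list` to show the locally available models; wait until `gemma2` appears in its output before moving on."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vLm9ListChkCd"
+ },
+ "outputs": [],
+ "source": [
+ "!ollama list"
+ ]
+ },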
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6WQqh6kQVaG_"
+ },
+ "source": [
+ "## How RouteLLM works?\n",
+ "\n",
+ "\n",
+ "RouteLLM is a systematic framework for preference-data-based LLM routing. There are two models in RouteLLM's routing setup: a weaker but less costly model and a stronger but more costly model.\n",
+ "\n",
+ "When a prompt is sent to RouteLLM, an underlying routing approach routes the prompt to either the strong or weak model for inference. RouteLLM offers four different routing approaches:\n",
+ "\n",
+ "1. **Similarity-weighted (SW) ranking**: A similarity-weighted (SW) ranking router that performs a \"weighted Elo calculation\" based on similarity.\n",
+ "\n",
+ "2. **Matrix factorization**: A matrix factorization model that learns a scoring function for how well a model can answer a prompt.\n",
+ "\n",
+ "3. **BERT classifier**: A BERT classifier that predicts which model can provide a better response.\n",
+ "\n",
+ "4. **Causal LLM classifier**: A causal LLM classifier that predicts which model can provide a better response.\n",
+ "\n",
+ "Read more about these routing methods and their performance stats from [lmsys's blog](https://lmsys.org/blog/2024-07-01-routellm/).\n",
+ "\n",
+ "You can also refer to the research paper, [RouteLLM: Learning to Route LLMs with Preference\n",
+ "Data](https://arxiv.org/abs/2406.18665) published by the creators."
+ ]
+ },
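+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qW4SketchMdAa"
+ },
+ "source": [
+ "To make the routing idea concrete, here is a minimal, self-contained sketch of the decision RouteLLM makes for each prompt. This is **not** the RouteLLM API: the `toy_score` function below is a made-up stand-in for the learned routers listed above, and it only illustrates how a per-prompt score is compared against a calibrated threshold to pick the strong or weak model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "qW4SketchCdAb"
+ },
+ "outputs": [],
+ "source": [
+ "# Conceptual illustration only -- RouteLLM's real routers are learned from preference data.\n",
+ "def toy_score(prompt: str) -> float:\n",
+ "    # Pretend longer prompts are \"harder\". Real routers are much smarter than this.\n",
+ "    return min(len(prompt) / 200, 1.0)\n",
+ "\n",
+ "def route(prompt: str, threshold: float = 0.5) -> str:\n",
+ "    strong, weak = \"gemini/gemini-pro\", \"ollama_chat/gemma2\"\n",
+ "    return strong if toy_score(prompt) >= threshold else weak\n",
+ "\n",
+ "print(route(\"Hello!\"))  # Short prompt -> weak (Gemma 2) model\n",
+ "print(route(\"Compare routing strategies for serving LLMs, with detailed examples. \" * 5))  # Long prompt -> strong (Gemini) model"
+ ]
+ },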
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tso2qSwazCGE"
+ },
+ "source": [
+ "## Routing between Gemini and Gemma 2 using the `routellm` library\n",
+ "\n",
+ "\n",
+ "You can use **RouteLLM** in your Python code using the `routellm` library.\n",
+ "\n",
+ "You will initialize the `Controller` from the `routellm` library by specifying the routers, strong_model, and weak_model. Here's what each argument of `Controller` does:\n",
+ "\n",
+ "- `routers`: Name of the router. For this notebook, you will choose `bert` since it is easier to run `bert` in a colab environment.\n",
+ "- `strong_model`: A stronger, more expensive model. For this example you will use **Gemini Pro**.\n",
+ "- `weak_model`: A weaker but cheaper model. You can put your local **Gemma 2** model here."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tUzFyAAnS16k"
+ },
+ "outputs": [],
+ "source": [
+ "from routellm.controller import Controller\n",
+ "\n",
+ "client = Controller(\n",
+ " routers=[\"bert\"], # Use `bert` router\n",
+ " strong_model=\"gemini/gemini-pro\",\n",
+ " weak_model=\"ollama_chat/gemma2\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "E3UAE4yloBAj"
+ },
+ "source": [
+ "**Threshold calibration** is essential for balancing cost and quality in LLM routing. The optimal threshold depends on your router and incoming queries. Calibrate your queries using a sample and your desired routing percentage. RouteLLM supports default calibration based on the public [Chatbot Arena dataset](https://huggingface.co/datasets/lmsys/lmsys-arena-human-preference-55k).\n",
+ "\n",
+ "For this example, you will calibrate the threshold for `bert` such that 30% of the calls are routed to the stronger model. To calculate this threshold, you can run the `routellm.calibrate_threshold` command and provide the following values.\n",
+ "\n",
+ "`--routers`: bert\n",
+ "\n",
+ "`--strong-model-pct` : 0.3"
+ ]
+ },
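+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hT7CfgFetchMd"
+ },
+ "source": [
+ "The calibration command below reads the router configuration from a `config.example.yaml` file. If that file is not already in your working directory, first fetch it from the RouteLLM repository; the raw GitHub path in the following cell is an assumption, so adjust it if the file has moved."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "hT7CfgFetchCd"
+ },
+ "outputs": [],
+ "source": [
+ "# Assumed location of config.example.yaml in the RouteLLM repository.\n",
+ "!wget -q https://raw.githubusercontent.com/lm-sys/RouteLLM/main/config.example.yaml"
+ ]
+ },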
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "xnj66rs5MJtz"
+ },
+ "outputs": [],
+ "source": [
+ "!python -m routellm.calibrate_threshold --task calibrate --routers bert --strong-model-pct 0.3 --config config.example.yaml"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sWEaTvZ5tEAU"
+ },
+ "source": [
+ "The `client.chat.completions.create` function lets you prompt your RouteLLM setup. You can specify the threshold value you obtained in the previous step in the `model` argument of the `client.chat.completions.create` function. If the router is `bert` and the threshold is **0.46514**, pass `router-bert-0.46514` to the `model` argument. You can pass your chat messages to the `messages` argument."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "zWUwIOCVlAFG"
+ },
+ "outputs": [],
+ "source": [
+ "response = client.chat.completions.create(\n",
+ " # This tells RouteLLM to use the bert router with a cost threshold of 0.46514\n",
+ " model=\"router-bert-0.46514\",\n",
+ " messages=[\n",
+ " {\"role\": \"user\", \"content\": \"Hello!\"}\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "print(\"Selected model: {model}\\n\".format(model=response.model))\n",
+ "print(\"Response: {model_response}\".format(\n",
+ " model_response=response.choices[0].message.content))"
+ ]
+ },
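+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pY6HardPromptMd"
+ },
+ "source": [
+ "A short greeting like the one above will usually be served by the weaker, cheaper model. To see traffic go the other way, try a more involved prompt, as in the cell below. Note that the selection depends on the router's score and your calibrated threshold, so a harder-looking prompt is likely, but not guaranteed, to be routed to **Gemini**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "pY6HardPromptCd"
+ },
+ "outputs": [],
+ "source": [
+ "# Same router and threshold as before; only the prompt changes.\n",
+ "response = client.chat.completions.create(\n",
+ "    model=\"router-bert-0.46514\",\n",
+ "    messages=[\n",
+ "        {\"role\": \"user\", \"content\": \"Explain the trade-offs between LoRA and full fine-tuning of a language model, with a short example.\"}\n",
+ "    ]\n",
+ ")\n",
+ "\n",
+ "print(\"Selected model: {model}\\n\".format(model=response.model))\n",
+ "print(\"Response: {model_response}\".format(\n",
+ "    model_response=response.choices[0].message.content))"
+ ]
+ },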
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zhSHtO_sZyl_"
+ },
+ "source": [
+ "To learn more about the Python usage of **RouteLLM**, visit [RouteLLM's GitHub page](https://github.com/lm-sys/RouteLLM)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xxKuGtofZ08F"
+ },
+ "source": [
+ "## Conclusion\n",
+ "\n",
+ "Congratulations! You've successfully routed between the Gemini and the Gemma 2 models using **RouteLLM** in a Google Colab environment."
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "name": "Using_Gemini_and_Gemma_with_RouteLLM.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/README.md b/README.md
index a963e8e..e91abc9 100644
--- a/README.md
+++ b/README.md
@@ -64,6 +64,7 @@ You can find the Gemma models on GitHub, Hugging Face models, Kaggle, Google Clo
| [Using_Gemma_with_Llamafile.ipynb](Gemma/Using_Gemma_with_Llamafile.ipynb) | Run Gemma models using [Llamafile](https://github.com/Mozilla-Ocho/llamafile/). |
| [Using_Gemma_with_LlamaCpp.ipynb](Gemma/Using_Gemma_with_LlamaCpp.ipynb) | Run Gemma models using [LlamaCpp](https://github.com/abetlen/llama-cpp-python/). |
| [Using_Gemma_with_LocalGemma.ipynb](Gemma/Using_Gemma_with_LocalGemma.ipynb) | Run Gemma models using [Local Gemma](https://github.com/huggingface/local-gemma/). |
+| [Using_Gemini_and_Gemma_with_RouteLLM.ipynb](Gemma/Using_Gemini_and_Gemma_with_RouteLLM.ipynb) | Route prompts between Gemma and Gemini models using [RouteLLM](https://github.com/lm-sys/RouteLLM/). |
| [Integrate_with_Mesop.ipynb](Gemma/Integrate_with_Mesop.ipynb) | Integrate Gemma with [Google Mesop](https://google.github.io/mesop/). |
| [Integrate_with_OneTwo.ipynb](Gemma/Integrate_with_OneTwo.ipynb) | Integrate Gemma with [Google OneTwo](https://github.com/google-deepmind/onetwo). |
| [Deploy_with_vLLM.ipynb](Gemma/Deploy_with_vLLM.ipynb) | Deploy a Gemma model using [vLLM](https://github.com/vllm-project/vllm). |
diff --git a/WISHLIST.md b/WISHLIST.md
index ea337d3..6c991ec 100644
--- a/WISHLIST.md
+++ b/WISHLIST.md
@@ -2,7 +2,6 @@ A wish list of cookbooks showcasing:
* Inference
* Integration with [Google GenKit](https://firebase.google.com/products/genkit)
- * Gemma+Gemini with [routerLLM](https://github.com/lm-sys/RouteLLM)
* [SGLang](https://github.com/sgl-project/sglang) integration
* Fine-tuning