
A RAG model built in Python with LangChain that gives the Gemini API context by tokenizing PDFs into chunks and embedding them into a vector database.


Langchain RAG Model using FastAPI & Python - LLM Gemini


Install dependencies

  1. Do the following before installing the dependencies in the requirements.txt file, because of current challenges installing onnxruntime through pip install onnxruntime.

    • For macOS users, a workaround is to first install the onnxruntime dependency for chromadb using:
     conda install onnxruntime -c conda-forge

    See this thread for additional help if needed.

    • For Windows users, follow the guide here to install the Microsoft C++ Build Tools. Be sure to follow through to the last step to set the environment variable path.
  2. Now run this command to install the dependencies in the requirements.txt file.

pip install -r requirements.txt
  3. Install the markdown dependencies with:
pip install "unstructured[md]"

Create database

Create the Chroma DB.

python create_database.py
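
The repository's create_database.py is not reproduced here, but a script like it typically loads the source documents, splits them into chunks, embeds each chunk, and persists everything to Chroma. The sketch below is only an illustration of that flow, assuming the data lives in a data/ folder, the index is persisted to chroma/, and the Gemini embedding model from langchain-google-genai is used; adapt the paths and loader to match the actual script.

```python
# Hypothetical sketch of a create_database.py-style script (not the repo's exact code).
import shutil

from langchain_community.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_google_genai import GoogleGenerativeAIEmbeddings

DATA_PATH = "data"      # assumed location of the source documents
CHROMA_PATH = "chroma"  # assumed persist directory for the Chroma DB


def main():
    # Load the raw documents from disk (markdown here; a PDF loader works the same way).
    documents = DirectoryLoader(DATA_PATH, glob="*.md").load()

    # Split into overlapping chunks so each embedding covers a focused span of text.
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
    chunks = splitter.split_documents(documents)

    # Rebuild the vector store from scratch: embed every chunk and persist it to disk.
    shutil.rmtree(CHROMA_PATH, ignore_errors=True)
    Chroma.from_documents(
        chunks,
        GoogleGenerativeAIEmbeddings(model="models/embedding-001"),
        persist_directory=CHROMA_PATH,
    )
    print(f"Saved {len(chunks)} chunks to {CHROMA_PATH}.")


if __name__ == "__main__":
    main()
```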

Query the database

Query the Chroma DB.

python query_data.py "How does Alice meet the Mad Hatter?"
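
Again, the repository's query_data.py is not shown here; the sketch below only illustrates the usual retrieval step, assuming the same chroma/ directory and embedding model as the indexing sketch above: embed the question, pull the most relevant chunks from Chroma, stuff them into a prompt, and have Gemini answer from that context.

```python
# Hypothetical sketch of a query_data.py-style script (not the repo's exact code).
import sys

from langchain_community.vectorstores import Chroma
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings
from langchain.prompts import ChatPromptTemplate

CHROMA_PATH = "chroma"  # must match the directory used when creating the database

PROMPT_TEMPLATE = """Answer the question based only on the following context:

{context}

---

Question: {question}
"""


def main():
    query_text = sys.argv[1]

    # Re-open the persisted Chroma DB with the same embedding function used to build it.
    db = Chroma(
        persist_directory=CHROMA_PATH,
        embedding_function=GoogleGenerativeAIEmbeddings(model="models/embedding-001"),
    )

    # Retrieve the chunks most relevant to the question.
    results = db.similarity_search_with_relevance_scores(query_text, k=3)
    context = "\n\n---\n\n".join(doc.page_content for doc, _score in results)

    # Stuff the retrieved context into the prompt and ask Gemini to answer from it.
    prompt = ChatPromptTemplate.from_template(PROMPT_TEMPLATE).format(
        context=context, question=query_text
    )
    response = ChatGoogleGenerativeAI(model="gemini-pro").invoke(prompt)
    print(response.content)


if __name__ == "__main__":
    main()
```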

You'll also need to set up an API key for the LLM provider (Gemini for this project) and set it as an environment variable for this to work.
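
For example, if the scripts use the langchain-google-genai integration (an assumption, as in the sketches above), it reads the key from the GOOGLE_API_KEY environment variable, which you can export before running the scripts:

export GOOGLE_API_KEY="your-api-key"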
