Retrieval Augmented Generation (RAG)

Overview

This repository contains an introduction to Retrieval Augmented Generation (RAG) using the LangChain framework. For this project, I will use the Mixtral 8x7B open-source Large Language Model (LLM) and show how to "augment" its knowledge with user-specific (private) data. The project is divided into two parts:

  1. RAG101: A beginner-level introduction to RAG with LangChain. You will learn how to:

    • Load open-source models using the Hugging Face API in LangChain.
    • Prompt your loaded models.
    • Augment the LLM's knowledge with private data in a "naive" way (a minimal sketch follows this list).
    • Go to RAG101.
  2. RAG102: This part introduces the key component of a conversation: memory. You will learn:

    • The different kinds of memory algorithms supported by LangChain.
    • How to add memory to retrieval in LangChain to enable conversation (see the second sketch after this list).
    • Go to RAG102.
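
As a taste of RAG101, here is a minimal sketch of the "naive" augmentation idea: load Mixtral 8x7B through the Hugging Face Inference API and paste a snippet of private data straight into the prompt. This is an illustrative sketch, not the notebook's exact code: the package names (langchain_huggingface, langchain_core) assume a recent LangChain release, a HUGGINGFACEHUB_API_TOKEN environment variable is assumed to be set, and the context/question strings are made up.

from langchain_huggingface import HuggingFaceEndpoint
from langchain_core.prompts import PromptTemplate

# Load Mixtral 8x7B via the Hugging Face Inference API
# (requires HUGGINGFACEHUB_API_TOKEN in your environment).
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
    max_new_tokens=256,
    temperature=0.1,
)

# "Naive" augmentation: the private data is pasted directly into the prompt.
prompt = PromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}\nAnswer:"
)

chain = prompt | llm
print(chain.invoke({
    "context": "Our return window is 45 days for items bought online.",  # illustrative private data
    "question": "How long do customers have to return an online order?",
}))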

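For RAG102, the sketch below adds conversational memory on top of retrieval, reusing the llm object from the previous sketch. It assumes the faiss-cpu and sentence-transformers packages are installed; the embedding model, the example texts, and the ConversationBufferMemory / ConversationalRetrievalChain classes (classic LangChain chains, marked legacy in newer releases) are illustrative choices rather than the notebooks' exact code.

from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Index the private data so relevant chunks can be retrieved per question.
texts = [
    "Our return window is 45 days for items bought online.",
    "Refunds are issued to the original payment method within 5 business days.",
]
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(texts, embeddings)

# Memory stores the chat history so follow-up questions keep their context.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,                               # the Mixtral endpoint from the sketch above
    retriever=vectorstore.as_retriever(),
    memory=memory,
)

print(chain.invoke({"question": "How long is the return window?"})["answer"])
print(chain.invoke({"question": "And how fast are refunds?"})["answer"])  # answered using the stored history
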
Usage

1. Set up the environment

mkdir RAG
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip

2. Clone the Repository

git clone https://github.com/Ibrahim-Ola/RAG.git
cd RAG

3. Install Source Code in Editable Mode

pip install -e .

4. Deactivate Environment

After running the experiments, you can deactivate the virtual environment by running the command below.

deactivate
