Integration with Vector Databases #1

Open · wants to merge 5 commits into base: main

Conversation

@SaiNivedh26 commented Jun 28, 2024

I have added functions to store text in a vector database. I used ChromaDB, which uses the all-MiniLM-L6-v2 model from the Sentence Transformers library for embeddings.

In SimplerLLM/tools/vector_db.py, I have added the following code:

import os
import chromadb
from chromadb.utils import embedding_functions

class VectorDB:
    def __init__(self):
        persistence_directory = "./chroma_db"
        self.client = chromadb.PersistentClient(path=persistence_directory)
        self.embedding_function = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2")
        self.collection = self.client.get_or_create_collection(
            name="responses",
            embedding_function=self.embedding_function
        )

    def store_vectors(self, texts):
        self.collection.add(documents=texts, ids=[f"id_{i}" for i in range(len(texts))])

    def query_vectors(self, query_text):
        results = self.collection.query(query_texts=[query_text], n_results=5)
        return results['documents'][0]

    def store_response(self, text):
        self.collection.add(documents=[text], ids=[f"id_{self.collection.count()}"])

    def query_similar(self, query_text):
        return self.query_vectors(query_text)
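
For a quick standalone check of this class, here is a minimal sketch (not part of the PR; the sample texts and query are made up, and it assumes ./chroma_db is writable and that the embedding model downloads on first run):

from SimplerLLM.tools.vector_db import VectorDB

db = VectorDB()
# store_vectors generates ids id_0, id_1, ... so repeated runs reuse the same ids
db.store_vectors([
    "Embeddings map text to dense numeric vectors.",
    "Cosine similarity measures the angle between two vectors.",
    "ChromaDB persists collections to disk between runs.",
])
# query_vectors asks for up to 5 nearest documents; with fewer stored, Chroma returns what it has
print(db.query_vectors("How is text turned into vectors?"))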

Then, in SimplerLLM/language/llm.py, the following modifications were added on top of the existing code to wire in the vector database.

Initialized an instance of the VectorDB class:

    `self.vector_db = VectorDB()`

Then added the following wrapper methods:

    def store_response_as_vector(self, texts):
        self.vector_db.store_vectors(texts)

    def find_similar_responses(self, text):
        return self.vector_db.query_similar(text)

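Taken together, a rough sketch of how these pieces could sit inside the class; the method names come from the PR, while the simplified class layout and constructor parameters shown here are assumptions about the existing code in llm.py:

from SimplerLLM.tools.vector_db import VectorDB

class LLM:
    def __init__(self, provider=None, model_name=None):
        self.provider = provider
        self.model_name = model_name
        # New in this PR: each LLM instance owns a persistent vector store
        self.vector_db = VectorDB()

    def store_response_as_vector(self, texts):
        # texts: a list of strings to embed and persist
        self.vector_db.store_vectors(texts)

    def find_similar_responses(self, text):
        # Returns the documents most similar to the given text
        return self.vector_db.query_similar(text)
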
The following libraries need to be installed:

pip install chromadb sentence-transformers

Finally, the corresponding pinned versions were added to requirements.txt:


sentence-transformers==3.0.1
chromadb==0.5.3

You can test this by running the following sample code.

Note: this is only a sample; feel free to modify it and test as you wish.

from SimplerLLM.language.llm import LLM, LLMProvider
from dotenv import load_dotenv
import os
import time

load_dotenv()

def test_vector_storage_and_retrieval():
    llm = LLM(provider=LLMProvider.OPENAI, model_name="gpt-3.5-turbo")

    prompts = [
        "What is artificial intelligence and how does it differ from human intelligence?",
        "Explain the process of machine learning and its key components.",
        "Describe the architecture of deep neural networks and their layers.",
        "What are the applications of natural language processing in everyday technology?",
        "How does computer vision work and what are its real-world applications?",
        "Explain the concept of reinforcement learning and its use in robotics.",
        "What are the ethical concerns surrounding AI development and deployment?",
        "How does transfer learning accelerate AI model development?",
        "Describe the differences between supervised, unsupervised, and semi-supervised learning.",
        "What is the role of big data in advancing AI capabilities?",
        "Explain the concept of explainable AI and why it's important.",
        "How do genetic algorithms work in optimization problems?",
        "What are the challenges in developing artificial general intelligence (AGI)?",
        "Describe the impact of AI on healthcare diagnostics and treatment.",
        "How does AI contribute to autonomous vehicle technology?"
    ]

    print("Storing responses as vectors...")
    start_time = time.time()
    llm.store_response_as_vector(prompts)
    end_time = time.time()
    print(f"Responses stored successfully. Time taken: {end_time - start_time:.2f} seconds")

    query_prompts = [
        "What are the fundamental principles of AI?",
        "How do machines learn from data?",
        "Explain the inner workings of neural networks.",
        "What are some practical applications of NLP?",
        "How is AI changing the automotive industry?",
        "What are the moral implications of using AI in decision-making?",
        "How is AI transforming the healthcare sector?",
        "What are the key differences between AI learning paradigms?",
        "How does AI handle complex optimization problems?",
        "What are the challenges in making AI systems more transparent?"
    ]

    print("\nQuerying for similar responses:")
    for query_prompt in query_prompts:
        print(f"\nQuery: {query_prompt}")
        start_time = time.time()
        similar_responses = llm.find_similar_responses(query_prompt)
        end_time = time.time()
        print(f"Time taken: {end_time - start_time:.2f} seconds")
        print("Similar responses:")
        for i, response in enumerate(similar_responses, 1):
            print(f"{i}. {response}")

def main():
    print("Starting vector storage and retrieval test...")
    test_vector_storage_and_retrieval()

if __name__ == "__main__":
    main()

Summary by CodeRabbit

  • New Features

    • Introduced functionality for storing and retrieving responses as vectors using a language model.
    • Added methods for finding similar responses based on input prompts.
  • Enhancements

    • Improved response storage and querying capabilities using a new vector database.
  • Dependencies

    • Added sentence-transformers and chromadb to the project dependencies.

coderabbitai bot commented Jun 28, 2024

Walkthrough

The recent changes introduce advanced vector handling and querying capabilities to SimplerLLM. Key updates include the integration of the VectorDB class for vector storage and retrieval, new methods in the OpenAILLM class for vector interactions, and additional dependencies in requirements.txt to support these functionalities. Moreover, the creation of a new file, new.py, demonstrates the practical application of these features with testing functions.

Changes

  • SimplerLLM/language/llm.py: Added imports (os, dotenv, VectorDB), the methods store_response_as_vector and find_similar_responses, and VectorDB initialization in the OpenAILLM class.
  • SimplerLLM/tools/vector_db.py: Introduced the VectorDB class with methods for storing and querying vectors and responses.
  • new.py: Introduced functionality for storing and retrieving responses as vectors using a language model, with test functions.
  • requirements.txt: Added sentence-transformers 3.0.1 and chromadb 0.5.3.

Poem

🐇 In code's embrace, vectors align,
Queries and storage, oh so fine!
With VectorDB, our paths entwine,
Responses like shadows, in rows they shine.
In new.py they find a way,
Whispering secrets, night and day.
🎶 SimplerLLM, a dance divine!


@coderabbitai bot left a comment

Actionable comments posted: 9

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 9959100 and 240dc24.

Files ignored due to path filters (8)
  • chroma_db/104d460b-aaa5-4746-969c-b131149e52a7/data_level0.bin is excluded by !**/*.bin
  • chroma_db/104d460b-aaa5-4746-969c-b131149e52a7/header.bin is excluded by !**/*.bin
  • chroma_db/104d460b-aaa5-4746-969c-b131149e52a7/length.bin is excluded by !**/*.bin
  • chroma_db/104d460b-aaa5-4746-969c-b131149e52a7/link_lists.bin is excluded by !**/*.bin
  • chroma_db/7f8bd9ff-1cf4-4944-81ab-e7c257a0268c/data_level0.bin is excluded by !**/*.bin
  • chroma_db/7f8bd9ff-1cf4-4944-81ab-e7c257a0268c/header.bin is excluded by !**/*.bin
  • chroma_db/7f8bd9ff-1cf4-4944-81ab-e7c257a0268c/length.bin is excluded by !**/*.bin
  • chroma_db/7f8bd9ff-1cf4-4944-81ab-e7c257a0268c/link_lists.bin is excluded by !**/*.bin
Files selected for processing (4)
  • SimplerLLM/language/llm.py (3 hunks)
  • SimplerLLM/tools/vector_db.py (1 hunks)
  • new.py (1 hunks)
  • requirements.txt (1 hunks)
Files skipped from review due to trivial changes (1)
  • requirements.txt
Additional context used
Ruff
SimplerLLM/tools/vector_db.py

1-1: os imported but unused

Remove unused import: os

(F401)

new.py

3-3: os imported but unused

Remove unused import: os

(F401)

SimplerLLM/language/llm.py

1-1: os imported but unused

Remove unused import: os

(F401)


2-2: dotenv.load_dotenv imported but unused

Remove unused import: dotenv.load_dotenv

(F401)


4-4: SimplerLLM.language.llm_providers.openai_llm.generate_response imported but unused

Remove unused import: SimplerLLM.language.llm_providers.openai_llm.generate_response

(F401)


5-5: SimplerLLM.language.llm_providers.openai_llm.generate_response_async imported but unused

Remove unused import: SimplerLLM.language.llm_providers.openai_llm.generate_response_async

(F401)


116-116: Undefined name openai_llm

(F821)


155-155: Undefined name openai_llm

(F821)


215-215: Undefined name gemini_llm

(F821)


252-252: Undefined name gemini_llm

(F821)


301-301: Undefined name anthropic_llm

(F821)


339-339: Undefined name anthropic_llm

(F821)

Additional comments not posted (4)
SimplerLLM/tools/vector_db.py (2)

25-26: Review of query_similar method.

This method is a simple wrapper around query_vectors, which is a good use of code reuse. Ensure that the behavior of query_vectors is as intended, as any change will affect this method too.


15-16: Review of store_vectors method.

The method correctly stores vectors with unique IDs. However, consider handling potential exceptions that might arise from database operations to improve robustness.
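
As an illustration of that suggestion, a minimal sketch of store_vectors with basic error handling (wrapping failures in a RuntimeError is an assumption, not something proposed in the PR):

    def store_vectors(self, texts):
        try:
            self.collection.add(
                documents=texts,
                ids=[f"id_{i}" for i in range(len(texts))],
            )
        except Exception as exc:
            # Surface database failures to the caller instead of failing silently
            raise RuntimeError(f"Failed to store {len(texts)} documents in ChromaDB") from exc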

new.py (1)

59-64: Review of main function and script entry point.

The main function and the script's entry point are well-defined. It's good practice to include such entry points in scripts intended for direct execution.

SimplerLLM/language/llm.py (1)

58-62: Review of vector storage and querying methods.

The methods store_response_as_vector and find_similar_responses are straightforward and utilize the VectorDB class effectively. Ensure that the underlying methods in VectorDB handle errors appropriately.

Comment on lines +6 to +13

    def __init__(self):
        persistence_directory = "./chroma_db"
        self.client = chromadb.PersistentClient(path=persistence_directory)
        self.embedding_function = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2")
        self.collection = self.client.get_or_create_collection(
            name="responses",
            embedding_function=self.embedding_function
        )

Review of VectorDB class constructor.

The constructor initializes the PersistentClient and sets up a collection with an embedding function. The hard-coded path for the database ("./chroma_db") could be made configurable to enhance flexibility in different environments.

- persistence_directory = "./chroma_db"
+ def __init__(self, persistence_directory="./chroma_db"):
+     self.client = chromadb.PersistentClient(path=persistence_directory)

Committable suggestion was skipped due to low confidence.
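
For reference, a sketch of what the full constructor might look like with the path made configurable (the extra collection_name parameter is an additional assumption, not part of the bot's suggestion):

    def __init__(self, persistence_directory="./chroma_db", collection_name="responses"):
        self.client = chromadb.PersistentClient(path=persistence_directory)
        self.embedding_function = embedding_functions.SentenceTransformerEmbeddingFunction(
            model_name="all-MiniLM-L6-v2"
        )
        self.collection = self.client.get_or_create_collection(
            name=collection_name,
            embedding_function=self.embedding_function,
        )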

Comment on lines +22 to +23

    def store_response(self, text):
        self.collection.add(documents=[text], ids=[f"id_{self.collection.count()}"])

Review of store_response method.

The method adds a single document to the collection. Using the collection's count as an ID is risky as it can lead to race conditions in a concurrent environment. Consider using a more robust method for generating unique IDs.

- ids=[f"id_{self.collection.count()}"]
+ import uuid
+ ids=[str(uuid.uuid4())]
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    def store_response(self, text):
-        self.collection.add(documents=[text], ids=[f"id_{self.collection.count()}"])
+    def store_response(self, text):
+        import uuid
+        self.collection.add(documents=[text], ids=[str(uuid.uuid4())])
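
A slightly more idiomatic variant would hoist the import to module level; a sketch (assuming no caller relies on the sequential id_N scheme):

# At the top of vector_db.py:
import uuid

# Inside the VectorDB class:
    def store_response(self, text):
        # uuid4 produces a collision-resistant id even under concurrent writes
        self.collection.add(documents=[text], ids=[str(uuid.uuid4())])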

Comment on lines +27 to +28

        self.vector_db = VectorDB()


Review of LLM class constructor.

The constructor initializes a VectorDB instance. It's good practice to allow dependency injection for better testing and flexibility.

- self.vector_db = VectorDB()
+ def __init__(self, vector_db=None):
+     self.vector_db = vector_db if vector_db else VectorDB()
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-        self.vector_db = VectorDB()
+    def __init__(self, vector_db=None):
+        self.vector_db = vector_db if vector_db else VectorDB()
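
If that suggestion is adopted, a test could pass a stub in place of the real database; a hypothetical sketch (FakeVectorDB is not part of the PR, and it assumes vector_db is added alongside the existing constructor arguments and that construction does not require live API credentials):

from SimplerLLM.language.llm import LLM, LLMProvider

class FakeVectorDB:
    """In-memory stand-in for VectorDB, handy in unit tests."""
    def __init__(self):
        self.stored = []

    def store_vectors(self, texts):
        self.stored.extend(texts)

    def query_similar(self, query_text):
        # Ignore the query and return everything stored so far
        return list(self.stored)

llm = LLM(provider=LLMProvider.OPENAI, model_name="gpt-3.5-turbo", vector_db=FakeVectorDB())
llm.store_response_as_vector(["hello world"])
assert llm.find_similar_responses("hello world") == ["hello world"]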

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

@coderabbitai bot left a comment

Actionable comments posted: 2

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 240dc24 and f101cd6.

Files selected for processing (1)
  • new.py (1 hunks)
Additional context used
Ruff
new.py

3-3: os imported but unused

Remove unused import: os

(F401)

Additional comments not posted (1)
new.py (1)

62-67: LGTM!

The main function is straightforward and does not require changes.

Comment on lines +51 to +61

    print("\nQuerying for similar responses:")
    for query_prompt in query_prompts:
        print(f"\nQuery: {query_prompt}")
        start_time = time.time()
        similar_responses = llm.find_similar_responses(query_prompt)
        end_time = time.time()
        print(f"Time taken: {end_time - start_time:.2f} seconds")
        print("Similar responses:")
        for i, response in enumerate(similar_responses, 1):
            print(f"{i}. {response}")

Add error handling to the querying process.

The loop for querying similar responses is clear and straightforward. However, adding error handling would improve the robustness of the test.

+ try:
  similar_responses = llm.find_similar_responses(query_prompt)
+ except Exception as e:
+     print("Error occurred:", e)
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    print("\nQuerying for similar responses:")
-    for query_prompt in query_prompts:
-        print(f"\nQuery: {query_prompt}")
-        start_time = time.time()
-        similar_responses = llm.find_similar_responses(query_prompt)
-        end_time = time.time()
-        print(f"Time taken: {end_time - start_time:.2f} seconds")
-        print("Similar responses:")
-        for i, response in enumerate(similar_responses, 1):
-            print(f"{i}. {response}")
+    print("\nQuerying for similar responses:")
+    for query_prompt in query_prompts:
+        print(f"\nQuery: {query_prompt}")
+        start_time = time.time()
+        try:
+            similar_responses = llm.find_similar_responses(query_prompt)
+        except Exception as e:
+            print("Error occurred:", e)
+            continue
+        end_time = time.time()
+        print(f"Time taken: {end_time - start_time:.2f} seconds")
+        print("Similar responses:")
+        for i, response in enumerate(similar_responses, 1):
+            print(f"{i}. {response}")
