Research on Unit Test Generator #994

Closed
Melhaya opened this issue Sep 18, 2023 · 2 comments
Melhaya commented Sep 18, 2023

Summary

This ticket provides an update on current research and development efforts toward a Unit Test Generator. The primary focus is on leveraging Large Language Models (LLMs), specifically GPT-3, to generate unit tests for programming exercises written in Python. Additionally, there is ongoing research into LLaMA models, and Sebastian has provided access to CodeOcean exercises.

Details:

1. Exploring LLMs for Unit Test Generation:

I have been actively exploring the potential of Large Language Models for automating the generation of unit tests. The main objective is to create unit tests that effectively exercise Python code, thus enhancing code quality and testing efficiency.

2. Current Use of GPT-3 API:

As of now, I am using the GPT-3 API to generate unit tests. GPT-3 has shown promising results both in generating natural-language text and on code-related tasks, and this API serves as a crucial component of my initial experimentation and prototyping.
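The approach described above can be sketched roughly as follows. This is a minimal illustration only, assuming the current OpenAI Python SDK (`openai>=1.0`); the helper name and prompt wording are hypothetical and not the project's actual implementation:

```python
# Hypothetical sketch: ask an OpenAI model to generate unit tests for an
# exercise. Helper name and prompt wording are illustrative only.

def build_test_prompt(exercise_code: str) -> str:
    """Assemble a prompt asking the model for a unittest-based test module."""
    return (
        "Write Python unit tests (using the unittest module) for the "
        "following exercise code. Return only the test code.\n\n"
        f"```python\n{exercise_code}\n```"
    )

# The actual API call requires an OPENAI_API_KEY and network access:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": build_test_prompt(exercise_code)}],
# )
# generated_tests = response.choices[0].message.content
```

The generated test module would then be saved alongside the exercise and executed to check that it at least runs against a reference solution.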

3. Research into LLaMA Models (Work in Progress):

In addition to GPT-3, I am actively researching LLaMA models for unit test generation. This research is currently a work in progress.

4. CodeOcean Exercises (Access Provided by Sebastian):

Sebastian has kindly provided access to CodeOcean exercises, which will be instrumental in my research and development efforts. These exercises will serve as valuable test cases for evaluating the effectiveness and accuracy of the unit tests generated by the system.

Next Steps:

  • Continue refining and expanding our unit test generation approach using LLMs, including GPT-3 and LLaMA models.
  • Collaborate with Sebastian to gather additional insights and feedback from CodeOcean exercises.
  • Evaluate the quality and effectiveness of the generated unit tests through testing and benchmarking.
  • Stay updated with the latest advancements in language models and incorporate relevant improvements into our project.
  • A repository with the relevant code will be shared soon for further collaboration.
Melhaya self-assigned this Sep 18, 2023
@ishaan-jaff

Hi @Melhaya, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TLDR: LiteLLM lets you use any LLM as a drop-in replacement for gpt-3.5-turbo; you can use llama/gpt/claude (100+ LLMs).

If you don't have access to certain LLMs, you can use our proxy server or spin up your own proxy server using LiteLLM.

Usage

This calls the provider API directly

from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# falcon call
response = completion(model="falcon-40b", messages=messages)


MrSerth commented Oct 20, 2024

We've implemented a unit test generator with #1261, #1537, #1496, and #1588. Any follow-up research won't be tracked in this repo, so I'll close this issue now.

MrSerth closed this as completed Oct 20, 2024