This ticket provides an update on current research and development efforts toward a Unit Test Generator. The primary focus is on leveraging Large Language Models (LLMs), specifically GPT-3, to generate unit tests for Python programming exercises. In addition, research into LLaMA models is ongoing, and Sebastian has provided access to CodeOcean exercises.
Details:
1. Exploring LLMs for Unit Test Generation:
I have been actively exploring the potential of Large Language Models for automating the generation of unit tests. The main objective is to create unit tests that effectively exercise Python code, thus enhancing code quality and testing efficiency.
2. Current Use of GPT-3 API:
As of now, I am using the GPT-3 API to generate unit tests. GPT-3 has shown promising results on both natural-language generation and code-related tasks, and the API serves as the core component of my initial experimentation and prototyping.
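As a rough illustration of this step, the sketch below shows one way to prompt GPT-3 for pytest-style tests for an exercise. The prompt template, model name, and sample exercise are assumptions for illustration, not the project's actual configuration; the API call itself is shown commented out since it requires credentials.

```python
# Hypothetical sketch: prompting GPT-3 to generate pytest unit tests
# for a CodeOcean-style Python exercise. Template and model name are
# illustrative assumptions, not the project's real setup.

def build_test_prompt(exercise_code: str) -> str:
    """Wrap an exercise's source in an instruction asking for pytest tests."""
    return (
        "Write pytest unit tests for the following Python exercise.\n"
        "Cover normal inputs and at least one edge case.\n\n"
        "```python\n" + exercise_code + "\n```\n\nTests:\n"
    )

# Hypothetical sample exercise, standing in for a real CodeOcean task.
EXERCISE = """def add(a, b):
    return a + b
"""

prompt = build_test_prompt(EXERCISE)

# The actual completion request would look roughly like this
# (requires an OpenAI API key; not executed here):
# import openai
# response = openai.Completion.create(
#     model="text-davinci-003",
#     prompt=prompt,
#     max_tokens=256,
#     temperature=0.0,
# )
# generated_tests = response["choices"][0]["text"]
```

Keeping the prompt construction separate from the API call makes it easy to swap in other models (e.g. LLaMA) later without changing the surrounding pipeline.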
3. Research into LLaMA Models (Work in Progress):
In addition to GPT-3, I am actively researching LLaMA models for unit test generation. This research is currently a work in progress.
4. CodeOcean Exercises (Access Provided by Sebastian):
Sebastian has kindly provided access to CodeOcean exercises, which will be instrumental in my research and development efforts. These exercises will serve as valuable test cases for evaluating the effectiveness and accuracy of the unit tests generated by the system.
Next Steps:
Continue refining and expanding our unit test generation approach using LLMs, including GPT-3 and LLaMA models.
Collaborate with Sebastian to gather additional insights and feedback from CodeOcean exercises.
Evaluate the quality and effectiveness of the generated unit tests through testing and benchmarking.
Stay updated with the latest advancements in language models and incorporate relevant improvements into our project.
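For the evaluation step above, one minimal approach is to execute the generated tests directly against an exercise's reference solution and count how many pass. The sketch below assumes a hard-coded stand-in for model output (not real generated tests) and a trivial sample exercise; a real benchmark would run over the full CodeOcean set.

```python
# Minimal sketch of benchmarking generated tests: run them against the
# exercise solution in a shared namespace and count passing test functions.
# SOLUTION and GENERATED are illustrative stand-ins, not real model output.

def run_generated_tests(solution_code: str, test_code: str) -> tuple:
    """Exec solution and tests together; return (passed, total) counts."""
    namespace = {}
    exec(solution_code, namespace)   # define the exercise's functions
    exec(test_code, namespace)       # define the generated test functions
    tests = [fn for name, fn in namespace.items()
             if name.startswith("test_") and callable(fn)]
    passed = 0
    for test in tests:
        try:
            test()
            passed += 1
        except AssertionError:
            pass                     # a failing assertion counts as a failed test
    return passed, len(tests)

SOLUTION = "def add(a, b):\n    return a + b\n"
GENERATED = (
    "def test_add_positive():\n    assert add(2, 3) == 5\n"
    "def test_add_zero():\n    assert add(0, 0) == 0\n"
)

passed, total = run_generated_tests(SOLUTION, GENERATED)
```

Beyond a raw pass count, metrics such as line coverage of the solution would give a fuller picture of test quality.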
A repository with the relevant code will be shared soon for further collaboration.
We've implemented a unit test generator with #1261, #1537, #1496, and #1588. Follow-up research won't be tracked in this repo, so I'll close this issue now.