CodiumAI Cover-Agent: An AI-Powered Tool for Automated Test Generation and Code Coverage Enhancement! 💻🤖🧪🐞
Welcome to Cover-Agent. This focused project utilizes Generative AI to automate and enhance the generation of tests (currently mostly unit tests), aiming to streamline development workflows. Cover-Agent can run via a terminal, and is planned to be integrated into popular CI platforms.
We invite the community to collaborate and help extend the capabilities of Cover Agent, continuing its development as a cutting-edge solution in the automated unit test generation domain. We also wish to inspire researchers to leverage this open-source tool to explore new test-generation techniques.
This tool is part of a broader suite of utilities designed to automate the creation of unit tests for software projects. Utilizing advanced Generative AI models, it aims to simplify and expedite the testing process, ensuring high-quality software development. The system comprises several components:
Before you begin, make sure you have the following:
- OPENAI_API_KEY set in your environment variables, which is required for calling the OpenAI API.

If running directly from the repository you will also need:
- Python installed on your system.
- Poetry installed to manage dependencies.
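For example, on Linux or macOS you can set the API key for your current shell session (replace the placeholder with your own key):
export OPENAI_API_KEY="<your_openai_api_key>"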
The Cover Agent can be installed as a Python Pip package or run as a standalone executable.
To install the Python Pip package directly via GitHub run the following command:
pip install git+https://github.com/Codium-ai/cover-agent.git
The binary can be run without any Python environment installed on your system (e.g. within a Docker container that does not contain Python). You can download the release for your system by navigating to the project's release page.
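For instance, after downloading a Linux release you might make the file executable and check that it runs; the file name below is illustrative (release assets are named per platform), and the --help flag is assumed to be the CLI's standard help option:
chmod +x ./cover-agent
./cover-agent --help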
Run the following command to install all the dependencies and run the project from source:
poetry install

After downloading the executable or installing the Pip package you can run the Cover Agent to generate and validate unit tests. Execute it from the command line by using the following command:
cover-agent \
--source-file-path "<path_to_source_file>" \
--test-file-path "<path_to_test_file>" \
--code-coverage-report-path "<path_to_coverage_report>" \
--test-command "<test_command_to_run>" \
--test-command-dir "<directory_to_run_test_command>" \
--coverage-type "<type_of_coverage_report>" \
--desired-coverage <desired_coverage_between_0_and_100> \
--max-iterations <max_number_of_llm_iterations> \
--included-files "<optional_list_of_files_to_include>"
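Note that Cover Agent invokes the --test-command itself (from --test-command-dir) on each iteration, and that command is expected to regenerate the coverage report at --code-coverage-report-path. For a pytest project this typically looks like the command below; the flags are illustrative and should be adapted to your project:
pytest --cov=. --cov-report=xml --cov-report=term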
You can use the example projects within this repository to run this code as a test. Follow the steps in the README.md file located in the templated_tests/python_fastapi/ directory, then return to the root of the repository and run the following command to add tests to the Python FastAPI example:
cover-agent \
--source-file-path "templated_tests/python_fastapi/app.py" \
--test-file-path "templated_tests/python_fastapi/test_app.py" \
--code-coverage-report-path "templated_tests/python_fastapi/coverage.xml" \
--test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
--test-command-dir "templated_tests/python_fastapi" \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 10

For an example using Go, cd into templated_tests/go_webservice and set up the project following its README.md.
To work with coverage reporting, you need to install gocov and gocov-xml. Run the following commands to install these tools:
go install github.com/axw/gocov/gocov@v1.1.0
go install github.com/AlekSi/gocov-xml@v1.1.0

Then run the following command:
cover-agent \
--source-file-path "app.go" \
--test-file-path "app_test.go" \
--code-coverage-report-path "coverage.xml" \
--test-command "go test -coverprofile=coverage.out && gocov convert coverage.out | gocov-xml > coverage.xml" \
--test-command-dir $(pwd) \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 1
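The chained --test-command above first runs the Go tests with a coverage profile and then converts that profile into a Cobertura XML report; run step by step, it is equivalent to:
go test -coverprofile=coverage.out
gocov convert coverage.out | gocov-xml > coverage.xml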
Try adding more tests to this project itself by running this command at the root of this repository:

poetry run cover-agent \
--source-file-path "cover_agent/main.py" \
--test-file-path "tests/test_main.py" \
--code-coverage-report-path "coverage.xml" \
--test-command "poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO" \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 1 \
--model "gpt-4o"Note: If you are using Poetry then use the poetry run cover-agent command instead of the cover-agent run command.
A few debug files will be written locally within the repository (they are included in the .gitignore):
- generated_prompt.md: The full prompt that is sent to the LLM
- run.log: A copy of the logger output that also gets dumped to your stdout
- test_results.html: A results table that contains the following for each generated test:
  - stderr
  - stdout

This project uses LiteLLM to communicate with OpenAI and other hosted LLMs (supporting 100+ LLMs to date). To use a model other than the OpenAI default you'll need to:
- Export any environment variables required by your chosen model provider.
- Pass the model name via the --model option when calling Cover Agent.

For example (as found in the LiteLLM Quick Start guide):
export VERTEX_PROJECT="hardy-project"
export VERTEX_LOCATION="us-west"
cover-agent \
...
--model "vertex_ai/gemini-pro"This section discusses the development of this project.
This section discusses the development of this project.

Before merging to main make sure to manually increment the version number in cover_agent/version.txt at the root of the repository.
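For example, assuming the file contains only the version string, you could bump it like this (the version shown is a placeholder):
echo "x.y.z" > cover_agent/version.txt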
Set up your development environment by running the poetry install command as you did above.
Note: for older versions of Poetry you may need to include the --dev option to install Dev dependencies.
After setting up your environment run the following command:
poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO
This will also generate all of the logs and output reports that are produced by the CI pipeline defined in .github/workflows/ci_pipeline.yml.
Below is the roadmap of planned features, with the current implementation status:
CodiumAI's mission is to enable busy dev teams to increase and maintain their code integrity. We offer various tools, including "Pro" versions of our open-source tools, which are meant to handle enterprise-level code complexity and are multi-repo codebase aware.