arXiv:2305.00418

An Empirical Study of Using Large Language Models for Unit Test Generation

Published on Apr 30, 2023
Abstract

A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both. Although code generation models (e.g., GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can successfully be used for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit test cases. We used two benchmarks (HumanEval and EvoSuite SF110) to investigate the effect of context generation on the unit test generation process. We evaluated the models based on compilation rates, test correctness, test coverage, and test smells. We found that the Codex model achieved above 80% coverage on the HumanEval dataset, but no model achieved more than 2% coverage on the EvoSuite SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests.
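As a minimal illustrative sketch (not taken from the paper or its replication package), the snippet below shows the kind of prompt the abstract describes, built from existing code plus a comment, together with hypothetical model-generated tests exhibiting the two reported test smells. The `add` function, the prompt wording, and the test bodies are assumptions chosen for illustration only.

```python
# Illustrative sketch: a unit-test-generation prompt assembled from existing code
# and a code comment, plus hypothetical generated tests showing the two smells
# mentioned in the abstract. None of this is the paper's actual prompt or output.
import inspect


def add(a: int, b: int) -> int:
    """Return the sum of a and b."""
    return a + b


# Prompt built from a natural-language comment and the source of the function under
# test; a model such as Codex, GPT-3.5-Turbo, or StarCoder would complete it.
prompt = "# Write pytest unit tests for the following function.\n" + inspect.getsource(add)


def test_add_duplicated_assert():
    # "Duplicated Assert" smell: the same assertion is repeated verbatim,
    # adding no additional coverage or failure information.
    assert add(2, 3) == 5
    assert add(2, 3) == 5


def test_add_empty():
    # "Empty Test" smell: a test body with no assertions, so it can never fail.
    pass


if __name__ == "__main__":
    print(prompt)
```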
