A popular evaluation framework for code generation models is the [pass@k](https://huggingface.co/metrics/code_eval) metric on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset, which was introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. In the pass@k metric, k code samples are generated per problem; a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported. Below are some examples for the selected models.
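As a quick illustration, the linked `code_eval` metric can be loaded through the `evaluate` library. This is a minimal sketch, not the exact evaluation script used here: the toy problem and candidate completions below are made up for illustration and are not part of HumanEval, and executing model-generated code requires explicitly opting in via the `HF_ALLOW_CODE_EVAL` environment variable.

```python
import os

# Required opt-in: the metric executes untrusted, model-generated code.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

from evaluate import load

code_eval = load("code_eval")

# One toy problem (illustrative, not from HumanEval): a unit test and two candidate completions.
test_cases = ["assert add(2, 3) == 5"]
candidates = [[
    "def add(a, b):\n    return a * b",   # fails the test
    "def add(a, b):\n    return a + b",   # passes the test
]]

# Compute pass@1 and pass@2 over the two candidates.
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```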
For most models, we sample 200 candidate program completions and compute pass@1, pass@10, and pass@100 using an unbiased sampling estimator (a sketch of this estimator follows the table). The table below shows the HumanEval scores of CodeParrot, InCoder, GPT-neo, GPT-J, and Codex (not open-source).
| Model | pass@1 | pass@10 | pass@100|
|-------|--------|---------|---------|
|CodeParrot 🦜 (110M) | 3.80% | 6.57% | 12.78% |
|CodeParrot 🦜 (1.5B) | 3.58% | 8.03% | 14.96% |
|||||
|InCoder (6.7B) | 15.2% | 27.8% | 47.00% |
|||||
|Codex (25M)| 3.21% | 7.1% | 12.89%|
|Codex (85M)| 8.22% | 12.81% | 22.40% |
|Codex (300M)| 13.17%| 20.37% | 36.27% |
|Codex (12B)| 28.81%| 46.81% | 72.31% |
|||||
|GPT-neo (125M)| 0.75% | 1.88% | 2.97% |
|GPT-neo (1.5B)| 4.79% | 7.47% | 16.30% |
|GPT-neo (2.7B)| 6.41% | 11.27% | 21.37% |
|GPT-J (6B)| 11.62% | 15.74% | 27.74% |
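
For reference, the unbiased pass@k estimator described in the Codex paper can be sketched as the short function below, where `n` is the number of generated samples per problem (200 here), `c` the number of samples that pass the unit tests, and `k` the evaluation budget. The function name and example values are illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: 1 - C(n - c, k) / C(n, k),
    computed in a numerically stable way."""
    if n - c < k:
        # Every size-k subset of the n samples contains at least one correct solution.
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples for one problem, 10 of which pass the tests.
print(pass_at_k(n=200, c=10, k=10))
```

The per-problem estimates are then averaged over all 164 HumanEval problems to obtain the scores reported in the table.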