update table
evaluation/intro.txt (+1 -107)
A popular evaluation framework for code generation models is the [pass@k](https://huggingface.co/metrics/code_eval) metric on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset, introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. In the pass@k metric, k code samples are generated per problem, a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported.
In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed using an unbiased sampling estimator. Table 1 below shows the HumanEval scores of CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source).
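
Concretely, for each problem we generate n samples of which c pass the unit tests, and estimate the probability that at least one of k randomly drawn samples is correct; the reported score is the mean of this estimate over all problems. Here is a minimal numpy sketch of the per-problem estimate, following the numerically stable form given in the Codex paper:

```python
import numpy as np

def estimate_pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased per-problem estimate of pass@k:
    n = total samples generated, c = samples that pass the unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), evaluated as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```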

| Model | pass@1 | pass@10 | pass@100 |
|-------|--------|---------|----------|
| CodeParrot (110M) | 3.80% | 6.57% | 12.78% |
| CodeParrot (1.5B) | 3.58% | 8.03% | 14.96% |
|||||
| InCoder (6.7B) | 15.2% | 27.8% | 47.00% |
|||||
| PolyCoder (160M) | 2.13% | 3.35% | 4.88% |
| PolyCoder (400M) | 2.96% | 5.29% | 11.59% |
| PolyCoder (2.7B) | 5.59% | 9.84% | 17.68% |
|||||
| CodeGen-Mono (350M) | 12.76% | 23.11% | 35.19% |
| CodeGen-Mono (2.7B) | 23.70% | 36.64% | 57.01% |
| CodeGen-Mono (16.1B) | **29.28%** | **49.86%** | **75.00%** |
|||||
| Codex (25M) | 3.21% | 7.1% | 12.89% |
| Codex (300M) | 13.17% | 20.37% | 36.27% |
| Codex (12B) | 28.81% | 46.81% | 72.31% |
We can load the HumanEval dataset and the pass@k metric from 🤗 [`datasets`](https://huggingface.co/docs/datasets/index):

```python
from datasets import load_dataset, load_metric

human_eval = load_dataset("openai_humaneval")
code_eval_metric = load_metric("code_eval")
```

We can easily compute the pass@k for a problem that asks for the implementation of a function that sums two integers:

```python
import os

# code_eval executes model-generated code, so it must be enabled explicitly
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

test_cases = ["assert add(2,3)==5"]
candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
pass_at_k, results = code_eval_metric.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)
{'pass@1': 0.5, 'pass@2': 1.0}
```

To better understand how the pass@k metric works, we will illustrate it with some concrete examples. We select two problems from the HumanEval dataset, see how CodeParrot 🦜 (110M) performs on them, and check which code completions pass their unit tests:

**Problem 1:**

```python
from typing import List


def separate_paren_groups(paren_string: str) -> List[str]:
    """ Input to this function is a string containing multiple groups of nested parentheses. Your goal is to
    separate those group into separate strings and return the list of those.
    Separate groups are balanced (each open brace is properly closed) and not nested within each other
    Ignore any spaces in the input string.
    >>> separate_paren_groups('( ) (( )) (( )( ))')
    ['()', '(())', '(()())']
    """
```

**Problem 2:**

```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
```

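Both prompts are taken verbatim from the dataset. Assuming the standard ordering of the `test` split, where these two problems appear as `HumanEval/1` and `HumanEval/2`, they can be inspected directly:

```python
# Look up the two problems above in the dataset's "test" split
problem_1 = human_eval["test"][1]  # task_id "HumanEval/1": separate_paren_groups
problem_2 = human_eval["test"][2]  # task_id "HumanEval/2": truncate_number
print(problem_1["prompt"])  # the function signature and docstring shown above
print(problem_2["test"])    # the unit tests the candidates must pass
```
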
For each problem, instead of 200 candidate solutions, we will only generate 20 samples for illustration purposes. We use nucleus sampling with `top_p=0.95` and `temperature=0.2`, and sample tokens from the model until we encounter a stop sequence indicating the end of a method: `'\nclass'`, `'\ndef'`, `'\n#'`, `'\nif'`, or `'\nprint'`. For more details about decoding strategies for language generation, we recommend this [blog](https://huggingface.co/blog/how-to-generate).
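
Sketched below is one way to set this up with `transformers`, assuming the `codeparrot/codeparrot-small` checkpoint corresponds to the 110M model; the stop-sequence handling is a plain string search over the decoded completion:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

STOP_SEQUENCES = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")

def truncate_at_stop(completion: str) -> str:
    """Cut a completion at the earliest stop sequence marking the end of a method."""
    for stop in STOP_SEQUENCES:
        idx = completion.find(stop)
        if idx != -1:
            completion = completion[:idx]
    return completion

def generate_candidates(prompt: str, n_samples: int = 20) -> list:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        temperature=0.2,
        max_new_tokens=256,  # generation budget per sample, an assumption here
        num_return_sequences=n_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens, then truncate each sample at the first stop sequence
    prompt_len = inputs["input_ids"].shape[1]
    return [truncate_at_stop(tokenizer.decode(out[prompt_len:], skip_special_tokens=True))
            for out in outputs]
```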

**Remark**:

Regarding the temperature parameter: in the [CodeGen](https://github.com/salesforce/CodeGen) paper, the authors observed that the best-performing temperature increases as the number of permitted samples k increases. When a model is only allowed a few samples to pass the unit tests, it is beneficial to exploit the learned distribution, through a low temperature, and select candidates that are likely to pass. But when a model is allowed more chances with a high k, using a higher sampling temperature to tilt the learned distribution lets it explore diverse samples, making it more likely to synthesize a correct program.
For our experiment, we compute pass@1, pass@10, and pass@20, each corresponding to the unit-test pass rate when selecting respectively 1, 10, and 20 samples from the candidate solutions.
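
A sketch of the corresponding metric call, assuming `test_cases` holds the unit tests of the two problems and `candidates` the 20 generated completions for each:

```python
pass_at_k, results = code_eval_metric.compute(
    references=test_cases,   # one unit-test string per problem
    predictions=candidates,  # 20 candidate completions per problem
    k=[1, 10, 20],
)
print(pass_at_k)
```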

```
Results: {'pass@1': 0.0750, 'pass@10': 0.4473, 'pass@20': 0.5}
```

If we take a closer look at the unit test results for each candidate solution in the two problems, we find that 3 candidates passed the tests for the second problem and none did for the first. This means that we have 3 correct solutions among 40, which corresponds to our pass@1 value `3/40 = 0.075`. The pass@10 and pass@20 scores are higher, because the more samples we select from the candidate completions, the more likely we are to include a correct implementation. As for pass@20, it is `1/2 = 0.5`: if we select all 20 candidates for each problem, the second problem gets solved, which gives a 50% success rate. If you are curious about the candidate solutions that passed the tests, they all implemented this function:

```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1
```
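
As a sanity check, plugging the per-problem counts into the estimator sketched earlier reproduces the reported pass@1:

```python
# 20 samples per problem; 0 passed for problem 1 and 3 passed for problem 2
scores = [estimate_pass_at_k(n=20, c=c, k=1) for c in (0, 3)]
print(sum(scores) / len(scores))  # 0.075
```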