---
language:
- en
license: bsd-3-clause-clear
dataset_info:
  features:
  - name: repository
    dtype: string
  - name: repo_id
    dtype: string
  - name: target_module_path
    dtype: string
  - name: prompt
    dtype: string
  - name: relavent_test_path
    dtype: string
  - name: full_function
    dtype: string
  - name: function_name
    dtype: string
  splits:
  - name: train
    num_bytes: 5410189
    num_examples: 980
  download_size: 2045590
  dataset_size: 5410189
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems on HumanEval and MBPP. A natural question, then, is whether LLMs can match human developers on code completion. Unfortunately, existing manually crafted or simple (e.g., single-line) code generation benchmarks cannot answer this question, since such tasks fail to represent real-world software development. In addition, existing benchmarks often use weak code correctness metrics, which can lead to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark of 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. REPOCOD also has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. Each task includes 313.5 developer-written test cases on average for more reliable correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, highlighting the need for stronger LLMs that can help developers in real-world software development.
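
REPOCOD scores correctness with pass@1 over the developer-written tests. The card does not ship a scoring script, so the sketch below only illustrates the standard unbiased pass@k estimator commonly used for this metric; the function name and the example numbers are illustrative, not part of the dataset.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n generated samples for a task,
    c of which pass all developer-written tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g., 10 completions for one task, 2 of which pass every test case
print(pass_at_k(n=10, c=2, k=1))  # 0.2
```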

## Usage

```python
from datasets import load_dataset

data = load_dataset('lt-asset/REPOCOD')
print(data)

# Output:
# DatasetDict({
#     train: Dataset({
#         features: ['repository', 'repo_id', 'target_module_path', 'prompt',
#                    'relavent_test_path', 'full_function', 'function_name'],
#         num_rows: 980
#     })
# })
```
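
Since the train split is small (980 rows), it is convenient to inspect it with pandas. A short sketch, using the field names listed under Data Fields below; the aggregations are only illustrative:

```python
# Convert the train split to a pandas DataFrame for quick inspection.
df = data['train'].to_pandas()

# Number of tasks contributed by each source repository.
print(df['repository'].value_counts())

# File path and name of the first target function.
sample = df.iloc[0]
print(sample['target_module_path'], '->', sample['function_name'])
```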

## Data Fields
- repository: the source repository of the sample
- repo_id: the unique index of the sample within its source repository
- target_module_path: the path of the file containing the target function, relative to the repository root
- prompt: the developer-provided function signature and docstring
- relavent_test_path: the path to the test cases relevant to the target function
- full_function: the canonical (developer-written) solution of the target function
- function_name: the name of the target function
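
One plausible way these fields fit together for evaluation (a sketch, not an official harness): use `prompt` as the completion input for the model and `full_function` as the developer-written reference; the tests under `relavent_test_path` refer to the authors' evaluation environment.

```python
from datasets import load_dataset

ds = load_dataset('lt-asset/REPOCOD', split='train')

for example in ds.select(range(3)):
    model_input = example['prompt']        # signature + docstring to complete
    reference = example['full_function']   # developer-written solution
    tests = example['relavent_test_path']  # test file in the authors' evaluation setup
    print(example['repository'], example['function_name'], tests)
```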

## Example

```
"repository": "seaborn",                          # collected from seaborn
"repo_id": "0",                                   # first sample from seaborn 
"target_module_path": "seaborn/_core/scales.py",  # the target function is in this path
"prompt": "    def label( 
    self,
    formatter: Formatter | None = None, *,        
    like: str | Callable | None = None,           
    base: int | None | Default = default,         
    unit: str | None = None,
) -> Continuous: ....",                            # the function signature and docstring for the target function
"relevant_test_path": "/usr/src/app/target_test_cases/failed_tests_Continuous.label.txt", # Path to relevant tests for the function
"full_function": "    def label(
    self,
    formatter: Formatter | None = None, *,
    like: str | Callable | None = None,
    base: int | None | Default = default,
    unit: str | None = None,
) -> Continuous: ....",                            # the full snippet of the target function, including the function signature and docstring for the target function
"function_name": "Continuous.label"               # The name of the target function

```
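
To retrieve this particular sample programmatically (an illustrative filter over the `data` object loaded in the Usage section; the `repository` and `repo_id` values come from the example above):

```python
seaborn_first = data['train'].filter(
    lambda ex: ex['repository'] == 'seaborn' and ex['repo_id'] == '0'
)
print(seaborn_first[0]['function_name'])  # Continuous.label
```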

## Citation
```
@misc{liang2024repocod,
      title={Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'}, 
      author={Shanchao Liang and Yiran Hu and Nan Jiang and Lin Tan},
      year={2024},
      eprint={2410.21647},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2410.21647}, 
}
```