Update README.md

README.md

Large language models (LLMs) have achieved high accuracy, i.e., more than 90 pass@1, in solving Python coding problems in HumanEval and MBPP.
To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) compared to existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluations of ten LLMs, none achieves more than 30 pass@1 on REPOCOD, revealing the need for stronger LLMs that can help developers in real-world software development.
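
Since the results above are reported as pass@1, here is a small reference sketch of the standard unbiased pass@k estimator from Chen et al. (2021); this helper is illustrative and not taken from the REPOCOD tooling:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per task
    c: number of samples that pass all tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a running product for stability
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 10 samples per task, 2 correct -> pass@1 = 0.2
print(pass_at_k(10, 2, 1))
```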
* For more details on data collection and evaluation results, please refer to our arXiv [preprint](https://arxiv.org/abs/2410.21647).
* Example code for downloading repositories, preparing repository snapshots, and running test cases for evaluation is provided at [code](https://github.com/lt-asset/REPOCOD); a simplified sketch of the evaluation loop is shown below.
* Check our [Leaderboard](https://lt-asset.github.io/REPOCOD/) for preliminary results using GPT-4o with BM25 and dense retrieval.
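
The sketch below illustrates only the last step of that flow: running a task's developer-written tests inside a prepared repository snapshot. The pytest invocation, the timeout, and the helper name are assumptions for illustration; the official scripts in the linked repository are authoritative.

```python
# Illustrative sketch only: run one task's developer-written tests in a
# prepared repository snapshot. The pytest command and timeout below are
# assumptions; see https://github.com/lt-asset/REPOCOD for the real scripts.
import subprocess
from pathlib import Path

def run_task_tests(repo_dir: Path, test_path: str, timeout_s: int = 600) -> bool:
    """Return True iff all tests in `test_path` pass inside `repo_dir`."""
    try:
        result = subprocess.run(
            ["python", "-m", "pytest", test_path, "-q"],
            cwd=repo_dir,
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False  # treat hanging test runs as failures
    return result.returncode == 0
```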
## Usage
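
A minimal loading sketch using the Hugging Face `datasets` library; the hub ID `lt-asset/REPOCOD` is an assumption based on the project's GitHub organization, so adjust it if the published ID differs:

```python
# Minimal sketch: load REPOCOD via the `datasets` library.
# NOTE: the hub ID below is an assumption; adjust if it differs.
from datasets import load_dataset

dataset = load_dataset("lt-asset/REPOCOD")
print(dataset)  # prints the DatasetDict structure shown below
```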
```
DatasetDict({
    ...
})
```
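
Individual tasks can then be indexed out of the loaded object; the split name `test` below is an assumption, so verify it against the printed structure first:

```python
# Assumed split name; verify against the printed DatasetDict above.
sample = dataset["test"][0]
print(sorted(sample.keys()))  # inspect the available task fields
```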
## Citation
```
@misc{liang2024repocod,