Update README.md
README.md CHANGED

@@ -60,7 +60,7 @@ configs:
 path: data/test-*
 ---
 # Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions
-CanItEdit is a benchmark for evaluating LLMs on instructional code editing, the task of updating a program given a natural language instruction. The benchmark contains
+CanItEdit is a benchmark for evaluating LLMs on instructional code editing, the task of updating a program given a natural language instruction. The benchmark contains 105 hand-crafted Python programs with before and after code blocks, two types of natural language instructions (descriptive and lazy), and a hidden test suite.
 
 The dataset’s dual natural language instructions test model efficiency in two scenarios:
 1) Descriptive: Detailed instructions replicate situations where users provide specific specifications or
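The added README text describes each benchmark item as a before/after code pair with two instruction variants and a test suite. A minimal toy sketch of that shape, assuming illustrative field names (`before`, `after`, `instruction_descriptive`, `instruction_lazy`) that are not necessarily the dataset's actual schema:

```python
# Hypothetical CanItEdit-style record: a buggy "before" program, the
# expected "after" program, and two instruction styles for the same edit.
# Field names here are illustrative assumptions, not the real schema.
record = {
    "before": "def add(a, b):\n    return a - b\n",
    "after": "def add(a, b):\n    return a + b\n",
    "instruction_descriptive": "Fix add so it returns the sum of a and b.",
    "instruction_lazy": "fix the bug",
}

def apply_model_edit(before: str, instruction: str) -> str:
    # Stand-in for an LLM call: hard-codes the intended fix for this toy case.
    return before.replace("a - b", "a + b")

edited = apply_model_edit(record["before"], record["instruction_lazy"])
print(edited == record["after"])  # True
```

In the real benchmark the comparison would be functional (running the hidden test suite against the edited program) rather than an exact string match.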