Update README.md
README.md (CHANGED)
```diff
@@ -105,7 +105,7 @@ We have two data files that are required for multiple tasks.
 1. `problem_descriptions.jsonl`
 2. `unittest_db.json`

-You can find these two files in the root directory of the [main](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval/tree/main) branch of huggingface dataset repository. To avoid data
+You can find these two files in the root directory of the [main](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval/tree/main) branch of the Hugging Face dataset repository. To avoid data redundancy, we didn't include these data with the relevant tasks; rather, we add a unique id `src_uid` to retrieve them.

 ## Structure of `problem_descriptions.jsonl`

```
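The completed paragraph (new line 108) explains the linking scheme: task samples store only a `src_uid`, and the full problem description and unit tests are fetched from the two shared files. A minimal sketch of that lookup, assuming both files have been downloaded from the repository root into the current directory; only the file names and key names come from the README, the rest is illustrative:

```python
import json

# Each line of problem_descriptions.jsonl is one JSON record describing a problem.
problems = {}
with open("problem_descriptions.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        problems[record["src_uid"]] = record

# unittest_db.json is a single JSON dict keyed by src_uid.
with open("unittest_db.json", encoding="utf-8") as f:
    unittest_db = json.load(f)

# A task sample carries only src_uid; use it to recover the shared data.
# The uid below is the example key shown later in this README.
src_uid = "db884d679d9cfb1dc4bc511f83beedda"
problem = problems[src_uid]
tests = unittest_db[src_uid]
print(problem["tags"])
```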
```diff
@@ -140,19 +140,19 @@ A sample,
 ### Key Definitions

 1. `description`: Problem description in textual format, math operations are written in latex.
-2. `input_from`: How the program should take unit test.
+2. `input_from`: How the program should take the unit test.
 3. `output_to`: Where the program should output the result of the unit test.
 4. `time_limit`: Time limit to solve the problem.
 5. `memory_limit`: Memory limit to solve the problem.
-6. `input_spec`: How and what order the input will be given to the program
-7. `output_spec`: How the outputs should be printed. Most of the time the unit test results are matched with *exact string match* or *floating point comparison* with a precision boundary.
+6. `input_spec`: How and in what order the input will be given to the program. It also includes the data range, types, and sizes.
+7. `output_spec`: How the outputs should be printed. Most of the time the unit test results are matched with an *exact string match* or *floating point comparison* with a precision boundary.
 8. `sample_inputs`: A sample input for the code that is expected to solve the problem described in `description`.
 9. `sample_outputs`: The expected output for the `sample_input` that is expected to solve the problem described in `description`.
 10. `notes`: Explanation of `sample_inputs` & `sample_outputs`.
 11. `tags`: The problem categories.
-12. `src_uid`: The unique id of the problem. This ID is referred in the task data samples instead of putting all
-13. `difficulty`: How difficult is it to solve the problem for a human (annotated by an expert human)
-14. `created_at`: The
+12. `src_uid`: The unique id of the problem. This ID is referred to in the task data samples instead of repeating all this information.
+13. `difficulty`: How difficult it is to solve the problem for a human (annotated by an expert human).
+14. `created_at`: The Unix timestamp when the problem was released. Use the `datetime` lib in Python to parse it to a human-readable format.

 ## Structure of `unittest_db.json`

```
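The completed `created_at` entry (new line 155) recommends Python's `datetime` for the conversion. A one-line sketch of that parsing; the timestamp value is invented for illustration:

```python
from datetime import datetime, timezone

created_at = 1285082100  # example Unix timestamp, not taken from the dataset
print(datetime.fromtimestamp(created_at, tz=timezone.utc).isoformat())
# -> 2010-09-21T15:15:00+00:00
```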
```diff
@@ -182,8 +182,8 @@ unittest_db = {
 ### Key Definitions

 1. `unittest_db.json` dict keys i.e., `db884d679d9cfb1dc4bc511f83beedda` are the `src_uid` from `problem_descriptions.jsonl`.
-2. `input
-3. `output
+2. `input`: Input of the unit test.
+3. `output`: List of expected outputs for the unit test.

 # Citation

```
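Taken together with `output_spec` above (unit test results are matched by exact string match or floating-point comparison with a precision boundary), the completed `input`/`output` definitions suggest a simple judging loop. A sketch under those stated rules; the per-test record shape, the `1e-6` tolerance, and the `run_candidate` callable are assumptions for illustration, not part of the dataset:

```python
import math

def outputs_match(expected: str, actual: str, eps: float = 1e-6) -> bool:
    """Try an exact string match first, then token-wise float comparison."""
    if expected.strip() == actual.strip():
        return True
    exp_tok, act_tok = expected.split(), actual.split()
    if len(exp_tok) != len(act_tok):
        return False
    try:
        return all(math.isclose(float(e), float(a), abs_tol=eps)
                   for e, a in zip(exp_tok, act_tok))
    except ValueError:  # some token was not numeric
        return False

def judge(tests, run_candidate):
    # Assumed shape: each test has an "input" string and a list of
    # acceptable "output" strings, per the key definitions above.
    return all(
        any(outputs_match(exp, run_candidate(t["input"])) for exp in t["output"])
        for t in tests
    )
```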