parquet-converter committed
Commit 0ea574e · 1 parent: f993a73

Update parquet files

.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,275 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - apache-2.0
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - question-answering
- - multiple-choice
- task_ids:
- - multiple-choice-qa
- - open-domain-qa
- paperswithcode_id: medmcqa
- pretty_name: MedMCQA
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: opa
-     dtype: string
-   - name: opb
-     dtype: string
-   - name: opc
-     dtype: string
-   - name: opd
-     dtype: string
-   - name: cop
-     dtype:
-       class_label:
-         names:
-           0: a
-           1: b
-           2: c
-           3: d
-   - name: choice_type
-     dtype: string
-   - name: exp
-     dtype: string
-   - name: subject_name
-     dtype: string
-   - name: topic_name
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 131904057
-     num_examples: 182822
-   - name: test
-     num_bytes: 1447829
-     num_examples: 6150
-   - name: validation
-     num_bytes: 2221468
-     num_examples: 4183
-   download_size: 55285460
-   dataset_size: 135573354
- ---
-
- # Dataset Card for MedMCQA
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** https://medmcqa.github.io
- - **Repository:** https://github.com/medmcqa/medmcqa
- - **Paper:** [MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering](https://proceedings.mlr.press/v174/pal22a)
- - **Leaderboard:** https://paperswithcode.com/dataset/medmcqa
- - **Point of Contact:** [Aaditya Ura](mailto:aadityaura@gmail.com)
-
- ### Dataset Summary
-
- MedMCQA is a large-scale multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.
-
- MedMCQA contains more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
-
- Each sample contains a question, the correct answer(s), and the other options, and requires deeper language understanding: the dataset tests 10+ reasoning abilities of a model across a wide range of medical subjects and topics. A detailed explanation of the solution is provided along with the above information.
-
- MedMCQA provides an open-source dataset for the Natural Language Processing community.
- It is expected that this dataset will facilitate future research toward better QA systems.
- The dataset contains questions about the following subjects:
-
- - Anesthesia
- - Anatomy
- - Biochemistry
- - Dental
- - ENT
- - Forensic Medicine (FM)
- - Obstetrics and Gynecology (O&G)
- - Medicine
- - Microbiology
- - Ophthalmology
- - Orthopedics
- - Pathology
- - Pediatrics
- - Pharmacology
- - Physiology
- - Psychiatry
- - Radiology
- - Skin
- - Preventive & Social Medicine (PSM)
- - Surgery
-
- ### Supported Tasks and Leaderboards
-
- multiple-choice-QA, open-domain-QA: The dataset can be used to train models for multiple-choice question answering and open-domain question answering. Questions in these exams are challenging and generally require deeper domain and language understanding, as they test 10+ reasoning abilities across a wide range of medical subjects and topics.
-
- ### Languages
-
- The questions and answers are available in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-   "question": "A 40-year-old man presents with 5 days of productive cough and fever. Pseudomonas aeruginosa is isolated from a pulmonary abscess. CBC shows an acute effect characterized by marked leukocytosis (50,000 mL) and the differential count reveals a shift to left in granulocytes. Which of the following terms best describes these hematologic findings?",
-   "exp": "Circulating levels of leukocytes and their precursors may occasionally reach very high levels (>50,000 WBC mL). These extreme elevations are sometimes called leukemoid reactions because they are similar to the white cell counts observed in leukemia, from which they must be distinguished. The leukocytosis occurs initially because of the accelerated release of granulocytes from the bone marrow (caused by cytokines, including TNF and IL-1) There is a rise in the number of both mature and immature neutrophils in the blood, referred to as a shift to the left. In contrast to bacterial infections, viral infections (including infectious mononucleosis) are characterized by lymphocytosis Parasitic infestations and certain allergic reactions cause eosinophilia, an increase in the number of circulating eosinophils. Leukopenia is defined as an absolute decrease in the circulating WBC count.",
-   "cop": 1,
-   "opa": "Leukemoid reaction",
-   "opb": "Leukopenia",
-   "opc": "Myeloid metaplasia",
-   "opd": "Neutrophilia",
-   "subject_name": "Pathology",
-   "topic_name": "Basic Concepts and Vascular changes of Acute Inflammation",
-   "id": "4e1715fe-0bc3-494e-b6eb-2d4617245aef",
-   "choice_type": "single"
- }
- ```
161
- ### Data Fields
162
-
163
- - `id` : a string question identifier for each example
164
- - `question` : question text (a string)
165
- - `opa` : Option A
166
- - `opb` : Option B
167
- - `opc` : Option C
168
- - `opd` : Option D
169
- - `cop` : Correct option, i.e., 1,2,3,4
170
- - `choice_type` ({"single", "multi"}): Question choice type.
171
- - "single": Single-choice question, where each choice contains a single option.
172
- - "multi": Multi-choice question, where each choice contains a combination of multiple suboptions.
173
- - `exp` : Expert's explanation of the answer
174
- - `subject_name` : Medical Subject name of the particular question
175
- - `topic_name` : Medical topic name from the particular subject
176
-
- ### Data Splits
-
- The goal of MedMCQA is to emulate the rigor of real-world medical exams. To enable that, a predefined split of the dataset is provided. The split is by exams rather than by individual questions, which also supports the reusability and generalization ability of models.
-
- The training set of MedMCQA consists of all the collected mock & online test series, whereas the test set consists of all AIIMS PG exam MCQs (years 1991-present). The development set consists of NEET PG exam MCQs (years 2001-present) to approximate real exam evaluation.
-
- Similar questions across the train, test, and dev sets were removed based on similarity. The final split sizes are as follows:
-
- |                 | Train   | Test   | Valid  |
- | --------------- | ------- | ------ | ------ |
- | Question #      | 182,822 | 6,150  | 4,183  |
- | Vocab           | 94,231  | 11,218 | 10,800 |
- | Max Ques tokens | 220     | 135    | 88     |
- | Max Ans tokens  | 38      | 21     | 25     |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- Before this attempt, very little work had been done to construct biomedical MCQA datasets (Vilares and Gómez-Rodríguez, 2019), and the existing ones are (1) mostly small, containing at most a few thousand questions, and (2) cover a limited number of medical topics and subjects. This paper addresses those limitations by introducing MedMCQA, a new large-scale multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Historical exam questions from official websites - AIIMS & NEET PG (1991-present). The raw data was collected from open websites and books.
-
- #### Who are the source language producers?
-
- The dataset was created by Ankit Pal, Logesh Kumar Umapathi and Malaikannan Sankarasubbu.
-
- ### Annotations
-
- #### Annotation process
-
- The dataset does not contain any additional annotations.
-
- #### Who are the annotators?
-
- [Needs More Information]
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- [Needs More Information]
-
- ### Citation Information
-
- If you find this dataset useful in your research, please consider citing the dataset paper:
-
- ```
- @InProceedings{pmlr-v174-pal22a,
-   title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
-   author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
-   booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
-   pages = {248--260},
-   year = {2022},
-   editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
-   volume = {174},
-   series = {Proceedings of Machine Learning Research},
-   month = {07--08 Apr},
-   publisher = {PMLR},
-   pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
-   url = {https://proceedings.mlr.press/v174/pal22a.html},
-   abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
- }
- ```
-
- ### Contributions
-
- Thanks to [@monk1337](https://github.com/monk1337) for adding this dataset.
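The `cop` convention in the card above can be sketched briefly. A minimal example, assuming the card's 1-indexed convention (`cop` = 1 selects `opa`); the record values mirror the data instance shown earlier:

```python
# Resolve the correct option text for a record, assuming the card's
# 1-indexed `cop` convention (1 -> opa, ..., 4 -> opd).
record = {
    "opa": "Leukemoid reaction",
    "opb": "Leukopenia",
    "opc": "Myeloid metaplasia",
    "opd": "Neutrophilia",
    "cop": 1,
}
option_keys = ["opa", "opb", "opc", "opd"]
answer = record[option_keys[record["cop"] - 1]]
print(answer)  # Leukemoid reaction
```

Note that after loading through the `datasets` library, `cop` is a 0-indexed `ClassLabel` (0 -> "a"), so the `- 1` shift is not needed there.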
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. \nMedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity.\nThe dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM)\nObstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics Pathology, Pediatrics, Pharmacology, Physiology, \nPsychiatry, Radiology Skin, Preventive & Social Medicine (PSM) and Surgery\n", "citation": "CHILL'2022", "homepage": "https://medmcqa.github.io", "license": "Apache License 2.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "opa": {"dtype": "string", "id": null, "_type": "Value"}, "opb": {"dtype": "string", "id": null, "_type": "Value"}, "opc": {"dtype": "string", "id": null, "_type": "Value"}, "opd": {"dtype": "string", "id": null, "_type": "Value"}, "cop": {"num_classes": 4, "names": ["a", "b", "c", "d"], "id": null, "_type": "ClassLabel"}, "choice_type": {"dtype": "string", "id": null, "_type": "Value"}, "exp": {"dtype": "string", "id": null, "_type": "Value"}, "subject_name": {"dtype": "string", "id": null, "_type": "Value"}, "topic_name": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "med_mcqa", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 131904057, "num_examples": 182822, "dataset_name": "med_mcqa"}, "test": {"name": "test", "num_bytes": 1447829, "num_examples": 6150, "dataset_name": "med_mcqa"}, "validation": {"name": "validation", "num_bytes": 2221468, "num_examples": 4183, "dataset_name": "med_mcqa"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=15VkJdq5eyWIkfb_aoD3oS8i4tScbHYky": {"num_bytes": 55285460, "checksum": "16c1fbc6f47d548d2af7837b18e893aa45f45c0be9bda0a9adfff3c625bf9262"}}, "download_size": 55285460, "post_processing_size": null, "dataset_size": 135573354, "size_in_bytes": 190858814}}
 
 
default/medmcqa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21feb2cea69f5621340c769d3ae634dedd65a965c20c798dc4345e650b817d3d
+ size 936357
default/medmcqa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74c60acce1a160c18d1846fd99b7269a34149fe3655eb238d2bef30f85bd5908
+ size 85899024
default/medmcqa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2b073f98a36f4579b0d914fa2804f08fc150477472fbfa21ce14e8d5efb9ec1
+ size 1476103
medmcqa.py DELETED
@@ -1,116 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"""
-
-
- import json
- import os
-
- import datasets
-
-
- _DESCRIPTION = """\
- MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
- MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity.
- The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM)
- Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics Pathology, Pediatrics, Pharmacology, Physiology,
- Psychiatry, Radiology Skin, Preventive & Social Medicine (PSM) and Surgery
- """
-
-
- _HOMEPAGE = "https://medmcqa.github.io"
-
- _LICENSE = "Apache License 2.0"
- _URL = "https://drive.google.com/uc?export=download&id=15VkJdq5eyWIkfb_aoD3oS8i4tScbHYky"
- _CITATION = """\
- @InProceedings{pmlr-v174-pal22a,
-   title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
-   author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
-   booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
-   pages = {248--260},
-   year = {2022},
-   editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
-   volume = {174},
-   series = {Proceedings of Machine Learning Research},
-   month = {07--08 Apr},
-   publisher = {PMLR},
-   pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
-   url = {https://proceedings.mlr.press/v174/pal22a.html},
-   abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
- }
- """
-
-
- class MedMCQA(datasets.GeneratorBasedBuilder):
-     """MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"""
-
-     VERSION = datasets.Version("1.1.0")
-
-     def _info(self):
-
-         features = datasets.Features(
-             {
-                 "id": datasets.Value("string"),
-                 "question": datasets.Value("string"),
-                 "opa": datasets.Value("string"),
-                 "opb": datasets.Value("string"),
-                 "opc": datasets.Value("string"),
-                 "opd": datasets.Value("string"),
-                 "cop": datasets.features.ClassLabel(names=["a", "b", "c", "d"]),
-                 "choice_type": datasets.Value("string"),
-                 "exp": datasets.Value("string"),
-                 "subject_name": datasets.Value("string"),
-                 "topic_name": datasets.Value("string"),
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "train.json"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "test.json"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "dev.json"),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         with open(filepath, encoding="utf-8") as f:
-             for key, row in enumerate(f):
-                 data = json.loads(row)
-                 data["cop"] = int(data.get("cop", 0)) - 1
-                 data["exp"] = data.get("exp", "")
-                 yield key, data
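The generator above parses one JSON record per line and shifts the raw 1-indexed `cop` field into the 0-indexed `ClassLabel` space. A small sketch of that conversion on a hypothetical record:

```python
import json

# A hypothetical JSON-lines record mirroring the raw MedMCQA format,
# where `cop` is 1-indexed (1 -> "a", ..., 4 -> "d").
raw_line = json.dumps({"question": "placeholder question", "cop": 1})

data = json.loads(raw_line)
data["cop"] = int(data.get("cop", 0)) - 1  # now 0-indexed for ClassLabel
data["exp"] = data.get("exp", "")          # default missing explanations
print(data["cop"])  # 0, i.e. option "a"
```

Note the edge case this implies: a record with no `cop` key ends up with the value -1 rather than a valid class index.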