Dataset: openbookqa

Tasks: Question Answering
Formats: parquet
Sub-tasks: open-domain-qa
Languages: English
Size: 10K - 100K
License: unknown
Commit ffefa2b · 1 Parent: 4ebd011

Fix style in openbookqa dataset (#4270)

* Fix style in openbookqa dataset
* Fix style
* Fix dataset card

Commit from https://github.com/huggingface/datasets/commit/fbc3d1419aca2fc083cc2be11aa4d12ff2ba4399

Files changed:
- README.md (+19, -2)
- openbookqa.py (+14, -21)
README.md CHANGED

```diff
@@ -1,11 +1,28 @@
 ---
+annotations_creators:
+- crowdsourced
+- expert-generated
+language_creators:
+- expert-generated
 languages:
 - en
-paperswithcode_id: openbookqa
+licenses:
+- unknown
+multilinguality:
+- monolingual
 pretty_name: OpenBookQA
+size_categories:
+- 1K<n<10K
+source_datasets:
+- original
+task_categories:
+- question-answering
+task_ids:
+- open-domain-qa
+paperswithcode_id: openbookqa
 ---
 
-# Dataset Card for
+# Dataset Card for OpenBookQA
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
```
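The block between the two `---` markers is plain YAML, so the tags this commit adds can be read back with any YAML parser. A minimal sketch (the local `README.md` path is an assumption, not part of this commit):

```python
import yaml

# Read the dataset card and pull out the YAML front matter, i.e. the text
# between the first pair of "---" delimiters.
text = open("README.md", encoding="utf-8").read()
front_matter = text.split("---")[1]
tags = yaml.safe_load(front_matter)

print(tags["task_categories"])  # ['question-answering']
print(tags["task_ids"])         # ['open-domain-qa']
print(tags["size_categories"])  # ['1K<n<10K']
```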
openbookqa.py CHANGED

```diff
@@ -39,12 +39,9 @@ class OpenbookqaConfig(datasets.BuilderConfig):
         Args:
             data_dir: directory for the given dataset name
             **kwargs: keyword arguments forwarded to super.
-
         """
 
-        super(OpenbookqaConfig, self).__init__(
-            version=datasets.Version("1.0.0", ""), **kwargs
-        )
+        super().__init__(version=datasets.Version("1.0.0", ""), **kwargs)
 
         self.data_dir = data_dir
 
@@ -58,25 +55,25 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
         OpenbookqaConfig(
             name="main",
             description=textwrap.dedent(
+                """\
+                It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test),
+                which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel
+                situations. For training, the dataset includes a mapping from each question to the core science fact it was designed to
+                probe. Answering OpenBookQA questions requires additional broad common knowledge, not contained in the book. The questions,
+                by design, are answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. Strong neural
+                baselines achieve around 50% on OpenBookQA, leaving a large gap to the 92% accuracy of crowd-workers.
                 """
-            It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test),
-            which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel
-            situations. For training, the dataset includes a mapping from each question to the core science fact it was designed to
-            probe. Answering OpenBookQA questions requires additional broad common knowledge, not contained in the book. The questions,
-            by design, are answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. Strong neural
-            baselines achieve around 50% on OpenBookQA, leaving a large gap to the 92% accuracy of crowd-workers.
-            """
             ),
             data_dir="Main",
         ),
         OpenbookqaConfig(
             name="additional",
             description=textwrap.dedent(
+                """\
+                Additionally, we provide 5,167 crowd-sourced common knowledge facts, and an expanded version of the train/dev/test questions where
+                each question is associated with its originating core fact, a human accuracy score, a clarity score, and an anonymized crowd-worker
+                ID (in the 'Additional' folder).
                 """
-            Additionally, we provide 5,167 crowd-sourced common knowledge facts, and an expanded version of the train/dev/test questions where
-            each question is associated with its originating core fact, a human accuracy score, a clarity score, and an anonymized crowd-worker
-            ID (in the “Additional” folder).
-            """
             ),
             data_dir="Additional",
         ),
@@ -162,12 +159,8 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
                     "id": data["id"],
                     "question_stem": data["question"]["stem"],
                     "choices": {
-                        "text": [
-                            choice["text"] for choice in data["question"]["choices"]
-                        ],
-                        "label": [
-                            choice["label"] for choice in data["question"]["choices"]
-                        ],
+                        "text": [choice["text"] for choice in data["question"]["choices"]],
+                        "label": [choice["label"] for choice in data["question"]["choices"]],
                     },
                     "answerKey": data["answerKey"],
                 }
```
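The recurring change in the config descriptions above, replacing `"""` with `"""\` and re-indenting the text to match, is what lets `textwrap.dedent` produce a clean string. A small self-contained illustration (not from the commit itself):

```python
import textwrap

# Without the backslash, the string opens with a newline; dedent() still
# strips the common indentation, but the leading blank line survives.
before = textwrap.dedent(
    """
    5,957 multiple-choice elementary-level science questions.
    """
)

# With the backslash, the newline after the opening quotes is escaped, so
# the string starts directly at the text and dedents cleanly.
after = textwrap.dedent(
    """\
    5,957 multiple-choice elementary-level science questions.
    """
)

print(repr(before))  # '\n5,957 multiple-choice elementary-level science questions.\n'
print(repr(after))   # '5,957 multiple-choice elementary-level science questions.\n'
```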
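For reference, a minimal sketch of consuming the builder after this fix, assuming the dataset resolves under its script name `openbookqa`; the field names come from the `_generate_examples` hunk above:

```python
from datasets import load_dataset

# "main" and "additional" are the two OpenbookqaConfig entries in the script.
ds = load_dataset("openbookqa", "main", split="train")

example = ds[0]
print(example["question_stem"])     # the question text
print(example["choices"]["text"])   # candidate answer strings
print(example["choices"]["label"])  # matching labels, e.g. ["A", "B", "C", "D"]
print(example["answerKey"])         # gold label
```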