system HF staff committed on
Commit d562e66
1 parent: 0b3e27f

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md +21 -21
  2. arcd.py +4 -2
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://github.com/husseinmozannar/SOQAL/tree/master/data](https://github.com/husseinmozannar/SOQAL/tree/master/data)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,23 +37,23 @@
 - **Size of the generated dataset:** 1.62 MB
 - **Total amount of disk used:** 3.47 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### plain_text
 
@@ -74,7 +74,7 @@ This example was too long and was cropped:
 }
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -87,55 +87,55 @@ The data fields are the same among all splits.
 - `text`: a `string` feature.
 - `answer_start`: a `int32` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name |train|validation|
 |----------|----:|---------:|
 |plain_text| 693| 702|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @inproceedings{mozannar-etal-2019-neural,
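The `answers` schema listed in the Data Fields section (a `text` string plus an `answer_start` of type `int32`) follows the SQuAD convention: `answer_start` is a character offset into `context`, so each answer is a recoverable substring. A minimal sketch of that relationship, using an invented record (the id, title, and text below are illustrative, not taken from ARCD):

```python
# Sketch of the ARCD/SQuAD answer convention: `answer_start` is a character
# offset into `context`, so the answer span can be recovered by slicing.
# This record is invented for illustration.
example = {
    "id": "hypothetical-0",
    "title": "Example article",
    "context": "ARCD contains 1,395 crowdsourced questions on Wikipedia articles.",
    "question": "How many questions does ARCD contain?",
    "answers": {"text": ["1,395"], "answer_start": [14]},
}

def extract_spans(record):
    """Recover each answer string by slicing `context` at `answer_start`."""
    return [
        record["context"][start : start + len(text)]
        for text, start in zip(record["answers"]["text"],
                               record["answers"]["answer_start"])
    ]

print(extract_spans(example))  # the recovered span should equal answers["text"]
```

A quick sanity check like this (slice matches `answers["text"]`) is a common way to validate `answer_start` offsets in SQuAD-style data.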
arcd.py CHANGED
@@ -3,11 +3,13 @@
 from __future__ import absolute_import, division, print_function
 
 import json
-import logging
 
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """\
 @inproceedings{mozannar-etal-2019-neural,
     title = {Neural {A}rabic Question Answering},
@@ -91,7 +93,7 @@ class Arcd(datasets.GeneratorBasedBuilder):
 
     def _generate_examples(self, filepath):
         """This function returns the examples in the raw (text) form."""
-        logging.info("generating examples from = %s", filepath)
+        logger.info("generating examples from = %s", filepath)
         with open(filepath, encoding="utf-8") as f:
             arcd = json.load(f)
             for article in arcd["data"]:
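The `_generate_examples` method touched by this diff walks the SQuAD-style JSON layout that ARCD ships in: a top-level `data` list of articles, each with `paragraphs`, each of which holds a `context` and a list of `qas`. A rough, self-contained sketch of that traversal, using an inline dict in place of the downloaded file and a plain generator in place of the `datasets` builder machinery (the helper name and sample values are invented):

```python
# Rough sketch of the article -> paragraph -> question traversal that
# _generate_examples performs over ARCD's SQuAD-style JSON. The real script
# reads `filepath` with json.load and yields (key, example) pairs for the
# datasets builder; this inline dict and helper name are illustrative.
arcd_like = {
    "data": [
        {
            "title": "Example article",
            "paragraphs": [
                {
                    "context": "Some context paragraph.",
                    "qas": [
                        {
                            "id": "q-0",
                            "question": "A question?",
                            "answers": [{"text": "Some", "answer_start": 0}],
                        }
                    ],
                }
            ],
        }
    ]
}

def generate_examples(raw):
    """Flatten articles into one example dict per question."""
    key = 0
    for article in raw["data"]:
        title = article.get("title", "")
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                answers = qa["answers"]
                yield key, {
                    "id": qa["id"],
                    "title": title,
                    "context": context,
                    "question": qa["question"],
                    # Parallel lists, matching the `answers` feature in the
                    # dataset card (text + answer_start per answer).
                    "answers": {
                        "text": [a["text"] for a in answers],
                        "answer_start": [a["answer_start"] for a in answers],
                    },
                }
                key += 1

examples = list(generate_examples(arcd_like))
print(len(examples))  # one example per question in `qas`
```

Note the nested-answer lists: the raw JSON stores one dict per answer, while the builder's feature schema (per the dataset card) wants parallel `text` and `answer_start` lists, which is why the comprehension reshapes them.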