---
license: cc-by-sa-4.0
task_categories:
- summarization
- question-answering
- text-generation
- text2text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- ta
- th
- tr
- uk
- ur
- vi
- xh
- zh
pretty_name: MegaWika
size_categories:
- 10M<n<100M
---
# Dataset Card for MegaWika

## Dataset Description

- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [Coming soon]
- **Leaderboard:** [Coming soon]
- **Point of Contact:** [Samuel Barham](samuel.barham@jhuapl.edu)

### Dataset Summary

MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the
passages, and FrameNet events occurring in the passages are detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser.

<!---
To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika)
we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages.
-->

### Dataset Creation

The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above),
but the following diagram illustrates the basic approach.

![Illustration of MegaWikaProcess](images/MegaWikaProcess-cross-lingual.drawio.png)

### Supported Tasks and Leaderboards

MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, and question answering.

### Languages

MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by its 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhala (Sinhalese)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)

## Dataset Structure

The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files.
Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article.

### Data Instances

Each instance contains the text of the seed Wikipedia article, along with a list of **entries**. Each entry consists of
an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of question/answer pairs extracted from the passage,
and a FrameNet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided.

### Data Fields

The detailed structure of an instance is as follows:
```
{
  "article_title": <string : title of original Wikipedia article>
  "article_text": <string : text of Wikipedia article>
  "entries": [
    {
      # Wiki passage
      "id": <string : passage ID>
      "passage": {
        "text": <string : text of passage in English (possibly via MT)>
        "parse": <list of dict : FrameNet parse of English passage text>
        "en_tokens": <dict : tokenization of passage in English>
        "lang_tokens": <dict : tokenization of original non-English passage>
        "en_lang_token_map": <dict : alignment mapping between English and original language token indices>
      }
      # MT
      "original": <string : original language passage>
      "original_sents": <list of string : sentencized original language passage>
      "translation": <string : machine translation of passage>
      "translation_sents": <list of string : sentencized machine translation of passage>
      "translation_probs": <list of float : log prob of machine translation by sentence, where available>
      "repetitious_translation": <string in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious>
      "source_lang": <string : language ID, 2-character ISO code>
      # Source
      "source_url": <string : URL of the cited web source>
      "source_text": <string : content extracted from the scrape of the source URL>
      # Question/answer pairs
      "qa_pairs": [
        {
          "question": <string : generated question>
          "passage_id": <string : passage ID>
          "en_answer": <string : English answer>
          "lang_answer": <string : aligned original language answer>
          "frames": [
            {
              "frame": <string : frame triggered by the question>
              "argument": <string : detected frame arguments>
            }
            ...
          ]
          # NB: answer matches can be empty, in the case no matching span exists
          "en_matches_in_source": <list of int : start and end index of the English-language answer token(s) in the source document>
          "en_match_in_passage": <list of int : start and end index of the English-language answer token(s) in the English translation of the passage>
          "lang_matches_in_source": <list of int : start and end index of the original-language answer token(s) in the source document>
          "lang_match_in_passage": <list of int : start and end index of the original-language answer token(s) in the original language passage>
          "passage": <list of string : sentencized view of the passage>
          "en_answer_tokens": <list of string>
          "match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)>
        }
        ...
      ]
    }
    ...
  ]
}
```
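As a quick illustration of how an instance following this schema can be consumed -- the helper below is a sketch, not part of the MegaWika tooling, and the toy instance carries placeholder content -- one can walk an instance's entries to collect its question/answer pairs:

```python
def iter_qa_pairs(instance):
    """Yield (question, English answer) tuples from one MegaWika instance dict."""
    for entry in instance.get("entries", []):
        for qa in entry.get("qa_pairs", []):
            yield qa.get("question"), qa.get("en_answer")

# A toy instance following the schema above (content is illustrative only).
instance = {
    "article_title": "Example",
    "article_text": "...",
    "entries": [
        {
            "id": "0",
            "source_url": "https://example.com",
            "qa_pairs": [
                {"question": "What is X?", "en_answer": "Y"},
            ],
        }
    ],
}

for question, answer in iter_qa_pairs(instance):
    print(question, "->", answer)
```

In practice each line of a JSON lines file would be parsed with `json.loads` and passed through a helper like this.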

English-language instances differ not in structure but in content:
1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`).
2. Since the Wiki passage exists only in English, and has no corresponding non-English "original language" version, answer spans also necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists).

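This convention can be checked programmatically. The helper below is a sketch based on the field conventions just described, not an official utility, and the sample entries are synthetic:

```python
def is_english_entry(entry):
    """Heuristically detect an English-Wikipedia entry: per the convention
    above, the MT-block fields are set to None for English instances."""
    return entry.get("original") is None and entry.get("translation") is None

# Synthetic entries illustrating the two cases.
en_entry = {"original": None, "translation": None, "source_lang": "en"}
xlingual_entry = {"original": "Beispieltext", "translation": "Example text", "source_lang": "de"}
```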
### Data Splits

MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and dividing into train/test sets.

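Because no official train/test split ships with the dataset, any user-side split should be reproducible. The sketch below -- hashing `article_title`, an approach offered for illustration only, not a project convention -- assigns instances deterministically:

```python
import hashlib

def assign_split(article_title, test_fraction=0.1):
    """Deterministically assign an instance to 'train' or 'test' by hashing
    its article title, so the split is stable across runs and machines."""
    digest = hashlib.sha256(article_title.encode("utf-8")).digest()
    bucket = digest[0] / 256.0  # uniform-ish value in [0, 1)
    return "test" if bucket < test_fraction else "train"

splits = {title: assign_split(title) for title in ["Alan Turing", "Paris", "Photosynthesis"]}
```

Hashing rather than random sampling keeps the assignment stable even when instances are streamed in a different order.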
<!---
### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]
-->

## Licensing and Takedown

MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).

We do not own any of the scraped text, nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.

We release this dataset and all its contents under CC-BY-SA-4.0.

### Notice and Takedown Policy

*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact information such as an address, telephone number, or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing, with information reasonably sufficient to allow us to locate the material.

And contact the authors.

*Takedown*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.

## Additional Information

### Dataset Curators

Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu),
and [Ben van Durme](mailto:vandurme@jhu.edu), with questions.

### Licensing Information

Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.

### Citation Information

```
@misc{barham2023megawika,
      title={MegaWika: Millions of reports and their sources across 50 diverse languages},
      author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
      year={2023},
      eprint={2307.07049},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

<!--
### Contributions

[More Information Needed]
-->