jeopardy | ---
language:
- en
paperswithcode_id: null
pretty_name: jeopardy
dataset_info:
  features:
  - name: category
    dtype: string
  - name: air_date
    dtype: string
  - name: question
    dtype: string
  - name: value
    dtype: int32
  - name: answer
    dtype: string
  - name: round
    dtype: string
  - name: show_number
    dtype: int32
  splits:
  - name: train
    num_bytes: 35916080
    num_examples: 216930
  download_size: 55554625
  dataset_size: 35916080
---
# Dataset Card for "jeopardy"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/](https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 12.72 MB
- **Size of the generated dataset:** 36.13 MB
- **Total amount of disk used:** 48.85 MB
### Dataset Summary
Dataset containing 216,930 Jeopardy! questions, answers and other data.
The JSON file is an unordered list of questions, where each question has:
- `category`: the question category, e.g. "HISTORY"
- `value`: the dollar value of the question, e.g. 200
  (Note: this is "None" in the source data for Final Jeopardy! and Tiebreaker questions)
- `question`: text of the question
  (Note: this sometimes contains hyperlinks and other messy text, such as when there is a picture or video question)
- `answer`: text of the answer
- `round`: one of "Jeopardy!", "Double Jeopardy!", "Final Jeopardy!" or "Tiebreaker"
  (Note: Tiebreaker questions do occur, but they are very rare, roughly once every 20 years)
- `show_number`: integer show number, e.g. 4680
- `air_date`: the show's air date in YYYY-MM-DD format
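The normalization above can be sketched with a small, hypothetical helper (not part of the dataset's loading script): the source JSON stores `value` as a dollar string such as "$200", while this dataset exposes an `int32` column.

```python
# Hypothetical helper illustrating how a raw dollar-string value could be
# normalized to this dataset's int32 `value` column. How missing values
# ("None" for Final Jeopardy!/Tiebreaker) are encoded is an assumption here.
def parse_value(raw):
    """Strip "$" and thousands separators; assume 0 for missing values."""
    if raw is None or raw == "None":
        return 0  # assumption: placeholder for questions with no dollar value
    return int(raw.replace("$", "").replace(",", ""))

print(parse_value("$200"))    # 200
print(parse_value("$2,000"))  # 2000
```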
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 12.72 MB
- **Size of the generated dataset:** 36.13 MB
- **Total amount of disk used:** 48.85 MB
An example of 'train' looks as follows.
```
{
"air_date": "2004-12-31",
"answer": "Hattie McDaniel (for her role in Gone with the Wind)",
"category": "EPITAPHS & TRIBUTES",
"question": "'1939 Oscar winner: \"...you are a credit to your craft, your race and to your family\"'",
"round": "Jeopardy!",
"show_number": 4680,
"value": 2000
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `category`: a `string` feature.
- `air_date`: a `string` feature.
- `question`: a `string` feature.
- `value`: an `int32` feature.
- `answer`: a `string` feature.
- `round`: a `string` feature.
- `show_number`: an `int32` feature.
### Data Splits
| name |train |
|-------|-----:|
|default|216930|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
jfleg | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
- other-language-learner
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-GUG-grammaticality-judgements
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: jfleg
pretty_name: JHU FLuency-Extended GUG corpus
tags:
- grammatical-error-correction
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: corrections
    sequence: string
  splits:
  - name: validation
    num_bytes: 379991
    num_examples: 755
  - name: test
    num_bytes: 379711
    num_examples: 748
  download_size: 731111
  dataset_size: 759702
---
# Dataset Card for JFLEG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/keisks/jfleg)
- **Repository:** [Github](https://github.com/keisks/jfleg)
- **Paper:** [Napoles et al., 2017](https://www.aclweb.org/anthology/E17-2037/)
- **Leaderboard:** [Leaderboard](https://github.com/keisks/jfleg#leader-board-published-results)
- **Point of Contact:** Courtney Napoles, Keisuke Sakaguchi
### Dataset Summary
JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus. It is a gold standard benchmark for developing and evaluating GEC systems with respect to fluency (extent to which a text is native-sounding) as well as grammaticality. For each source document, there are four human-written corrections.
### Supported Tasks and Leaderboards
Grammatical error correction.
### Languages
English (native as well as L2 writers)
## Dataset Structure
### Data Instances
Each instance contains a source sentence and four corrections. For example:
```python
{
    'sentence': "They are moved by solar energy .",
'corrections': [
"They are moving by solar energy .",
"They are moved by solar energy .",
"They are moved by solar energy .",
"They are propelled by solar energy ."
]
}
```
### Data Fields
- `sentence`: the original sentence written by an English learner
- `corrections`: corrected versions written by human annotators. The order of the corrections is consistent across examples (e.g. the first correction is always written by annotator "ref0").
### Data Splits
- This dataset contains 1511 examples in total, comprising a dev and a test split.
- There are 754 and 747 source sentences for dev and test, respectively.
- Each sentence has 4 corresponding corrected versions.
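Since the order of corrections is consistent across examples, the four corrections can be paired with annotator ids; a minimal sketch using the example instance shown earlier:

```python
# Minimal sketch: pair each correction with its annotator id ("ref0".."ref3"),
# relying on the consistent ordering of the `corrections` field.
instance = {
    "sentence": "They are moved by solar energy .",
    "corrections": [
        "They are moving by solar energy .",
        "They are moved by solar energy .",
        "They are moved by solar energy .",
        "They are propelled by solar energy .",
    ],
}

by_annotator = {f"ref{i}": c for i, c in enumerate(instance["corrections"])}
print(by_annotator["ref0"])  # the correction written by annotator ref0
```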
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
This benchmark was proposed by [Napoles et al., 2020](https://www.aclweb.org/anthology/E17-2037/).
```
@InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
author = {Napoles, Courtney and Sakaguchi, Keisuke and Tetreault, Joel},
title = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
month = {April},
year = {2017},
address = {Valencia, Spain},
publisher = {Association for Computational Linguistics},
pages = {229--234},
url = {http://www.aclweb.org/anthology/E17-2037}
}
@InProceedings{heilman-EtAl:2014:P14-2,
author = {Heilman, Michael and Cahill, Aoife and Madnani, Nitin and Lopez, Melissa and Mulholland, Matthew and Tetreault, Joel},
title = {Predicting Grammaticality on an Ordinal Scale},
booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = {June},
year = {2014},
address = {Baltimore, Maryland},
publisher = {Association for Computational Linguistics},
pages = {174--180},
url = {http://www.aclweb.org/anthology/P14-2029}
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. |
jigsaw_toxicity_pred | ---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: JigsawToxicityPred
dataset_info:
  features:
  - name: comment_text
    dtype: string
  - name: toxic
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  - name: severe_toxic
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  - name: obscene
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  - name: threat
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  - name: insult
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  - name: identity_hate
    dtype:
      class_label:
        names:
          '0': 'false'
          '1': 'true'
  splits:
  - name: train
    num_bytes: 71282358
    num_examples: 159571
  - name: test
    num_bytes: 28241991
    num_examples: 63978
  download_size: 0
  dataset_size: 99524349
train-eval-index:
- config: default
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    comment_text: text
    toxic: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Jigsaw Comment Toxicity Classification Kaggle Competition](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior.
### Supported Tasks and Leaderboards
The dataset supports multi-label classification.
### Languages
The comments are in English.
## Dataset Structure
### Data Instances
A data point consists of a comment followed by the multiple labels that can be associated with it.
```python
{
    'id': '02141412314',
    'comment_text': 'Sample comment text',
    'toxic': 0,
    'severe_toxic': 0,
    'obscene': 0,
    'threat': 0,
    'insult': 0,
    'identity_hate': 1,
}
```
### Data Fields
- `id`: id of the comment
- `comment_text`: the text of the comment
- `toxic`: value of 0 (non-toxic) or 1 (toxic) classifying the comment
- `severe_toxic`: value of 0 (non-severe-toxic) or 1 (severe-toxic) classifying the comment
- `obscene`: value of 0 (non-obscene) or 1 (obscene) classifying the comment
- `threat`: value of 0 (non-threat) or 1 (threat) classifying the comment
- `insult`: value of 0 (non-insult) or 1 (insult) classifying the comment
- `identity_hate`: value of 0 (non-identity-hate) or 1 (identity-hate) classifying the comment
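For multi-label use, the six class-label columns can be collected into a single binary target vector; a minimal sketch over the example instance shown earlier:

```python
# Minimal sketch: gather the six binary class-label columns of one example
# into a single multi-label target vector.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

example = {
    "comment_text": "Sample comment text",
    "toxic": 0, "severe_toxic": 0, "obscene": 0,
    "threat": 0, "insult": 0, "identity_hate": 1,
}

target = [example[label] for label in LABELS]
print(target)  # [0, 0, 0, 0, 0, 1]
```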
### Data Splits
The data is split into a training and testing set.
## Dataset Creation
### Curation Rationale
The dataset was created to help in efforts to identify and curb instances of toxicity online.
### Source Data
#### Initial Data Collection and Normalization
The dataset is a collection of Wikipedia comments.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
If words that are associated with swearing, insults or profanity are present in a comment, it is likely that it will be classified as toxic, regardless of the tone or the intent of the author e.g. humorous/self-deprecating. This could present some biases towards already vulnerable minority groups.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The "Toxic Comment Classification" dataset is released under [CC0](https://creativecommons.org/publicdomain/zero/1.0/), with the underlying comment text being governed by Wikipedia's [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
No citation information.
### Contributions
Thanks to [@Tigrex161](https://github.com/Tigrex161) for adding this dataset. |
jigsaw_unintended_bias | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
pretty_name: Jigsaw Unintended Bias in Toxicity Classification
tags:
- toxicity-prediction
dataset_info:
  features:
  - name: target
    dtype: float32
  - name: comment_text
    dtype: string
  - name: severe_toxicity
    dtype: float32
  - name: obscene
    dtype: float32
  - name: identity_attack
    dtype: float32
  - name: insult
    dtype: float32
  - name: threat
    dtype: float32
  - name: asian
    dtype: float32
  - name: atheist
    dtype: float32
  - name: bisexual
    dtype: float32
  - name: black
    dtype: float32
  - name: buddhist
    dtype: float32
  - name: christian
    dtype: float32
  - name: female
    dtype: float32
  - name: heterosexual
    dtype: float32
  - name: hindu
    dtype: float32
  - name: homosexual_gay_or_lesbian
    dtype: float32
  - name: intellectual_or_learning_disability
    dtype: float32
  - name: jewish
    dtype: float32
  - name: latino
    dtype: float32
  - name: male
    dtype: float32
  - name: muslim
    dtype: float32
  - name: other_disability
    dtype: float32
  - name: other_gender
    dtype: float32
  - name: other_race_or_ethnicity
    dtype: float32
  - name: other_religion
    dtype: float32
  - name: other_sexual_orientation
    dtype: float32
  - name: physical_disability
    dtype: float32
  - name: psychiatric_or_mental_illness
    dtype: float32
  - name: transgender
    dtype: float32
  - name: white
    dtype: float32
  - name: created_date
    dtype: string
  - name: publication_id
    dtype: int32
  - name: parent_id
    dtype: float32
  - name: article_id
    dtype: int32
  - name: rating
    dtype:
      class_label:
        names:
          '0': rejected
          '1': approved
  - name: funny
    dtype: int32
  - name: wow
    dtype: int32
  - name: sad
    dtype: int32
  - name: likes
    dtype: int32
  - name: disagree
    dtype: int32
  - name: sexual_explicit
    dtype: float32
  - name: identity_annotator_count
    dtype: int32
  - name: toxicity_annotator_count
    dtype: int32
  splits:
  - name: train
    num_bytes: 914264058
    num_examples: 1804874
  - name: test_private_leaderboard
    num_bytes: 49188921
    num_examples: 97320
  - name: test_public_leaderboard
    num_bytes: 49442360
    num_examples: 97320
  download_size: 0
  dataset_size: 1012895339
---
# Dataset Card for Jigsaw Unintended Bias in Toxicity Classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification
- **Repository:**
- **Paper:**
- **Leaderboard:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard
- **Point of Contact:**
### Dataset Summary
The Jigsaw Unintended Bias in Toxicity Classification dataset comes from the eponymous Kaggle competition.
Please see the original [data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)
description for more information.
### Supported Tasks and Leaderboards
The main target for this dataset is toxicity prediction. Several toxicity subtypes are also available, so the dataset
can be used for multi-attribute prediction.
See the original [leaderboard](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard)
for reference.
### Languages
English
## Dataset Structure
### Data Instances
A data point consists of an id, a comment, the main target, the other toxicity subtypes as well as identity attributes.
For instance, here's the first train example.
```
{
"article_id": 2006,
"asian": NaN,
"atheist": NaN,
"bisexual": NaN,
"black": NaN,
"buddhist": NaN,
"christian": NaN,
"comment_text": "This is so cool. It's like, 'would you want your mother to read this??' Really great idea, well done!",
"created_date": "2015-09-29 10:50:41.987077+00",
"disagree": 0,
"female": NaN,
"funny": 0,
"heterosexual": NaN,
"hindu": NaN,
"homosexual_gay_or_lesbian": NaN,
"identity_annotator_count": 0,
"identity_attack": 0.0,
"insult": 0.0,
"intellectual_or_learning_disability": NaN,
"jewish": NaN,
"latino": NaN,
"likes": 0,
"male": NaN,
"muslim": NaN,
"obscene": 0.0,
"other_disability": NaN,
"other_gender": NaN,
"other_race_or_ethnicity": NaN,
"other_religion": NaN,
"other_sexual_orientation": NaN,
"parent_id": NaN,
"physical_disability": NaN,
"psychiatric_or_mental_illness": NaN,
"publication_id": 2,
"rating": 0,
"sad": 0,
"severe_toxicity": 0.0,
"sexual_explicit": 0.0,
"target": 0.0,
"threat": 0.0,
"toxicity_annotator_count": 4,
"transgender": NaN,
"white": NaN,
"wow": 0
}
```
### Data Fields
- `id`: id of the comment
- `target`: value between 0 (non-toxic) and 1 (toxic) classifying the comment
- `comment_text`: the text of the comment
- `severe_toxicity`: value between 0 (non-severe-toxic) and 1 (severe-toxic) classifying the comment
- `obscene`: value between 0 (non-obscene) and 1 (obscene) classifying the comment
- `identity_attack`: value between 0 (non-identity-attack) and 1 (identity-attack) classifying the comment
- `insult`: value between 0 (non-insult) and 1 (insult) classifying the comment
- `threat`: value between 0 (non-threat) and 1 (threat) classifying the comment
- For a subset of rows, columns indicating whether the comment mentions a given identity (they may contain NaNs):
- `male`
- `female`
- `transgender`
- `other_gender`
- `heterosexual`
- `homosexual_gay_or_lesbian`
- `bisexual`
- `other_sexual_orientation`
- `christian`
- `jewish`
- `muslim`
- `hindu`
- `buddhist`
- `atheist`
- `other_religion`
- `black`
- `white`
- `asian`
- `latino`
- `other_race_or_ethnicity`
- `physical_disability`
- `intellectual_or_learning_disability`
- `psychiatric_or_mental_illness`
- `other_disability`
- Other metadata related to the source of the comment, such as creation date, publication id, number of likes,
number of annotators, etc:
- `created_date`
- `publication_id`
- `parent_id`
- `article_id`
- `rating`
- `funny`
- `wow`
- `sad`
- `likes`
- `disagree`
- `sexual_explicit`
- `identity_annotator_count`
- `toxicity_annotator_count`
### Data Splits
There are four splits:
- train: The train dataset as released during the competition. Contains labels and identity information for a
subset of rows.
- test: The test dataset as released during the competition. Does not contain labels or identity information.
- test_private_leaderboard: The private leaderboard test set, including toxicity labels and subgroups. The competition target was a binarized version of the toxicity column, which can easily be reconstructed using a >= 0.5 threshold.
- test_public_leaderboard: The public leaderboard test set, including toxicity labels and subgroups. The competition target was a binarized version of the toxicity column, which can easily be reconstructed using a >= 0.5 threshold.
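The binarization mentioned above is a simple threshold on the fractional `target` column:

```python
# Sketch of the competition's binarization: a comment counts as toxic
# when its fractional `target` score is at least 0.5.
def binarize(target: float, threshold: float = 0.5) -> int:
    return int(target >= threshold)

print(binarize(0.0))   # 0
print(binarize(0.45))  # 0
print(binarize(0.5))   # 1
```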
## Dataset Creation
### Curation Rationale
The dataset was created to help in efforts to identify and curb instances of toxicity online.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is released under CC0, as is the underlying comment text.
### Citation Information
No citation is available for this dataset, though you may link to the [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
### Contributions
Thanks to [@iwontbecreative](https://github.com/iwontbecreative) for adding this dataset. |
jnlpba | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-genia-v3.02
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: BioNLP / JNLPBA Shared Task 2004
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-DNA
          '2': I-DNA
          '3': B-RNA
          '4': I-RNA
          '5': B-cell_line
          '6': I-cell_line
          '7': B-cell_type
          '8': I-cell_type
          '9': B-protein
          '10': I-protein
  config_name: jnlpba
  splits:
  - name: train
    num_bytes: 8775707
    num_examples: 18546
  - name: validation
    num_bytes: 1801565
    num_examples: 3856
  download_size: 3171072
  dataset_size: 10577272
---
# Dataset Card for JNLPBA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Repository:** [Needs More Information]
- **Paper:** https://www.aclweb.org/anthology/W04-1213.pdf
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-ner-on-jnlpba?p=biobert-a-pre-trained-biomedical-language
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search on MEDLINE using the MeSH terms human, blood cells and transcription factors. From this search 2,000 abstracts were selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification. Among the classes, 36 terminal classes were used to annotate the GENIA corpus.
### Supported Tasks and Leaderboards
NER
### Languages
English
## Dataset Structure
### Data Instances
```python
{
    'id': '1',
    'tokens': ['IL-2', 'gene', 'expression', 'and', 'NF-kappa', 'B', 'activation', 'through', 'CD28', 'requires', 'reactive', 'oxygen', 'production', 'by', '5-lipoxygenase', '.'],
    'ner_tags': [1, 2, 0, 0, 9, 10, 0, 0, 9, 0, 0, 0, 0, 0, 9, 0],
}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of NER tags in IOB2 format, encoded as class-label ids: `0` is `O` (a token outside any bio-entity), odd ids are `B-` tags marking the first token of a bio-entity, and even ids are the corresponding `I-` tags for subsequent tokens, covering the entity types `DNA`, `RNA`, `cell_line`, `cell_type` and `protein`.
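The integer tag ids can be decoded into tag names using the class-label vocabulary declared in this card's metadata; a minimal sketch over the example instance:

```python
# Minimal sketch: decode `ner_tags` ids into IOB2 tag names using the
# class-label vocabulary from this card's metadata.
TAG_NAMES = [
    "O", "B-DNA", "I-DNA", "B-RNA", "I-RNA",
    "B-cell_line", "I-cell_line", "B-cell_type", "I-cell_type",
    "B-protein", "I-protein",
]

ner_tags = [1, 2, 0, 0, 9, 10, 0, 0, 9, 0, 0, 0, 0, 0, 9, 0]
decoded = [TAG_NAMES[t] for t in ner_tags]
print(decoded[:2])  # ['B-DNA', 'I-DNA'] -> "IL-2 gene" is a DNA mention
```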
### Data Splits
Train samples: 18546
Validation samples: 3856
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{collier-kim-2004-introduction,
    title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
    author = "Collier, Nigel  and
      Kim, Jin-Dong",
    booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
    month = aug # " 28th and 29th",
    year = "2004",
    address = "Geneva, Switzerland",
    publisher = "COLING",
    url = "https://aclanthology.org/W04-1213",
    pages = "73--78",
}
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. |
journalists_questions | ---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: JournalistsQuestions
tags:
- question-identification
dataset_info:
  features:
  - name: tweet_id
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': 'no'
          '1': 'yes'
  - name: label_confidence
    dtype: float32
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 342296
    num_examples: 10077
  download_size: 271039
  dataset_size: 342296
---
# Dataset Card for journalists_questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://qufaculty.qu.edu.qa/telsayed/datasets/
- **Repository:** [Needs More Information]
- **Paper:** https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/download/13221/12856
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Maram Hasanain](mailto:maram.hasanain@qu.edu.qa)
### Dataset Summary
The journalists_questions dataset supports question identification over Arabic tweets of journalists.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
Our dataset supports the question identification task. It includes 10K Arabic tweets crawled from journalists' accounts. Tweets were labelled by crowdsourcing. Each tweet is associated with one label: question tweet or not. A question tweet is a tweet that has at least one interrogative question. Each label is associated with a confidence score, given that each tweet was labelled by 3 annotators and an aggregation method was used to choose the final label.
Below is an example:
```
{
    'tweet_id': '493235142128074753',
    'label': 'yes',
    'label_confidence': 0.6359
}
```
### Data Fields
- `tweet_id`: the Twitter-assigned ID for the tweet object.
- `label`: annotation of the tweet as a question or not.
- `label_confidence`: confidence score for the label, given the annotations of multiple annotators per tweet.
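The card does not document the exact aggregation method used over the three annotators, but a plain majority vote with an agreement-based confidence is one plausible sketch (illustrative only; the actual crowdsourcing aggregation may differ):

```python
from collections import Counter

def aggregate(labels):
    """Majority-vote label plus the fraction of annotators who agree with it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

print(aggregate(["yes", "yes", "no"]))  # ('yes', 0.6666666666666666)
```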
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
The dataset includes tweet IDs only due to Twitter content re-distribution policy. It was created and shared for research purposes for parties interested in understanding questions expecting answers by Arab journalists on Twitter.
### Source Data
#### Initial Data Collection and Normalization
To construct our dataset of question tweets posted by journalists, we first acquire a list of Twitter accounts of 389 Arab journalists. We use the Twitter API to crawl their available tweets, keeping only those that are identified by Twitter to be both Arabic, and not retweets (as these would contain content that was not originally authored by journalists). We apply a rule-based question filter to this dataset of 465,599 tweets, extracting 49,119 (10.6%) potential question tweets from 363 (93.3%) Arab journalists.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@MaramHasanain](https://github.com/MaramHasanain) for adding this dataset. |
kan_hope | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- kn
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: KanHope
language_bcp47:
- en-IN
- kn-IN
tags:
- hope-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not-Hope
'1': Hope
splits:
- name: train
num_bytes: 494898
num_examples: 4940
- name: test
num_bytes: 65722
num_examples: 618
download_size: 568972
dataset_size: 560620
---
# Dataset Card for KanHope
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4904729
- **Repository:** [KanHope](https://github.com/adeepH/KanHope)
- **Paper:** [Hope speech detection in Under-resourced Kannada langauge](https://arxiv.org/abs/2108.04616)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Adeep Hande](mailto:adeeph18c@iiitt.ac.in)
### Dataset Summary
The KanHope dataset is a code-mixed Kannada-English dataset for hope speech detection. All texts were scraped from the comments sections of YouTube videos. The dataset consists of 6,176 user-generated comments in code-mixed Kannada, manually annotated as bearing hope speech or not.
### Supported Tasks and Leaderboards
The task is to detect hope speech in a code-mixed dataset of comments/posts in Dravidian languages (Kannada-English) collected from social media. A comment/post may contain more than one sentence, but the average number of sentences per comment in the corpus is 1. Each comment/post is annotated at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world scenarios.
### Languages
Code-mixed text in Dravidian languages (Kannada-English).
## Dataset Structure
### Data Instances
An example from the Kannada dataset looks as follows:
| text | label |
| :------ | :----- |
| ��������� ��ͭ� heartly heltidini... plz avrigella namma nimmellara supprt beku | 0 (Non_hope speech) |
| Next song gu kuda alru andre evaga yar comment madidera alla alrru like madi share madi nam industry na next level ge togond hogaona. | 1 (Hope Speech) |
### Data Fields
Kannada
- `text`: Kannada-English code mixed comment.
- `label`: integer, either 0 or 1, corresponding to the classes "Not-Hope" and "Hope" respectively
### Data Splits
| | train | validation | test |
|---------|------:|-----------:|-----:|
| Kannada | 4941 | 618 | 617 |
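Since the card notes a class imbalance, it can be worth checking the label distribution after loading. A minimal sketch on a toy label list (in practice the labels would come from the dataset's `label` column):

```python
from collections import Counter

def label_distribution(labels, names=("Not-Hope", "Hope")):
    """Return per-class fractions for a list of integer class labels."""
    counts = Counter(labels)
    total = len(labels)
    return {names[k]: counts.get(k, 0) / total for k in range(len(names))}

# Toy example; replace with e.g. dataset["train"]["label"] after loading.
print(label_distribution([0, 0, 0, 1]))  # {'Not-Hope': 0.75, 'Hope': 0.25}
```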
## Dataset Creation
### Curation Rationale
Numerous methods have been developed in recent years to monitor the spread of negativity by eliminating vulgar, offensive, and hostile comments from social media platforms. However, relatively little research has focused on embracing positivity and reinforcing supportive and reassuring content in online forums.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{hande2021hope,
title={Hope Speech detection in under-resourced Kannada language},
author={Adeep Hande and Ruba Priyadharshini and Anbukkarasi Sampath and Kingston Pal Thamburaj and Prabakaran Chandran and Bharathi Raja Chakravarthi},
year={2021},
eprint={2108.04616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@adeepH](https://github.com/adeepH) for adding this dataset. |
kannada_news | ---
annotations_creators:
- other
language_creators:
- other
language:
- kn
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: KannadaNews Dataset
dataset_info:
features:
- name: headline
dtype: string
- name: label
dtype:
class_label:
names:
'0': sports
'1': tech
'2': entertainment
splits:
- name: train
num_bytes: 969216
num_examples: 5167
- name: validation
num_bytes: 236817
num_examples: 1293
download_size: 0
dataset_size: 1206033
---
# Dataset Card for kannada_news dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle link](https://www.kaggle.com/disisbig/kannada-news-dataset) for kannada news headlines dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** More information about the dataset and the models can be found [here](https://github.com/goru001/nlp-for-kannada)
### Dataset Summary
The Kannada news dataset contains only the headlines of news articles in three categories:
Entertainment, Tech, and Sports.
It contains around 6,300 news article headlines collected from Kannada news websites.
The dataset has been cleaned and split into train and validation sets, which can be used to benchmark topic classification models in Kannada.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Kannada (kn)
## Dataset Structure
### Data Instances
The data comes in two files, `train.csv` and `valid.csv`. An example row of the dataset is shown below:
```
{
'headline': 'ಫಿಫಾ ವಿಶ್ವಕಪ್ ಫೈನಲ್: ಅತಿರೇಕಕ್ಕೇರಿದ ಸಂಭ್ರಮಾಚರಣೆ; ಅಭಿಮಾನಿಗಳ ಹುಚ್ಚು ವರ್ತನೆಗೆ ವ್ಯಾಪಕ ಖಂಡನೆ',
'label':'sports'
}
```
NOTE: The data has very few examples for the technology topic (class label: `tech`).
### Data Fields
Data has two fields:
- `headline`: the news headline in Kannada (string)
- `label`: the class label, in English, to which the headline pertains (string)
### Data Splits
The dataset is divided into two splits. All the headlines are scraped from news websites on the internet.
| | train | validation |
|-----------------|--------:|-----------:|
| Input Sentences | 5167 | 1293 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Strikingly little data for South Indian languages, especially Kannada, is available in digital format for NLP purposes.
Though it has roughly 38 million native speakers, Kannada is an under-represented language and will benefit from active contribution from the community.
This dataset can help people get exposed to Kannada and encourage further active participation, enabling continuous progress and development.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Gaurav Arora](https://github.com/goru001/nlp-for-kannada). The repository also provides some starter models and embeddings to help get started.
### Licensing Information
cc-by-sa-4.0
### Citation Information
https://www.kaggle.com/disisbig/kannada-news-dataset
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset. |
kd_conv | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
language:
- zh
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: kdconv
pretty_name: Knowledge-driven Conversation
dataset_info:
- config_name: travel_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 3241550
num_examples: 1200
- name: test
num_bytes: 793883
num_examples: 150
- name: validation
num_bytes: 617177
num_examples: 150
download_size: 11037768
dataset_size: 4652610
- config_name: travel_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 1517024
num_examples: 1154
download_size: 11037768
dataset_size: 1517024
- config_name: music_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 3006192
num_examples: 1200
- name: test
num_bytes: 801012
num_examples: 150
- name: validation
num_bytes: 633905
num_examples: 150
download_size: 11037768
dataset_size: 4441109
- config_name: music_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 5980643
num_examples: 4441
download_size: 11037768
dataset_size: 5980643
- config_name: film_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 4867659
num_examples: 1200
- name: test
num_bytes: 956995
num_examples: 150
- name: validation
num_bytes: 884232
num_examples: 150
download_size: 11037768
dataset_size: 6708886
- config_name: film_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 10500882
num_examples: 8090
download_size: 11037768
dataset_size: 10500882
- config_name: all_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 11115313
num_examples: 3600
- name: test
num_bytes: 2551802
num_examples: 450
- name: validation
num_bytes: 2135226
num_examples: 450
download_size: 11037768
dataset_size: 15802341
- config_name: all_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 17998529
num_examples: 13685
download_size: 11037768
dataset_size: 17998529
---
# Dataset Card for KdConv
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/thu-coai/KdConv)
- **Paper:** [{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation](https://www.aclweb.org/anthology/2020.acl-main.635.pdf)
### Dataset Summary
KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn
conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel),
and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related
topics and natural transitions between multiple topics, while the corpus can also be used for exploration of transfer
learning and domain adaptation.
### Supported Tasks and Leaderboards
This dataset can be leveraged for dialogue modelling tasks involving multi-turn, knowledge-grounded conversation.
### Languages
The dataset is in Chinese only.
## Dataset Structure
### Data Instances
Each data instance is a multi-turn conversation between two people, annotated with the knowledge base data used while talking, e.g.:
```
{
"messages": [
{
"message": "对《我喜欢上你时的内心活动》这首歌有了解吗?"
},
{
"attrs": [
{
"attrname": "Information",
"attrvalue": "《我喜欢上你时的内心活动》是由韩寒填词,陈光荣作曲,陈绮贞演唱的歌曲,作为电影《喜欢你》的主题曲于2017年4月10日首发。2018年,该曲先后提名第37届香港电影金像奖最佳原创电影歌曲奖、第7届阿比鹿音乐奖流行单曲奖。",
"name": "我喜欢上你时的内心活动"
}
],
"message": "有些了解,是电影《喜欢你》的主题曲。"
},
...
{
"attrs": [
{
"attrname": "代表作品",
"attrvalue": "旅行的意义",
"name": "陈绮贞"
},
{
"attrname": "代表作品",
"attrvalue": "时间的歌",
"name": "陈绮贞"
}
],
"message": "我还知道《旅行的意义》与《时间的歌》,都算是她的代表作。"
},
{
"message": "好,有时间我找出来听听。"
}
],
"name": "我喜欢上你时的内心活动"
}
```
The corresponding entry in the knowledge base is a dictionary mapping a head entity to a list of knowledge base triplets (head entity, relationship, tail entity), e.g.:
```
"忽然之间": [
[
"忽然之间",
"Information",
"《忽然之间》是歌手 莫文蔚演唱的歌曲,由 周耀辉, 李卓雄填词, 林健华谱曲,收录在莫文蔚1999年发行专辑《 就是莫文蔚》里。"
],
[
"忽然之间",
"谱曲",
"林健华"
]
...
]
```
### Data Fields
Conversation data fields:
- `name`: the starting topic (entity) of the conversation
- `domain`: the domain this sample belongs to. Categorical value among `{travel, film, music}`
- `messages`: list of all the turns in the dialogue. For each turn:
- `message`: the utterance
- `attrs`: list of knowledge graph triplets referred by the utterance. For each triplet:
- `name`: the head entity
- `attrname`: the relation
- `attrvalue`: the tail entity
Knowledge Base data fields:
- `head_entity`: the head entity
- `kb_triplets`: list of corresponding triplets
- `domain`: the domain this sample belongs to. Categorical value among `{travel, film, music}`
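Given the structure above, the knowledge triples referenced across a conversation can be collected with a few lines. This is a sketch over a sample in the raw-JSON shape shown under Data Instances (the columnar shape produced by the `datasets` loader may differ):

```python
def referenced_triples(sample):
    """Collect (head, relation, tail) triples referenced by a dialogue sample
    in the raw-JSON shape shown under Data Instances."""
    triples = []
    for turn in sample["messages"]:
        # Turns without knowledge grounding have no "attrs" key.
        for attr in turn.get("attrs", []):
            triples.append((attr["name"], attr["attrname"], attr["attrvalue"]))
    return triples

sample = {
    "name": "陈绮贞",
    "messages": [
        {"message": "你听过陈绮贞的歌吗?"},
        {"message": "听过,代表作是《旅行的意义》。",
         "attrs": [{"name": "陈绮贞", "attrname": "代表作品",
                    "attrvalue": "旅行的意义"}]},
    ],
}
print(referenced_triples(sample))  # [('陈绮贞', '代表作品', '旅行的意义')]
```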
### Data Splits
The conversation dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | train | validation | test |
|--------|------:|-----------:|-----:|
| travel | 1200 | 150 | 150 |
| film | 1200 | 150 | 150 |
| music | 1200 | 150 | 150 |
| all | 3600 | 450 | 450 |
The knowledge base dataset has only a train split, with the following sizes:
| | train |
|--------|------:|
| travel | 1154 |
| film | 8090 |
| music | 4441 |
| all | 13685 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache License 2.0
### Citation Information
```
@inproceedings{zhou-etal-2020-kdconv,
title = "{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation",
author = "Zhou, Hao and
Zheng, Chujie and
Huang, Kaili and
Huang, Minlie and
Zhu, Xiaoyan",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.635",
doi = "10.18653/v1/2020.acl-main.635",
pages = "7098--7108",
}
```
### Contributions
Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset. |
kde4 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- as
- ast
- be
- bg
- bn
- br
- ca
- crh
- cs
- csb
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- ha
- he
- hi
- hne
- hr
- hsb
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- lb
- lt
- lv
- mai
- mk
- ml
- mr
- ms
- mt
- nb
- nds
- ne
- nl
- nn
- nso
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- rw
- se
- si
- sk
- sl
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- uz
- vi
- wa
- xh
- zh
language_bcp47:
- bn-IN
- en-GB
- pt-BR
- zh-CN
- zh-HK
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: KDE4
dataset_info:
- config_name: fi-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 8845933
num_examples: 101593
download_size: 2471355
dataset_size: 8845933
- config_name: it-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 8827049
num_examples: 109003
download_size: 2389051
dataset_size: 8827049
- config_name: nl-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 22294586
num_examples: 188454
download_size: 6203460
dataset_size: 22294586
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 27132585
num_examples: 220566
download_size: 7622662
dataset_size: 27132585
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 25650409
num_examples: 210173
download_size: 7049364
dataset_size: 25650409
---
# Dataset Card for KDE4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/KDE4.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair which isn't part of the config, you only need to specify the language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/KDE4.php
E.g.:

```python
from datasets import load_dataset

dataset = load_dataset("kde4", lang1="en", lang2="nl")
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
kelm | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: kelm
pretty_name: Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)
tags:
- data-to-text-generation
dataset_info:
features:
- name: triple
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1343187306
num_examples: 6371131
- name: validation
num_bytes: 167790917
num_examples: 796471
- name: test
num_bytes: 167921750
num_examples: 796493
download_size: 1631259869
dataset_size: 1678899973
---
# Dataset Card for Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/KELM-corpus
- **Repository:** https://github.com/google-research-datasets/KELM-corpus
- **Paper:** https://arxiv.org/abs/2010.12688
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into
a natural language sentence(s). This dataset consists of English KG data converted into paired natural language text.
The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations.
### Supported Tasks and Leaderboards
The intended task is data-to-text generation, taking in a knowledge graph tuple and generating a natural language
representation from it. Specifically, the data is in the format the authors used to train a seq2seq language model
with the tuples concatenated into a single sequence.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance consists of one KG triple paired with corresponding natural language.
### Data Fields
- `triple`: Wikipedia triples of the form `<subject> <relation> <object>` where some subjects have multiple
relations, e.g. `<subject> <relation1> <object1> <relation2> <object2> <relation3> <object3>`. For more details on
how these relations are grouped, please refer to the paper.
- `sentence`: The corresponding Wikipedia sentence.
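For intuition, the linearization of a subject with multiple relations into a single `triple` string can be sketched as follows (hypothetical helper with made-up entity names; the paper's exact serialization and delimiters may differ):

```python
def linearize(subject, relation_objects):
    """Concatenate (relation, object) pairs after the subject, roughly
    matching the `<subject> <relation1> <object1> <relation2> <object2>` format."""
    parts = [subject]
    for relation, obj in relation_objects:
        parts.extend([relation, obj])
    return " ".join(parts)

print(linearize("John Doe", [("occupation", "engineer"), ("birth year", "1960")]))
# John Doe occupation engineer birth year 1960
```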
### Data Splits
The dataset includes a pre-determined train, validation, and test split.
## Dataset Creation
### Curation Rationale
The goal of the dataset's curation and the associated modeling work discussed in the paper is to be able to generate
natural text from a knowledge graph.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The data is sourced from English Wikipedia and its associated knowledge graph.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> Wikipedia has documented ideological, gender, and racial biases in its text. While the KELM corpus may still contain some of these biases, certain types of biases may be reduced.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset has been released under the [CC BY-SA 2.0 license](https://creativecommons.org/licenses/by-sa/2.0/).
### Citation Information
```
@misc{agarwal2020large,
title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training},
author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou},
year={2020},
eprint={2010.12688},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
kilt_tasks | ---
annotations_creators:
- crowdsourced
- found
- machine-generated
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
source_datasets:
- extended|natural_questions
- extended|other-aidayago
- extended|other-fever
- extended|other-hotpotqa
- extended|other-trex
- extended|other-triviaqa
- extended|other-wizardsofwikipedia
- extended|other-wned-cweb
- extended|other-wned-wiki
- extended|other-zero-shot-re
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- text-retrieval
- text2text-generation
task_ids:
- abstractive-qa
- dialogue-modeling
- document-retrieval
- entity-linking-retrieval
- extractive-qa
- fact-checking
- fact-checking-retrieval
- open-domain-abstractive-qa
- open-domain-qa
- slot-filling
paperswithcode_id: kilt
pretty_name: KILT
configs:
- aidayago2
- cweb
- eli5
- fever
- hotpotqa
- nq
- structured_zeroshot
- trex
- triviaqa_support_only
- wned
- wow
dataset_info:
- config_name: triviaqa_support_only
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 72024147
num_examples: 61844
- name: validation
num_bytes: 6824774
num_examples: 5359
- name: test
num_bytes: 341964
num_examples: 6586
download_size: 111546348
dataset_size: 79190885
- config_name: fever
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 23941622
num_examples: 104966
- name: validation
num_bytes: 3168503
num_examples: 10444
- name: test
num_bytes: 1042660
num_examples: 10100
download_size: 45954548
dataset_size: 28152785
- config_name: aidayago2
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 68944642
num_examples: 18395
- name: validation
num_bytes: 20743548
num_examples: 4784
- name: test
num_bytes: 14211859
num_examples: 4463
download_size: 105637528
dataset_size: 103900049
- config_name: wned
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 12659894
num_examples: 3396
- name: test
num_bytes: 13082096
num_examples: 3376
download_size: 26163472
dataset_size: 25741990
- config_name: cweb
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 89819628
num_examples: 5599
- name: test
num_bytes: 99209665
num_examples: 5543
download_size: 190444736
dataset_size: 189029293
- config_name: trex
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 1190269126
num_examples: 2284168
- name: validation
num_bytes: 2573820
num_examples: 5000
- name: test
num_bytes: 758742
num_examples: 5000
download_size: 1757029516
dataset_size: 1193601688
- config_name: structured_zeroshot
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 47171201
num_examples: 147909
- name: validation
num_bytes: 1612499
num_examples: 3724
- name: test
num_bytes: 1141537
num_examples: 4966
download_size: 74927220
dataset_size: 49925237
- config_name: nq
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 30388752
num_examples: 87372
- name: validation
num_bytes: 6190493
num_examples: 2837
- name: test
num_bytes: 334178
num_examples: 1444
download_size: 60166499
dataset_size: 36913423
- config_name: hotpotqa
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 33598679
num_examples: 88869
- name: validation
num_bytes: 2371638
num_examples: 5600
- name: test
num_bytes: 888476
num_examples: 5569
download_size: 57516638
dataset_size: 36858793
- config_name: eli5
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 525586490
num_examples: 272634
- name: validation
num_bytes: 13860153
num_examples: 1507
- name: test
num_bytes: 108108
num_examples: 600
download_size: 562498660
dataset_size: 539554751
- config_name: wow
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 41873570
num_examples: 63734
- name: validation
num_bytes: 2022128
num_examples: 3054
- name: test
num_bytes: 1340818
num_examples: 2944
download_size: 52647339
dataset_size: 45236516
---
# Dataset Card for KILT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai.facebook.com/tools/kilt/
- **Repository:** https://github.com/facebookresearch/KILT
- **Paper:** https://arxiv.org/abs/2009.02252
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/689/leaderboard/
- **Point of Contact:** [Needs More Information]
### Dataset Summary
KILT has been built from 11 datasets representing 5 types of tasks:
- Fact-checking
- Entity linking
- Slot filling
- Open domain QA
- Dialog generation
All these datasets have been grounded in a single pre-processed Wikipedia dump, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning with minimal effort. KILT also provides tools to analyze and understand the predictions made by models, as well as the evidence they provide for their predictions.
#### Loading the KILT knowledge source and task data
The original KILT [release](https://github.com/facebookresearch/KILT) only provides question IDs for the TriviaQA task. Using the full dataset requires mapping those back to the TriviaQA questions, which can be done as follows:
```python
from datasets import load_dataset
# Get the pre-processed Wikipedia knowledge source for KILT
kilt_wiki = load_dataset("kilt_wikipedia")
# Get the KILT task datasets
kilt_triviaqa = load_dataset("kilt_tasks", name="triviaqa_support_only")
# Most tasks in KILT already have all required data, but KILT-TriviaQA
# only provides the question IDs, not the questions themselves.
# Thankfully, we can get the original TriviaQA data with:
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# The KILT IDs can then be mapped to the TriviaQA questions with:
triviaqa_map = {}
def add_missing_data(x, trivia_qa_subset, triviaqa_map):
i = triviaqa_map[x['id']]
x['input'] = trivia_qa_subset[i]['question']
x['output']['original_answer'] = trivia_qa_subset[i]['answer']['value']
return x
for k in ['train', 'validation', 'test']:
triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(trivia_qa[k]['question_id'])])
kilt_triviaqa[k] = kilt_triviaqa[k].filter(lambda x: x['id'] in triviaqa_map)
kilt_triviaqa[k] = kilt_triviaqa[k].map(add_missing_data, fn_kwargs=dict(trivia_qa_subset=trivia_qa[k], triviaqa_map=triviaqa_map))
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
### Data Instances
An example of open-domain QA from the Natural Questions `nq` configuration looks as follows:
```
{'id': '-5004457603684974952',
'input': 'who is playing the halftime show at super bowl 2016',
'meta': {'left_context': '',
'mention': '',
'obj_surface': [],
'partial_evidence': [],
'right_context': '',
'sub_surface': [],
'subj_aliases': [],
'template_questions': []},
'output': [{'answer': 'Coldplay',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 186,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 178,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Beyoncé',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 224,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 217,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Bruno Mars',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 239,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 229,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': 'British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': 0.9657992720603943,
'end_character': 341,
'end_paragraph_id': 1,
'meta': {'annotation_id': '2430977867500315580',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': 'NONE'},
'section': 'Section::::Abstract.',
'start_character': 0,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': -1.0,
'end_character': -1,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': ['It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who previously had headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively.',
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars",
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars,"],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': -1,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]}]}
```
### Data Fields
Examples from all configurations have the following features:
- `id`: a `string` feature, a unique identifier for the example.
- `input`: a `string` feature representing the query.
- `output`: a `list` of features each containing information for an answer, made up of:
  - `answer`: a `string` feature representing a possible answer.
  - `provenance`: a `list` of features representing Wikipedia passages that support the `answer`, denoted by:
    - `title`: a `string` feature, the title of the Wikipedia article the passage was retrieved from.
    - `section`: a `string` feature, the title of the section in the Wikipedia article.
    - `wikipedia_id`: a `string` feature, a unique identifier for the Wikipedia article.
    - `start_character`: an `int32` feature.
    - `start_paragraph_id`: an `int32` feature.
    - `end_character`: an `int32` feature.
    - `end_paragraph_id`: an `int32` feature.
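A common access pattern over these fields is to pair each non-empty answer with the titles of its supporting passages. The sketch below is a minimal, hypothetical helper operating on the `output` structure shown in the instance above; the abbreviated example dictionary only includes the keys the helper touches.

```python
# Minimal sketch: collect non-empty answers and the Wikipedia titles of their
# supporting passages from an example's `output` list.
def answers_with_provenance(output):
    results = []
    for entry in output:
        if not entry["answer"]:  # skip provenance-only entries with empty answers
            continue
        titles = [p["title"] for p in entry.get("provenance", [])]
        results.append((entry["answer"], titles))
    return results

# Abbreviated `output` list, keeping only the keys used above.
example_output = [
    {"answer": "Coldplay",
     "provenance": [{"title": "Super Bowl 50 halftime show"}]},
    {"answer": "",
     "provenance": [{"title": "Super Bowl 50 halftime show"}]},
]
print(answers_with_provenance(example_output))
# [('Coldplay', ['Super Bowl 50 halftime show'])]
```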
### Data Splits
The configurations have the following splits:
| | Train | Validation | Test |
| ----------- | ----------- | ----------- | ----------- |
| triviaqa_support_only | 61844 | 5359 | 6586 |
| fever | 104966 | 10444 | 10100 |
| aidayago2 | 18395 | 4784 | 4463 |
| wned | | 3396 | 3376 |
| cweb | | 5599 | 5543 |
| trex | 2284168 | 5000 | 5000 |
| structured_zeroshot | 147909 | 3724 | 4966 |
| nq | 87372 | 2837 | 1444 |
| hotpotqa | 88869 | 5600 | 5569 |
| eli5 | 272634 | 1507 | 600 |
| wow | 63734 | 3054 | 2944 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{kilt_tasks,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick S. H. Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vladimir Karpukhin and
Jean Maillard and
Vassilis Plachouras and
Tim Rockt{\"{a}}schel and
Sebastian Riedel},
editor = {Kristina Toutanova and
Anna Rumshisky and
Luke Zettlemoyer and
Dilek Hakkani{-}T{\"{u}}r and
Iz Beltagy and
Steven Bethard and
Ryan Cotterell and
Tanmoy Chakraborty and
Yichao Zhou},
title = {{KILT:} a Benchmark for Knowledge Intensive Language Tasks},
booktitle = {Proceedings of the 2021 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies,
{NAACL-HLT} 2021, Online, June 6-11, 2021},
pages = {2523--2544},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://www.aclweb.org/anthology/2021.naacl-main.200/}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
kilt_wikipedia | ---
paperswithcode_id: null
pretty_name: KiltWikipedia
dataset_info:
features:
- name: kilt_id
dtype: string
- name: wikipedia_id
dtype: string
- name: wikipedia_title
dtype: string
- name: text
sequence:
- name: paragraph
dtype: string
- name: anchors
sequence:
- name: paragraph_id
dtype: int32
- name: start
dtype: int32
- name: end
dtype: int32
- name: text
dtype: string
- name: href
dtype: string
- name: wikipedia_title
dtype: string
- name: wikipedia_id
dtype: string
- name: categories
dtype: string
- name: wikidata_info
struct:
- name: description
dtype: string
- name: enwikiquote_title
dtype: string
- name: wikidata_id
dtype: string
- name: wikidata_label
dtype: string
- name: wikipedia_title
dtype: string
- name: aliases
sequence:
- name: alias
dtype: string
- name: history
struct:
- name: pageid
dtype: int32
- name: parentid
dtype: int32
- name: revid
dtype: int32
- name: pre_dump
dtype: bool
- name: timestamp
dtype: string
- name: url
dtype: string
config_name: '2019-08-01'
splits:
- name: full
num_bytes: 29372535718
num_examples: 5903530
download_size: 37318876722
dataset_size: 29372535718
---
# Dataset Card for "kilt_wikipedia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/KILT](https://github.com/facebookresearch/KILT)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 37.32 GB
- **Size of the generated dataset:** 29.37 GB
- **Total amount of disk used:** 66.69 GB
### Dataset Summary
KILT-Wikipedia: Wikipedia pre-processed for KILT.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2019-08-01
- **Size of downloaded dataset files:** 37.32 GB
- **Size of the generated dataset:** 29.37 GB
- **Total amount of disk used:** 66.69 GB
An example of 'full' looks as follows.
```
{
"anchors": {
"end": [],
"href": [],
"paragraph_id": [],
"start": [],
"text": [],
"wikipedia_id": [],
"wikipedia_title": []
},
"categories": "",
"history": {
"pageid": 0,
"parentid": 0,
"pre_dump": true,
"revid": 0,
"timestamp": "",
"url": ""
},
"kilt_id": "",
"text": {
"paragraph": []
},
"wikidata_info": {
"aliases": {
"alias": []
},
"description": "",
"enwikiquote_title": "",
"wikidata_id": "",
"wikidata_label": "",
"wikipedia_title": ""
},
"wikipedia_id": "",
"wikipedia_title": ""
}
```
### Data Fields
The data fields are the same among all splits.
#### 2019-08-01
- `kilt_id`: a `string` feature.
- `wikipedia_id`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `text`: a dictionary feature containing:
- `paragraph`: a `string` feature.
- `anchors`: a dictionary feature containing:
- `paragraph_id`: a `int32` feature.
- `start`: a `int32` feature.
- `end`: a `int32` feature.
- `text`: a `string` feature.
- `href`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `wikipedia_id`: a `string` feature.
- `categories`: a `string` feature.
- `description`: a `string` feature.
- `enwikiquote_title`: a `string` feature.
- `wikidata_id`: a `string` feature.
- `wikidata_label`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `aliases`: a dictionary feature containing:
- `alias`: a `string` feature.
- `pageid`: a `int32` feature.
- `parentid`: a `int32` feature.
- `revid`: a `int32` feature.
- `pre_dump`: a `bool` feature.
- `timestamp`: a `string` feature.
- `url`: a `string` feature.
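The `start` and `end` values in `anchors` appear to be character offsets into the corresponding paragraph's text (as in the KILT repository), so an anchor's `text` can be recovered by slicing. A minimal sketch with hypothetical values, not a record from the actual dump:

```python
# Each anchor's start/end appear to be character offsets into its source
# paragraph, so slicing the paragraph should reproduce the anchor text.
# Toy values for illustration only -- not from the real dump.
paragraph = "Paris is the capital of France."
anchor = {
    "paragraph_id": 1,
    "start": 24,
    "end": 30,
    "text": "France",
    "href": "France",
}

assert paragraph[anchor["start"]:anchor["end"]] == anchor["text"]
print("anchor offsets are consistent")
```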
### Data Splits
| name | full |
|----------|------:|
|2019-08-01|5903530|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{fb_kilt,
 author = {Fabio Petroni and
           Aleksandra Piktus and
           Angela Fan and
           Patrick Lewis and
           Majid Yazdani and
           Nicola De Cao and
           James Thorne and
           Yacine Jernite and
           Vassilis Plachouras and
           Tim Rockt{\"a}schel and
           Sebastian Riedel},
 title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks},
 journal = {CoRR},
 archivePrefix = {arXiv},
 year = {2020},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
kinnews_kirnews | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- rn
- rw
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- topic-classification
paperswithcode_id: kinnews-and-kirnews
pretty_name: KinnewsKirnews
configs:
- kinnews_cleaned
- kinnews_raw
- kirnews_cleaned
- kirnews_raw
dataset_info:
- config_name: kinnews_raw
features:
- name: label
dtype:
class_label:
names:
'0': politics
'1': sport
'2': economy
'3': health
'4': entertainment
'5': history
'6': technology
'7': tourism
'8': culture
'9': fashion
'10': religion
'11': environment
'12': education
'13': relationship
- name: kin_label
dtype: string
- name: en_label
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 38316546
num_examples: 17014
- name: test
num_bytes: 11971938
num_examples: 4254
download_size: 27377755
dataset_size: 50288484
- config_name: kinnews_cleaned
features:
- name: label
dtype:
class_label:
names:
'0': politics
'1': sport
'2': economy
'3': health
'4': entertainment
'5': history
'6': technology
'7': tourism
'8': culture
'9': fashion
'10': religion
'11': environment
'12': education
'13': relationship
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 32780382
num_examples: 17014
- name: test
num_bytes: 8217453
num_examples: 4254
download_size: 27377755
dataset_size: 40997835
- config_name: kirnews_raw
features:
- name: label
dtype:
class_label:
names:
'0': politics
'1': sport
'2': economy
'3': health
'4': entertainment
'5': history
'6': technology
'7': tourism
'8': culture
'9': fashion
'10': religion
'11': environment
'12': education
'13': relationship
- name: kir_label
dtype: string
- name: en_label
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 7343223
num_examples: 3689
- name: test
num_bytes: 2499189
num_examples: 923
download_size: 5186111
dataset_size: 9842412
- config_name: kirnews_cleaned
features:
- name: label
dtype:
class_label:
names:
'0': politics
'1': sport
'2': economy
'3': health
'4': entertainment
'5': history
'6': technology
'7': tourism
'8': culture
'9': fashion
'10': religion
'11': environment
'12': education
'13': relationship
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 6629767
num_examples: 3689
- name: test
num_bytes: 1570745
num_examples: 923
download_size: 5186111
dataset_size: 8200512
---
# Dataset Card for kinnews_kirnews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed]
- **Repository:** https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus
- **Paper:** [KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for Kinyarwanda and Kirundi](https://arxiv.org/abs/2010.12174)
- **Leaderboard:** NA
- **Point of Contact:** [Rubungo Andre Niyongabo1](mailto:niyongabor.andre@std.uestc.edu.cn)
### Dataset Summary
KINNEWS and KIRNEWS are news classification datasets for Kinyarwanda and Kirundi, respectively. Both were collected from Rwandan and Burundian news websites and newspapers for low-resource monolingual and cross-lingual multiclass classification tasks.
### Supported Tasks and Leaderboards
This dataset can be used for text classification of news articles in the Kinyarwanda and Kirundi languages. Each news article can be classified into one of 14 possible classes:
- politics
- sport
- economy
- health
- entertainment
- history
- technology
- tourism
- culture
- fashion
- religion
- environment
- education
- relationship
### Languages
Kinyarwanda and Kirundi
## Dataset Structure
### Data Instances
Here is an example from the dataset:
| Field | Value |
| ----- | ----------- |
| label | 1 |
| kin_label/kir_label | 'inkino' |
| url | 'https://nawe.bi/Primus-Ligue-Imirwi-igiye-guhura-gute-ku-ndwi-ya-6-y-ihiganwa.html' |
| title | 'Primus Ligue\xa0: Imirwi igiye guhura gute ku ndwi ya 6 y’ihiganwa\xa0?'|
| content | ' Inkino zitegekanijwe kuruno wa gatandatu igenekerezo rya 14 Nyakanga umwaka wa 2019...'|
| en_label| 'sport'|
### Data Fields
The raw version of the data for the Kinyarwanda language consists of these fields:
- label: The category of the news article
- kin_label/kir_label: The associated label in Kinyarwanda/Kirundi language
- en_label: The associated label in English
- url: The URL of the news article
- title: The title of the news article
- content: The content of the news article
The cleaned version contains only the `label`, `title`, and `content` fields.
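Because the cleaned configs drop `en_label`, it can be handy to map the integer `label` back to an English class name. A minimal sketch, using the class ordering from the `class_label` definition in this card's metadata:

```python
# English class names in class_label id order (0-13), as declared
# in the dataset metadata above.
CLASS_NAMES = [
    "politics", "sport", "economy", "health", "entertainment",
    "history", "technology", "tourism", "culture", "fashion",
    "religion", "environment", "education", "relationship",
]

def label_to_name(label: int) -> str:
    """Map a 0-13 integer label to its English class name."""
    return CLASS_NAMES[label]

print(label_to_name(1))  # sport
```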
### Data Splits
| Lang | Train | Test |
|---| ----- | ---- |
| Kinyarwanda Raw | 17014 | 4254 |
| Kinyarwanda Clean | 17014 | 4254 |
| Kirundi Raw | 3689 | 923 |
| Kirundi Clean | 3689 | 923 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{niyongabo2020kinnews,
title={KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for Kinyarwanda and Kirundi},
author={Niyongabo, Rubungo Andre and Qu, Hong and Kreutzer, Julia and Huang, Li},
journal={arXiv preprint arXiv:2010.12174},
year={2020}
}
```
### Contributions
Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset. |
klue | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- token-classification
task_ids:
- extractive-qa
- named-entity-recognition
- natural-language-inference
- parsing
- semantic-similarity-scoring
- text-scoring
- topic-classification
paperswithcode_id: klue
pretty_name: KLUE
configs:
- dp
- mrc
- ner
- nli
- re
- sts
- wos
- ynat
tags:
- relation-extraction
dataset_info:
- config_name: ynat
features:
- name: guid
dtype: string
- name: title
dtype: string
- name: label
dtype:
class_label:
names:
'0': IT과학
'1': 경제
'2': 사회
'3': 생활문화
'4': 세계
'5': 스포츠
'6': 정치
- name: url
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 10109664
num_examples: 45678
- name: validation
num_bytes: 2039197
num_examples: 9107
download_size: 4932555
dataset_size: 12148861
- config_name: sts
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
struct:
- name: label
dtype: float64
- name: real-label
dtype: float64
- name: binary-label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2832921
num_examples: 11668
- name: validation
num_bytes: 122657
num_examples: 519
download_size: 1349875
dataset_size: 2955578
- config_name: nli
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 5719930
num_examples: 24998
- name: validation
num_bytes: 673276
num_examples: 3000
download_size: 1257374
dataset_size: 6393206
- config_name: ner
features:
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-DT
'1': I-DT
'2': B-LC
'3': I-LC
'4': B-OG
'5': I-OG
'6': B-PS
'7': I-PS
'8': B-QT
'9': I-QT
'10': B-TI
'11': I-TI
'12': O
splits:
- name: train
num_bytes: 19891953
num_examples: 21008
- name: validation
num_bytes: 4937579
num_examples: 5000
download_size: 4308644
dataset_size: 24829532
- config_name: re
features:
- name: guid
dtype: string
- name: sentence
dtype: string
- name: subject_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: object_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: label
dtype:
class_label:
names:
'0': no_relation
'1': org:dissolved
'2': org:founded
'3': org:place_of_headquarters
'4': org:alternate_names
'5': org:member_of
'6': org:members
'7': org:political/religious_affiliation
'8': org:product
'9': org:founded_by
'10': org:top_members/employees
'11': org:number_of_employees/members
'12': per:date_of_birth
'13': per:date_of_death
'14': per:place_of_birth
'15': per:place_of_death
'16': per:place_of_residence
'17': per:origin
'18': per:employee_of
'19': per:schools_attended
'20': per:alternate_names
'21': per:parents
'22': per:children
'23': per:siblings
'24': per:spouse
'25': per:other_family
'26': per:colleagues
'27': per:product
'28': per:religion
'29': per:title
- name: source
dtype: string
splits:
- name: train
num_bytes: 11145538
num_examples: 32470
- name: validation
num_bytes: 2559300
num_examples: 7765
download_size: 5669259
dataset_size: 13704838
- config_name: dp
features:
- name: sentence
dtype: string
- name: index
list: int32
- name: word_form
list: string
- name: lemma
list: string
- name: pos
list: string
- name: head
list: int32
- name: deprel
list: string
splits:
- name: train
num_bytes: 7900009
num_examples: 10000
- name: validation
num_bytes: 1557506
num_examples: 2000
download_size: 2033461
dataset_size: 9457515
- config_name: mrc
features:
- name: title
dtype: string
- name: context
dtype: string
- name: news_category
dtype: string
- name: source
dtype: string
- name: guid
dtype: string
- name: is_impossible
dtype: bool
- name: question_type
dtype: int32
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 46505665
num_examples: 17554
- name: validation
num_bytes: 15583053
num_examples: 5841
download_size: 19218422
dataset_size: 62088718
- config_name: wos
features:
- name: guid
dtype: string
- name: domains
list: string
- name: dialogue
list:
- name: role
dtype: string
- name: text
dtype: string
- name: state
list: string
splits:
- name: train
num_bytes: 26677002
num_examples: 8000
- name: validation
num_bytes: 3488943
num_examples: 1000
download_size: 4785657
dataset_size: 30165945
---
# Dataset Card for KLUE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://klue-benchmark.com/
- **Repository:** https://github.com/KLUE-benchmark/KLUE
- **Paper:** [KLUE: Korean Language Understanding Evaluation](https://arxiv.org/abs/2105.09680)
- **Leaderboard:** [Leaderboard](https://klue-benchmark.com/leaderboard)
- **Point of Contact:** https://github.com/KLUE-benchmark/KLUE/issues
### Dataset Summary
KLUE is a collection of 8 tasks to evaluate the natural language understanding capability of Korean language models. We deliberately selected these 8 tasks: Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking.
### Supported Tasks and Leaderboards
Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### ynat
An example of 'train' looks as follows.
```
{'date': '2016.06.30. 오전 10:36',
'guid': 'ynat-v1_train_00000',
'label': 3,
'title': '유튜브 내달 2일까지 크리에이터 지원 공간 운영',
'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008508947'}
```
#### sts
An example of 'train' looks as follows.
```
{'guid': 'klue-sts-v1_train_00000',
'labels': {'label': 3.7, 'real-label': 3.714285714285714, 'binary-label': 1},
'sentence1': '숙소 위치는 찾기 쉽고 일반적인 한국의 반지하 숙소입니다.',
'sentence2': '숙박시설의 위치는 쉽게 찾을 수 있고 한국의 대표적인 반지하 숙박시설입니다.',
'source': 'airbnb-rtt'}
```
#### nli
An example of 'train' looks as follows.
```
{'guid': 'klue-nli-v1_train_00000',
'hypothesis': '힛걸 진심 최고로 멋지다.',
'label': 0,
'premise': '힛걸 진심 최고다 그 어떤 히어로보다 멋지다',
'source': 'NSMC'}
```
#### ner
An example of 'train' looks as follows.
```
{'tokens': ['특', '히', ' ', '영', '동', '고', '속', '도', '로', ' ', '강', '릉', ' ', '방', '향', ' ', '문', '막', '휴', '게', '소', '에', '서', ' ', '만', '종', '분', '기', '점', '까', '지', ' ', '5', '㎞', ' ', '구', '간', '에', '는', ' ', '승', '용', '차', ' ', '전', '용', ' ', '임', '시', ' ', '갓', '길', '차', '로', '제', '를', ' ', '운', '영', '하', '기', '로', ' ', '했', '다', '.'],
'ner_tags': [12, 12, 12, 2, 3, 3, 3, 3, 3, 12, 2, 3, 12, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 8, 9, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
'sentence': '특히 <영동고속도로:LC> <강릉:LC> 방향 <문막휴게소:LC>에서 <만종분기점:LC>까지 <5㎞:QT> 구간에는 승용차 전용 임시 갓길차로제를 운영하기로 했다.'}
```
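Since tokenization is character-level, entity surface forms can be rebuilt by joining the characters inside each BIO span. A minimal sketch (the toy tokens at the bottom are illustrative, not a real record):

```python
# Tag names in class_label id order; B-* opens a span, I-* continues it,
# and id 12 is O.
TAG_NAMES = ["B-DT", "I-DT", "B-LC", "I-LC", "B-OG", "I-OG",
             "B-PS", "I-PS", "B-QT", "I-QT", "B-TI", "I-TI", "O"]

def extract_entities(tokens, ner_tags):
    """Return (surface_form, entity_type) pairs from character-level BIO tags."""
    entities, current, current_type = [], [], None
    for tok, tag_id in zip(tokens, ner_tags):
        tag = TAG_NAMES[tag_id]
        if tag.startswith("B-"):
            if current:
                entities.append(("".join(current), current_type))
            current, current_type = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:  # "O" (or a stray I-) closes any open span
            if current:
                entities.append(("".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append(("".join(current), current_type))
    return entities

# Toy character-level input, not a dataset record:
print(extract_entities(["강", "릉", " ", "방", "향"], [2, 3, 12, 12, 12]))
# [('강릉', 'LC')]
```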
#### re
An example of 'train' looks as follows.
```
{'guid': 'klue-re-v1_train_00000',
'label': 0,
'object_entity': {'word': '조지 해리슨',
'start_idx': 13,
'end_idx': 18,
'type': 'PER'},
'sentence': '〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다.',
'source': 'wikipedia',
'subject_entity': {'word': '비틀즈',
'start_idx': 24,
'end_idx': 26,
'type': 'ORG'}}
```
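The `start_idx`/`end_idx` values appear to be inclusive character offsets into `sentence`, so each entity's surface form is `sentence[start_idx : end_idx + 1]`. A quick check against the example above:

```python
# Verify that entity offsets (inclusive) index their surface form
# in the sentence from the 'train' example above.
sentence = "〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다."
object_entity = {"word": "조지 해리슨", "start_idx": 13, "end_idx": 18, "type": "PER"}
subject_entity = {"word": "비틀즈", "start_idx": 24, "end_idx": 26, "type": "ORG"}

for ent in (object_entity, subject_entity):
    span = sentence[ent["start_idx"] : ent["end_idx"] + 1]
    assert span == ent["word"], (span, ent["word"])
print("entity offsets are consistent")
```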
#### dp
An example of 'train' looks as follows.
```
{'deprel': ['NP', 'NP_OBJ', 'VP', 'NP', 'NP_SBJ', 'NP', 'NP_MOD', 'NP_CNJ', 'NP_CNJ', 'NP', 'NP', 'NP_OBJ', 'AP', 'VP'],
'head': [2, 3, 14, 5, 14, 7, 10, 10, 10, 11, 12, 14, 14, 0],
'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
'lemma': ['해당', '그림 을', '보 면', '디즈니', '공주 들 이', '브리트니', '스피어스 의', '앨범 이나', '뮤직 비디오 ,', '화보', '속', '모습 을', '똑같이', '재연 하 였 다 .'],
'pos': ['NNG', 'NNG+JKO', 'VV+EC', 'NNP', 'NNG+XSN+JKS', 'NNP', 'NNP+JKG', 'NNG+JC', 'NNG+NNG+SP', 'NNG', 'NNG', 'NNG+JKO', 'MAG', 'NNG+XSA+EP+EF+SF'],
'sentence': '해당 그림을 보면 디즈니 공주들이 브리트니 스피어스의 앨범이나 뮤직비디오, 화보 속 모습을 똑같이 재연했다.',
'word_form': ['해당', '그림을', '보면', '디즈니', '공주들이', '브리트니', '스피어스의', '앨범이나', '뮤직비디오,', '화보', '속', '모습을', '똑같이', '재연했다.']}
```
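The parallel lists encode one dependency arc per word: `head` holds the 1-based index of each word's governor, with 0 marking the root. A trimmed sketch with toy values (not a full KLUE-DP record):

```python
# Pair each word with its dependency relation and governor.
word_form = ["해당", "그림을", "보면"]
head = [2, 3, 0]                # toy values; 1-based, 0 = root
deprel = ["NP", "NP_OBJ", "VP"]

arcs = []
for word, h, rel in zip(word_form, head, deprel):
    governor = word_form[h - 1] if h > 0 else "ROOT"
    arcs.append((word, rel, governor))
print(arcs)
# [('해당', 'NP', '그림을'), ('그림을', 'NP_OBJ', '보면'), ('보면', 'VP', 'ROOT')]
```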
#### mrc
An example of 'train' looks as follows.
```
{'answers': {'answer_start': [478, 478], 'text': ['한 달가량', '한 달']},
'context': '올여름 장마가 17일 제주도에서 시작됐다. 서울 등 중부지방은 예년보다 사나흘 정도 늦은 이달 말께 장마가 시작될 전망이다.17일 기상청에 따르면 제주도 남쪽 먼바다에 있는 장마전선의 영향으로 이날 제주도 산간 및 내륙지역에 호우주의보가 내려지면서 곳곳에 100㎜에 육박하는 많은 비가 내렸다. 제주의 장마는 평년보다 2~3일, 지난해보다는 하루 일찍 시작됐다. 장마는 고온다습한 북태평양 기단과 한랭 습윤한 오호츠크해 기단이 만나 형성되는 장마전선에서 내리는 비를 뜻한다.장마전선은 18일 제주도 먼 남쪽 해상으로 내려갔다가 20일께 다시 북상해 전남 남해안까지 영향을 줄 것으로 보인다. 이에 따라 20~21일 남부지방에도 예년보다 사흘 정도 장마가 일찍 찾아올 전망이다. 그러나 장마전선을 밀어올리는 북태평양 고기압 세력이 약해 서울 등 중부지방은 평년보다 사나흘가량 늦은 이달 말부터 장마가 시작될 것이라는 게 기상청의 설명이다. 장마전선은 이후 한 달가량 한반도 중남부를 오르내리며 곳곳에 비를 뿌릴 전망이다. 최근 30년간 평균치에 따르면 중부지방의 장마 시작일은 6월24~25일이었으며 장마기간은 32일, 강수일수는 17.2일이었다.기상청은 올해 장마기간의 평균 강수량이 350~400㎜로 평년과 비슷하거나 적을 것으로 내다봤다. 브라질 월드컵 한국과 러시아의 경기가 열리는 18일 오전 서울은 대체로 구름이 많이 끼지만 비는 오지 않을 것으로 예상돼 거리 응원에는 지장이 없을 전망이다.',
'guid': 'klue-mrc-v1_train_12759',
'is_impossible': False,
'news_category': '종합',
'question': '북태평양 기단과 오호츠크해 기단이 만나 국내에 머무르는 기간은?',
'question_type': 1,
'source': 'hankyung',
'title': '제주도 장마 시작 … 중부는 이달 말부터'}
```
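As in SQuAD-style datasets, each `answer_start` should locate its answer `text` inside `context`, which is easy to sanity-check by slicing. A toy sketch with a shortened passage and hypothetical offsets:

```python
# Check that every answer offset indexes its answer text in the context.
context = "올여름 장마가 17일 제주도에서 시작됐다."   # shortened toy passage
answers = {"answer_start": [12], "text": ["제주도"]}

for start, text in zip(answers["answer_start"], answers["text"]):
    assert context[start : start + len(text)] == text
print("answer offsets are consistent")
```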
#### wos
An example of 'train' looks as follows.
```
{'dialogue': [{'role': 'user',
'text': '쇼핑을 하려는데 서울 서쪽에 있을까요?',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽']},
{'role': 'sys',
'text': '서울 서쪽에 쇼핑이 가능한 곳이라면 노량진 수산물 도매시장이 있습니다.',
'state': []},
{'role': 'user',
'text': '오 네 거기 주소 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '노량진 수산물 도매시장의 주소는 서울 동작구 93806입니다.', 'state': []},
{'role': 'user',
'text': '알려주시는김에 연락처랑 평점도 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '그럼. 연락처는 6182006591이고 평점은 4점입니다.', 'state': []},
{'role': 'user',
'text': '와 감사합니다.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '감사합니다.', 'state': []}],
'domains': ['관광'],
'guid': 'wos-v1_train_00001'}
```
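Note that the dialogue state accumulates on user turns while system turns carry an empty `state`, so the last user turn holds the dialogue's full goal. A trimmed sketch of extracting it:

```python
# Extract the final cumulative dialogue state (kept on user turns).
dialogue = [  # trimmed version of the example above
    {"role": "user", "text": "쇼핑을 하려는데 서울 서쪽에 있을까요?",
     "state": ["관광-종류-쇼핑", "관광-지역-서울 서쪽"]},
    {"role": "sys", "text": "노량진 수산물 도매시장이 있습니다.", "state": []},
    {"role": "user", "text": "와 감사합니다.",
     "state": ["관광-종류-쇼핑", "관광-지역-서울 서쪽", "관광-이름-노량진 수산물 도매시장"]},
]

final_state = [turn["state"] for turn in dialogue if turn["role"] == "user"][-1]
print(final_state)
```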
### Data Fields
#### ynat
+ `guid`: a `string` feature
+ `title`: a `string` feature
+ `label`: a classification label, with possible values `IT과학`(0), `경제`(1), `사회`(2), `생활문화`(3), `세계`(4), `스포츠`(5), `정치`(6)
+ `url`: a `string` feature
+ `date`: a `string` feature
#### sts
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `sentence1`: a `string` feature
+ `sentence2`: a `string` feature
+ `labels`: a dictionary feature containing
+ `label`: a `float64` feature
+ `real-label`: a `float64` feature
+ `binary-label`: a classification label, with possible values `negative`(0), `positive`(1)
#### nli
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `premise`: a `string` feature
+ `hypothesis`: a `string` feature
+ `label`: a classification label, with possible values `entailment`(0), `neutral`(1), `contradiction`(2)
#### ner
+ `sentence`: a `string` feature
+ `tokens`: a list of a `string` feature (tokenization is at character level)
+ `ner_tags`: a list of classification labels, with possible values including `B-DT`(0), `I-DT`(1),
`B-LC`(2), `I-LC`(3), `B-OG`(4), `I-OG`(5), `B-PS`(6), `I-PS`(7), `B-QT`(8), `I-QT`(9), `B-TI`(10),
`I-TI`(11), `O`(12)
#### re
+ `guid`: a `string` feature
+ `sentence`: a `string` feature
+ `subject_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `object_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `label`: a classification label, with possible values including `no_relation`(0), `org:dissolved`(1),
`org:founded`(2), `org:place_of_headquarters`(3), `org:alternate_names`(4), `org:member_of`(5),
`org:members`(6), `org:political/religious_affiliation`(7), `org:product`(8), `org:founded_by`(9),`org:top_members/employees`(10),
`org:number_of_employees/members`(11), `per:date_of_birth`(12), `per:date_of_death`(13), `per:place_of_birth`(14),
`per:place_of_death`(15), `per:place_of_residence`(16), `per:origin`(17), `per:employee_of`(18),
`per:schools_attended`(19), `per:alternate_names`(20), `per:parents`(21), `per:children`(22),
`per:siblings`(23), `per:spouse`(24), `per:other_family`(25), `per:colleagues`(26), `per:product`(27),
`per:religion`(28), `per:title`(29)
+ `source`: a `string` feature
#### dp
+ `sentence`: a `string` feature
+ `index`: a list of `int32` feature
+ `word_form`: a list of `string` feature
+ `lemma`: a list of `string` feature
+ `pos`: a list of `string` feature
+ `head`: a list of `int32` feature
+ `deprel`: a list of `string` feature
#### mrc
+ `title`: a `string` feature
+ `context`: a `string` feature
+ `news_category`: a `string` feature
+ `source`: a `string` feature
+ `guid`: a `string` feature
+ `is_impossible`: a `bool` feature
+ `question_type`: a `int32` feature
+ `question`: a `string` feature
+ `answers`: a dictionary feature containing
+ `answer_start`: a `int32` feature
+ `text`: a `string` feature
#### wos
+ `guid`: a `string` feature
+ `domains`: a list of `string` features
+ `dialogue`: a list of dictionary features containing
+ `role`: a `string` feature
+ `text`: a `string` feature
  + `state`: a list of `string` features
### Data Splits
#### ynat
You can see more details [here](https://klue-benchmark.com/tasks/66/data/description).
+ train: 45,678
+ validation: 9,107
#### sts
You can see more details [here](https://klue-benchmark.com/tasks/67/data/description).
+ train: 11,668
+ validation: 519
#### nli
You can see more details [here](https://klue-benchmark.com/tasks/68/data/description).
+ train: 24,998
+ validation: 3,000
#### ner
You can see more details [here](https://klue-benchmark.com/tasks/69/overview/description).
+ train: 21,008
+ validation: 5,000
#### re
You can see more details [here](https://klue-benchmark.com/tasks/70/overview/description).
+ train: 32,470
+ validation: 7,765
#### dp
You can see more details [here](https://klue-benchmark.com/tasks/71/data/description).
+ train: 10,000
+ validation: 2,000
#### mrc
You can see more details [here](https://klue-benchmark.com/tasks/72/overview/description).
+ train: 17,554
+ validation: 5,841
#### wos
You can see more details [here](https://klue-benchmark.com/tasks/73/overview/description).
+ train: 8,000
+ validation: 1,000
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jungwhank](https://github.com/jungwhank), [@bzantium](https://github.com/bzantium) for adding this dataset. |
kor_3i4k | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
pretty_name: 3i4K
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': fragment
'1': statement
'2': question
'3': command
'4': rhetorical question
'5': rhetorical command
'6': intonation-dependent utterance
- name: text
dtype: string
splits:
- name: train
num_bytes: 3102158
num_examples: 55134
- name: test
num_bytes: 344028
num_examples: 6121
download_size: 2956114
dataset_size: 3446186
---
# Dataset Card for 3i4K
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [3i4K](https://github.com/warnikchow/3i4k)
- **Repository:** [3i4K](https://github.com/warnikchow/3i4k)
- **Paper:** [Speech Intention Understanding in a Head-final Language: A Disambiguation Utilizing Intonation-dependency](https://arxiv.org/abs/1811.04231)
- **Point of Contact:** [Won Ik Cho](wicho@hi.snu.ac.kr)
### Dataset Summary
The 3i4K dataset consists of frequently used Korean words (from a corpus provided by the Seoul National University Speech Language Processing Lab) and manually created short questions and commands. The goal is to identify the speaker's intention from an utterance's transcript, using auxiliary acoustic features where the text alone is ambiguous. The classification system decides whether the utterance is a fragment, statement, question, command, rhetorical question, rhetorical command, or an intonation-dependent utterance. This matters because in head-final languages like Korean, intonation plays a significant role in identifying the speaker's intention.
### Supported Tasks and Leaderboards
* `intent-classification`: A model such as a CNN or BiLSTM with attention can be trained on the dataset to identify the intent of a spoken utterance in Korean, and its performance can be measured by F1 score.
### Languages
The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
## Dataset Structure
### Data Instances
An example data instance contains a short utterance and its label:
```
{
"label": 3,
"text": "선수잖아 이 케이스 저 케이스 많을 거 아냐 선배라고 뭐 하나 인생에 도움도 안주는데 내가 이렇게 진지하게 나올 때 제대로 한번 조언 좀 해줘보지"
}
```
### Data Fields
* `label`: determines the intention of the utterance and can be one of `fragment` (0), `statement` (1), `question` (2), `command` (3), `rhetorical question` (4), `rhetorical command` (5), and `intonation-dependent utterance` (6).
* `text`: the utterance text in Korean, covering common topics like housework, weather, transportation, etc.
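A minimal sketch mapping the integer label back to its intent name, using the `class_label` ordering from this card's metadata:

```python
# Intent names in class_label id order (0-6).
INTENT_NAMES = [
    "fragment", "statement", "question", "command",
    "rhetorical question", "rhetorical command",
    "intonation-dependent utterance",
]

example = {"label": 3, "text": "선수잖아 이 케이스 저 케이스 많을 거 아냐"}  # truncated
print(INTENT_NAMES[example["label"]])  # command
```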
### Data Splits
The data is split into a training set comprised of 55,134 examples and a test set of 6,121 examples.
## Dataset Creation
### Curation Rationale
For head-final languages like Korean, intonation can be a determining factor in identifying the speaker's intention. The purpose of this dataset is to determine whether an utterance is a fragment, statement, question, command, or a rhetorical question/command, using the intonation-dependency that arises from head-finality. This is expected to improve language understanding of spoken Korean utterances and can be beneficial for speech-to-text applications.
### Source Data
#### Initial Data Collection and Normalization
The corpus, provided by the Seoul National University Speech Language Processing Lab, combines a set of frequently used words from the National Institute of Korean Language with manually created commands and questions. The utterances cover topics like weather, transportation, and stocks; 20k lines were randomly selected.
#### Who are the source language producers?
Korean speakers produced the commands and questions.
### Annotations
#### Annotation process
Each utterance was classified into one of seven categories by annotators who were provided with clear instructions (see [here](https://docs.google.com/document/d/1-dPL5MfsxLbWs7vfwczTKgBq_1DX9u1wxOgOPn1tOss/edit#) for the guidelines). The resulting inter-annotator agreement was 0.85, and the final decision was made by majority voting.
#### Who are the annotators?
The annotation was completed by three Seoul Korean L1 speakers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is curated by Won Ik Cho, Hyeon Seung Lee, Ji Won Yoon, Seok Min Kim and Nam Soo Kim.
### Licensing Information
The dataset is licensed under the CC BY-SA-4.0.
### Citation Information
```
@article{cho2018speech,
title={Speech Intention Understanding in a Head-final Language: A Disambiguation Utilizing Intonation-dependency},
author={Cho, Won Ik and Lee, Hyeon Seung and Yoon, Ji Won and Kim, Seok Min and Kim, Nam Soo},
journal={arXiv preprint arXiv:1811.04231},
year={2018}
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
kor_hate | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: korean-hatespeech-dataset
pretty_name: Korean HateSpeech Dataset
dataset_info:
features:
- name: comments
dtype: string
- name: contain_gender_bias
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: bias
dtype:
class_label:
names:
'0': none
'1': gender
'2': others
- name: hate
dtype:
class_label:
names:
'0': hate
'1': offensive
'2': none
splits:
- name: train
num_bytes: 983608
num_examples: 7896
- name: test
num_bytes: 58913
num_examples: 471
download_size: 968449
dataset_size: 1042521
---
# Dataset Card for Korean HateSpeech Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
- **Repository:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
- **Paper:** [BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection](https://arxiv.org/abs/2005.12503)
- **Point of Contact:** [Steven Liu](stevhliu@gmail.com)
### Dataset Summary
The Korean HateSpeech Dataset is a dataset of 8367 human-labeled entertainment news comments from a popular Korean news aggregation platform. Each comment was annotated for social bias (labels: `gender`, `others`, `none`), hate speech (labels: `hate`, `offensive`, `none`) and gender bias (labels: `True`, `False`). The dataset was created to support the identification of toxic comments on online platforms where users can remain anonymous.
### Supported Tasks and Leaderboards
* `multi-label classification`: The dataset can be used to train a model for hate speech detection. A BERT model can be presented with a Korean entertainment news comment and be asked to label whether it contains social bias, gender bias and hate speech. Users can participate in a Kaggle leaderboard [here](https://www.kaggle.com/c/korean-hate-speech-detection/overview).
### Languages
The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
## Dataset Structure
### Data Instances
An example data instance contains a `comments` field with the text of the news comment, followed by labels for each of the following fields: `contain_gender_bias`, `bias` and `hate`.
```python
{'comments': '설마 ㅈ 현정 작가 아니지??',
 'contain_gender_bias': 'True',
 'bias': 'gender',
 'hate': 'hate'}
```
### Data Fields
* `comments`: text from the Korean news comment
* `contain_gender_bias`: a binary `True`/`False` label for the presence of gender bias
* `bias`: determines the type of social bias, which can be:
* `gender`: if the text includes bias for gender role, sexual orientation, sexual identity, and any thoughts on gender-related acts
* `others`: other kinds of factors that are considered not gender-related but social bias, including race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience
* `none`: a comment that does not incorporate the bias
* `hate`: determines how aggressive the comment is, which can be:
* `hate`: if the text is defined as an expression that displays aggressive stances towards individuals/groups with certain characteristics (gender role, sexual orientation, sexual identity, any thoughts on gender-related acts, race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience, etc.)
* `offensive`: if the text contains rude or aggressive contents, can emit sarcasm through rhetorical question or irony, encompass an unethical expression or conveys unidentified rumors
* `none`: a comment that does not incorporate hate
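The three class-label fields above can be decoded from their integer ids using the id-to-name tables declared in this card's metadata; a minimal plain-Python sketch (the `decode` helper is illustrative, not part of the `datasets` library, which exposes the same mapping via `ClassLabel`):

```python
# Class-label tables as declared in this card's metadata (kor_hate).
CONTAIN_GENDER_BIAS = ["False", "True"]
BIAS = ["none", "gender", "others"]
HATE = ["hate", "offensive", "none"]

def decode(example: dict) -> dict:
    """Turn the integer labels of one example into their string names."""
    return {
        "comments": example["comments"],
        "contain_gender_bias": CONTAIN_GENDER_BIAS[example["contain_gender_bias"]],
        "bias": BIAS[example["bias"]],
        "hate": HATE[example["hate"]],
    }

decoded = decode({"comments": "...", "contain_gender_bias": 1, "bias": 1, "hate": 0})
print(decoded["bias"], decoded["hate"])  # gender hate
```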
### Data Splits
The data is split into a training and development (test) set. It contains 8367 annotated comments that are split into 7896 comments in the training set and 471 comments in the test set.
## Dataset Creation
### Curation Rationale
The dataset was created to provide the first human-labeled Korean corpus for toxic speech detection from a Korean online entertainment news aggregator. Recently, two young Korean celebrities suffered from a series of tragic incidents that led to two major Korean web portals to close the comments section on their platform. However, this only serves as a temporary solution, and the fundamental issue has not been solved yet. This dataset hopes to improve Korean hate speech detection.
### Source Data
#### Initial Data Collection and Normalization
A total of 10.4 million comments were collected from an online Korean entertainment news aggregator between Jan. 1, 2018 and Feb. 29, 2020. 1,580 articles were drawn using stratified sampling, and for each article the top 20 comments, ranked by the Wilson score of their downvotes, were extracted. Duplicate comments, single-token comments and comments longer than 100 characters were removed (because they could convey various opinions). From here, 10K comments were randomly chosen for annotation.
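The Wilson score used for ranking is the lower bound of a binomial confidence interval; a minimal sketch follows (the exact variant and confidence level used by the curators are not specified in this card, so the `z = 1.96` default is an assumption):

```python
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion.

    `positive` is the number of "successes" (here, downvotes) out of `total`
    votes; z = 1.96 corresponds to a 95% confidence level.
    """
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# A comment with 90/100 downvotes ranks above one with 9/10, because the
# larger sample gives a tighter interval around the same proportion.
print(wilson_lower_bound(90, 100) > wilson_lower_bound(9, 10))  # True
```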
#### Who are the source language producers?
The language producers are users of the Korean online news platform between 2018 and 2020.
### Annotations
#### Annotation process
Each comment was assigned to three random annotators to assign a majority decision. For more ambiguous comments, annotators were allowed to skip the comment. See Appendix A in the [paper](https://arxiv.org/pdf/2005.12503.pdf) for more detailed guidelines.
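The majority decision among three annotators can be sketched as below; note that how skipped votes were represented and how ties were resolved is not specified in this card, so the `SKIP` sentinel and the `None` tie result are assumptions for illustration:

```python
from collections import Counter

SKIP = None  # how a skipped (ambiguous) vote is represented here -- an assumption

def majority_label(votes):
    """Return the majority label among non-skip votes, or None when there is
    a tie or no usable votes (resolution of such cases is unspecified)."""
    usable = [v for v in votes if v is not SKIP]
    if not usable:
        return None
    (top, top_n), *rest = Counter(usable).most_common()
    if rest and rest[0][1] == top_n:
        return None  # tie among annotators
    return top

print(majority_label(["hate", "offensive", "hate"]))  # hate
print(majority_label(["hate", "offensive", None]))    # None (tie after the skip)
```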
#### Who are the annotators?
Annotation was performed by 32 annotators, consisting of 29 annotators from the crowdsourcing platform DeepNatural AI and three NLP researchers.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to tackle the social issue of users creating toxic comments on online platforms. This dataset aims to improve detection of toxic comments online.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is curated by Jihyung Moon, Won Ik Cho and Junbum Lee.
### Licensing Information
[N/A]
### Citation Information
```
@inproceedings{moon-etal-2020-beep,
title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection",
author = "Moon, Jihyung and
Cho, Won Ik and
Lee, Junbum",
booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4",
pages = "25--31",
abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.",
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
kor_ner | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- ko
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: KorNER
dataset_info:
features:
- name: text
dtype: string
- name: annot_text
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': SO
'1': SS
'2': VV
'3': XR
'4': VCP
'5': JC
'6': VCN
'7': JKB
'8': MM
'9': SP
'10': XSN
'11': SL
'12': NNP
'13': NP
'14': EP
'15': JKQ
'16': IC
'17': XSA
'18': EC
'19': EF
'20': SE
'21': XPN
'22': ETN
'23': SH
'24': XSV
'25': MAG
'26': SW
'27': ETM
'28': JKO
'29': NNB
'30': MAJ
'31': NNG
'32': JKV
'33': JKC
'34': VA
'35': NR
'36': JKG
'37': VX
'38': SF
'39': JX
'40': JKS
'41': SN
- name: ner_tags
sequence:
class_label:
names:
'0': I
'1': O
'2': B_OG
'3': B_TI
'4': B_LC
'5': B_DT
'6': B_PS
splits:
- name: train
num_bytes: 3948938
num_examples: 2928
- name: test
num_bytes: 476850
num_examples: 366
- name: validation
num_bytes: 486178
num_examples: 366
download_size: 3493175
dataset_size: 4911966
---
# Dataset Card for KorNER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/kmounlp/NER)
- **Repository:** [Github](https://github.com/kmounlp/NER)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each row consists of the following fields:
- `text`: The full text, as is
- `annot_text`: Annotated text including POS-tagged information
- `tokens`: An ordered list of tokens from the full text
- `pos_tags`: Part-of-speech tags for each token
- `ner_tags`: Named entity recognition tags for each token
Note that by design, the length of `tokens`, `pos_tags`, and `ner_tags` will always be identical.
`pos_tags` corresponds to the list below:
```
['SO', 'SS', 'VV', 'XR', 'VCP', 'JC', 'VCN', 'JKB', 'MM', 'SP', 'XSN', 'SL', 'NNP', 'NP', 'EP', 'JKQ', 'IC', 'XSA', 'EC', 'EF', 'SE', 'XPN', 'ETN', 'SH', 'XSV', 'MAG', 'SW', 'ETM', 'JKO', 'NNB', 'MAJ', 'NNG', 'JKV', 'JKC', 'VA', 'NR', 'JKG', 'VX', 'SF', 'JX', 'JKS', 'SN']
```
`ner_tags` correspond to the following:
```
["I", "O", "B_OG", "B_TI", "B_LC", "B_DT", "B_PS"]
```
The prefix `B` denotes the first item of a phrase, and an `I` denotes any non-initial word. In addition, `OG` represents an organization; `TI`, time; `LC`, location; `DT`, date; and `PS`, person.
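Under this scheme a typed `B_*` tag opens an entity span and the bare `I` tag continues it, so tokens can be grouped into entities with a short helper; a sketch with hypothetical tokens (not drawn from the dataset):

```python
def group_entities(tokens, tags):
    """Group tokens into (entity_type, text) spans under this card's scheme:
    a typed B_* tag opens a span, bare "I" tags continue it, "O" closes it."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B_"):
            if current:
                entities.append(current)
            current = (tag[2:], [token])  # entity type without the "B_" prefix
        elif tag == "I" and current:
            current[1].append(token)
        else:  # "O" (or a stray "I" with no open span)
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

# Toy example: a person name followed by a location.
print(group_entities(["김", "철수", "는", "서울", "에"],
                     ["B_PS", "I", "O", "B_LC", "O"]))
# [('PS', '김 철수'), ('LC', '서울')]
```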
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
kor_nli | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|multi_nli
- extended|snli
- extended|xnli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: kornli
pretty_name: KorNLI
dataset_info:
- config_name: multi_nli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 84729207
num_examples: 392702
download_size: 42113232
dataset_size: 84729207
- config_name: snli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 80137097
num_examples: 550152
download_size: 42113232
dataset_size: 80137097
- config_name: xnli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: validation
num_bytes: 518830
num_examples: 2490
- name: test
num_bytes: 1047437
num_examples: 5010
download_size: 42113232
dataset_size: 1566267
---
# Dataset Card for "kor_nli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/kakaobrain/KorNLUDatasets](https://github.com/kakaobrain/KorNLUDatasets)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 126.34 MB
- **Size of the generated dataset:** 166.43 MB
- **Total amount of disk used:** 292.77 MB
### Dataset Summary
Korean Natural Language Inference datasets.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### multi_nli
- **Size of downloaded dataset files:** 42.11 MB
- **Size of the generated dataset:** 84.72 MB
- **Total amount of disk used:** 126.85 MB
An example of 'train' looks as follows.
```
```
#### snli
- **Size of downloaded dataset files:** 42.11 MB
- **Size of the generated dataset:** 80.13 MB
- **Total amount of disk used:** 122.25 MB
An example of 'train' looks as follows.
```
```
#### xnli
- **Size of downloaded dataset files:** 42.11 MB
- **Size of the generated dataset:** 1.56 MB
- **Total amount of disk used:** 43.68 MB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### multi_nli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### snli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### xnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
### Data Splits
#### multi_nli
| |train |
|---------|-----:|
|multi_nli|392702|
#### snli
| |train |
|----|-----:|
|snli|550152|
#### xnli
| |validation|test|
|----|---------:|---:|
|xnli| 2490|5010|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under Creative Commons [Attribution-ShareAlike license (CC BY-SA 4.0)](http://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@article{ham2020kornli,
title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
journal={arXiv preprint arXiv:2004.03289},
year={2020}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
kor_nlu | ---
annotations_creators:
- found
language_creators:
- expert-generated
- found
- machine-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|snli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
pretty_name: KorNlu
dataset_info:
- config_name: nli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 80135707
num_examples: 550146
- name: validation
num_bytes: 318170
num_examples: 1570
- name: test
num_bytes: 1047250
num_examples: 4954
download_size: 80030037
dataset_size: 81501127
- config_name: sts
features:
- name: genre
dtype:
class_label:
names:
'0': main-news
'1': main-captions
'2': main-forum
'3': main-forums
- name: filename
dtype:
class_label:
names:
'0': images
'1': MSRpar
'2': MSRvid
'3': headlines
'4': deft-forum
'5': deft-news
'6': track5.en-en
'7': answers-forums
'8': answer-answer
- name: year
dtype:
class_label:
names:
'0': '2017'
'1': '2016'
'2': '2013'
'3': 2012train
'4': '2014'
'5': '2015'
'6': 2012test
- name: id
dtype: int32
- name: score
dtype: float32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: train
num_bytes: 1056664
num_examples: 5703
- name: validation
num_bytes: 305009
num_examples: 1471
- name: test
num_bytes: 249671
num_examples: 1379
download_size: 1603824
dataset_size: 1611344
---
# Dataset Card for KorNLU
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/kakaobrain/KorNLUDatasets)
- **Repository:** [Github](https://github.com/kakaobrain/KorNLUDatasets)
- **Paper:** [Arxiv](https://arxiv.org/abs/2004.03289)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
kor_qpair | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- ko
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: KorQpair
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: is_duplicate
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 515365
num_examples: 6136
- name: test
num_bytes: 63466
num_examples: 758
- name: validation
num_bytes: 57242
num_examples: 682
download_size: 545236
dataset_size: 636073
---
# Dataset Card for KorQpair
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/songys/Question_pair)
- **Repository:** [Github](https://github.com/songys/Question_pair)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each row in the dataset contains two questions and a `is_duplicate` label.
- `question1`: The first question
- `question2`: The second question
- `is_duplicate`: 0 if `question1` and `question2` are semantically similar; 1 otherwise
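Note that, despite its name, `is_duplicate` uses 0 for semantically similar pairs; a small helper can make that convention explicit when filtering (the helper and the sample batch are illustrative, not part of the dataset's tooling):

```python
def duplicate_pairs(examples):
    """Keep only pairs labeled as semantically similar.

    In this dataset 0 marks a similar (duplicate) pair and 1 a non-duplicate
    pair -- the opposite of what the field name might suggest.
    """
    return [ex for ex in examples if ex["is_duplicate"] == 0]

batch = [
    {"question1": "q1a", "question2": "q1b", "is_duplicate": 0},
    {"question1": "q2a", "question2": "q2b", "is_duplicate": 1},
]
print(len(duplicate_pairs(batch)))  # 1
```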
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
kor_sae | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
pretty_name: Structured Argument Extraction for Korean
dataset_info:
features:
- name: intent_pair1
dtype: string
- name: intent_pair2
dtype: string
- name: label
dtype:
class_label:
names:
'0': yes/no
'1': alternative
'2': wh- questions
'3': prohibitions
'4': requirements
'5': strong requirements
splits:
- name: train
num_bytes: 2885167
num_examples: 30837
download_size: 2545926
dataset_size: 2885167
---
# Dataset Card for Structured Argument Extraction for Korean
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k)
- **Repository:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k)
- **Paper:** [Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives](https://arxiv.org/abs/1912.00342)
- **Point of Contact:** [Won Ik Cho](wicho@hi.snu.ac.kr)
### Dataset Summary
The Structured Argument Extraction for Korean dataset is a set of question-argument and command-argument pairs with their respective question type label and negativeness label. Oftentimes, agents like Alexa or Siri encounter conversations without a clear objective from the user. The goal of this dataset is to extract the intent argument of a given utterance pair that lacks a clear directive. This may yield a more robust agent capable of parsing more non-canonical forms of speech.
### Supported Tasks and Leaderboards
* `intent_classification`: The dataset can be used to train a Transformer model like [BERT](https://huggingface.co/bert-base-uncased) to classify the intent argument of a question/command pair in Korean, and its performance can be measured by its BERTScore.
### Languages
The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
## Dataset Structure
### Data Instances
An example data instance contains a question or command pair and its label:
```
{
  "intent_pair1": "내일 오후 다섯시 조별과제 일정 추가해줘",
  "intent_pair2": "내일 오후 다섯시 조별과제 일정 추가하기",
  "label": 4
}
```
### Data Fields
* `intent_pair1`: a question/command pair
* `intent_pair2`: a corresponding question/command pair
* `label`: determines the intent argument of the pair and can be one of `yes/no` (0), `alternative` (1), `wh- questions` (2), `prohibitions` (3), `requirements` (4) and `strong requirements` (5)
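Since `label` is stored as an integer, decoding it back into its intent-type name just indexes into the class-name list declared in the dataset metadata. A minimal sketch of that mapping (names taken from the metadata above; not part of the dataset's own tooling):

```python
# 0-indexed intent-type names, in the same order as the dataset's
# class_label declaration above.
LABEL_NAMES = [
    "yes/no",
    "alternative",
    "wh- questions",
    "prohibitions",
    "requirements",
    "strong requirements",
]

def int2str(label: int) -> str:
    """Decode an integer label into its intent-type name."""
    return LABEL_NAMES[label]

# The example instance above carries label 4:
print(int2str(4))  # requirements
```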
### Data Splits
The corpus contains 30,837 examples.
## Dataset Creation
### Curation Rationale
The Structured Argument Extraction for Korean dataset was curated to help train models to extract intent arguments from utterances without a clear objective, or when the user uses non-canonical forms of speech. This is especially helpful for Korean because, unlike in English, where the who, what, where, when and why usually come at the beginning of a sentence, this isn't necessarily the case in Korean. For such lower-resourced languages, the lack of this kind of data can be a bottleneck for comprehension performance.
### Source Data
#### Initial Data Collection and Normalization
The corpus was taken from the one constructed by [Cho et al.](https://arxiv.org/abs/1811.04231), a Korean single utterance corpus for identifying directives/non-directives that contains a wide variety of non-canonical directives.
#### Who are the source language producers?
Korean speakers are the source language producers.
### Annotations
#### Annotation process
Utterances were categorized as question or command arguments and then further classified according to their intent argument.
#### Who are the annotators?
The annotation was done by three Korean natives with a background in computational linguistics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is curated by Won Ik Cho, Young Ki Moon, Sangwhan Moon, Seok Min Kim and Nam Soo Kim.
### Licensing Information
The dataset is licensed under the CC BY-SA-4.0.
### Citation Information
```
@article{cho2019machines,
title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives},
author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo},
journal={arXiv preprint arXiv:1912.00342},
year={2019}
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
kor_sarcasm | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ko
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: Korean Sarcasm Detection
tags:
- sarcasm-detection
dataset_info:
features:
- name: tokens
dtype: string
- name: label
dtype:
class_label:
names:
'0': no_sarcasm
'1': sarcasm
splits:
- name: train
num_bytes: 1012030
num_examples: 9000
- name: test
num_bytes: 32480
num_examples: 301
download_size: 1008955
dataset_size: 1044510
---
# Dataset Card for Korean Sarcasm Detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Korean Sarcasm Detection](https://github.com/SpellOnYou/korean-sarcasm)
- **Repository:** [Korean Sarcasm Detection](https://github.com/SpellOnYou/korean-sarcasm)
- **Point of Contact:** [Dionne Kim](jiwon.kim.096@gmail.com)
### Dataset Summary
The Korean Sarcasm Dataset was created to detect sarcasm in text, which can significantly alter the original meaning of a sentence. 9319 tweets were collected from Twitter and labeled as `sarcasm` or `no_sarcasm`. These tweets were gathered by querying for: `역설, 아무말, 운수좋은날, 笑, 뭐래 아닙니다, 그럴리없다, 어그로, irony sarcastic, and sarcasm`. The dataset was pre-processed by removing the keyword hashtag, urls and mentions of the user to maintain anonymity.
### Supported Tasks and Leaderboards
* `sarcasm_detection`: The dataset can be used to train a model to detect sarcastic tweets. A [BERT](https://huggingface.co/bert-base-uncased) model can be presented with a tweet in Korean and be asked to determine whether it is sarcastic or not.
### Languages
The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
## Dataset Structure
### Data Instances
An example data instance contains a Korean tweet and a label indicating whether it is sarcastic or not. `1` maps to sarcasm and `0` maps to no sarcasm.
```
{
  "tokens": "[ 수도권 노선 아이템 ] 17 . 신분당선의 #딸기 : 그의 이미지 컬러 혹은 머리 색에서 유래한 아이템이다 . #메트로라이프",
  "label": 0
}
```
### Data Fields
* `tokens`: contains the text of the tweet
* `label`: determines whether the text is sarcastic (`1`: sarcasm, `0`: no sarcasm)
### Data Splits
The data is split into a training set comprised of 9018 tweets and a test set of 301 tweets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by gathering HTML data from Twitter. Queries for hashtags that include sarcasm and variants of it were used to return tweets. It was preprocessed by removing the keyword hashtag, urls and mentions of the user to preserve anonymity.
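A rough sketch of that cleanup, assuming only the steps described here (stripping URLs, user mentions, and the query hashtags themselves); the authors' actual preprocessing code in the repository may differ in details:

```python
import re

# Hypothetical re-implementation of the described preprocessing,
# not the original script. The hashtag set mirrors the query keywords.
QUERY_HASHTAGS = {"#역설", "#아무말", "#운수좋은날", "#笑", "#어그로", "#irony", "#sarcasm"}

def clean_tweet(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)  # strip URLs
    text = re.sub(r"@\w+", "", text)          # strip user mentions for anonymity
    for tag in QUERY_HASHTAGS:                # strip the search keywords themselves
        text = text.replace(tag, "")
    return " ".join(text.split())             # normalize whitespace

print(clean_tweet("@user 이거 진짜 대단하다 #sarcasm https://t.co/abc"))
# 이거 진짜 대단하다
```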
#### Who are the source language producers?
The source language producers are Korean Twitter users.
### Annotations
#### Annotation process
Tweets were labeled `1` for sarcasm and `0` for no sarcasm.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Mentions of the user in a tweet were removed to keep them anonymous.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by Dionne Kim.
### Licensing Information
This dataset is licensed under the MIT License.
### Citation Information
```
@misc{kim2019kocasm,
author = {Kim, Jiwon and Cho, Won Ik},
title = {Kocasm: Korean Automatic Sarcasm Detection},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SpellOnYou/korean-sarcasm}}
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
labr | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: labr
pretty_name: LABR
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
config_name: plain_text
splits:
- name: train
num_bytes: 7051103
num_examples: 11760
- name: test
num_bytes: 1703399
num_examples: 2935
download_size: 39953712
dataset_size: 8754502
---
# Dataset Card for LABR
## Table of Contents
- [Dataset Card for LABR](#dataset-card-for-labr)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [LABR](https://github.com/mohamedadaly/LABR)
- **Paper:** [LABR: Large-scale Arabic Book Reviews Dataset](https://aclanthology.org/P13-2088/)
- **Point of Contact:** [Mohammed Aly](mailto:mohamed@mohamedaly.info)
### Dataset Summary
This dataset contains over 63,000 book reviews in Arabic. It is the largest sentiment analysis dataset for Arabic to date. The book reviews were harvested from the website Goodreads during the month of March 2013. Each book review comes with the Goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review.
### Supported Tasks and Leaderboards
The dataset was introduced in this [paper](https://www.aclweb.org/anthology/P13-2088.pdf).
### Languages
The dataset is based on Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the text of a review and a rating from 1 to 5, where a higher rating indicates a more favorable review.
### Data Fields
- `text` (str): Review text.
- `label` (int): Review rating.
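Note that, per the class names in the metadata above, the 0-based class index maps to the star rating by an offset of one; a small decoding sketch (an assumption drawn from those label names, not code from the LABR repository):

```python
def label_to_rating(label: int) -> int:
    """Convert the 0-based class index ('0'..'4') to the 1-5 star rating."""
    if not 0 <= label <= 4:
        raise ValueError(f"unexpected label: {label}")
    return label + 1

print(label_to_rating(4))  # 5 (a five-star review)
```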
### Data Splits
The data is split into training and test sets, organized as follows:
| | train | test |
|---------- |-------:|------:|
|data split | 11,760 | 2,935 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
The authors downloaded over 220,000 reviews from the book readers' social network www.goodreads.com during the month of March 2013.
#### Who are the source language producers?
The source language producers are Goodreads book reviewers.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{aly2013labr,
title={Labr: A large scale arabic book reviews dataset},
author={Aly, Mohamed and Atiya, Amir},
booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={494--498},
year={2013}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset. |
lama | ---
pretty_name: 'LAMA: LAnguage Model Analysis'
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- extended|conceptnet5
- extended|squad
task_categories:
- text-retrieval
- text-classification
task_ids:
- fact-checking-retrieval
- text-scoring
paperswithcode_id: lama
configs:
- conceptnet
- google_re
- squad
- trex
tags:
- probing
dataset_info:
- config_name: trex
features:
- name: uuid
dtype: string
- name: obj_uri
dtype: string
- name: obj_label
dtype: string
- name: sub_uri
dtype: string
- name: sub_label
dtype: string
- name: predicate_id
dtype: string
- name: sub_surface
dtype: string
- name: obj_surface
dtype: string
- name: masked_sentence
dtype: string
- name: template
dtype: string
- name: template_negated
dtype: string
- name: label
dtype: string
- name: description
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 656913189
num_examples: 1304391
download_size: 74652201
dataset_size: 656913189
- config_name: squad
features:
- name: id
dtype: string
- name: sub_label
dtype: string
- name: obj_label
dtype: string
- name: negated
dtype: string
- name: masked_sentence
dtype: string
splits:
- name: train
num_bytes: 57188
num_examples: 305
download_size: 74639115
dataset_size: 57188
- config_name: google_re
features:
- name: pred
dtype: string
- name: sub
dtype: string
- name: obj
dtype: string
- name: evidences
dtype: string
- name: judgments
dtype: string
- name: sub_w
dtype: string
- name: sub_label
dtype: string
- name: sub_aliases
dtype: string
- name: obj_w
dtype: string
- name: obj_label
dtype: string
- name: obj_aliases
dtype: string
- name: uuid
dtype: string
- name: masked_sentence
dtype: string
- name: template
dtype: string
- name: template_negated
dtype: string
splits:
- name: train
num_bytes: 7638657
num_examples: 6106
download_size: 74639115
dataset_size: 7638657
- config_name: conceptnet
features:
- name: uuid
dtype: string
- name: sub
dtype: string
- name: obj
dtype: string
- name: pred
dtype: string
- name: obj_label
dtype: string
- name: masked_sentence
dtype: string
- name: negated
dtype: string
splits:
- name: train
num_bytes: 4130000
num_examples: 29774
download_size: 74639115
dataset_size: 4130000
---
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
### Dataset Summary
This dataset provides the data for LAMA. It includes a subset
of Google_RE
(https://code.google.com/archive/p/relation-extraction-corpus/), TRex
(a subset of Wikidata triples), ConceptNet
(https://github.com/commonsense/conceptnet5/wiki) and SQuAD. There are
configs for each of "google_re", "trex", "conceptnet" and "squad",
respectively.
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version of the dataset includes "negated" sentences as well as
the masked sentence. Also, certain configs include "template"
and "template_negated" fields of the form "[X] some text [Y]", where
[X] and [Y] are the subject and object slots of the relation,
respectively.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'description': 'the item (an institution, law, public office ...) or statement belongs to or has power over or applies to the value (a territorial jurisdiction: a country, state, municipality, ...)', 'label': 'applies to jurisdiction', 'masked_sentence': 'It is known as a principality as it is a monarchy headed by two Co-Princes – the Spanish/Roman Catholic Bishop of Urgell and the President of [MASK].', 'obj_label': 'France', 'obj_surface': 'France', 'obj_uri': 'Q142', 'predicate_id': 'P1001', 'sub_label': 'president of the French Republic', 'sub_surface': 'President', 'sub_uri': 'Q191954', 'template': '[X] is a legal term in [Y] .', 'template_negated': '[X] is not a legal term in [Y] .', 'type': 'N-M', 'uuid': '3fe3d4da-9df9-45ba-8109-784ce5fba38a'}
```
The conceptnet config has the following fields:
```
{'masked_sentence': 'One of the things you do when you are alive is [MASK].', 'negated': '', 'obj': 'think', 'obj_label': 'think', 'pred': 'HasSubevent', 'sub': 'alive', 'uuid': 'd4f11631dde8a43beda613ec845ff7d1'}
```
The squad config has the following fields:
```
{'id': '56be4db0acb8001400a502f0_0', 'masked_sentence': 'To emphasize the 50th anniversary of the Super Bowl the [MASK] color was used.', 'negated': "['To emphasize the 50th anniversary of the Super Bowl the [MASK] color was not used.']", 'obj_label': 'gold', 'sub_label': 'Squad'}
```
The google_re config has the following fields:
```
{'evidences': '[{\'url\': \'http://en.wikipedia.org/wiki/Peter_F._Martin\', \'snippet\': "Peter F. Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives. He has represented the 75th District Newport since 6 January 2009. He is currently serves on the House Committees on Judiciary, Municipal Government, and Veteran\'s Affairs. During his first term of office he served on the House Committees on Small Business and Separation of Powers & Government Oversight. In August 2010, Representative Martin was appointed as a Commissioner on the Atlantic States Marine Fisheries Commission", \'considered_sentences\': [\'Peter F Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives .\']}]', 'judgments': "[{'rater': '18349444711114572460', 'judgment': 'yes'}, {'rater': '17595829233063766365', 'judgment': 'yes'}, {'rater': '4593294093459651288', 'judgment': 'yes'}, {'rater': '7387074196865291426', 'judgment': 'yes'}, {'rater': '17154471385681223613', 'judgment': 'yes'}]", 'masked_sentence': 'Peter F Martin (born [MASK]) is an American politician who is a Democratic member of the Rhode Island House of Representatives .', 'obj': '1941', 'obj_aliases': '[]', 'obj_label': '1941', 'obj_w': 'None', 'pred': '/people/person/date_of_birth', 'sub': '/m/09gb0bw', 'sub_aliases': '[]', 'sub_label': 'Peter F. Martin', 'sub_w': 'None', 'template': '[X] (born [Y]).', 'template_negated': '[X] (not born [Y]).', 'uuid': '18af2dac-21d3-4c42-aff5-c247f245e203'}
```
### Data Fields
The trex config has the following fields:
* uuid: the id
* obj_uri: a uri for the object slot
* obj_label: a label for the object slot
* sub_uri: a uri for the subject slot
* sub_label: a label for the subject slot
* predicate_id: the predicate/relationship
* sub_surface: the surface text for the subject
* obj_surface: The surface text for the object. This is the word that should be predicted by the [MASK] token.
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively. template may be missing and replaced with an empty string.
* template_negated: Same as above, except the [Y] is not the object. template_negated may be missing and replaced with empty strings.
* label: the label for the relationship/predicate. label may be missing and replaced with an empty string.
* description: a description of the relationship/predicate. description may be missing and replaced with an empty string.
* type: a type id for the relationship/predicate. type may be missing and replaced with an empty string.
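For illustration, a "[X] some text [Y]" template can be turned into a cloze query by substituting the subject label for [X] and a mask token for the object slot [Y]; a minimal sketch (an assumption about usage, not code from the LAMA repository):

```python
def fill_template(template: str, sub_label: str, mask_token: str = "[MASK]") -> str:
    """Instantiate a relation template into a cloze sentence:
    [X] -> subject label, [Y] -> mask token for the object to predict."""
    return template.replace("[X]", sub_label).replace("[Y]", mask_token)

# Using the trex instance shown above:
print(fill_template("[X] is a legal term in [Y] .", "president of the French Republic"))
# president of the French Republic is a legal term in [MASK] .
```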
The conceptnet config has the following fields:
* uuid: the id
* sub: the subject. subj may be missing and replaced with an empty string.
* obj: the object to be predicted. obj may be missing and replaced with an empty string.
* pred: the predicate/relationship
* obj_label: the object label
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings.
The squad config has the following fields:
* id: the id
* sub_label: the subject label
* obj_label: the object label that is being predicted
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings.
The google_re config has the following fields:
* uuid: the id
* pred: the predicate
* sub: the subject. subj may be missing and replaced with an empty string.
* obj: the object. obj may be missing and replaced with an empty string.
* evidences: flattened json string that provides evidence for predicate. parse this json string to get more 'snippet' information.
* judgments: data about judgments
* sub_w: unknown
* sub_label: label for the subject
* sub_aliases: unknown
* obj_w: unknown
* obj_label: label for the object
* obj_aliases: unknown
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively.
* template_negated: Same as above, except the [Y] is not the object.
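Because `evidences` is stored as a flattened string (in the Python-repr-like shape shown in the example above, not strict JSON), it needs to be parsed before the snippets can be used; a sketch under that assumption:

```python
import ast

def extract_snippets(evidences: str) -> list:
    """Parse the flattened `evidences` string and collect the 'snippet'
    entries. Assumes the Python-repr-style shape shown in the example."""
    return [e["snippet"] for e in ast.literal_eval(evidences)]

# Toy example with the same shape as the real field:
raw = "[{'url': 'http://example.org', 'snippet': 'Peter F. Martin (born 1941) ...', 'considered_sentences': []}]"
print(extract_snippets(raw))
# ['Peter F. Martin (born 1941) ...']
```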
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanups for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but the data likely contains names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is from human annotators, there are likely to be biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The dataset is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License; see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
```
@inproceedings{petroni2019language,
  title={Language Models as Knowledge Bases?},
  author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
  booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
  year={2019}
}
@inproceedings{petroni2020how,
  title={How Context Affects Language Models' Factual Predictions},
  author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
  booktitle={Automated Knowledge Base Construction},
  year={2020},
  url={https://openreview.net/forum?id=025X0zPfn}
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
lambada | ---
task_categories:
- text2text-generation
task_ids: []
multilinguality:
- monolingual
language:
- en
language_creators:
- found
annotations_creators:
- expert-generated
source_datasets:
- extended|bookcorpus
size_categories:
- 10K<n<100K
license:
- cc-by-4.0
paperswithcode_id: lambada
pretty_name: LAMBADA
tags:
- long-range-dependency
dataset_info:
features:
- name: text
dtype: string
- name: domain
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 978174122
num_examples: 2662
- name: test
num_bytes: 1791823
num_examples: 5153
- name: validation
num_bytes: 1703482
num_examples: 4869
download_size: 334527694
dataset_size: 981669427
---
# Dataset Card for LAMBADA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LAMBADA homepage](https://zenodo.org/record/2630551#.X8UP76pKiIa)
- **Paper:** [The LAMBADA dataset: Word prediction requiring a broad discourse context∗](https://www.aclweb.org/anthology/P16-1144.pdf)
### Dataset Summary
The LAMBADA dataset evaluates the capabilities of computational models
for text understanding by means of a word prediction task.
LAMBADA is a collection of narrative passages sharing the characteristic
that human subjects are able to guess their last word if
they are exposed to the whole passage, but not if they
only see the last sentence preceding the target word.
To succeed on LAMBADA, computational models cannot
simply rely on local context, but must be able to
keep track of information in the broader discourse.
The LAMBADA dataset is extracted from BookCorpus and
consists of 10'022 passages, divided into 4'869 development
and 5'153 test passages. The training data for language
models to be tested on LAMBADA include the full text
of 2'662 novels (disjoint from those in dev+test),
comprising 203 million words.
### Supported Tasks and Leaderboards
Long range dependency evaluated as (last) word prediction
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A data point is a text sequence (passage) including the context, the target sentence (the last one) and the target word. For each passage in the dev and the test splits, the word to be guessed is the last one.
The training data include the full text of 2'662 novels (disjoint from
those in dev+test), comprising more than 200M words. It consists of text from the same domain as the dev+test passages, but not filtered in any way.
Each training instance has a `category` field indicating which sub-category the book was extracted from. This field is not given for the dev and test splits.
An example looks like this:
```
{"category": "Mystery",
"text": "bob could have been called in at this point , but he was n't miffed at his exclusion at all . he was relieved at not being brought into this initial discussion with central command . `` let 's go make some grub , '' said bob as he turned to danny . danny did n't keep his stoic expression , but with a look of irritation got up and left the room with bob",
}
```
### Data Fields
- `category`: the sub-category of books from which the book was extracted from. Only available for the training split.
- `text`: the text (concatenation of context, target sentence and target word). The word to be guessed is the last one.
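Since the word to be guessed is simply the last whitespace-separated token of `text`, splitting a dev/test passage into context and target is straightforward; a minimal sketch (an assumption about usage, not part of the dataset tooling):

```python
def split_passage(text: str):
    """Split a LAMBADA passage into (context, target_word), where the
    target is the last whitespace-separated token of the passage."""
    context, _, target = text.rstrip().rpartition(" ")
    return context, target

context, target = split_passage(
    "danny did n't keep his stoic expression , but with a look of "
    "irritation got up and left the room with bob"
)
print(target)  # bob
```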
### Data Splits
- train: 2'662 novels
- dev: 4'869 passages
- test: 5'153 passages
## Dataset Creation
### Curation Rationale
The dataset aims at evaluating the ability of language models to hold long-term contextual memories. Instances are extracted from books because they display long-term dependencies. In particular, the data are curated such that the target words are easy to guess by human subjects when they can look at the whole passage they come from, but nearly impossible if only the last sentence is considered.
### Source Data
#### Initial Data Collection and Normalization
The corpus was deduplicated, and potentially offensive material was filtered out with a stop word list.
#### Who are the source language producers?
The passages are extracted from novels from [Book Corpus](https://github.com/huggingface/datasets/tree/master/datasets/bookcorpus).
### Annotations
#### Annotation process
The authors required two consecutive subjects (paid crowdworkers) to exactly match the missing word based on the whole passage (comprising the context and the target sentence), and made sure that no subject (out of ten) was able to provide it based on the local context only, even when given 3 guesses.
#### Who are the annotators?
The text is self-annotated but was curated by asking (paid) crowdworkers to guess the last word.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) (Creative Commons Attribution 4.0 International) license.
### Citation Information
```
@InProceedings{paperno-EtAl:2016:P16-1,
author = {Paperno, Denis and Kruszewski, Germ\'{a}n and Lazaridou,
Angeliki and Pham, Ngoc Quan and Bernardi, Raffaella and Pezzelle,
Sandro and Baroni, Marco and Boleda, Gemma and Fernandez, Raquel},
title = {The {LAMBADA} dataset: Word prediction requiring a broad
discourse context},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers)},
month = {August},
year = {2016},
address = {Berlin, Germany},
publisher = {Association for Computational Linguistics},
pages = {1525--1534},
url = {http://www.aclweb.org/anthology/P16-1144}
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
large_spanish_corpus | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 100M<n<1B
- 10K<n<100K
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: null
pretty_name: The Large Spanish Corpus
configs:
- DGT
- DOGC
- ECB
- EMEA
- EUBookShop
- Europarl
- GlobalVoices
- JRC
- NewsCommentary11
- OpenSubtitles2018
- ParaCrawl
- TED
- UN
- all_wikis
- combined
- multiUN
tags: []
dataset_info:
- config_name: JRC
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 380895504
num_examples: 3410620
download_size: 4099166669
dataset_size: 380895504
- config_name: EMEA
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 100259598
num_examples: 1221233
download_size: 4099166669
dataset_size: 100259598
- config_name: GlobalVoices
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 114435784
num_examples: 897075
download_size: 4099166669
dataset_size: 114435784
- config_name: ECB
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 336285757
num_examples: 1875738
download_size: 4099166669
dataset_size: 336285757
- config_name: DOGC
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 898279656
num_examples: 10917053
download_size: 4099166669
dataset_size: 898279656
- config_name: all_wikis
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3782280549
num_examples: 28109484
download_size: 4099166669
dataset_size: 3782280549
- config_name: TED
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15858148
num_examples: 157910
download_size: 4099166669
dataset_size: 15858148
- config_name: multiUN
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2327269369
num_examples: 13127490
download_size: 4099166669
dataset_size: 2327269369
- config_name: Europarl
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 359897865
num_examples: 2174141
download_size: 4099166669
dataset_size: 359897865
- config_name: NewsCommentary11
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 48350573
num_examples: 288771
download_size: 4099166669
dataset_size: 48350573
- config_name: UN
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23654590
num_examples: 74067
download_size: 4099166669
dataset_size: 23654590
- config_name: EUBookShop
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1326861077
num_examples: 8214959
download_size: 4099166669
dataset_size: 1326861077
- config_name: ParaCrawl
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1840430234
num_examples: 15510649
download_size: 4099166669
dataset_size: 1840430234
- config_name: OpenSubtitles2018
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7477281776
num_examples: 213508602
download_size: 4099166669
dataset_size: 7477281776
- config_name: DGT
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 396217351
num_examples: 3168368
download_size: 4099166669
dataset_size: 396217351
- config_name: combined
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 19428257807
num_examples: 302656160
download_size: 4099166669
dataset_size: 19428257807
---
# Dataset Card for The Large Spanish Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
- **Repository:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [José Cañete](mailto:jose.canete@ug.uchile.cl) (corpus creator) or [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com) (corpus submitter)
### Dataset Summary
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, `all_wikis` only includes examples from Spanish Wikipedia:
```python
from datasets import load_dataset
all_wikis = load_dataset('large_spanish_corpus', name='all_wikis')
```
By default, the config is set to "combined" which loads all the corpora.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The following is taken from the corpus's source repository:
* Spanish Wikis: Which include Wikipedia, Wikinews, Wikiquotes and more. These were first processed with wikiextractor (https://github.com/josecannete/wikiextractorforBERT) using the wikis dump of 20/04/2019.
* ParaCrawl: Spanish portion of ParaCrawl (http://opus.nlpl.eu/ParaCrawl.php)
* EUBookshop: Spanish portion of EUBookshop (http://opus.nlpl.eu/EUbookshop.php)
* MultiUN: Spanish portion of MultiUN (http://opus.nlpl.eu/MultiUN.php)
* OpenSubtitles: Spanish portion of OpenSubtitles2018 (http://opus.nlpl.eu/OpenSubtitles-v2018.php)
* DGT: Spanish portion of DGT (http://opus.nlpl.eu/DGT.php)
* DOGC: Spanish portion of DOGC (http://opus.nlpl.eu/DOGC.php)
* ECB: Spanish portion of ECB (http://opus.nlpl.eu/ECB.php)
* EMEA: Spanish portion of EMEA (http://opus.nlpl.eu/EMEA.php)
* Europarl: Spanish portion of Europarl (http://opus.nlpl.eu/Europarl.php)
* GlobalVoices: Spanish portion of GlobalVoices (http://opus.nlpl.eu/GlobalVoices.php)
* JRC: Spanish portion of JRC (http://opus.nlpl.eu/JRC-Acquis.php)
* News-Commentary11: Spanish portion of NCv11 (http://opus.nlpl.eu/News-Commentary-v11.php)
* TED: Spanish portion of TED (http://opus.nlpl.eu/TED2013.php)
* UN: Spanish portion of UN (http://opus.nlpl.eu/UN.php)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. |
laroseda | ---
annotations_creators:
- found
language_creators:
- found
language:
- ro
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name: LaRoSeDa
dataset_info:
features:
- name: index
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: starRating
dtype: int64
config_name: laroseda
splits:
- name: train
num_bytes: 2932819
num_examples: 12000
- name: test
num_bytes: 700834
num_examples: 3000
download_size: 5257183
dataset_size: 3633653
---
# Dataset Card for LaRoSeDa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ancatache/LaRoSeDa)
- **Repository:** [Github](https://github.com/ancatache/LaRoSeDa)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2101.04197.pdf)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** raducu.ionescu@gmail.com
### Dataset Summary
LaRoSeDa - A **La**rge **Ro**manian **Se**ntiment **Da**ta Set. LaRoSeDa contains 15,000 reviews written in Romanian, of which 7,500 are positive and 7,500 negative.
Each sample has one of four star ratings: 1 or 2 for reviews of negative polarity, and 4 or 5 for positive ones.
The 15,000 samples in the corpus, each labelled with its star rating, are split into train and test subsets of 12,000 and 3,000 samples, respectively.
### Supported Tasks and Leaderboards
[LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/)
### Languages
The text dataset is in Romanian (`ro`).
## Dataset Structure
### Data Instances
Below is an example sample from LaRoSeDa:
```
{
"index": "9675",
"title": "Nu recomand",
"content": "probleme cu localizarea, mari...",
"starRating": 1,
}
```
where "9675" is the sample index, followed by the title of the review, review content and then the star rating given by the user.
### Data Fields
- `index`: string, the unique identifier of a sample.
- `title`: string, the review title.
- `content`: string, the content of the review.
- `starRating`: integer, with values in the following set {1, 2, 4, 5}.
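Since polarity is derived from the star rating (1 or 2 negative, 4 or 5 positive), the binary sentiment label can be recovered with a simple mapping; a minimal sketch, where the function name is illustrative:

```python
# Sketch: map LaRoSeDa's `starRating` field (values in {1, 2, 4, 5})
# to a binary polarity label, as described in the summary above.
def polarity(star_rating: int) -> str:
    if star_rating in (1, 2):
        return "negative"
    if star_rating in (4, 5):
        return "positive"
    raise ValueError(f"unexpected star rating: {star_rating}")

print(polarity(1))  # -> negative
print(polarity(5))  # -> positive
```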
### Data Splits
The train/test split contains 12,000/3,000 samples tagged with the star rating assigned to each sample in the dataset.
## Dataset Creation
### Curation Rationale
The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from making decisions based on features that are not related to the topic.
For example, named entities that refer to politicians or football players names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543).
### Source Data
#### Initial Data Collection and Normalization
For the data collection, one of the largest Romanian e-commerce platforms was targeted. Along with the textual content of each review, the associated star rating was also collected in order to automatically assign labels to
the collected text samples.
#### Who are the source language producers?
The original text comes from one of the largest e-commerce platforms in Romania.
### Annotations
#### Annotation process
As mentioned above, LaRoSeDa is composed of product reviews from one of the largest e-commerce websites in Romania. The resulting samples are automatically tagged with the star rating assigned by the users.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The textual data collected for LaRoSeDa consists of product reviews freely available on the Internet.
To the best of the authors' knowledge, the collected texts contain no personal or sensitive information that needed to be considered.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures.
In the past three years there has been growing interest in studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources for this particular language.
### Discussion of Biases
*We note that most of the negative reviews (5,561) are rated with one star. Similarly, most of the positive reviews (6,238) are rated with five stars. Hence, the corpus is highly polarized.*
### Other Known Limitations
*The star rating might not always reflect the polarity of the text. We thus acknowledge that the automatic labeling process is not optimal, i.e. some labels might be noisy.*
## Additional Information
### Dataset Curators
Published and managed by Anca Tache, Mihaela Gaman and Radu Tudor Ionescu.
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@article{
tache2101clustering,
title={Clustering Word Embeddings with Self-Organizing Maps. Application on LaRoSeDa -- A Large Romanian Sentiment Data Set},
author={Anca Maria Tache and Mihaela Gaman and Radu Tudor Ionescu},
journal={ArXiv},
year = {2021}
}
```
### Contributions
Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset. |
lc_quad | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-3.0
multilinguality:
- monolingual
pretty_name: 'LC-QuAD 2.0: Large-scale Complex Question Answering Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: lc-quad-2-0
tags:
- knowledge-base-qa
dataset_info:
features:
- name: NNQT_question
dtype: string
- name: uid
dtype: int32
- name: subgraph
dtype: string
- name: template_index
dtype: int32
- name: question
dtype: string
- name: sparql_wikidata
dtype: string
- name: sparql_dbpedia18
dtype: string
- name: template
dtype: string
- name: paraphrased_question
dtype: string
splits:
- name: train
num_bytes: 16637751
num_examples: 19293
- name: test
num_bytes: 4067092
num_examples: 4781
download_size: 3959901
dataset_size: 20704843
---
# Dataset Card for LC-QuAD 2.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://lc-quad.sda.tech/](http://lc-quad.sda.tech/)
- **Repository:** https://github.com/AskNowQA/LC-QuAD2.0
- **Paper:** [LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia](https://api.semanticscholar.org/CorpusID:198166992)
- **Point of Contact:** [Mohnish Dubey](mailto:dubey@cs.uni-bonn.de) or [Mohnish Dubey](mailto:dubey.mohnish5@gmail.com)
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
### Dataset Summary
LC-QuAD 2.0 is a large question answering dataset with 30,000 pairs of questions and their corresponding SPARQL queries. The target knowledge bases are Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"NNQT_question": "What is the {periodical literature} for {mouthpiece} of {Delta Air Lines}",
"paraphrased_question": "What is Delta Air Line's periodical literature mouthpiece?",
"question": "What periodical literature does Delta Air Lines use as a moutpiece?",
"sparql_dbpedia18": "\"select distinct ?obj where { ?statement <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> <http://wikidata.dbpedia.org/resou...",
"sparql_wikidata": " select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . ?obj wdt:P31 wd:Q1002697 } ",
"subgraph": "simple question right",
"template": " <S P ?O ; ?O instanceOf Type>",
"template_index": 65,
"uid": 19719
}
```
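The `sparql_wikidata` string can be mined for the Wikidata entity and property IDs it mentions, e.g. to link a question back to knowledge-base items. A minimal sketch with a simple regex, using the query from the example above:

```python
import re

# Sketch: extract Wikidata entity IDs (wd:Q...) and property IDs (wdt:P...)
# from a `sparql_wikidata` string. The trailing colon in the patterns keeps
# "wd:" from also matching the "wd" inside "wdt:".
sparql = ("select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . "
          "?obj wdt:P31 wd:Q1002697 }")

entities = re.findall(r"wd:(Q\d+)", sparql)
properties = re.findall(r"wdt:(P\d+)", sparql)

print(entities)    # -> ['Q188920', 'Q1002697']
print(properties)  # -> ['P2813', 'P31']
```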
### Data Fields
The data fields are the same among all splits.
#### default
- `NNQT_question`: a `string` feature.
- `uid`: a `int32` feature.
- `subgraph`: a `string` feature.
- `template_index`: a `int32` feature.
- `question`: a `string` feature.
- `sparql_wikidata`: a `string` feature.
- `sparql_dbpedia18`: a `string` feature.
- `template`: a `string` feature.
- `paraphrased_question`: a `string` feature.
### Data Splits
| name |train|test|
|-------|----:|---:|
|default|19293|4781|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
LC-QuAD 2.0 is licensed under a [Creative Commons Attribution 3.0 Unported License](http://creativecommons.org/licenses/by/3.0/deed.en_US).
### Citation Information
```
@inproceedings{dubey2017lc2,
title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia},
author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens},
booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)},
year={2019},
organization={Springer}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
lener_br | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: lener-br
pretty_name: leNER-br
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ORGANIZACAO
'2': I-ORGANIZACAO
'3': B-PESSOA
'4': I-PESSOA
'5': B-TEMPO
'6': I-TEMPO
'7': B-LOCAL
'8': I-LOCAL
'9': B-LEGISLACAO
'10': I-LEGISLACAO
'11': B-JURISPRUDENCIA
'12': I-JURISPRUDENCIA
config_name: lener_br
splits:
- name: train
num_bytes: 3984189
num_examples: 7828
- name: validation
num_bytes: 719433
num_examples: 1177
- name: test
num_bytes: 823708
num_examples: 1390
download_size: 2983137
dataset_size: 5527330
---
# Dataset Card for leNER-br
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [leNER-BR homepage](https://cic.unb.br/~teodecampos/LeNER-Br/)
- **Repository:** [leNER-BR repository](https://github.com/peluz/lener-br)
- **Paper:** [LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text](https://cic.unb.br/~teodecampos/LeNER-Br/luz_etal_propor2018.pdf)
- **Point of Contact:** [Pedro H. Luz de Araujo](mailto:pedrohluzaraujo@gmail.com)
### Dataset Summary
LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Portuguese.
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0],
"tokens": [
"EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE", "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR", "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM", "GRAU", "RECURSAL"]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.
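A minimal sketch of decoding `ner_tags` ids back into BIO labels and grouping them into entity spans, using the tag list above (the helper name is illustrative):

```python
# Tag list as given in the card; index i corresponds to ner_tag id i.
LABELS = ["O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA",
          "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO",
          "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"]

def extract_entities(tokens, tag_ids):
    """Group BIO-tagged tokens into (entity_type, entity_text) spans."""
    entities, current = [], None
    for token, tag_id in zip(tokens, tag_ids):
        label = LABELS[tag_id]
        if label.startswith("B-"):          # B- opens a new entity
            current = [label[2:], [token]]
            entities.append(current)
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)        # I- continues the open entity
        else:                               # O, or an I- without a matching B-
            current = None
    return [(etype, " ".join(words)) for etype, words in entities]

tokens = ["ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM"]
tags = [0, 0, 1, 2, 0]
print(extract_entities(tokens, tags))  # -> [('ORGANIZACAO', 'MINISTÉRIO PÚBLICO')]
```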
### Data Splits
The data is split into train, validation and test set. The split sizes are as follow:
| Train | Val | Test |
| ------ | ----- | ---- |
| 7828 | 1177 | 1390 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{luz_etal_propor2018,
author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
Renato R. R. {de Oliveira} and Matheus Stauffer and
Samuel Couto and Paulo Bermejo},
title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
publisher = {Springer},
series = {Lecture Notes on Computer Science ({LNCS})},
pages = {313--323},
year = {2018},
month = {September 24-26},
address = {Canela, RS, Brazil},
doi = {10.1007/978-3-319-99722-3_32},
url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset. |
lex_glue | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended
task_categories:
- question-answering
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
- multiple-choice-qa
- topic-classification
pretty_name: LexGLUE
configs:
- case_hold
- ecthr_a
- ecthr_b
- eurlex
- ledgar
- scotus
- unfair_tos
dataset_info:
- config_name: ecthr_a
features:
- name: text
sequence: string
- name: labels
sequence:
class_label:
names:
'0': '2'
'1': '3'
'2': '5'
'3': '6'
'4': '8'
'5': '9'
'6': '10'
'7': '11'
'8': '14'
'9': P1-1
splits:
- name: train
num_bytes: 89637461
num_examples: 9000
- name: test
num_bytes: 11884180
num_examples: 1000
- name: validation
num_bytes: 10985180
num_examples: 1000
download_size: 32852475
dataset_size: 112506821
- config_name: ecthr_b
features:
- name: text
sequence: string
- name: labels
sequence:
class_label:
names:
'0': '2'
'1': '3'
'2': '5'
'3': '6'
'4': '8'
'5': '9'
'6': '10'
'7': '11'
'8': '14'
'9': P1-1
splits:
- name: train
num_bytes: 89657661
num_examples: 9000
- name: test
num_bytes: 11886940
num_examples: 1000
- name: validation
num_bytes: 10987828
num_examples: 1000
download_size: 32852475
dataset_size: 112532429
- config_name: eurlex
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100163'
'1': '100168'
'2': '100169'
'3': '100170'
'4': '100171'
'5': '100172'
'6': '100173'
'7': '100174'
'8': '100175'
'9': '100176'
'10': '100177'
'11': '100179'
'12': '100180'
'13': '100183'
'14': '100184'
'15': '100185'
'16': '100186'
'17': '100187'
'18': '100189'
'19': '100190'
'20': '100191'
'21': '100192'
'22': '100193'
'23': '100194'
'24': '100195'
'25': '100196'
'26': '100197'
'27': '100198'
'28': '100199'
'29': '100200'
'30': '100201'
'31': '100202'
'32': '100204'
'33': '100205'
'34': '100206'
'35': '100207'
'36': '100212'
'37': '100214'
'38': '100215'
'39': '100220'
'40': '100221'
'41': '100222'
'42': '100223'
'43': '100224'
'44': '100226'
'45': '100227'
'46': '100229'
'47': '100230'
'48': '100231'
'49': '100232'
'50': '100233'
'51': '100234'
'52': '100235'
'53': '100237'
'54': '100238'
'55': '100239'
'56': '100240'
'57': '100241'
'58': '100242'
'59': '100243'
'60': '100244'
'61': '100245'
'62': '100246'
'63': '100247'
'64': '100248'
'65': '100249'
'66': '100250'
'67': '100252'
'68': '100253'
'69': '100254'
'70': '100255'
'71': '100256'
'72': '100257'
'73': '100258'
'74': '100259'
'75': '100260'
'76': '100261'
'77': '100262'
'78': '100263'
'79': '100264'
'80': '100265'
'81': '100266'
'82': '100268'
'83': '100269'
'84': '100270'
'85': '100271'
'86': '100272'
'87': '100273'
'88': '100274'
'89': '100275'
'90': '100276'
'91': '100277'
'92': '100278'
'93': '100279'
'94': '100280'
'95': '100281'
'96': '100282'
'97': '100283'
'98': '100284'
'99': '100285'
splits:
- name: train
num_bytes: 390770289
num_examples: 55000
- name: test
num_bytes: 59739102
num_examples: 5000
- name: validation
num_bytes: 41544484
num_examples: 5000
download_size: 125413277
dataset_size: 492053875
- config_name: scotus
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
'5': '6'
'6': '7'
'7': '8'
'8': '9'
'9': '10'
'10': '11'
'11': '12'
'12': '13'
splits:
- name: train
num_bytes: 178959320
num_examples: 5000
- name: test
num_bytes: 76213283
num_examples: 1400
- name: validation
num_bytes: 75600247
num_examples: 1400
download_size: 104763335
dataset_size: 330772850
- config_name: ledgar
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Adjustments
'1': Agreements
'2': Amendments
'3': Anti-Corruption Laws
'4': Applicable Laws
'5': Approvals
'6': Arbitration
'7': Assignments
'8': Assigns
'9': Authority
'10': Authorizations
'11': Base Salary
'12': Benefits
'13': Binding Effects
'14': Books
'15': Brokers
'16': Capitalization
'17': Change In Control
'18': Closings
'19': Compliance With Laws
'20': Confidentiality
'21': Consent To Jurisdiction
'22': Consents
'23': Construction
'24': Cooperation
'25': Costs
'26': Counterparts
'27': Death
'28': Defined Terms
'29': Definitions
'30': Disability
'31': Disclosures
'32': Duties
'33': Effective Dates
'34': Effectiveness
'35': Employment
'36': Enforceability
'37': Enforcements
'38': Entire Agreements
'39': Erisa
'40': Existence
'41': Expenses
'42': Fees
'43': Financial Statements
'44': Forfeitures
'45': Further Assurances
'46': General
'47': Governing Laws
'48': Headings
'49': Indemnifications
'50': Indemnity
'51': Insurances
'52': Integration
'53': Intellectual Property
'54': Interests
'55': Interpretations
'56': Jurisdictions
'57': Liens
'58': Litigations
'59': Miscellaneous
'60': Modifications
'61': No Conflicts
'62': No Defaults
'63': No Waivers
'64': Non-Disparagement
'65': Notices
'66': Organizations
'67': Participations
'68': Payments
'69': Positions
'70': Powers
'71': Publicity
'72': Qualifications
'73': Records
'74': Releases
'75': Remedies
'76': Representations
'77': Sales
'78': Sanctions
'79': Severability
'80': Solvency
'81': Specific Performance
'82': Submission To Jurisdiction
'83': Subsidiaries
'84': Successors
'85': Survival
'86': Tax Withholdings
'87': Taxes
'88': Terminations
'89': Terms
'90': Titles
'91': Transactions With Affiliates
'92': Use Of Proceeds
'93': Vacations
'94': Venues
'95': Vesting
'96': Waiver Of Jury Trials
'97': Waivers
'98': Warranties
'99': Withholdings
splits:
- name: train
num_bytes: 43358315
num_examples: 60000
- name: test
num_bytes: 6845585
num_examples: 10000
- name: validation
num_bytes: 7143592
num_examples: 10000
download_size: 16255623
dataset_size: 57347492
- config_name: unfair_tos
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Limitation of liability
'1': Unilateral termination
'2': Unilateral change
'3': Content removal
'4': Contract by using
'5': Choice of law
'6': Jurisdiction
'7': Arbitration
splits:
- name: train
num_bytes: 1041790
num_examples: 5532
- name: test
num_bytes: 303107
num_examples: 1607
- name: validation
num_bytes: 452119
num_examples: 2275
download_size: 511342
dataset_size: 1797016
- config_name: case_hold
features:
- name: context
dtype: string
- name: endings
sequence: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 74781766
num_examples: 45000
- name: test
num_bytes: 5989964
num_examples: 3600
- name: validation
num_bytes: 6474615
num_examples: 3900
download_size: 30422703
dataset_size: 87246345
---
# Dataset Card for "LexGLUE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lex-glue
- **Repository:** https://github.com/coastalcph/lex-glue
- **Paper:** https://arxiv.org/abs/2110.00976
- **Leaderboard:** https://github.com/coastalcph/lex-glue
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the *Legal General Language Understanding Evaluation (LexGLUE) benchmark*, a benchmark dataset for evaluating the performance of NLP methods on legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely drawn from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or ‘foundation’) models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. With these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
The LexGLUE benchmark is accompanied by experimental infrastructure that relies on the Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue.
### Supported Tasks and Leaderboards
The supported tasks are the following:
<table>
<tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Task Type</td><td>Classes</td></tr>
<tr><td>ECtHR (Task A)</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>Multi-label classification</td><td>10+1</td></tr>
<tr><td>ECtHR (Task B)</td><td> <a href="https://aclanthology.org/2021.naacl-main.22/">Chalkidis et al. (2021a)</a> </td><td>ECHR</td><td>Multi-label classification </td><td>10+1</td></tr>
<tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>Multi-class classification</td><td>14</td></tr>
<tr><td>EUR-LEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. (2021b)</a></td><td>EU Law</td><td>Multi-label classification</td><td>100</td></tr>
<tr><td>LEDGAR</td><td> <a href="https://aclanthology.org/2020.lrec-1.155/">Tuggener et al. (2020)</a></td><td>Contracts</td><td>Multi-class classification</td><td>100</td></tr>
<tr><td>UNFAIR-ToS</td><td><a href="https://arxiv.org/abs/1805.01217"> Lippi et al. (2019)</a></td><td>Contracts</td><td>Multi-label classification</td><td>8+1</td></tr>
<tr><td>CaseHOLD</td><td><a href="https://arxiv.org/abs/2104.08671">Zheng et al. (2021)</a></td><td>US Law</td><td>Multiple choice QA</td><td>n/a</td></tr>
</table>
#### ecthr_a
The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention on Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to the articles of the ECHR that were violated (if any).
#### ecthr_b
The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention on Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to the articles of the ECHR that were allegedly violated (considered by the court).
#### scotus
The US Supreme Court (SCOTUS) is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases that have not been sufficiently resolved by lower courts. This is a single-label multi-class classification task: given a document (court opinion), the task is to predict the relevant issue area. The 14 issue areas cluster 278 issues whose focus is on the subject matter of the controversy (dispute).
#### eurlex
European Union (EU) legislation is published in the EUR-Lex portal. All EU laws are annotated by the EU's Publications Office with multiple concepts from EuroVoc, a multilingual thesaurus maintained by the Publications Office. The current version of EuroVoc contains more than 7k concepts referring to various activities of the EU and its Member States (e.g., economics, health care, trade). Given a document, the task is to predict its EuroVoc labels (concepts).
#### ledgar
The LEDGAR dataset targets contract provision (paragraph) classification. The provisions come from contracts obtained from US Securities and Exchange Commission (SEC) filings, which are publicly available through EDGAR. Each label represents the single main topic (theme) of the corresponding contract provision.
#### unfair_tos
The UNFAIR-ToS dataset contains 50 Terms of Service (ToS) from online platforms (e.g., YouTube, eBay, Facebook). The dataset has been annotated at the sentence level with 8 types of unfair contractual terms, i.e., terms that potentially violate user rights according to European consumer law.
#### case_hold
The CaseHOLD (Case Holdings on Legal Decisions) dataset includes multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus. Holdings are short summaries of legal rulings that accompany referenced decisions and are relevant to the present case. The input consists of an excerpt (or prompt) from a court decision containing a reference to a particular case, with the holding statement masked out. The model must identify the correct (masked) holding statement from a selection of five choices.
The current leaderboard includes several Transformer-based (Vaswani et al., 2017) pre-trained language models, which achieve state-of-the-art performance in most NLP tasks (Bommasani et al., 2021) and NLU benchmarks (Wang et al., 2019a). Results reported by [Chalkidis et al. (2021)](https://arxiv.org/abs/2110.00976):
*Task-wise Test Results*
<table>
<tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>
<tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1</td><td>μ-F1 / m-F1 </td></tr>
<tr><td>TFIDF+SVM</td><td> 64.7 / 51.7 </td><td>74.6 / 65.1 </td><td> <b>78.2</b> / <b>69.5</b> </td><td>71.3 / 51.4 </td><td>87.2 / 82.4 </td><td>95.4 / 78.8</td><td>n/a </td></tr>
<tr><td colspan="8" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
<tr><td>BERT</td> <td> 71.2 / 63.6 </td> <td> 79.7 / 73.4 </td> <td> 68.3 / 58.3 </td> <td> 71.4 / 57.2 </td> <td> 87.6 / 81.8 </td> <td> 95.6 / 81.3 </td> <td> 70.8 </td> </tr>
<tr><td>RoBERTa</td> <td> 69.2 / 59.0 </td> <td> 77.3 / 68.9 </td> <td> 71.6 / 62.0 </td> <td> 71.9 / <b>57.9</b> </td> <td> 87.9 / 82.3 </td> <td> 95.2 / 79.2 </td> <td> 71.4 </td> </tr>
<tr><td>DeBERTa</td> <td> 70.0 / 60.8 </td> <td> 78.8 / 71.0 </td> <td> 71.1 / 62.7 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.1 </td> <td> 95.5 / 80.3 </td> <td> 72.6 </td> </tr>
<tr><td>Longformer</td> <td> 69.9 / 64.7 </td> <td> 79.4 / 71.7 </td> <td> 72.9 / 64.0 </td> <td> 71.6 / 57.7 </td> <td> 88.2 / 83.0 </td> <td> 95.5 / 80.9 </td> <td> 71.9 </td> </tr>
<tr><td>BigBird</td> <td> 70.0 / 62.9 </td> <td> 78.8 / 70.9 </td> <td> 72.8 / 62.0 </td> <td> 71.5 / 56.8 </td> <td> 87.8 / 82.6 </td> <td> 95.7 / 81.3 </td> <td> 70.8 </td> </tr>
<tr><td>Legal-BERT</td> <td> 70.0 / 64.0 </td> <td> <b>80.4</b> / <b>74.7</b> </td> <td> 76.4 / 66.5 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.0 </td> <td> <b>96.0</b> / <b>83.0</b> </td> <td> 75.3 </td> </tr>
<tr><td>CaseLaw-BERT</td> <td> 69.8 / 62.9 </td> <td> 78.8 / 70.3 </td> <td> 76.6 / 65.9 </td> <td> 70.7 / 56.6 </td> <td> 88.3 / 83.0 </td> <td> <b>96.0</b> / 82.3 </td> <td> <b>75.4</b> </td> </tr>
<tr><td colspan="8" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=18)</b></td></tr>
<tr><td>RoBERTa</td> <td> <b>73.8</b> / <b>67.6</b> </td> <td> 79.8 / 71.6 </td> <td> 75.5 / 66.3 </td> <td> 67.9 / 50.3 </td> <td> <b>88.6</b> / <b>83.6</b> </td> <td> 95.8 / 81.6 </td> <td> 74.4 </td> </tr>
</table>
*Averaged (Mean over Tasks) Test Results*
<table>
<tr><td><b>Averaging</b></td><td><b>Arithmetic</b></td><td><b>Harmonic</b></td><td><b>Geometric</b></td></tr>
<tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td></tr>
<tr><td colspan="4" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
<tr><td>BERT</td><td> 77.8 / 69.5 </td><td> 76.7 / 68.2 </td><td> 77.2 / 68.8 </td></tr>
<tr><td>RoBERTa</td><td> 77.8 / 68.7 </td><td> 76.8 / 67.5 </td><td> 77.3 / 68.1 </td></tr>
<tr><td>DeBERTa</td><td> 78.3 / 69.7 </td><td> 77.4 / 68.5 </td><td> 77.8 / 69.1 </td></tr>
<tr><td>Longformer</td><td> 78.5 / 70.5 </td><td> 77.5 / 69.5 </td><td> 78.0 / 70.0 </td></tr>
<tr><td>BigBird</td><td> 78.2 / 69.6 </td><td> 77.2 / 68.5 </td><td> 77.7 / 69.0 </td></tr>
<tr><td>Legal-BERT</td><td> <b>79.8</b> / <b>72.0</b> </td><td> <b>78.9</b> / <b>70.8</b> </td><td> <b>79.3</b> / <b>71.4</b> </td></tr>
<tr><td>CaseLaw-BERT</td><td> 79.4 / 70.9 </td><td> 78.5 / 69.7 </td><td> 78.9 / 70.3 </td></tr>
<tr><td colspan="4" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=18)</b></td></tr>
<tr><td>RoBERTa</td><td> 79.4 / 70.8 </td><td> 78.4 / 69.1 </td><td> 78.9 / 70.0 </td></tr>
</table>
### Languages
We only consider English datasets to make experimentation easier for researchers across the globe.
## Dataset Structure
### Data Instances
#### ecthr_a
An example of 'train' looks as follows.
```json
{
"text": ["8. The applicant was arrested in the early morning of 21 October 1990 ...", ...],
"labels": [6]
}
```
#### ecthr_b
An example of 'train' looks as follows.
```json
{
"text": ["8. The applicant was arrested in the early morning of 21 October 1990 ...", ...],
  "labels": [5, 6]
}
```
#### scotus
An example of 'train' looks as follows.
```json
{
"text": "Per Curiam\nSUPREME COURT OF THE UNITED STATES\nRANDY WHITE, WARDEN v. ROGER L. WHEELER\n Decided December 14, 2015\nPER CURIAM.\nA death sentence imposed by a Kentucky trial court and\naffirmed by the ...",
"label": 8
}
```
#### eurlex
An example of 'train' looks as follows.
```json
{
"text": "COMMISSION REGULATION (EC) No 1629/96 of 13 August 1996 on an invitation to tender for the refund on export of wholly milled round grain rice to certain third countries ...",
"labels": [4, 20, 21, 35, 68]
}
```
#### ledgar
An example of 'train' looks as follows.
```json
{
"text": "All Taxes shall be the financial responsibility of the party obligated to pay such Taxes as determined by applicable law and neither party is or shall be liable at any time for any of the other party ...",
"label": 32
}
```
#### unfair_tos
An example of 'train' looks as follows.
```json
{
"text": "tinder may terminate your account at any time without notice if it believes that you have violated this agreement.",
  "labels": [2]
}
```
#### casehold
An example of 'test' looks as follows.
```json
{
"context": "In Granato v. City and County of Denver, No. CIV 11-0304 MSK/BNB, 2011 WL 3820730 (D.Colo. Aug. 20, 2011), the Honorable Marcia S. Krieger, now-Chief United States District Judge for the District of Colorado, ruled similarly: At a minimum, a party asserting a Mo-nell claim must plead sufficient facts to identify ... to act pursuant to City or State policy, custom, decision, ordinance, re d 503, 506-07 (3d Cir.l985)(<HOLDING>).",
"endings": ["holding that courts are to accept allegations in the complaint as being true including monell policies and writing that a federal court reviewing the sufficiency of a complaint has a limited task",
"holding that for purposes of a class certification motion the court must accept as true all factual allegations in the complaint and may draw reasonable inferences therefrom",
"recognizing that the allegations of the complaint must be accepted as true on a threshold motion to dismiss",
"holding that a court need not accept as true conclusory allegations which are contradicted by documents referred to in the complaint",
"holding that where the defendant was in default the district court correctly accepted the fact allegations of the complaint as true"
],
"label": 0
}
```
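The `label` field indexes into `endings`, so the correct holding can be recovered with a plain list lookup. A minimal sketch (using a truncated, hard-coded instance for illustration; in practice the record would come from the dataset itself):

```python
# Truncated CaseHOLD-style instance; strings are shortened for brevity.
example = {
    "endings": [
        "holding that courts are to accept allegations in the complaint as being true ...",
        "holding that for purposes of a class certification motion ...",
        "recognizing that the allegations of the complaint must be accepted as true ...",
        "holding that a court need not accept as true conclusory allegations ...",
        "holding that where the defendant was in default ...",
    ],
    "label": 0,
}

# The correct (masked) holding is simply endings[label].
correct = example["endings"][example["label"]]
print(correct)
```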
### Data Fields
#### ecthr_a
- `text`: a list of `string` features (list of factual paragraphs (facts) from the case description).
- `labels`: a list of classification labels (a list of violated ECHR articles, if any).
<details>
<summary>List of ECHR articles</summary>
"Article 2", "Article 3", "Article 5", "Article 6", "Article 8", "Article 9", "Article 10", "Article 11", "Article 14", "Article 1 of Protocol 1"
</details>
#### ecthr_b
- `text`: a list of `string` features (list of factual paragraphs (facts) from the case description)
- `labels`: a list of classification labels (a list of articles considered).
<details>
<summary>List of ECHR articles</summary>
"Article 2", "Article 3", "Article 5", "Article 6", "Article 8", "Article 9", "Article 10", "Article 11", "Article 14", "Article 1 of Protocol 1"
</details>
#### scotus
- `text`: a `string` feature (the court opinion).
- `label`: a classification label (the relevant issue area).
<details>
<summary>List of issue areas</summary>
(1, Criminal Procedure), (2, Civil Rights), (3, First Amendment), (4, Due Process), (5, Privacy), (6, Attorneys), (7, Unions), (8, Economic Activity), (9, Judicial Power), (10, Federalism), (11, Interstate Relations), (12, Federal Taxation), (13, Miscellaneous), (14, Private Action)
</details>
#### eurlex
- `text`: a `string` feature (an EU law).
- `labels`: a list of classification labels (a list of relevant EUROVOC concepts).
<details>
<summary>List of EUROVOC concepts</summary>
The list is very long including 100 EUROVOC concepts. You can find the EUROVOC concepts descriptors <a href="https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json">here</a>.
</details>
#### ledgar
- `text`: a `string` feature (a contract provision/paragraph).
- `label`: a classification label (the type of contract provision).
<details>
<summary>List of contract provision types</summary>
"Adjustments", "Agreements", "Amendments", "Anti-Corruption Laws", "Applicable Laws", "Approvals", "Arbitration", "Assignments", "Assigns", "Authority", "Authorizations", "Base Salary", "Benefits", "Binding Effects", "Books", "Brokers", "Capitalization", "Change In Control", "Closings", "Compliance With Laws", "Confidentiality", "Consent To Jurisdiction", "Consents", "Construction", "Cooperation", "Costs", "Counterparts", "Death", "Defined Terms", "Definitions", "Disability", "Disclosures", "Duties", "Effective Dates", "Effectiveness", "Employment", "Enforceability", "Enforcements", "Entire Agreements", "Erisa", "Existence", "Expenses", "Fees", "Financial Statements", "Forfeitures", "Further Assurances", "General", "Governing Laws", "Headings", "Indemnifications", "Indemnity", "Insurances", "Integration", "Intellectual Property", "Interests", "Interpretations", "Jurisdictions", "Liens", "Litigations", "Miscellaneous", "Modifications", "No Conflicts", "No Defaults", "No Waivers", "Non-Disparagement", "Notices", "Organizations", "Participations", "Payments", "Positions", "Powers", "Publicity", "Qualifications", "Records", "Releases", "Remedies", "Representations", "Sales", "Sanctions", "Severability", "Solvency", "Specific Performance", "Submission To Jurisdiction", "Subsidiaries", "Successors", "Survival", "Tax Withholdings", "Taxes", "Terminations", "Terms", "Titles", "Transactions With Affiliates", "Use Of Proceeds", "Vacations", "Venues", "Vesting", "Waiver Of Jury Trials", "Waivers", "Warranties", "Withholdings",
</details>
#### unfair_tos
- `text`: a `string` feature (a ToS sentence)
- `labels`: a list of classification labels (a list of unfair types, if any).
<details>
<summary>List of unfair types</summary>
"Limitation of liability", "Unilateral termination", "Unilateral change", "Content removal", "Contract by using", "Choice of law", "Jurisdiction", "Arbitration"
</details>
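Since `labels` is a sequence of class ids, decoding an instance back to the unfair-term names is a one-line lookup. A minimal sketch (hard-coding the label names listed above rather than reading them from the dataset's features):

```python
# UNFAIR-ToS label names, in the id order given above.
UNFAIR_TYPES = [
    "Limitation of liability", "Unilateral termination", "Unilateral change",
    "Content removal", "Contract by using", "Choice of law",
    "Jurisdiction", "Arbitration",
]

def decode_labels(label_ids):
    """Map a list of class ids to their unfair-term names."""
    return [UNFAIR_TYPES[i] for i in label_ids]

print(decode_labels([1, 7]))  # ['Unilateral termination', 'Arbitration']
```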
#### casehold
- `context`: a `string` feature (a context sentence incl. a masked holding statement).
- `endings`: a list of `string` features (a list of candidate holding statements).
- `label`: a classification label (the id of the original/correct holding).
### Data Splits
<table>
<tr><td>Dataset </td><td>Training</td><td>Development</td><td>Test</td><td>Total</td></tr>
<tr><td>ECtHR (Task A)</td><td>9,000</td><td>1,000</td><td>1,000</td><td>11,000</td></tr>
<tr><td>ECtHR (Task B)</td><td>9,000</td><td>1,000</td><td>1,000</td><td>11,000</td></tr>
<tr><td>SCOTUS</td><td>5,000</td><td>1,400</td><td>1,400</td><td>7,800</td></tr>
<tr><td>EUR-LEX</td><td>55,000</td><td>5,000</td><td>5,000</td><td>65,000</td></tr>
<tr><td>LEDGAR</td><td>60,000</td><td>10,000</td><td>10,000</td><td>80,000</td></tr>
<tr><td>UNFAIR-ToS</td><td>5,532</td><td>2,275</td><td>1,607</td><td>9,414</td></tr>
<tr><td>CaseHOLD</td><td>45,000</td><td>3,900</td><td>3,900</td><td>52,800</td></tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
<table>
<tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Task Type</td></tr>
<tr><td>ECtHR (Task A)</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>Multi-label classification</td></tr>
<tr><td>ECtHR (Task B)</td><td> <a href="https://aclanthology.org/2021.naacl-main.22/">Chalkidis et al. (2021a)</a> </td><td>ECHR</td><td>Multi-label classification </td></tr>
<tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>Multi-class classification</td></tr>
<tr><td>EUR-LEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. (2021b)</a></td><td>EU Law</td><td>Multi-label classification</td></tr>
<tr><td>LEDGAR</td><td> <a href="https://aclanthology.org/2020.lrec-1.155/">Tuggener et al. (2020)</a></td><td>Contracts</td><td>Multi-class classification</td></tr>
<tr><td>UNFAIR-ToS</td><td><a href="https://arxiv.org/abs/1805.01217"> Lippi et al. (2019)</a></td><td>Contracts</td><td>Multi-label classification</td></tr>
<tr><td>CaseHOLD</td><td><a href="https://arxiv.org/abs/2104.08671">Zheng et al. (2021)</a></td><td>US Law</td><td>Multiple choice QA</td></tr>
</table>
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Curators
*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
*LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
*2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
[*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
*LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
*2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*](https://arxiv.org/abs/2110.00976)
```
@inproceedings{chalkidis-etal-2021-lexglue,
title={LexGLUE: A Benchmark Dataset for Legal Language Understanding in English},
author={Chalkidis, Ilias and Jana, Abhik and Hartung, Dirk and
Bommarito, Michael and Androutsopoulos, Ion and Katz, Daniel Martin and
Aletras, Nikolaos},
year={2022},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
  address={Dublin, Ireland},
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
liar | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: liar
pretty_name: LIAR
tags:
- fake-news-detection
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': half-true
'2': mostly-true
'3': 'true'
'4': barely-true
'5': pants-fire
- name: statement
dtype: string
- name: subject
dtype: string
- name: speaker
dtype: string
- name: job_title
dtype: string
- name: state_info
dtype: string
- name: party_affiliation
dtype: string
- name: barely_true_counts
dtype: float32
- name: false_counts
dtype: float32
- name: half_true_counts
dtype: float32
- name: mostly_true_counts
dtype: float32
- name: pants_on_fire_counts
dtype: float32
- name: context
dtype: string
splits:
- name: train
num_bytes: 2730651
num_examples: 10269
- name: test
num_bytes: 341414
num_examples: 1283
- name: validation
num_bytes: 341592
num_examples: 1284
download_size: 1013571
dataset_size: 3413657
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
statement: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for LIAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.cs.ucsb.edu/~william/
- **Repository:**
- **Paper:** https://arxiv.org/abs/1705.00648
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LIAR is a dataset for fake news detection with 12.8K human-labeled short statements collected from PolitiFact.com's API; each statement is evaluated by a PolitiFact.com editor for its truthfulness. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. In each case, the labeler provides a lengthy analysis report to ground each judgment.
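Note that the class ids in this card's `label` feature do not follow the order of truthfulness. A small sketch of the id-to-name mapping, hard-coded from the label list in the metadata above:

```python
# LIAR label names in the id order used by this dataset card
# (ids are NOT ordered by degree of truthfulness).
LIAR_LABELS = ["false", "half-true", "mostly-true",
               "true", "barely-true", "pants-fire"]

def label_name(label_id):
    """Map a class id to its veracity label."""
    return LIAR_LABELS[label_id]

print(label_name(5))  # pants-fire
```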
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
librispeech_asr | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR): the model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard at https://huggingface.co/spaces/huggingface/hf-speech-bench, which ranks models uploaded to the Hub by their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
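Since WER is the headline metric, a minimal reference implementation may help make the evaluation concrete. This is a sketch using word-level Levenshtein distance, not the exact scoring script used by the leaderboard:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("A MAN SAID TO THE UNIVERSE", "A MAN SAID TO UNIVERSE"))
```

In practice most users rely on an existing metric implementation (e.g. the `jiwer` package), but the computation above is what the number means.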
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (usually called `file`) and its transcription (called `text`). Some additional information about the speaker and the passage containing the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
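The decoded `audio` dictionary makes clip-level statistics easy to derive. Below is a sketch on a synthetic sample whose shape mirrors the instance above; the array is a silent stand-in, not real LibriSpeech audio:

```python
import numpy as np

# Synthetic stand-in for one decoded sample (same keys as a real row).
sample = {
    "audio": {
        "array": np.zeros(32000, dtype=np.float32),  # 2 s of 16 kHz audio
        "sampling_rate": 16000,
    },
    "text": "A MAN SAID TO THE UNIVERSE SIR I EXIST",
}

# Duration in seconds = number of samples / sampling rate.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(duration_s)  # 2.0
```

On the real dataset, remember the access-order note above: index the row first (`dataset[0]["audio"]`) so only that one file is decoded.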
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
librispeech_lm | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: LibrispeechLm
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: null
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4418577129
num_examples: 40418260
download_size: 1507274412
dataset_size: 4418577129
---
# Dataset Card for "librispeech_lm"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.openslr.org/11](http://www.openslr.org/11)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.51 GB
- **Size of the generated dataset:** 4.42 GB
- **Total amount of disk used:** 5.93 GB
### Dataset Summary
Language modeling resources to be used in conjunction with the LibriSpeech ASR corpus.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.51 GB
- **Size of the generated dataset:** 4.42 GB
- **Total amount of disk used:** 5.93 GB
An example of 'train' looks as follows.
```
{
"text": "This is a test file"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
### Data Splits
| name | train |
|-------|-------:|
|default|40418260|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
limit | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|net-activities-captions
- original
task_categories:
- token-classification
- text-classification
task_ids:
- multi-class-classification
- named-entity-recognition
paperswithcode_id: limit
pretty_name: LiMiT
dataset_info:
features:
- name: id
dtype: int32
- name: sentence
dtype: string
- name: motion
dtype: string
- name: motion_entities
list:
- name: entity
dtype: string
- name: start_index
dtype: int32
splits:
- name: train
num_bytes: 3064208
num_examples: 23559
- name: test
num_bytes: 139742
num_examples: 1000
download_size: 4214925
dataset_size: 3203950
---
# Dataset Card for LiMiT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** [github](https://github.com/ilmgut/limit_dataset)
- **Paper:** [LiMiT: The Literal Motion in Text Dataset](https://www.aclweb.org/anthology/2020.findings-emnlp.88/)
- **Leaderboard:** N/A
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Motion recognition is one of the basic cognitive capabilities of many life forms, yet identifying the motion of physical entities in natural language has not been explored extensively or empirically.
The Literal-Motion-in-Text (LiMiT) dataset is a large human-annotated collection of English sentences
describing the physical occurrence of motion, with the physical entities in motion annotated.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
Example of one instance in the dataset
```
{
"id": 0,
"motion": "yes",
"motion_entities": [
{
"entity": "little boy",
"start_index": 2
},
{
"entity": "ball",
"start_index": 30
}
],
"sentence": " A little boy holding a yellow ball walks by."
}
```
### Data Fields
- `id`: integer index of the example
- `motion`: indicates whether the sentence describes literal motion, i.e. the movement of a physical entity
- `motion_entities`: a `list` of `dict`s with the following keys
  - `entity`: the extracted entity in motion
  - `start_index`: index in the sentence of the first character of the entity text
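To see how `start_index` lines up with the sentence, the instance above can be checked directly. Note that in that sample the offsets match the *stripped* sentence (the displayed text carries a leading space); this is an observation from the example, not a documented guarantee:

```python
example = {
    "sentence": " A little boy holding a yellow ball walks by.",
    "motion_entities": [
        {"entity": "little boy", "start_index": 2},
        {"entity": "ball", "start_index": 30},
    ],
}

# Strip before slicing so the character offsets line up.
text = example["sentence"].strip()
spans = [
    text[e["start_index"]: e["start_index"] + len(e["entity"])]
    for e in example["motion_entities"]
]
print(spans)  # ['little boy', 'ball']
```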
### Data Splits
The dataset is split into `train` and `test` sets with the following sizes:
| | train | test |
| ----- |------:|-----:|
| Number of examples | 23559 | 1000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{manotas-etal-2020-limit,
title = "{L}i{M}i{T}: The Literal Motion in Text Dataset",
author = "Manotas, Irene and
Vo, Ngoc Phuoc An and
Sheinin, Vadim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.88",
doi = "10.18653/v1/2020.findings-emnlp.88",
pages = "991--1000",
abstract = "Motion recognition is one of the basic cognitive capabilities of many life forms, yet identifying motion of physical entities in natural language have not been explored extensively and empirically. We present the Literal-Motion-in-Text (LiMiT) dataset, a large human-annotated collection of English text sentences describing physical occurrence of motion, with annotated physical entities in motion. We describe the annotation process for the dataset, analyze its scale and diversity, and report results of several baseline models. We also present future research directions and applications of the LiMiT dataset and share it publicly as a new resource for the research community.",
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
lince | ---
paperswithcode_id: lince
pretty_name: Linguistic Code-switching Evaluation Dataset
dataset_info:
- config_name: lid_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 4745003
num_examples: 21030
- name: validation
num_bytes: 739950
num_examples: 3332
- name: test
num_bytes: 1337727
num_examples: 8289
download_size: 1188861
dataset_size: 6822680
- config_name: lid_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 1662284
num_examples: 4823
- name: validation
num_bytes: 268930
num_examples: 744
- name: test
num_bytes: 456850
num_examples: 1854
download_size: 432854
dataset_size: 2388064
- config_name: lid_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 3804156
num_examples: 8464
- name: validation
num_bytes: 490566
num_examples: 1116
- name: test
num_bytes: 590488
num_examples: 1663
download_size: 803806
dataset_size: 4885210
- config_name: lid_nepeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 2239014
num_examples: 8451
- name: validation
num_bytes: 351649
num_examples: 1332
- name: test
num_bytes: 620512
num_examples: 3228
download_size: 545342
dataset_size: 3211175
- config_name: pos_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 5467832
num_examples: 27893
- name: validation
num_bytes: 840593
num_examples: 4298
- name: test
num_bytes: 1758626
num_examples: 10720
download_size: 819657
dataset_size: 8067051
- config_name: pos_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 537541
num_examples: 1030
- name: validation
num_bytes: 80886
num_examples: 160
- name: test
num_bytes: 131192
num_examples: 299
download_size: 113872
dataset_size: 749619
- config_name: ner_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 9836312
num_examples: 33611
- name: validation
num_bytes: 2980990
num_examples: 10085
- name: test
num_bytes: 6530956
num_examples: 23527
download_size: 3075520
dataset_size: 19348258
- config_name: ner_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 3887684
num_examples: 10103
- name: validation
num_bytes: 431414
num_examples: 1122
- name: test
num_bytes: 367310
num_examples: 1110
download_size: 938671
dataset_size: 4686408
- config_name: ner_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 474639
num_examples: 1243
- name: validation
num_bytes: 121403
num_examples: 314
- name: test
num_bytes: 185220
num_examples: 522
download_size: 141285
dataset_size: 781262
- config_name: sa_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: sa
dtype: string
splits:
- name: train
num_bytes: 3587783
num_examples: 12194
- name: validation
num_bytes: 546692
num_examples: 1859
- name: test
num_bytes: 1349407
num_examples: 4736
download_size: 1031412
dataset_size: 5483882
---
# Dataset Card for "lince"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ritual.uh.edu/lince](http://ritual.uh.edu/lince)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.09 MB
- **Size of the generated dataset:** 56.42 MB
- **Total amount of disk used:** 65.52 MB
### Dataset Summary
LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### lid_hineng
- **Size of downloaded dataset files:** 0.43 MB
- **Size of the generated dataset:** 2.39 MB
- **Total amount of disk used:** 2.82 MB
An example of 'validation' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
"words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
```
#### lid_msaea
- **Size of downloaded dataset files:** 0.81 MB
- **Size of the generated dataset:** 4.89 MB
- **Total amount of disk used:** 5.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"idx": 0,
"lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
"words": "[\"علاء\", \"بخير\", \"،\", \"معنوياته\", \"كويسة\", \".\", \"..\", \"اسخف\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بييقى\", \"مقفول\", \"عليه\"..."
}
```
#### lid_nepeng
- **Size of downloaded dataset files:** 0.55 MB
- **Size of the generated dataset:** 3.21 MB
- **Total amount of disk used:** 3.75 MB
An example of 'validation' looks as follows.
```
{
"idx": 1,
"lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
"words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}
```
#### lid_spaeng
- **Size of downloaded dataset files:** 1.18 MB
- **Size of the generated dataset:** 6.83 MB
- **Total amount of disk used:** 8.01 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
"words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}
```
#### ner_hineng
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.79 MB
- **Total amount of disk used:** 0.92 MB
An example of 'train' looks as follows.
```
{
"idx": 1,
"lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
"words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
```
### Data Fields
The data fields are the same among all splits.
#### lid_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_msaea
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_nepeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_spaeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### ner_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
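Because the `words` and tag sequences (`lid`, `ner`) are aligned token-for-token, a common first step is to zip them into (token, tag) pairs. A small sketch on the `lid_spaeng` instance shown above:

```python
example = {
    "words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"],
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
}

# Align each token with its language-ID tag.
pairs = list(zip(example["words"], example["lid"]))

# e.g. collect only the tokens tagged as the first language (English here).
lang1_tokens = [w for w, tag in pairs if tag == "lang1"]
print(lang1_tokens)  # ['make', 'a', 'wish', 'night', 'night']
```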
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
|lid_hineng| 4823| 744|1854|
|lid_msaea | 8464| 1116|1663|
|lid_nepeng| 8451| 1332|3228|
|lid_spaeng|21030| 3332|8289|
|ner_hineng| 1243| 314| 522|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@gaguilar](https://github.com/gaguilar) for adding this dataset. |
linnaeus | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: linnaeus
pretty_name: LINNAEUS
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
config_name: linnaeus
splits:
- name: train
num_bytes: 4772417
num_examples: 11936
- name: validation
num_bytes: 1592823
num_examples: 4079
- name: test
num_bytes: 2802877
num_examples: 7143
download_size: 18204624
dataset_size: 9168117
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [linnaeus](http://linnaeus.sourceforge.net/)
- **Repository:**
- **Paper:** [BMC Bioinformatics](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-85)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LINNAEUS is a general-purpose dictionary matching software, capable of processing multiple types of document formats in the biomedical domain (MEDLINE, PMC, BMC, OTMI, text, etc.). It can produce multiple types of output (XML, HTML, tab-separated-value file, or save to a database). It also contains methods for acting as a server (including load balancing across several servers), allowing clients to request matching over a network. A package with files for recognizing and identifying species names is available for LINNAEUS, showing 94% recall and 97% precision compared to LINNAEUS-species-corpus.
### Supported Tasks and Leaderboards
This dataset is used for species Named Entity Recognition.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the dataset is:
```
{'id': '2',
'tokens': ['Scp160p', 'is', 'a', '160', 'kDa', 'protein', 'in', 'the', 'yeast', 'Saccharomyces', 'cerevisiae', 'that', 'contains', '14', 'repeats', 'of', 'the', 'hnRNP', 'K', '-', 'homology', '(', 'KH', ')', 'domain', ',', 'and', 'demonstrates', 'significant', 'sequence', 'homology', 'to', 'a', 'family', 'of', 'proteins', 'collectively', 'known', 'as', 'vigilins', '.'],
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no species mentioned, `1` signals the first token of a species and `2` the subsequent tokens of the species.
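As a hedged sketch of how the `ner_tags` scheme above can be consumed, the helper below reassembles species mentions from `tokens` and `ner_tags`; it is illustrative only, not part of the dataset loader.

```python
# Hypothetical helper illustrating the B/I/O scheme described above;
# 1 = B (first token of a species), 2 = I (subsequent token), 0 = O.
def extract_species(tokens, ner_tags):
    spans, current = [], []
    for token, tag in zip(tokens, ner_tags):
        if tag == 1:  # B: close any open mention, start a new one
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == 2 and current:  # I: extend the open mention
            current.append(token)
        else:  # O: close any open mention
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# The "yeast Saccharomyces cerevisiae" fragment from the example instance:
print(extract_species(
    ["the", "yeast", "Saccharomyces", "cerevisiae", "that"],
    [0, 1, 1, 2, 0]))
# ['yeast', 'Saccharomyces cerevisiae']
```

Note that two adjacent `B` tags (as in the example instance) start two separate mentions.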
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
| linnaeus |11936| 4079|7143|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{Gerner2010,
abstract = {The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles.},
author = {Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
doi = {10.1186/1471-2105-11-85},
issn = {1471-2105},
journal = {BMC Bioinformatics},
number = {1},
pages = {85},
title = {{LINNAEUS: A species name identification system for biomedical literature}},
url = {https://doi.org/10.1186/1471-2105-11-85},
volume = {11},
year = {2010}
}
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. |
liveqa | ---
annotations_creators:
- found
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: liveqa
pretty_name: LiveQA
dataset_info:
features:
- name: id
dtype: int64
- name: passages
sequence:
- name: is_question
dtype: bool
- name: text
dtype: string
- name: candidate1
dtype: string
- name: candidate2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 112187507
num_examples: 1670
download_size: 114704569
dataset_size: 112187507
---
# Dataset Card for LiveQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Repository:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Paper:** [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** Qianying Liu
### Dataset Summary
The LiveQA dataset is a Chinese question-answering resource constructed from play-by-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, collected from the Chinese Hupu website.
### Supported Tasks and Leaderboards
Question Answering.
[More Information Needed]
### Languages
Chinese.
## Dataset Structure
### Data Instances
Each instance represents a timeline (i.e., a game) with an identifier. The `passages` field comprises an array of text or question segments. In the following truncated example, user comments about the game are followed by a question asking which team will be the first to reach 60 points.
```python
{
'id': 1,
'passages': [
{
"is_question": False,
"text": "'我希望两位球员都能做到!!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": False,
"text": "新年给我们送上精彩比赛!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": True,
"text": "先达到60分?",
"candidate1": "火箭",
"candidate2": "勇士",
"answer": "勇士",
},
{
"is_question": False,
"text": "自己急停跳投!!!",
"candidate1": "",
"candidate2": "",
"answer": "",
}
]
}
```
### Data Fields
- id: identifier for the game
- passages: collection of text/question segments
- text: real-time text comment or binary question related to the context
- candidate1/2: one of the two answer options to the question
- answer: correct answer to the question in text
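As a minimal sketch of working with this layout (the `timeline` literal below mirrors the truncated example instance and is not loaded from the dataset), questions can be separated from the commentary context like this:

```python
# A timeline instance following the field layout above (illustrative data).
timeline = {
    "id": 1,
    "passages": [
        {"is_question": False, "text": "新年给我们送上精彩比赛!",
         "candidate1": "", "candidate2": "", "answer": ""},
        {"is_question": True, "text": "先达到60分?",
         "candidate1": "火箭", "candidate2": "勇士", "answer": "勇士"},
    ],
}

# Split the timeline into binary questions and the surrounding commentary.
questions = [p for p in timeline["passages"] if p["is_question"]]
context = [p["text"] for p in timeline["passages"] if not p["is_question"]]

for q in questions:
    # The gold answer is one of the two candidate options.
    assert q["answer"] in (q["candidate1"], q["candidate2"])
    print(q["text"], "->", q["answer"])
```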
### Data Splits
There is no predefined split in this dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
This resource is developed by [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf).
```
@inproceedings{qianying-etal-2020-liveqa,
title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
author = "Qianying, Liu and
Sicong, Jiang and
Yizhong, Wang and
Sujian, Li",
booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
month = oct,
year = "2020",
address = "Haikou, China",
publisher = "Chinese Information Processing Society of China",
url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
pages = "1057--1067"
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. |
lj_speech | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unlicense
multilinguality:
- monolingual
paperswithcode_id: ljspeech
pretty_name: LJ Speech
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: main
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
col_mapping:
file: path
text: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
config_name: main
splits:
- name: train
num_bytes: 4667022
num_examples: 13100
download_size: 2748572632
dataset_size: 4667022
---
# Dataset Card for lj_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The LJ Speech Dataset](https://keithito.com/LJ-Speech-Dataset/)
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech)
- **Point of Contact:** [Keith Ito](mailto:kito@kito.us)
### Dataset Summary
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Automatic Speech Recognition (ASR) or Text-to-Speech (TTS).
- `other:automatic-speech-recognition`: An ASR model is presented with an audio file and asked to transcribe the audio file to written text.
The most common ASR evaluation metric is the word error rate (WER).
- `other:text-to-speech`: A TTS model is given a written text in natural language and asked to generate a speech audio file.
A reasonable evaluation metric is the mean opinion score (MOS) of audio quality.
The dataset has an active leaderboard which can be found at https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech
### Languages
The transcriptions and audio are in English.
## Dataset Structure
### Data Instances
A data point comprises the path to the audio file, called `file` and its transcription, called `text`.
A normalized version of the text is also provided.
```
{
'id': 'LJ002-0026',
'file': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'audio': {'path': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 22050},
 'text': 'in the three years between 1813 and 1816,',
'normalized_text': 'in the three years between eighteen thirteen and eighteen sixteen,',
}
```
Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22050 Hz.
### Data Fields
- id: unique id of the data sample.
- file: a path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- normalized_text: the transcription with numbers, ordinals, and monetary units expanded into full words.
### Data Splits
The dataset is not pre-split. Some statistics:
- Total Clips: 13,100
- Total Words: 225,715
- Total Characters: 1,308,678
- Total Duration: 23:55:17
- Mean Clip Duration: 6.57 sec
- Min Clip Duration: 1.11 sec
- Max Clip Duration: 10.10 sec
- Mean Words per Clip: 17.23
- Distinct Words: 13,821
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
This dataset consists of excerpts from the following works:
- Morris, William, et al. Arts and Crafts Essays. 1893.
- Griffiths, Arthur. The Chronicles of Newgate, Vol. 2. 1884.
- Roosevelt, Franklin D. The Fireside Chats of Franklin Delano Roosevelt. 1933-42.
- Harland, Marion. Marion Harland's Cookery for Beginners. 1893.
- Rolt-Wheeler, Francis. The Science - History of the Universe, Vol. 5: Biology. 1910.
- Banks, Edgar J. The Seven Wonders of the Ancient World. 1916.
- President's Commission on the Assassination of President Kennedy. Report of the President's Commission on the Assassination of President Kennedy. 1964.
Some details about normalization:
- The normalized transcription has the numbers, ordinals, and monetary units expanded into full words (UTF-8)
- 19 of the transcriptions contain non-ASCII characters (for example, LJ016-0257 contains "raison d'être").
- The following abbreviations appear in the text. They may be expanded as follows:
| Abbreviation | Expansion |
|--------------|-----------|
| Mr. | Mister |
| Mrs. | Misess (*) |
| Dr. | Doctor |
| No. | Number |
| St. | Saint |
| Co. | Company |
| Jr. | Junior |
| Maj. | Major |
| Gen. | General |
| Drs. | Doctors |
| Rev. | Reverend |
| Lt. | Lieutenant |
| Hon. | Honorable |
| Sgt. | Sergeant |
| Capt. | Captain |
| Esq. | Esquire |
| Ltd. | Limited |
| Col. | Colonel |
| Ft. | Fort |
(*) there's no standard expansion for "Mrs."
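As a sketch of how expansions like those in the table above might be applied (this is an illustrative subset, not the normalization procedure actually used to build the corpus), a simple regex substitution suffices:

```python
import re

# Illustrative subset of the abbreviation table above; "Mrs." is rendered
# as "Misess" per the note that it has no standard expansion.
ABBREVIATIONS = {
    "Mr.": "Mister", "Mrs.": "Misess", "Dr.": "Doctor",
    "No.": "Number", "St.": "Saint", "Co.": "Company",
}

def expand_abbreviations(text):
    # Match longer abbreviations first so e.g. "Mrs." is not caught by "Mr.".
    pattern = "|".join(
        re.escape(k) for k in sorted(ABBREVIATIONS, key=len, reverse=True))
    return re.sub(pattern, lambda m: ABBREVIATIONS[m.group(0)], text)

print(expand_abbreviations("Mr. and Mrs. Smith met Dr. Jones."))
# Mister and Misess Smith met Doctor Jones.
```

A naive substitution like this cannot distinguish "St." as "Saint" from "St." as "Street", so real normalization needs context.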
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
- The audio clips range in length from approximately 1 second to 10 seconds. They were segmented automatically based on silences in the recording. Clip boundaries generally align with sentence or clause boundaries, but not always.
- The text was matched to the audio manually, and a QA pass was done to ensure that the text accurately matched the words spoken in the audio.
#### Who are the annotators?
Recordings by Linda Johnson from LibriVox. Alignment and annotation by Keith Ito.
### Personal and Sensitive Information
The dataset consists of recordings of a speaker who donated her voice online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
- The original LibriVox recordings were distributed as 128 kbps MP3 files. As a result, they may contain artifacts introduced by the MP3 encoding.
## Additional Information
### Dataset Curators
The dataset was initially created by Keith Ito and Linda Johnson.
### Licensing Information
Public Domain ([LibriVox](https://librivox.org/pages/public-domain/))
### Citation Information
```
@misc{ljspeech17,
author = {Keith Ito and Linda Johnson},
title = {The LJ Speech Dataset},
howpublished = {\url{https://keithito.com/LJ-Speech-Dataset/}},
year = 2017
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset. |
lm1b | ---
pretty_name: One Billion Word Language Model Benchmark
paperswithcode_id: billion-word-benchmark
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 4238206516
num_examples: 30301028
- name: test
num_bytes: 42942045
num_examples: 306688
download_size: 1792209805
dataset_size: 4281148561
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for One Billion Word Language Model Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [statmt](http://www.statmt.org/lm-benchmark/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [arxiv](https://arxiv.org/pdf/1312.3005v3.pdf)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.79 GB
- **Size of the generated dataset:** 4.28 GB
- **Total amount of disk used:** 6.07 GB
### Dataset Summary
A benchmark corpus to be used for measuring progress in statistical language modeling. This has almost one billion words in the training data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 1.79 GB
- **Size of the generated dataset:** 4.28 GB
- **Total amount of disk used:** 6.07 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "While athletes in different professions dealt with doping scandals and other controversies , Woods continued to do what he did best : dominate the field of professional golf and rake in endorsements ."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
### Data Splits
| name | train | test |
|------------|----------|--------|
| plain_text | 30301028 | 306688 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```bibtex
@misc{chelba2014billion,
title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling},
author={Ciprian Chelba and Tomas Mikolov and Mike Schuster and Qi Ge and Thorsten Brants and Phillipp Koehn and Tony Robinson},
year={2014},
eprint={1312.3005},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
lst20 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: LST20
tags:
- word-segmentation
- clause-segmentation
- sentence-segmentation
dataset_info:
features:
- name: id
dtype: string
- name: fname
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NN
'1': VV
'2': PU
'3': CC
'4': PS
'5': AX
'6': AV
'7': FX
'8': NU
'9': AJ
'10': CL
'11': PR
'12': NG
'13': PA
'14': XX
'15': IJ
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B_BRN
'2': B_DES
'3': B_DTM
'4': B_LOC
'5': B_MEA
'6': B_NUM
'7': B_ORG
'8': B_PER
'9': B_TRM
'10': B_TTL
'11': I_BRN
'12': I_DES
'13': I_DTM
'14': I_LOC
'15': I_MEA
'16': I_NUM
'17': I_ORG
'18': I_PER
'19': I_TRM
'20': I_TTL
'21': E_BRN
'22': E_DES
'23': E_DTM
'24': E_LOC
'25': E_MEA
'26': E_NUM
'27': E_ORG
'28': E_PER
'29': E_TRM
'30': E_TTL
- name: clause_tags
sequence:
class_label:
names:
'0': O
'1': B_CLS
'2': I_CLS
'3': E_CLS
config_name: lst20
splits:
- name: train
num_bytes: 107725145
num_examples: 63310
- name: validation
num_bytes: 9646167
num_examples: 5620
- name: test
num_bytes: 8217425
num_examples: 5250
download_size: 0
dataset_size: 125588737
---
# Dataset Card for LST20
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://aiforthai.in.th/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email](thepchai@nectec.or.th)
### Dataset Summary
LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
In total, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, annotated with
16 distinct POS tags. All 3,745 documents are also labeled with one of 15 news genres. Given its size, this dataset is
considered large enough for developing joint neural models for NLP.
The corpus must be downloaded manually from https://aiforthai.in.th/corpus.php.
See `LST20 Annotation Guideline.pdf` and `LST20 Brief Specification.pdf` within the downloaded `AIFORTHAI-LST20Corpus.tar.gz` for more details.
### Supported Tasks and Leaderboards
- POS tagging
- NER tagging
- clause segmentation
- sentence segmentation
- word tokenization
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '0', 'ner_tags': [8, 0, 0, 0, 0, 0, 0, 0, 25], 'pos_tags': [0, 0, 0, 1, 0, 8, 8, 8, 0], 'tokens': ['ธรรมนูญ', 'แชมป์', 'สิงห์คลาสสิก', 'กวาด', 'รางวัล', 'แสน', 'สี่', 'หมื่น', 'บาท']}
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '1', 'ner_tags': [8, 18, 28, 0, 0, 0, 0, 6, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 15, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 6], 'pos_tags': [0, 2, 0, 2, 1, 1, 2, 8, 2, 10, 2, 8, 2, 1, 0, 1, 0, 4, 7, 1, 0, 2, 8, 2, 10, 1, 10, 4, 2, 8, 2, 4, 0, 4, 0, 2, 8, 2, 10, 2, 8], 'tokens': ['ธรรมนูญ', '_', 'ศรีโรจน์', '_', 'เก็บ', 'เพิ่ม', '_', '4', '_', 'อันเดอร์พาร์', '_', '68', '_', 'เข้า', 'ป้าย', 'รับ', 'แชมป์', 'ใน', 'การ', 'เล่น', 'อาชีพ', '_', '19', '_', 'ปี', 'เป็น', 'ครั้ง', 'ที่', '_', '8', '_', 'ใน', 'ชีวิต', 'ด้วย', 'สกอร์', '_', '18', '_', 'อันเดอร์พาร์', '_', '270']}
```
### Data Fields
- `id`: nth sentence in each set, starting at 0
- `fname`: text file from which the sentence comes from
- `tokens`: word tokens
- `pos_tags`: POS tags
- `ner_tags`: NER tags
- `clause_tags`: clause tags
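As a hypothetical sketch (not part of the dataset loader), entity spans can be reassembled from LST20's B_/I_/E_ tag ids using the class-label names listed in the config above; the helper tolerates the malformed sequences noted under "Other Known Limitations" by flushing any span left open.

```python
# Class-label names as listed in the dataset config above.
NER_NAMES = (
    ["O"]
    + ["B_" + c for c in ["BRN", "DES", "DTM", "LOC", "MEA", "NUM", "ORG", "PER", "TRM", "TTL"]]
    + ["I_" + c for c in ["BRN", "DES", "DTM", "LOC", "MEA", "NUM", "ORG", "PER", "TRM", "TTL"]]
    + ["E_" + c for c in ["BRN", "DES", "DTM", "LOC", "MEA", "NUM", "ORG", "PER", "TRM", "TTL"]]
)

def decode_entities(tokens, ner_tags):
    """Return (entity_type, surface) spans. Thai is written without spaces,
    so tokens inside a span are joined with no separator."""
    spans, buf, etype = [], [], None
    for token, tag_id in zip(tokens, ner_tags):
        name = NER_NAMES[tag_id]
        if name == "O":
            if buf:  # span left open by a malformed sequence: flush it
                spans.append((etype, "".join(buf)))
                buf, etype = [], None
            continue
        prefix, cls = name.split("_", 1)
        if prefix == "B" or cls != etype:  # new span starts here
            if buf:
                spans.append((etype, "".join(buf)))
            buf, etype = [token], cls
        else:  # I_ or E_ continuing the open span
            buf.append(token)
        if prefix == "E":  # E_ closes the span
            spans.append((etype, "".join(buf)))
            buf, etype = [], None
    if buf:
        spans.append((etype, "".join(buf)))
    return spans

# First example instance above: B_PER on the first token, E_MEA on the last.
print(decode_entities(
    ["ธรรมนูญ", "แชมป์", "สิงห์คลาสสิก", "กวาด", "รางวัล", "แสน", "สี่", "หมื่น", "บาท"],
    [8, 0, 0, 0, 0, 0, 0, 0, 25]))
# [('PER', 'ธรรมนูญ'), ('MEA', 'บาท')]
```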
### Data Splits
| | train | eval | test | all |
|----------------------|-----------|-------------|-------------|-----------|
| words | 2,714,848 | 240,891 | 207,295 | 3,163,034 |
| named entities | 246,529 | 23,176 | 18,315 | 288,020 |
| clauses              | 214,645   | 17,486      | 16,050      | 248,181   |
| sentences | 63,310 | 5,620 | 5,250 | 74,180 |
| distinct words | 42,091 | (oov) 2,595 | (oov) 2,006 | 46,692 |
| breaking spaces※ | 63,310 | 5,620 | 5,250 | 74,180 |
| non-breaking spaces※※| 402,380 | 39,920 | 32,204 | 475,504 |
※ Breaking space = space that is used as a sentence boundary marker
※※ Non-breaking space = space that is not used as a sentence boundary marker
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Respective authors of the news articles
### Annotations
#### Annotation process
Detailed annotation guideline can be found in `LST20 Annotation Guideline.pdf`.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
All texts are from public news. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- Large-scale Thai NER & POS tagging, clause & sentence segmentation, word tokenization
### Discussion of Biases
- All 3,745 texts are from news domain:
- politics: 841
- crime and accident: 592
- economics: 512
- entertainment: 472
- sports: 402
- international: 279
- science, technology and education: 216
- health: 92
- general: 75
- royal: 54
- disaster: 52
- development: 45
- environment: 40
- culture: 40
- weather forecast: 33
- Word tokenization is done according to the InterBEST 2009 Guideline.
### Other Known Limitations
- Some NER tags do not correspond with given labels (`B`, `I`, and so on)
## Additional Information
### Dataset Curators
[NECTEC](https://www.nectec.or.th/en/)
### Licensing Information
1. Non-commercial use, research, and open source
Any non-commercial use of the dataset for research and open-sourced projects is encouraged and free of charge. Please cite our technical report for reference.
If you want to perpetuate your models trained on our dataset and share them to the research community in Thailand, please send your models, code, and APIs to the AI for Thai Project. Please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
Note that modification and redistribution of the dataset by any means are strictly prohibited unless authorized by the corpus authors.
2. Commercial use
In any commercial use of the dataset, there are two options.
- Option 1 (in kind): Contributing a dataset of 50,000 words completely annotated with our annotation scheme within 1 year. Your data will also be shared, and you will be recognized as a dataset co-creator in the research community in Thailand.
- Option 2 (in cash): Purchasing a lifetime license for the entire dataset is required. The purchased rights of use cover only this dataset.
In both options, please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
### Citation Information
```
@article{boonkwan2020annotation,
title={The Annotation Guideline of LST20 Corpus},
author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
journal={arXiv preprint arXiv:2008.05055},
year={2020}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- ga
- gl
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- la
- lt
- lv
- ms
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- ta
- th
- tr
- uk
- ur
- vi
- zh
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- extended|lama
task_categories:
- question-answering
- text-classification
task_ids:
- open-domain-qa
- text-scoring
paperswithcode_id: null
pretty_name: MLama
tags:
- probing
dataset_info:
features:
- name: uuid
dtype: string
- name: lineid
dtype: uint32
- name: obj_uri
dtype: string
- name: obj_label
dtype: string
- name: sub_uri
dtype: string
- name: sub_label
dtype: string
- name: template
dtype: string
- name: language
dtype: string
- name: predicate_id
dtype: string
config_name: all
splits:
- name: test
num_bytes: 125919995
num_examples: 843143
download_size: 40772287
dataset_size: 125919995
---
# Dataset Card for mLAMA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Multilingual LAMA](http://cistern.cis.lmu.de/mlama/)
- **Repository:** [Github](https://github.com/norakassner/mlama)
- **Paper:** [Arxiv](https://arxiv.org/abs/2102.00894)
- **Point of Contact:** [Contact section](http://cistern.cis.lmu.de/mlama/)
### Dataset Summary
This dataset provides the data for mLAMA, a multilingual version of LAMA.
For LAMA itself, see https://github.com/facebookresearch/LAMA. For mLAMA,
the T-REx and Google-RE parts of LAMA were machine translated using
Google Translate and the Wikidata and Google Knowledge Graph APIs. The
machine-translated templates were checked for validity, i.e., whether they
contain exactly one '[X]' and one '[Y]'.
This data can be used for creating fill-in-the-blank queries like
"Paris is the capital of [MASK]" across 53 languages. For more details see
the website http://cistern.cis.lmu.de/mlama/ or the github repo https://github.com/norakassner/mlama.
### Supported Tasks and Leaderboards
Language model knowledge probing.
### Languages
This dataset contains data in 53 languages:
af,ar,az,be,bg,bn,ca,ceb,cs,cy,da,de,el,en,es,et,eu,fa,fi,fr,ga,gl,he,hi,hr,hu,hy,id,it,ja,ka,ko,la,lt,lv,ms,nl,pl,pt,ro,ru,sk,sl,sq,sr,sv,ta,th,tr,uk,ur,vi,zh
## Dataset Structure
For each of the 53 languages and each of the 43 relations/predicates there is a set of triples.
### Data Instances
For each language and relation there are triples, each consisting of a subject, a predicate and an object. For each predicate a template is available. An example for `dataset["test"][0]` is given here:
```python
{
'language': 'af',
'lineid': 0,
'obj_label': 'Frankryk',
'obj_uri': 'Q142',
'predicate_id': 'P1001',
'sub_label': 'President van Frankryk',
'sub_uri': 'Q191954',
'template': "[X] is 'n wettige term in [Y].",
'uuid': '3fe3d4da-9df9-45ba-8109-784ce5fba38a'
}
```
### Data Fields
Each instance has the following fields:
* "uuid": a unique identifier
* "lineid": an identifier unique to mLAMA
* "obj_uri": knowledge graph id of the object
* "obj_label": surface form of the object
* "sub_uri": knowledge graph id of the subject
* "sub_label": surface form of the subject
* "template": the fill-in-the-blank template for the relation
* "language": language code
* "predicate_id": relation id
### Data Splits
There is only one split, labelled `test`.
## Dataset Creation
### Curation Rationale
The dataset was translated into 53 languages to investigate knowledge in pretrained language models
multilingually.
### Source Data
#### Initial Data Collection and Normalization
The data has several sources:
LAMA (https://github.com/facebookresearch/LAMA) licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
T-REx (https://hadyelsahar.github.io/t-rex/) licensed under Creative Commons Attribution-ShareAlike 4.0 International License
Google-RE (https://github.com/google-research-datasets/relation-extraction-corpus)
Wikidata (https://www.wikidata.org/) licensed under Creative Commons CC0 License and Creative Commons Attribution-ShareAlike License
#### Who are the source language producers?
See links above.
### Annotations
#### Annotation process
Crowdsourced (Wikidata) and machine-translated.
#### Who are the annotators?
Unknown.
### Personal and Sensitive Information
Names of (most likely) famous people who have entries in Google Knowledge Graph or Wikidata.
## Considerations for Using the Data
Data was created through machine translation and automatic processes.
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Not all triples are available in all languages.
## Additional Information
### Dataset Curators
The authors of the mLAMA paper and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
```
@article{kassner2021multilingual,
author = {Nora Kassner and
Philipp Dufter and
Hinrich Sch{\"{u}}tze},
title = {Multilingual {LAMA:} Investigating Knowledge in Multilingual Pretrained
Language Models},
journal = {CoRR},
volume = {abs/2102.00894},
year = {2021},
url = {https://arxiv.org/abs/2102.00894},
archivePrefix = {arXiv},
eprint = {2102.00894},
timestamp = {Tue, 09 Feb 2021 13:35:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2102-00894.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
note = {to appear in EACL2021}
}
```
### Contributions
Thanks to [@pdufter](https://github.com/pdufter) for adding this dataset.
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech
pretty_name: Mac-Morpho
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': PREP+PROADJ
'1': IN
'2': PREP+PRO-KS
'3': NPROP
'4': PREP+PROSUB
'5': KC
'6': PROPESS
'7': NUM
'8': PROADJ
'9': PREP+ART
'10': KS
'11': PRO-KS
'12': ADJ
'13': ADV-KS
'14': N
'15': PREP
'16': PROSUB
'17': PREP+PROPESS
'18': PDEN
'19': V
'20': PREP+ADV
'21': PCP
'22': CUR
'23': ADV
'24': PU
'25': ART
splits:
- name: train
num_bytes: 12635011
num_examples: 37948
- name: test
num_bytes: 3095292
num_examples: 9987
- name: validation
num_bytes: 671356
num_examples: 1997
download_size: 2463485
dataset_size: 16401659
---
# Dataset Card for Mac-Morpho
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Mac-Morpho homepage](http://nilc.icmc.usp.br/macmorpho/)
- **Repository:** [Mac-Morpho repository](http://nilc.icmc.usp.br/macmorpho/)
- **Paper:** [Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese](https://journal-bcs.springeropen.com/articles/10.1186/s13173-014-0020-x)
- **Point of Contact:** [Erick R Fonseca](mailto:erickrfonseca@gmail.com)
### Dataset Summary
Mac-Morpho is a corpus of Brazilian Portuguese texts annotated with part-of-speech tags.
Its first version was released in 2003 [1], and since then, two revisions have been made in order
to improve the quality of the resource [2, 3].
The corpus is available for download split into train, development and test sections.
These are 76%, 4% and 20% of the corpus total, respectively (the reason for the unusual numbers
is that the corpus was first split into 80%/20% train/test, and then 5% of the train section was
set aside for development). This split was used in [3], and new POS tagging research with Mac-Morpho
is encouraged to follow it in order to make consistent comparisons possible.
[1] Aluísio, S., Pelizzoni, J., Marchi, A.R., de Oliveira, L., Manenti, R., Marquiafável, V. 2003.
An account of the challenge of tagging a reference corpus for brazilian portuguese.
In: Proceedings of the 6th International Conference on Computational Processing of the Portuguese Language. PROPOR 2003
[2] Fonseca, E.R., Rosa, J.L.G. 2013. Mac-morpho revisited: Towards robust part-of-speech.
In: Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology – STIL
[3] Fonseca, E.R., Aluísio, Sandra Maria, Rosa, J.L.G. 2015.
Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese.
Journal of the Brazilian Computer Society.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Portuguese
## Dataset Structure
### Data Instances
An example from the Mac-Morpho dataset looks as follows:
```
{
"id": "0",
"pos_tags": [14, 19, 14, 15, 22, 7, 14, 9, 14, 9, 3, 15, 3, 3, 24],
"tokens": ["Jersei", "atinge", "média", "de", "Cr$", "1,4", "milhão", "na", "venda", "da", "Pinhal", "em", "São", "Paulo", "."]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `pos_tags`: the PoS tags of each token
The PoS tags correspond to this list:
```
"PREP+PROADJ", "IN", "PREP+PRO-KS", "NPROP", "PREP+PROSUB", "KC", "PROPESS", "NUM", "PROADJ", "PREP+ART", "KS",
"PRO-KS", "ADJ", "ADV-KS", "N", "PREP", "PROSUB", "PREP+PROPESS", "PDEN", "V", "PREP+ADV", "PCP", "CUR", "ADV", "PU", "ART"
```
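The integer values in `pos_tags` index into this list, so the string labels can be recovered by direct indexing. A minimal sketch using the example instance above (the list order matches the `class_label` names declared in the dataset metadata):

```python
# Sketch: mapping integer pos_tags back to their string labels.
POS_TAGS = [
    "PREP+PROADJ", "IN", "PREP+PRO-KS", "NPROP", "PREP+PROSUB", "KC",
    "PROPESS", "NUM", "PROADJ", "PREP+ART", "KS", "PRO-KS", "ADJ",
    "ADV-KS", "N", "PREP", "PROSUB", "PREP+PROPESS", "PDEN", "V",
    "PREP+ADV", "PCP", "CUR", "ADV", "PU", "ART",
]

def decode_tags(pos_tag_ids):
    return [POS_TAGS[i] for i in pos_tag_ids]

# The first few ids from the example instance above:
print(decode_tags([14, 19, 15]))  # → ['N', 'V', 'PREP']
```

The same mapping is also exposed by the dataset's `ClassLabel` feature when loading with the `datasets` library, but the plain list shown here works without any download.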
### Data Splits
The data is split into train, validation and test sets. The split sizes are as follows:
| Train | Val | Test |
| ------ | ----- | ----- |
| 37948 | 1997 | 9987 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{fonseca2015evaluating,
title={Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese},
author={Fonseca, Erick R and Rosa, Jo{\~a}o Lu{\'\i}s G and Alu{\'\i}sio, Sandra Maria},
journal={Journal of the Brazilian Computer Society},
volume={21},
number={1},
pages={2},
year={2015},
publisher={Springer}
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ur
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: makhzan
dataset_info:
features:
- name: file_id
dtype: string
- name: metadata
dtype: string
- name: title
dtype: string
- name: num-words
dtype: int64
- name: contains-non-urdu-languages
dtype: string
- name: document_body
dtype: string
splits:
- name: train
num_bytes: 35637310
num_examples: 5522
download_size: 15187763
dataset_size: 35637310
---
# Dataset Card for makhzan
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://matnsaz.net/en/makhzan
- **Repository:** https://github.com/zeerakahmed/makhzan
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** Zeerak Ahmed
### Dataset Summary
An Urdu text corpus for machine learning, natural language processing and linguistic analysis.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Urdu (`ur`)
## Dataset Structure
### Data Instances
```
{
"contains-non-urdu-languages": "No",
"document_body":
"
<body>
<section>
<p>بنگلہ دیش کی عدالتِ عالیہ نے طلاق کے ایک مقدمے کا فیصلہ کرتے ہوئے علما کے فتووں کو غیر قانونی قرار دیا ہے۔ عدالت نے پارلیمنٹ سے یہ درخواست کی ہے کہ وہ جلد ایسا قانون وضع کرے کہ جس کے بعد فتویٰ بازی قابلِ دست اندازیِ پولیس جرم بن جائے۔ بنگلہ دیش کے علما نے اس فیصلے پر بھر پور ردِ عمل ظاہرکرتے ہوئے اس کے خلاف ملک گیر تحریک چلانے کا اعلان کیا ہے۔ اس ضمن میں علما کی ایک تنظیم ”اسلامک یونٹی الائنس“ نے متعلقہ ججوں کو مرتد یعنی دین سے منحرف اور دائرۂ اسلام سے خارج قرار دیا ہے۔</p>
<p>فتوے کا لفظ دو موقعوں پر استعمال ہوتا ہے۔ ایک اس موقع پر جب کوئی صاحبِ علم شریعت کے کسی مئلے کے بارے میں اپنی رائے پیش کرتا ہے۔ دوسرے اس موقع پر جب کوئی عالمِ دین کسی خاص واقعے کے حوالے سے اپنا قانونی فیصلہ صادر کرتا ہے۔ ایک عرصے سے ہمارے علما کے ہاں اس دوسرے موقعِ استعمال کا غلبہ ہو گیا ہے۔ اس کا نتیجہ یہ نکلا ہے کہ اس لفظ کا رائے یا نقطۂ نظر کے مفہوم میں استعمال کم و بیش متروک ہو گیا ہے۔ چنانچہ اب فتوے کا مطلب ہی علما کی طرف سے کسی خاص مألے یا واقعے کے بارے میں حتمی فیصلے کا صدور سمجھا جاتا ہے۔ علما اسی حیثیت سے فتویٰ دیتے ہیں اور عوام الناس اسی اعتبار سے اسے قبول کرتے ہیں۔ اس صورتِ حال میں ہمارے نزدیک، چند مسائل پیدا ہوتے ہیں۔ اس سے پہلے کہ ہم مذکورہ فیصلے کے بارے میں اپنا تاثر بیان کریں، یہ ضروری معلوم ہوتا ہے کہ مختصر طور پر ان مسائل کا جائزہ لے لیا جائے۔</p>
<p>پہلا مألہ یہ پیدا ہوتا ہے کہ قانون سازی اور شرعی فیصلوں کا اختیار ایسے لوگوں کے ہاتھ میں آجاتا ہے جو قانون کی رو سے اس کے مجاز ہی نہیں ہوتے۔ کسی میاں بیوی کے مابین طلاق کے مألے میں کیا طلاق واقع ہوئی ہے یا نہیں ہوئی؟ ان کا نکاح قائم ہے یا باطل ہو گیا ہے؟ رمضان یا عید کا چاند نظر آیا ہے یا نہیں آیا؟کوئی مسلمان اپنے کسی قول یا اقدام کی وجہ سے کہیں دائرۂ اسلام سے خارج اورنتیجۃً مسلم شہریت کے قانونی حقوق سے محروم تو نہیں ہو گیا؟ یہ اور اس نوعیت کے بہت سے دوسرے معاملات سر تا سر قانون اور عدالت سے متعلق ہوتے ہیں۔ علما کی فتویٰ سازی کے نتیجے میںیہ امور گویا حکومت اورعدلیہ کے ہاتھ سے نکل کر غیر متعلق افراد کے ہاتھوں میں آجاتے ہیں۔</p>
<p>دوسرا مألہ یہ پیدا ہوتا ہے کہ قانون کی حاکمیت کا تصور مجروح ہوتا ہے اور لوگوں میں قانون سے روگردانی کے رجحانات کو تقویت ملتی ہے۔ اس کی وجہ یہ ہے کہ قانون اپنی روح میں نفاذ کا متقاضی ہوتا ہے۔ اگر اسے نفاذ سے محروم رکھا جائے تو اس کی حیثیت محض رائے اور نقطۂ نظر کی سی ہوتی ہے۔ غیر مجاز فرد سے صادر ہونے والا فتویٰ یا قانون حکومت کی قوتِ نافذہ سے محروم ہوتا ہے۔ اس کی خلاف ورزی پر کسی قسم کی سزا کا خوف نہیں ہوتا۔ چنانچہ فتویٰ اگر مخاطب کی پسند کے مطابق نہ ہو تو اکثر وہ اسے ماننے سے انکار کر دیتا ہے۔ اس طرح وہ فتویٰ یا قانون بے توقیر ہوتا ہے۔ ایسے ماحول میں رہنے والے شہریوں میں قانون ناپسندی کا رجحان فروغ پاتا ہے اور جیسے ہی انھیں موقع ملتا ہے وہ بے دریغ قانون کی خلاف ورزی کر ڈالتے ہیں۔</p>
<p>تیسرامسئلہ یہ پیدا ہوتا ہے کہ اگرغیر مجاز افراد سے صادر ہونے والے فیصلوں کو نافذ کرنے کی کوشش کی جائے تو ملک میں بد نظمی اور انارکی کا شدید اندیشہ پیدا ہو جاتا ہے۔ جب غیر مجازافراد سے صادر ہونے والے قانونی فیصلوں کو حکومتی سرپرستی کے بغیر نافذ کرنے کی کوشش کی جاتی ہے تو اپنے عمل سے یہ اس بات کا اعلان ہوتا ہے کہ مرجعِ قانون و اقتدارتبدیل ہو چکا ہے۔ جب کوئی عالمِ دین مثال کے طور پر، یہ فتویٰ صادر کرتا ہے کہ سینما گھروں اور ٹی وی اسٹیشنوں کو مسمار کرنامسلمانوں کی ذمہ داری ہے، یا کسی خاص قوم کے خلاف جہاد فرض ہو چکا ہے، یا فلاں کی دی گئی طلاق واقع ہو گئی ہے اور فلاں کی نہیں ہوئی، یا فلاں شخص یا گروہ اپنا اسلامی تشخص کھو بیٹھا ہے تو وہ درحقیقت قانونی فیصلہ جاری کر رہا ہوتا ہے۔ دوسرے الفاظ میں، وہ ریاست کے اندر اپنی ایک الگ ریاست بنانے کا اعلان کر رہا ہوتا ہے۔ اس کا نتیجہ سوائے انتشار اور انارکی کے اور کچھ نہیں نکلتا۔ یہی وجہ ہے کہ جن علاقوں میں حکومت کی گرفت کمزور ہوتی ہے وہاں اس طرح کے فیصلوں کا نفاذ بھی ہو جاتا ہے اور حکومت منہ دیکھتی رہتی ہے۔</p>
<p>چوتھا مسئلہ یہ پیدا ہوتا ہے کہ مختلف مذہبی مسالک کی وجہ سے ایک ہی معاملے میں مختلف اور متضاد فتوے منظرِ عام پر آتے ہیں۔ یہ تو ہمارے روز مرہ کی بات ہے کہ ایک ہی گروہ کو بعض علماے دین کافر قرار دیتے ہیں اور بعض مسلمان سمجھتے ہیں۔ کسی شخص کے منہ سے اگر ایک موقع پر طلاق کے الفاظ تین بار نکلتے ہیں تو بعض علما اس پر ایک طلاق کا حکم لگا کر رجوع کا حق باقی رکھتے ہیں اور بعض تین قرار دے کررجوع کو باطل قرار دیتے ہیں۔ یہ صورتِ حال ایک عام آدمی کے لیے نہایت دشواریاں پیدا کر دیتی ہے۔</p>
<p>پانچواں مسئلہ یہ پیدا ہوتا ہے کہ حکمران اگر دین و شریعت سے کچھ خاص دلچسپی نہ رکھتے ہوں تو وہ اس صورتِ حال میں شریعت کی روشنی میں قانون سازی کی طرف متوجہ نہیں ہوتے۔ کام چل رہا ہے کے اصول پر وہ اس طریقِ قانون سازی سے سمجھوتاکیے رہتے ہیں۔ اس کا نتیجہ یہ نکلتا ہے کہ حکومتی ادارے ضروری قانون سازی کے بارے میں بے پروائی کا رویہ اختیار کرتے ہیں اور قوانین اپنے فطری ارتقا سے محروم رہتے ہیں۔</p>
<p>چھٹا مألہ یہ پیدا ہوتا ہے کہ رائج الوقت قانون اور عدالتوں کی توہین کے امکانات پیدا ہو جاتے ہیں۔ جب کسی مسئلے میں عدالتیں اپنا فیصلہ سنائیں اور علما اسے باطل قرار دیتے ہوئے اس کے برعکس اپنا فیصلہ صادر کریں تو اس سے عدالتوں کا وقار مجروح ہوتا ہے۔ اس کا مطلب یہ ہوتا ہے کہ کوئی شہری عدلیہ کو چیلنج کرنے کے لیے کھڑا ہو گیا ہے۔</p>
<p>ان مسائل کے تناظر میں بنگلہ دیش کی عدالتِ عالیہ کا فیصلہ ہمارے نزدیک، امت کی تاریخ میں ایک عظیم فیصلہ ہے۔ جناب جاوید احمد صاحب غامدی نے اسے بجا طور پر صدی کا بہترین فیصلہ قرار دیا ہے۔ بنگلہ دیش کی عدالت اگر علما کے فتووں اور قانونی فیصلوں پر پابندی لگانے کے بجائے، ان کے اظہارِ رائے پر پابندی عائدکرتی تو ہم اسے صدی کا بدترین فیصلہ قرار دیتے اور انھی صفحات میں بے خوفِ لومۃ و لائم اس پر نقد کر رہے ہوتے۔</p>
<p>موجودہ زمانے میں امتِ مسلمہ کا ایک بڑا المیہ یہ ہے کہ اس کے علما اپنی اصل ذمہ داری کو ادا کرنے کے بجائے ان ذمہ داریوں کو ادا کرنے پر مصر ہیں جن کے نہ وہ مکلف ہیں اور نہ اہل ہیں۔ قرآن و سنت کی رو سے علما کی اصل ذمہ داری دعوت و تبلیغ، انذار و تبشیر اور تعلیم و تحقیق ہے۔ ان کا کام سیاست نہیں، بلکہ سیاست دانوں کو دین کی رہنمائی سے آگاہی ہے؛ ان کا کام حکومت نہیں، بلکہ حکمرانوں کی اصلاح کی کوشش ہے؛ ان کا کام جہاد و قتال نہیں، بلکہ جہادکی تعلیم اور جذبۂ جہاد کی بیداری ہے؛ اسی طرح ان کا کام قانون سازی اور فتویٰ بازی نہیں بلکہ تحقیق و اجتہاد ہے۔ گویا انھیں قرآنِ مجیدکامفہوم سمجھنے، سنتِ ثابتہ کا مدعا متعین کرنے اور قولِ پیغمبر کا منشامعلوم کرنے کے لیے تحقیق کرنی ہے اور جن امور میں قرآن و سنت خاموش ہیں ان میں اپنی عقل و بصیرت سے اجتہادی آراقائم کرنی ہیں۔ ان کی کسی تحقیق یا اجتہاد کو جب عدلیہ یا پارلیمنٹ قبول کرے گی تو وہ قانون قرار پائے گا۔ اس سے پہلے اس کی حیثیت محض ایک رائے کی ہوگی۔ اس لیے اسے اسی حیثیت سے پیش کیا جائے گا۔</p>
<p>اس کا مطلب یہ ہے کہ کوئی حکم نہیں لگایا جائے گا، کوئی فیصلہ نہیں سنایا جائے گا، کوئی فتویٰ نہیں دیا جائے گا، بلکہ طالبِ علمانہ لب و لہجے میں محض علم و استدلال کی بنا پر اپنا نقطۂ نظر پیش کیا جائے گا۔ یہ نہیں کہا جائے گا کہ فلاں شخص کافر ہے، بلکہ اس کی اگر ضرورت پیش آئے تو یہ کہا جائے گا کہ فلاں شخص کا فلاں عقیدہ کفر ہے۔ یہ نہیں کہا جائے گا کہ فلاں آدمی دائرۂ اسلام سے خارج ہو گیا ہے، بلکہ یہ کہا جائے گا کہ فلاں آدمی کا فلاں نقطۂ نظر اسلام کے دائرے میں نہیں آتا۔ یہ نہیں کہا جائے گا فلاں آدمی مشرک ہے، بلکہ یہ کہا جائے گا فلاں نظریہ یا فلاں طرزِ عمل شرک ہے۔ یہ نہیں کہا جائے گا کہ زید کی طرف سے دی گئی ایک وقت کی تین طلاقیں واقع ہو گئی ہیں، بلکہ یہ کہا جائے گا کہ ایک وقت کی تین طلاقیں واقع ہو نی چاہییں۔</p>
<p>حکم لگانا، فیصلہ سنانا، قانون وضع کرنا اورفتویٰ جاری کرنا درحقیقت، عدلیہ اور حکومت کا کام ہے کسی عالمِ دین یا کسی اور غیر مجاز فرد کی طرف سے اس کام کو انجام دینے کی کوشش سراسر تجاوز ہے۔ خلافتِ راشدہ کے زمانے میں اس اصول کو ہمیشہ ملحوظ رکھا گیا۔ شاہ ولی اللہ محدث دہلوی اپنی کتاب ”ازالتہ الخفا ء“ میں لکھتے ہیں:</p>
<blockquote>
<p>”اس زمانے تک وعظ اور فتویٰ خلیفہ کی رائے پر موقوف تھا۔ خلیفہ کے حکم کے بغیر نہ وعظ کہتے تھے اور نہ فتویٰ دیتے تھے۔ بعد میں خلیفہ کے حکم کے بغیر وعظ کہنے اور فتویٰ دینے لگے اور فتویٰ کے معاملے میں جماعت (مجلسِ شوریٰ) کے مشورہ کی جو صورت پہلے تھی وہ باقی نہ رہی——- (اس زمانے میں) جب کوئی اختلافی صورت نمودار ہوتی، خلیفہ کے سامنے معاملہ پیش کرتے، خلیفہ اہلِ علم و تقویٰ سے مشورہ کرنے کے بعد ایک رائے قائم کرتا اور وہی سب لوگوں کی رائے بن جاتی۔ حضرت عثمان کی شہادت کے بعد ہر عالم بطورِ خود فتویٰ دینے لگا اور اس طرح مسلمانوں میں اختلاف برپا ہوا۔“ (بحوالہ ”اسلامی ریاست میں فقہی اختلافات کا حل“، مولاناامین احسن اصلاحی، ص۳۲)</p>
</blockquote>
</section>
</body>
",
"file_id": "0001.xml",
"metadata":
"
<meta>
<title>بنگلہ دیش کی عدالت کا تاریخی فیصلہ</title>
<author>
<name>سید منظور الحسن</name>
<gender>Male</gender>
</author>
<publication>
<name>Mahnama Ishraq February 2001</name>
<year>2001</year>
<city>Lahore</city>
<link>https://www.javedahmedghamidi.org/#!/ishraq/5adb7341b7dd1138372db999?articleId=5adb7452b7dd1138372dd6fb&year=2001&decade=2000</link>
<copyright-holder>Al-Mawrid</copyright-holder>
</publication>
<num-words>1694</num-words>
<contains-non-urdu-languages>No</contains-non-urdu-languages>
</meta>
",
"num-words": 1694,
"title": "بنگلہ دیش کی عدالت کا تاریخی فیصلہ"
}
```
### Data Fields
```file_id (str)```: Document file_id corresponding to the filename in the repository.
```metadata (str)```: XML-formatted string containing metadata on the document, such as the document's title, information about the author and publication, as well as other potentially useful facts such as the number of Urdu words in the document and whether the document contains text in any other languages.
```title (str)```: Title of the document.
```num-words (int)```: Number of words in the document.
```contains-non-urdu-languages (str)```: ```Yes``` if the document contains words in languages other than Urdu, ```No``` otherwise.
```document_body (str)```: XML-formatted body of the document. Details below:
The document is divided into ```<section>``` elements. In general the rule is that a clear visual demarcation in the original text (such as a page break, or a horizontal rule) is used to indicate a section break. A heading does not automatically create a new section.
Each paragraph is a ```<p>``` element.
Headings are wrapped in a ```<heading>``` element.
Blockquotes are wrapped in a ```<blockquote>``` element. Blockquotes may themselves contain other elements.
Lists are wrapped in a ```<list>``` element. Individual items in each list are wrapped in an ```<li>``` element.
Poetic verses are wrapped in a ```<verse>``` element. Each verse is on a separate line but is not wrapped in an individual element.
Tables are wrapped in a ```<table>``` element. A table is divided into rows marked by ```<tr>``` and columns marked by ```<td>```.
Text not in the Urdu language is wrapped in an ```<annotation>``` tag (more below).
```<p>, <heading>, <li>, <td>``` and ```<annotation>``` tags are inline with the text (i.e. there is no new line character before and after the tag). Other tags have a new line after the opening and before the closing tag.
Due to the use of XML syntax, the ```<```, ```>``` and ```&``` characters have been escaped as ```&lt;```, ```&gt;```, and ```&amp;``` respectively. This includes the use of these characters in URLs inside metadata.
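Since `document_body` is XML, it can be parsed with the standard library; most files parse cleanly (see Other Known Limitations for the exceptions). A minimal sketch on an abbreviated, illustrative body:

```python
# Sketch: extracting paragraph text from a document_body string.
# The body below is abbreviated and illustrative, not a real corpus file.
import xml.etree.ElementTree as ET

body = """<body>
<section>
<p>پہلا پیراگراف۔</p>
<blockquote>
<p>اقتباس۔</p>
</blockquote>
</section>
</body>"""

root = ET.fromstring(body)
# itertext() also picks up text inside inline children such as <annotation>.
paragraphs = ["".join(p.itertext()) for p in root.iter("p")]
print(paragraphs)
```

Iterating over ```<p>``` with `iter()` collects paragraphs at any nesting depth, including those inside blockquotes, which matches the element ontology described above.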
### Data Splits
All the data is in one split: ```train```.
## Dataset Creation
### Curation Rationale
All text in this repository has been selected for quality of language, upholding high editorial standards. Given the poor quality of most published Urdu text in digital form, this selection criteria allows the use of this text for natural language processing, and machine learning applications without the need to address fundamental quality issues in the text.
We have made efforts to ensure this text is as broadly representative as possible. Specifically we have attempted to select for as many authors as possible, and diversity in the gender of the author, as well as years and city of publication. This effort is imperfect, and we appreciate any attempts at pointing us to resources that can help diversify this text further.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Makhzan has been started with generous initial donations of text from two renowned journals Bunyad, from the Gurmani Center of Literature and Languages at the Lahore University of Management Sciences (LUMS), and Ishraq, from the Al-Mawrid Institute. This choice of sources allowed us to get a diversity of voices even in a small initial corpus, while ensuring the highest editorial standards available in published Urdu text. As a result your models can also maintain high linguistic standards.
### Annotations
#### Annotation process
Text is structured and annotated using XML syntax. The ontology of elements used is loosely based around HTML, with simplifications made when HTML's specificity is not needed, and additions made to express common occurences in this corpus that would be useful for linguistic analysis. The semantic tagging of text is editorial in nature, which is to say that another person semantically tagging the text may do so differently. Effort has been made however to ensure consistency, and to retain the original meaning of the text while making it easy to parse through linguistically different pieces of text for analysis.
Annotations have been made inline using an ```<annotation>``` element.
A language (```lang```) attribute is added to the ```<annotation>``` element to indicate text in other languages (such as quoted text or technical vocabulary presented in other languages and scripts). The attribute value is a two-character ISO 639-1 code. The resultant annotation for an Arabic quote, for example, will be ```<annotation lang="ar"></annotation>```.
A type (```type```) attribute is added to indicate text that is not in a language per se but is not Urdu text. URLs, for example, are wrapped in an ```<annotation type="url">``` tag.
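Both attributes are plain XML attributes, so they can be read directly when parsing. The sample paragraph and URL below are illustrative, not taken from the corpus:

```python
# Sketch: reading the lang/type attributes off <annotation> elements.
import xml.etree.ElementTree as ET

p = ET.fromstring(
    '<p>Some text <annotation lang="ar">نص عربي</annotation> and '
    '<annotation type="url">https://example.com</annotation> follow.</p>'
)
annotations = [(a.get("lang"), a.get("type"), a.text) for a in p.iter("annotation")]
print(annotations)
# → [('ar', None, 'نص عربي'), (None, 'url', 'https://example.com')]
```

`get()` returns `None` for an absent attribute, so the two annotation kinds can be told apart without any schema knowledge.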
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
A few of the files do not have valid XML and cannot be loaded. This issue is tracked [here](https://github.com/zeerakahmed/makhzan/issues/28)
## Additional Information
### Dataset Curators
Zeerak Ahmed
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{makhzan,
title={Maḵẖzan},
howpublished = "\url{https://github.com/zeerakahmed/makhzan/}",
}
```
### Contributions
Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- am
- ha
- ig
- lg
- luo
- pcm
- rw
- sw
- wo
- yo
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MasakhaNER
configs:
- am
- ha
- ig
- lg
- luo
- pcm
- rw
- sw
- wo
- yo
dataset_info:
- config_name: amh
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 639911
num_examples: 1750
- name: validation
num_bytes: 92753
num_examples: 250
- name: test
num_bytes: 184271
num_examples: 500
download_size: 571951
dataset_size: 916935
- config_name: hau
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 929848
num_examples: 1912
- name: validation
num_bytes: 139503
num_examples: 276
- name: test
num_bytes: 282971
num_examples: 552
download_size: 633372
dataset_size: 1352322
- config_name: ibo
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 749196
num_examples: 2235
- name: validation
num_bytes: 110572
num_examples: 320
- name: test
num_bytes: 222192
num_examples: 638
download_size: 515415
dataset_size: 1081960
- config_name: kin
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 878746
num_examples: 2116
- name: validation
num_bytes: 120998
num_examples: 302
- name: test
num_bytes: 258638
num_examples: 605
download_size: 633024
dataset_size: 1258382
- config_name: lug
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 611917
num_examples: 1428
- name: validation
num_bytes: 70058
num_examples: 200
- name: test
num_bytes: 183063
num_examples: 407
download_size: 445755
dataset_size: 865038
- config_name: luo
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 314995
num_examples: 644
- name: validation
num_bytes: 43506
num_examples: 92
- name: test
num_bytes: 87716
num_examples: 186
download_size: 213281
dataset_size: 446217
- config_name: pcm
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 868229
num_examples: 2124
- name: validation
num_bytes: 126829
num_examples: 306
- name: test
num_bytes: 262185
num_examples: 600
download_size: 572054
dataset_size: 1257243
- config_name: swa
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 1001120
num_examples: 2109
- name: validation
num_bytes: 128563
num_examples: 300
- name: test
num_bytes: 272108
num_examples: 604
download_size: 686313
dataset_size: 1401791
- config_name: wol
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 602076
num_examples: 1871
- name: validation
num_bytes: 71535
num_examples: 267
- name: test
num_bytes: 191484
num_examples: 539
download_size: 364463
dataset_size: 865095
- config_name: yor
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
splits:
- name: train
num_bytes: 1016741
num_examples: 2171
- name: validation
num_bytes: 127415
num_examples: 305
- name: test
num_bytes: 359519
num_examples: 645
download_size: 751510
dataset_size: 1503675
---
# Dataset Card for MasakhaNER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-ner)
- **Repository:** [github](https://github.com/masakhane-io/masakhane-ner)
- **Paper:** [paper](https://arxiv.org/abs/2103.11811)
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or didelani@lsv.uni-saarland.de
### Dataset Summary
MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
MasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:
- Amharic
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Nigerian-Pidgin
- Swahili
- Wolof
- Yoruba
Train/validation/test splits are available for all ten languages.
For more details see https://arxiv.org/abs/2103.11811
### Supported Tasks and Leaderboards
- `named-entity-recognition`: The performance on this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
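The exact-match, entity-level F1 described above can be sketched in a few lines. This is an illustrative implementation, not the official evaluation script; entities are represented as `(start, end, type)` tuples, and an entity only counts as correct when span and type both match exactly.

```python
# Hedged sketch of entity-level F1: a predicted entity is a true positive
# only when its (start, end, type) tuple exactly matches a gold entity.
def entity_f1(gold_spans, pred_spans):
    """gold_spans / pred_spans: sets of (start, end, type) tuples."""
    tp = len(gold_spans & pred_spans)
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 1, "PER"), (4, 5, "LOC")}
pred = {(0, 1, "PER"), (4, 5, "ORG")}  # second entity has the wrong type
print(entity_f1(gold, pred))  # 0.5
```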
### Languages
There are ten languages available:
- Amharic (amh)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Luo (luo)
- Nigerian-Pidgin (pcm)
- Swahili (swa)
- Wolof (wol)
- Yoruba (yor)
## Dataset Structure
### Data Instances
The examples look like this for Yorùbá:
```
from datasets import load_dataset
data = load_dataset('masakhaner', 'yor')
# Please specify the language code.
# A data point consists of sentences separated by an empty line, with tab-separated tokens and tags.
{'id': '0',
 'ner_tags': ['B-DATE', 'I-DATE', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O'],
 'tokens': ['Wákàtí', 'méje', 'ti', 'ré', 'kọjá', 'lọ', 'tí', 'Luis', 'Carlos', 'Díaz', 'ti', 'di', 'awati', '.']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE",
```
In the NER tags, a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & time (DATE).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked.
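The BIO scheme above can be decoded into entity spans with a short helper. This is a minimal sketch, not part of the dataset's tooling; the label list mirrors the card's tag set, and the sample tokens are illustrative.

```python
# Minimal sketch: decode BIO tag ids into (entity_text, entity_type) pairs.
# LABELS mirrors the tag list from the card; sample inputs are illustrative.
LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-DATE", "I-DATE"]

def extract_entities(tokens, tag_ids):
    """Group BIO-tagged tokens into entity spans."""
    entities, current, current_type = [], [], None
    for token, tag_id in zip(tokens, tag_ids):
        tag = LABELS[tag_id]
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current.append(token)
        else:
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Wolff", ",", "currently", "in", "Argentina"]
tags = [1, 0, 0, 0, 5]  # B-PER O O O B-LOC
print(extract_entities(tokens, tags))  # [('Wolff', 'PER'), ('Argentina', 'LOC')]
```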
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | validation | test |
|-----------------|------:|-----------:|-----:|
| Amharic | 1750 | 250 | 500 |
| Hausa | 1903 | 272 | 545 |
| Igbo | 2233 | 319 | 638 |
| Kinyarwanda | 2110 | 301 | 604 |
| Luganda | 2003 | 200 | 401 |
| Luo | 644 | 92 | 185 |
| Nigerian-Pidgin | 2100 | 300 | 600 |
| Swahili | 2104 | 300 | 602 |
| Wolof | 1871 | 267 | 536 |
| Yoruba | 2124 | 303 | 608 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide high-quality NER resources for ten African languages that have been under-served in natural language processing.
### Source Data
The source of the data is from the news domain, details can be found here https://arxiv.org/abs/2103.11811
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agencies and newspapers from which the data was collected.
### Annotations
#### Annotation process
Details can be found here https://arxiv.org/abs/2103.11811
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/).
### Personal and Sensitive Information
The data is sourced from newspaper text and therefore only contains mentions of public figures and individuals reported in the news.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The data is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).
### Citation Information
```
@article{Adelani2021MasakhaNERNE,
title={MasakhaNER: Named Entity Recognition for African Languages},
author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos
and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and
Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and
Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and
Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and
Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and
C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and
Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and
Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and
Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and
Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},
journal={ArXiv},
year={2021},
volume={abs/2103.11811}
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. |
math_dataset | ---
pretty_name: Mathematics Dataset
language:
- en
paperswithcode_id: mathematics
dataset_info:
- config_name: algebra__linear_1d
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 516405
num_examples: 10000
- name: train
num_bytes: 92086245
num_examples: 1999998
download_size: 2333082954
dataset_size: 92602650
- config_name: algebra__linear_1d_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1018090
num_examples: 10000
- name: train
num_bytes: 199566926
num_examples: 1999998
download_size: 2333082954
dataset_size: 200585016
- config_name: algebra__linear_2d
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 666095
num_examples: 10000
- name: train
num_bytes: 126743526
num_examples: 1999998
download_size: 2333082954
dataset_size: 127409621
- config_name: algebra__linear_2d_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1184664
num_examples: 10000
- name: train
num_bytes: 234405885
num_examples: 1999998
download_size: 2333082954
dataset_size: 235590549
- config_name: algebra__polynomial_roots
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 868630
num_examples: 10000
- name: train
num_bytes: 163134199
num_examples: 1999998
download_size: 2333082954
dataset_size: 164002829
- config_name: algebra__polynomial_roots_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1281321
num_examples: 10000
- name: train
num_bytes: 251435312
num_examples: 1999998
download_size: 2333082954
dataset_size: 252716633
- config_name: algebra__sequence_next_term
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 752459
num_examples: 10000
- name: train
num_bytes: 138735194
num_examples: 1999998
download_size: 2333082954
dataset_size: 139487653
- config_name: algebra__sequence_nth_term
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 947764
num_examples: 10000
- name: train
num_bytes: 175945643
num_examples: 1999998
download_size: 2333082954
dataset_size: 176893407
- config_name: arithmetic__add_or_sub
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 483725
num_examples: 10000
- name: train
num_bytes: 89690356
num_examples: 1999998
download_size: 2333082954
dataset_size: 90174081
- config_name: arithmetic__add_or_sub_in_base
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 502221
num_examples: 10000
- name: train
num_bytes: 93779137
num_examples: 1999998
download_size: 2333082954
dataset_size: 94281358
- config_name: arithmetic__add_sub_multiple
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 498421
num_examples: 10000
- name: train
num_bytes: 90962782
num_examples: 1999998
download_size: 2333082954
dataset_size: 91461203
- config_name: arithmetic__div
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 421520
num_examples: 10000
- name: train
num_bytes: 78417908
num_examples: 1999998
download_size: 2333082954
dataset_size: 78839428
- config_name: arithmetic__mixed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 513364
num_examples: 10000
- name: train
num_bytes: 93989009
num_examples: 1999998
download_size: 2333082954
dataset_size: 94502373
- config_name: arithmetic__mul
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 394004
num_examples: 10000
- name: train
num_bytes: 73499093
num_examples: 1999998
download_size: 2333082954
dataset_size: 73893097
- config_name: arithmetic__mul_div_multiple
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 497308
num_examples: 10000
- name: train
num_bytes: 91406689
num_examples: 1999998
download_size: 2333082954
dataset_size: 91903997
- config_name: arithmetic__nearest_integer_root
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 705630
num_examples: 10000
- name: train
num_bytes: 137771237
num_examples: 1999998
download_size: 2333082954
dataset_size: 138476867
- config_name: arithmetic__simplify_surd
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1261753
num_examples: 10000
- name: train
num_bytes: 207753790
num_examples: 1999998
download_size: 2333082954
dataset_size: 209015543
- config_name: calculus__differentiate
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1025947
num_examples: 10000
- name: train
num_bytes: 199013993
num_examples: 1999998
download_size: 2333082954
dataset_size: 200039940
- config_name: calculus__differentiate_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1343416
num_examples: 10000
- name: train
num_bytes: 263757570
num_examples: 1999998
download_size: 2333082954
dataset_size: 265100986
- config_name: comparison__closest
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 681229
num_examples: 10000
- name: train
num_bytes: 132274822
num_examples: 1999998
download_size: 2333082954
dataset_size: 132956051
- config_name: comparison__closest_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1071089
num_examples: 10000
- name: train
num_bytes: 210658152
num_examples: 1999998
download_size: 2333082954
dataset_size: 211729241
- config_name: comparison__kth_biggest
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 797185
num_examples: 10000
- name: train
num_bytes: 149077463
num_examples: 1999998
download_size: 2333082954
dataset_size: 149874648
- config_name: comparison__kth_biggest_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1144556
num_examples: 10000
- name: train
num_bytes: 221547532
num_examples: 1999998
download_size: 2333082954
dataset_size: 222692088
- config_name: comparison__pair
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 452528
num_examples: 10000
- name: train
num_bytes: 85707543
num_examples: 1999998
download_size: 2333082954
dataset_size: 86160071
- config_name: comparison__pair_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 946187
num_examples: 10000
- name: train
num_bytes: 184702998
num_examples: 1999998
download_size: 2333082954
dataset_size: 185649185
- config_name: comparison__sort
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 712498
num_examples: 10000
- name: train
num_bytes: 131752705
num_examples: 1999998
download_size: 2333082954
dataset_size: 132465203
- config_name: comparison__sort_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1114257
num_examples: 10000
- name: train
num_bytes: 213871896
num_examples: 1999998
download_size: 2333082954
dataset_size: 214986153
- config_name: measurement__conversion
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 592904
num_examples: 10000
- name: train
num_bytes: 118650852
num_examples: 1999998
download_size: 2333082954
dataset_size: 119243756
- config_name: measurement__time
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 584278
num_examples: 10000
- name: train
num_bytes: 116962599
num_examples: 1999998
download_size: 2333082954
dataset_size: 117546877
- config_name: numbers__base_conversion
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 490881
num_examples: 10000
- name: train
num_bytes: 90363333
num_examples: 1999998
download_size: 2333082954
dataset_size: 90854214
- config_name: numbers__div_remainder
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 644523
num_examples: 10000
- name: train
num_bytes: 125046212
num_examples: 1999998
download_size: 2333082954
dataset_size: 125690735
- config_name: numbers__div_remainder_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1151347
num_examples: 10000
- name: train
num_bytes: 226341870
num_examples: 1999998
download_size: 2333082954
dataset_size: 227493217
- config_name: numbers__gcd
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 659492
num_examples: 10000
- name: train
num_bytes: 127914889
num_examples: 1999998
download_size: 2333082954
dataset_size: 128574381
- config_name: numbers__gcd_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1206805
num_examples: 10000
- name: train
num_bytes: 237534189
num_examples: 1999998
download_size: 2333082954
dataset_size: 238740994
- config_name: numbers__is_factor
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 396129
num_examples: 10000
- name: train
num_bytes: 75875988
num_examples: 1999998
download_size: 2333082954
dataset_size: 76272117
- config_name: numbers__is_factor_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 949828
num_examples: 10000
- name: train
num_bytes: 185369842
num_examples: 1999998
download_size: 2333082954
dataset_size: 186319670
- config_name: numbers__is_prime
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 385749
num_examples: 10000
- name: train
num_bytes: 73983639
num_examples: 1999998
download_size: 2333082954
dataset_size: 74369388
- config_name: numbers__is_prime_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 947888
num_examples: 10000
- name: train
num_bytes: 184808483
num_examples: 1999998
download_size: 2333082954
dataset_size: 185756371
- config_name: numbers__lcm
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 717978
num_examples: 10000
- name: train
num_bytes: 136826050
num_examples: 1999998
download_size: 2333082954
dataset_size: 137544028
- config_name: numbers__lcm_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1127744
num_examples: 10000
- name: train
num_bytes: 221148668
num_examples: 1999998
download_size: 2333082954
dataset_size: 222276412
- config_name: numbers__list_prime_factors
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 585749
num_examples: 10000
- name: train
num_bytes: 109982816
num_examples: 1999998
download_size: 2333082954
dataset_size: 110568565
- config_name: numbers__list_prime_factors_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1053510
num_examples: 10000
- name: train
num_bytes: 205379513
num_examples: 1999998
download_size: 2333082954
dataset_size: 206433023
- config_name: numbers__place_value
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 496977
num_examples: 10000
- name: train
num_bytes: 95180091
num_examples: 1999998
download_size: 2333082954
dataset_size: 95677068
- config_name: numbers__place_value_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1011130
num_examples: 10000
- name: train
num_bytes: 197187918
num_examples: 1999998
download_size: 2333082954
dataset_size: 198199048
- config_name: numbers__round_number
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 570636
num_examples: 10000
- name: train
num_bytes: 111472483
num_examples: 1999998
download_size: 2333082954
dataset_size: 112043119
- config_name: numbers__round_number_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1016754
num_examples: 10000
- name: train
num_bytes: 201057283
num_examples: 1999998
download_size: 2333082954
dataset_size: 202074037
- config_name: polynomials__add
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1308455
num_examples: 10000
- name: train
num_bytes: 257576092
num_examples: 1999998
download_size: 2333082954
dataset_size: 258884547
- config_name: polynomials__coefficient_named
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1137226
num_examples: 10000
- name: train
num_bytes: 219716251
num_examples: 1999998
download_size: 2333082954
dataset_size: 220853477
- config_name: polynomials__collect
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 774709
num_examples: 10000
- name: train
num_bytes: 143743260
num_examples: 1999998
download_size: 2333082954
dataset_size: 144517969
- config_name: polynomials__compose
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1209763
num_examples: 10000
- name: train
num_bytes: 233651887
num_examples: 1999998
download_size: 2333082954
dataset_size: 234861650
- config_name: polynomials__evaluate
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 599446
num_examples: 10000
- name: train
num_bytes: 114538250
num_examples: 1999998
download_size: 2333082954
dataset_size: 115137696
- config_name: polynomials__evaluate_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1148362
num_examples: 10000
- name: train
num_bytes: 226022455
num_examples: 1999998
download_size: 2333082954
dataset_size: 227170817
- config_name: polynomials__expand
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1057353
num_examples: 10000
- name: train
num_bytes: 202338235
num_examples: 1999998
download_size: 2333082954
dataset_size: 203395588
- config_name: polynomials__simplify_power
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1248040
num_examples: 10000
- name: train
num_bytes: 216407582
num_examples: 1999998
download_size: 2333082954
dataset_size: 217655622
- config_name: probability__swr_p_level_set
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1159050
num_examples: 10000
- name: train
num_bytes: 227540179
num_examples: 1999998
download_size: 2333082954
dataset_size: 228699229
- config_name: probability__swr_p_sequence
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1097442
num_examples: 10000
- name: train
num_bytes: 215865725
num_examples: 1999998
download_size: 2333082954
dataset_size: 216963167
---
# Dataset Card for "math_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/mathematics_dataset](https://github.com/deepmind/mathematics_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 130.65 GB
- **Size of the generated dataset:** 9.08 GB
- **Total amount of disk used:** 139.73 GB
### Dataset Summary
Mathematics database.
This dataset code generates mathematical question and answer pairs,
from a range of question types at roughly school-level difficulty.
This is designed to test the mathematical learning and algebraic
reasoning skills of learning models.
Original paper: Analysing Mathematical Reasoning Abilities of Neural Models
(Saxton, Grefenstette, Hill, Kohli).
Example usage:
```
from datasets import load_dataset

train_examples, val_examples = load_dataset(
    'math_dataset', 'arithmetic__mul',
    split=['train', 'test'],
)
```
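Each example in every config is a plain dict with `question` and `answer` string fields. A minimal sketch of scoring a model prediction against an example follows; the sample pair is illustrative, not drawn from the actual data, and exact string matching is the usual (though not the only possible) evaluation criterion.

```python
# Hedged sketch: every math_dataset config yields examples with "question"
# and "answer" string fields. The pair below is illustrative only.
example = {"question": "What is 6 times 7?", "answer": "42"}

def exact_match(prediction, example):
    # Treat an answer as correct only on an exact string match,
    # after trimming surrounding whitespace.
    return prediction.strip() == example["answer"].strip()

print(exact_match("42", example))   # True
print(exact_match("41", example))   # False
```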
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### algebra__linear_1d
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 92.60 MB
- **Total amount of disk used:** 2.43 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_1d_composed
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 200.58 MB
- **Total amount of disk used:** 2.53 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_2d
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 127.41 MB
- **Total amount of disk used:** 2.46 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_2d_composed
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 235.59 MB
- **Total amount of disk used:** 2.57 GB
An example of 'train' looks as follows.
```
```
#### algebra__polynomial_roots
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 164.01 MB
- **Total amount of disk used:** 2.50 GB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### algebra__linear_1d
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_1d_composed
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_2d
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_2d_composed
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__polynomial_roots
- `question`: a `string` feature.
- `answer`: a `string` feature.
### Data Splits
| name | train |test |
|---------------------------|------:|----:|
|algebra__linear_1d |1999998|10000|
|algebra__linear_1d_composed|1999998|10000|
|algebra__linear_2d |1999998|10000|
|algebra__linear_2d_composed|1999998|10000|
|algebra__polynomial_roots |1999998|10000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{saxton2019analysing,
  author  = {Saxton, David and Grefenstette, Edward and Hill, Felix and Kohli, Pushmeet},
  title   = {Analysing Mathematical Reasoning Abilities of Neural Models},
  year    = {2019},
  journal = {arXiv preprint arXiv:1904.01557}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
math_qa | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: MathQA
size_categories:
- 10K<n<100K
source_datasets:
- extended|aqua_rat
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mathqa
dataset_info:
features:
- name: Problem
dtype: string
- name: Rationale
dtype: string
- name: options
dtype: string
- name: correct
dtype: string
- name: annotated_formula
dtype: string
- name: linear_formula
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 1844184
num_examples: 2985
- name: train
num_bytes: 18368826
num_examples: 29837
- name: validation
num_bytes: 2752969
num_examples: 4475
download_size: 7302821
dataset_size: 22965979
---
# Dataset Card for MathQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://math-qa.github.io/math-QA/](https://math-qa.github.io/math-QA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms](https://aclanthology.org/N19-1245/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.30 MB
- **Size of the generated dataset:** 22.96 MB
- **Total amount of disk used:** 30.27 MB
### Dataset Summary
We introduce a large-scale dataset of math word problems.
Our dataset is gathered by using a new representation language to annotate the AQuA-RAT dataset with fully-specified operational programs.
AQuA-RAT provides the questions, options, rationale, and correct answers.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 7.30 MB
- **Size of the generated dataset:** 22.96 MB
- **Total amount of disk used:** 30.27 MB
An example of 'train' looks as follows.
```
{
"Problem": "a multiple choice test consists of 4 questions , and each question has 5 answer choices . in how many r ways can the test be completed if every question is unanswered ?",
"Rationale": "\"5 choices for each of the 4 questions , thus total r of 5 * 5 * 5 * 5 = 5 ^ 4 = 625 ways to answer all of them . answer : c .\"",
"annotated_formula": "power(5, 4)",
"category": "general",
"correct": "c",
"linear_formula": "power(n1,n0)|",
"options": "a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024"
}
```
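Note that `options` is a single flattened string rather than a list of choices. A minimal sketch of splitting it into labeled choices (the `parse_options` helper and its regex are illustrative, not part of the dataset or the `datasets` library):

```python
import re

def parse_options(options: str) -> dict:
    # Illustrative helper: split the flattened options string,
    # e.g. "a ) 24 , b ) 120 , ...", into a {label: value} mapping.
    pairs = re.findall(r"([a-e])\s*\)\s*([^,]+?)\s*(?:,|$)", options)
    return {label: value.strip() for label, value in pairs}

choices = parse_options("a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024")
# With `correct` == "c", the gold answer text is choices["c"]
```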
### Data Fields
The data fields are the same among all splits.
#### default
- `Problem`: a `string` feature.
- `Rationale`: a `string` feature.
- `options`: a `string` feature.
- `correct`: a `string` feature.
- `annotated_formula`: a `string` feature.
- `linear_formula`: a `string` feature.
- `category`: a `string` feature.
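The `linear_formula` field encodes the annotated program as pipe-separated operations. A minimal sketch of decomposing it into operation/argument pairs (the `parse_linear_formula` helper is illustrative, not part of the dataset tooling; the second formula below is a hand-written example of the same shape):

```python
def parse_linear_formula(formula: str):
    # Illustrative helper: turn "power(n1,n0)|" into [("power", ["n1", "n0"])].
    ops = []
    for step in formula.strip("|").split("|"):
        name, _, rest = step.partition("(")
        ops.append((name, rest.rstrip(")").split(",")))
    return ops

steps = parse_linear_formula("power(n1,n0)|")
```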
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|29837| 4475|2985|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{amini-etal-2019-mathqa,
title = "{M}ath{QA}: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms",
author = "Amini, Aida and
Gabriel, Saadia and
Lin, Shanchuan and
Koncel-Kedziorski, Rik and
Choi, Yejin and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1245",
doi = "10.18653/v1/N19-1245",
pages = "2357--2367",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
matinf | ---
paperswithcode_id: matinf
pretty_name: Maternal and Infant Dataset
dataset_info:
- config_name: age_classification
features:
- name: question
dtype: string
- name: description
dtype: string
- name: label
dtype:
class_label:
names:
'0': 0-1岁
'1': 1-2岁
'2': 2-3岁
- name: id
dtype: int32
splits:
- name: train
num_bytes: 33901977
num_examples: 134852
- name: test
num_bytes: 9616194
num_examples: 38318
- name: validation
num_bytes: 4869685
num_examples: 19323
download_size: 0
dataset_size: 48387856
- config_name: topic_classification
features:
- name: question
dtype: string
- name: description
dtype: string
- name: label
dtype:
class_label:
names:
'0': 产褥期保健
'1': 儿童过敏
'2': 动作发育
'3': 婴幼保健
'4': 婴幼心理
'5': 婴幼早教
'6': 婴幼期喂养
'7': 婴幼营养
'8': 孕期保健
'9': 家庭教育
'10': 幼儿园
'11': 未准父母
'12': 流产和不孕
'13': 疫苗接种
'14': 皮肤护理
'15': 宝宝上火
'16': 腹泻
'17': 婴幼常见病
- name: id
dtype: int32
splits:
- name: train
num_bytes: 153326538
num_examples: 613036
- name: test
num_bytes: 43877443
num_examples: 175363
- name: validation
num_bytes: 21834951
num_examples: 87519
download_size: 0
dataset_size: 219038932
- config_name: summarization
features:
- name: description
dtype: string
- name: question
dtype: string
- name: id
dtype: int32
splits:
- name: train
num_bytes: 181245403
num_examples: 747888
- name: test
num_bytes: 51784189
num_examples: 213681
- name: validation
num_bytes: 25849900
num_examples: 106842
download_size: 0
dataset_size: 258879492
- config_name: qa
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: id
dtype: int32
splits:
- name: train
num_bytes: 188047511
num_examples: 747888
- name: test
num_bytes: 53708532
num_examples: 213681
- name: validation
num_bytes: 26931809
num_examples: 106842
download_size: 0
dataset_size: 268687852
---
# Dataset Card for "matinf"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/WHUIR/MATINF](https://github.com/WHUIR/MATINF)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 795.00 MB
- **Total amount of disk used:** 795.00 MB
### Dataset Summary
MATINF is the first jointly labeled large-scale dataset for classification, question answering and summarization.
MATINF contains 1.07 million question-answer pairs with human-labeled categories and user-generated question
descriptions. Based on such rich information, MATINF is applicable for three major NLP tasks, including classification,
question answering, and summarization. We benchmark existing methods and a novel multi-task baseline over MATINF to
inspire further research. Our comprehensive comparison and experiments over MATINF and other datasets demonstrate the
merits held by MATINF.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### age_classification
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 48.39 MB
- **Total amount of disk used:** 48.39 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"description": "\"6个月的时候去儿宝检查,医生说宝宝的分胯动作做的不好,说最好去儿童医院看看,但我家宝宝很好,感觉没有什么不正常啊,请教一下,分胯做的不好,有什么不好吗?\"...",
"id": 88016,
"label": 0,
"question": "医生说宝宝的分胯动作不好"
}
```
#### qa
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 268.69 MB
- **Total amount of disk used:** 268.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "\"我一个同学的孩子就是发现了肾积水,治疗了一段时间,结果还是越来越多,没办法就打掉了。虽然舍不得,但是还是要忍痛割爱,不然以后孩子真的有问题,大人和孩子都受罪。不过,这个最后的决定还要你自己做,毕竟是你的宝宝。,、、、、\"...",
"id": 536714,
"question": "孕5个月检查右侧肾积水孩子能要吗?"
}
```
#### summarization
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 258.88 MB
- **Total amount of disk used:** 258.88 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"description": "\"宝宝有中度HIE,但原因未查明,这是他出生后脸上红的几道,嘴唇深红近紫,请问这是像缺氧的表现吗?\"...",
"id": 173649,
"question": "宝宝脸上红的几道嘴唇深红近紫是像缺氧的表现吗?"
}
```
#### topic_classification
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 219.04 MB
- **Total amount of disk used:** 219.04 MB
An example of 'train' looks as follows.
```
{
"description": "媳妇怀孕五个月了经检查右侧肾积水、过了半月左侧也出现肾积水、她要拿掉孩子、怎么办?",
"id": 536714,
"label": 8,
"question": "孕5个月检查右侧肾积水孩子能要吗?"
}
```
### Data Fields
The data fields are the same among all splits.
#### age_classification
- `question`: a `string` feature.
- `description`: a `string` feature.
- `label`: a classification label, with possible values including `0-1岁` (0), `1-2岁` (1), `2-3岁` (2).
- `id`: an `int32` feature.
#### qa
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `id`: an `int32` feature.
#### summarization
- `description`: a `string` feature.
- `question`: a `string` feature.
- `id`: an `int32` feature.
#### topic_classification
- `question`: a `string` feature.
- `description`: a `string` feature.
- `label`: a classification label, with possible values including `产褥期保健` (0), `儿童过敏` (1), `动作发育` (2), `婴幼保健` (3), `婴幼心理` (4).
- `id`: an `int32` feature.
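The classification labels are stored as integers. A plain-Python sketch of mapping them back to the names declared above (the list below covers only the `age_classification` config and is written out by hand here):

```python
# Label names for the age_classification config, as declared in the card.
AGE_LABELS = ["0-1岁", "1-2岁", "2-3岁"]  # i.e. 0-1, 1-2, and 2-3 years old

def age_label_name(label: int) -> str:
    # Illustrative helper: map an integer class id back to its name.
    return AGE_LABELS[label]

name = age_label_name(0)  # the validation example above has label 0
```

When loading with the `datasets` library, `dataset.features["label"].int2str(...)` performs the same mapping without a hand-written table.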
### Data Splits
| name |train |validation| test |
|--------------------|-----:|---------:|-----:|
|age_classification |134852| 19323| 38318|
|qa |747888| 106842|213681|
|summarization |747888| 106842|213681|
|topic_classification|613036| 87519|175363|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{xu-etal-2020-matinf,
title = "{MATINF}: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization",
author = "Xu, Canwen and
Pei, Jiaxin and
Wu, Hongtao and
Liu, Yiyu and
Li, Chenliang",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.330",
pages = "3586--3596",
}
```
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
mbpp | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Mostly Basic Python Problems
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- code-generation
dataset_info:
- config_name: full
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
splits:
- name: train
num_bytes: 176879
num_examples: 374
- name: test
num_bytes: 244104
num_examples: 500
- name: validation
num_bytes: 42405
num_examples: 90
- name: prompt
num_bytes: 4550
num_examples: 10
download_size: 563743
dataset_size: 467938
- config_name: sanitized
features:
- name: source_file
dtype: string
- name: task_id
dtype: int32
- name: prompt
dtype: string
- name: code
dtype: string
- name: test_imports
sequence: string
- name: test_list
sequence: string
splits:
- name: train
num_bytes: 63453
num_examples: 120
- name: test
num_bytes: 132720
num_examples: 257
- name: validation
num_bytes: 20050
num_examples: 43
- name: prompt
num_bytes: 3407
num_examples: 7
download_size: 255053
dataset_size: 219630
---
# Dataset Card for Mostly Basic Python Problems (mbpp)
## Table of Contents
- [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-mbpp)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/google-research/tree/master/mbpp
- **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732)
### Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard-library functionality, and so on. Each problem consists of a task description, a code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by the authors.
Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et al., 2021](https://arxiv.org/abs/2108.07732).
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation models.
### Languages
English - Python code
## Dataset Structure
```python
dataset_full = load_dataset("mbpp")
DatasetDict({
test: Dataset({
features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'],
num_rows: 974
})
})
dataset_sanitized = load_dataset("mbpp", "sanitized")
DatasetDict({
test: Dataset({
features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'],
num_rows: 427
})
})
```
### Data Instances
#### mbpp - full
```
{
'task_id': 1,
'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].',
'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]',
'test_list': [
'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8',
'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12',
'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'],
'test_setup_code': '',
'challenge_test_list': []
}
```
#### mbpp - sanitized
```
{
'source_file': 'Benchmark Questions Verification V2.ipynb',
'task_id': 2,
'prompt': 'Write a function to find the shared elements from the given two lists.',
'code': 'def similar_elements(test_tup1, test_tup2):\n res = tuple(set(test_tup1) & set(test_tup2))\n return (res) ',
'test_imports': [],
'test_list': [
'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))',
'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))',
'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))'
]
}
```
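Each record bundles a reference solution with assertion-style tests, so a candidate program can be checked by executing it and then running each assertion. A minimal sketch, assuming it is only ever run inside a sandbox (the `check_solution` helper is illustrative, not part of the dataset tooling):

```python
def check_solution(code: str, test_list) -> bool:
    # Illustrative helper: run the candidate code, then each assertion.
    # WARNING: exec runs arbitrary code; only use inside a sandboxed environment.
    env = {}
    exec(code, env)
    for test in test_list:
        exec(test, env)  # raises AssertionError if the candidate is wrong
    return True

# The sanitized example above, checked against one of its tests.
code = (
    "def similar_elements(test_tup1, test_tup2):\n"
    "    return tuple(set(test_tup1) & set(test_tup2))\n"
)
tests = ["assert set(similar_elements((3, 4, 5, 6), (5, 7, 4, 10))) == set((4, 5))"]
ok = check_solution(code, tests)
```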
### Data Fields
- `source_file`: unknown
- `text`/`prompt`: description of programming task
- `code`: solution for programming task
- `test_setup_code`/`test_imports`: necessary code imports to execute tests
- `test_list`: list of tests to verify solution
- `challenge_test_list`: list of more challenging test to further probe solution
### Data Splits
There are two versions of the dataset (full and sanitized), each with four splits:
- train
- validation
- test
- prompt
The `prompt` split corresponds to samples used for few-shot prompting, not for training.
## Dataset Creation
See section 2.1 of original [paper](https://arxiv.org/abs/2108.07732).
### Curation Rationale
Evaluating code generation models requires a set of simple programming tasks together with reference solutions, which this dataset provides.
### Source Data
#### Initial Data Collection and Normalization
The dataset was manually created from scratch.
#### Who are the source language producers?
The dataset was created with an internal crowdsourcing effort at Google.
### Annotations
#### Annotation process
The full dataset was created first and a subset then underwent a second round to improve the task descriptions.
#### Who are the annotators?
The dataset was created with an internal crowdsourcing effort at Google.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
### Social Impact of Dataset
With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.
### Discussion of Biases
### Other Known Limitations
The task descriptions might not be expressive enough to fully specify the task. The `sanitized` version aims to address this issue by having a second round of annotators improve the dataset.
## Additional Information
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.07732},
year={2021}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset. |
mc4 | ---
pretty_name: mC4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- bg-Latn
- el-Latn
- hi-Latn
- ja-Latn
- ru-Latn
- zh-Latn
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: mc4
---
# Dataset Card for mC4
## Table of Contents
- [Dataset Card for mC4](#dataset-card-for-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned, multilingual version of Common Crawl's web crawl corpus, based on the Common Crawl dataset (https://commoncrawl.org).
This is the version prepared by AllenAI, hosted at https://huggingface.co/datasets/allenai/c4.
108 languages are available and are reported in the table below.
Note that the languages that end with "-Latn" are simply romanized variants, i.e. written using the Latin script.
| language code | language name |
|:----------------|:---------------------|
| af | Afrikaans |
| am | Amharic |
| ar | Arabic |
| az | Azerbaijani |
| be | Belarusian |
| bg | Bulgarian |
| bg-Latn | Bulgarian (Latin) |
| bn | Bangla |
| ca | Catalan |
| ceb | Cebuano |
| co | Corsican |
| cs | Czech |
| cy | Welsh |
| da | Danish |
| de | German |
| el | Greek |
| el-Latn | Greek (Latin) |
| en | English |
| eo | Esperanto |
| es | Spanish |
| et | Estonian |
| eu | Basque |
| fa | Persian |
| fi | Finnish |
| fil | Filipino |
| fr | French |
| fy | Western Frisian |
| ga | Irish |
| gd | Scottish Gaelic |
| gl | Galician |
| gu | Gujarati |
| ha | Hausa |
| haw | Hawaiian |
| hi | Hindi |
| hi-Latn | Hindi (Latin script) |
| hmn | Hmong, Mong |
| ht | Haitian |
| hu | Hungarian |
| hy | Armenian |
| id | Indonesian |
| ig | Igbo |
| is | Icelandic |
| it | Italian |
| iw | Hebrew (former code) |
| ja | Japanese |
| ja-Latn | Japanese (Latin) |
| jv | Javanese |
| ka | Georgian |
| kk | Kazakh |
| km | Khmer |
| kn | Kannada |
| ko | Korean |
| ku | Kurdish |
| ky | Kyrgyz |
| la | Latin |
| lb | Luxembourgish |
| lo | Lao |
| lt | Lithuanian |
| lv | Latvian |
| mg | Malagasy |
| mi | Maori |
| mk | Macedonian |
| ml | Malayalam |
| mn | Mongolian |
| mr | Marathi |
| ms | Malay |
| mt | Maltese |
| my | Burmese |
| ne | Nepali |
| nl | Dutch |
| no | Norwegian |
| ny | Nyanja |
| pa | Punjabi |
| pl | Polish |
| ps | Pashto |
| pt | Portuguese |
| ro | Romanian |
| ru | Russian |
| ru-Latn | Russian (Latin) |
| sd | Sindhi |
| si | Sinhala |
| sk | Slovak |
| sl | Slovenian |
| sm | Samoan |
| sn | Shona |
| so | Somali |
| sq | Albanian |
| sr | Serbian |
| st | Southern Sotho |
| su | Sundanese |
| sv | Swedish |
| sw | Swahili |
| ta | Tamil |
| te | Telugu |
| tg | Tajik |
| th | Thai |
| tr | Turkish |
| uk | Ukrainian |
| und | Unknown language |
| ur | Urdu |
| uz | Uzbek |
| vi | Vietnamese |
| xh | Xhosa |
| yi | Yiddish |
| yo | Yoruba |
| zh | Chinese |
| zh-Latn | Chinese (Latin) |
| zu | Zulu |
You can load the mC4 subset of any language like this:
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
```
You can even specify a list of languages:
```python
from datasets import load_dataset
mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
```
### Supported Tasks and Leaderboards
mC4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset supports 108 languages.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{'timestamp': '2018-06-24T01:32:39Z',
'text': 'Farm Resources in Plumas County\nShow Beginning Farmer Organizations & Professionals (304)\nThere are 304 resources serving Plumas County in the following categories:\nMap of Beginning Farmer Organizations & Professionals serving Plumas County\nVictoria Fisher - Office Manager - Loyalton, CA\nAmy Lynn Rasband - UCCE Plumas-Sierra Administrative Assistant II - Quincy , CA\nShow Farm Income Opportunities Organizations & Professionals (353)\nThere are 353 resources serving Plumas County in the following categories:\nFarm Ranch And Forest Retailers (18)\nMap of Farm Income Opportunities Organizations & Professionals serving Plumas County\nWarner Valley Wildlife Area - Plumas County\nShow Farm Resources Organizations & Professionals (297)\nThere are 297 resources serving Plumas County in the following categories:\nMap of Farm Resources Organizations & Professionals serving Plumas County\nThere are 57 resources serving Plumas County in the following categories:\nMap of Organic Certification Organizations & Professionals serving Plumas County',
'url': 'http://www.californialandcan.org/Plumas/Farm-Resources/'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
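As a minimal sketch: the `timestamp` field is an ISO-8601 UTC string (note the trailing `Z`) and can be parsed with the standard library. The record below reuses values from the example instance above.

```python
from datetime import datetime

# A record with the three fields described above (values from the example instance)
record = {
    "url": "http://www.californialandcan.org/Plumas/Farm-Resources/",
    "text": "Farm Resources in Plumas County ...",
    "timestamp": "2018-06-24T01:32:39Z",
}

# The timestamp follows ISO 8601 with a trailing "Z" marking UTC
parsed = datetime.strptime(record["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
print(parsed.year, parsed.month)  # 2018 6
```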
### Data Splits
To build mC4, the authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. The resulting mC4 subsets for each language are reported in this table:
| config | train | validation |
|:---------|:--------|:-------------|
| af | ? | ? |
| am | ? | ? |
| ar | ? | ? |
| az | ? | ? |
| be | ? | ? |
| bg | ? | ? |
| bg-Latn | ? | ? |
| bn | ? | ? |
| ca | ? | ? |
| ceb | ? | ? |
| co | ? | ? |
| cs | ? | ? |
| cy | ? | ? |
| da | ? | ? |
| de | ? | ? |
| el | ? | ? |
| el-Latn | ? | ? |
| en | ? | ? |
| eo | ? | ? |
| es | ? | ? |
| et | ? | ? |
| eu | ? | ? |
| fa | ? | ? |
| fi | ? | ? |
| fil | ? | ? |
| fr | ? | ? |
| fy | ? | ? |
| ga | ? | ? |
| gd | ? | ? |
| gl | ? | ? |
| gu | ? | ? |
| ha | ? | ? |
| haw | ? | ? |
| hi | ? | ? |
| hi-Latn | ? | ? |
| hmn | ? | ? |
| ht | ? | ? |
| hu | ? | ? |
| hy | ? | ? |
| id | ? | ? |
| ig | ? | ? |
| is | ? | ? |
| it | ? | ? |
| iw | ? | ? |
| ja | ? | ? |
| ja-Latn | ? | ? |
| jv | ? | ? |
| ka | ? | ? |
| kk | ? | ? |
| km | ? | ? |
| kn | ? | ? |
| ko | ? | ? |
| ku | ? | ? |
| ky | ? | ? |
| la | ? | ? |
| lb | ? | ? |
| lo | ? | ? |
| lt | ? | ? |
| lv | ? | ? |
| mg | ? | ? |
| mi | ? | ? |
| mk | ? | ? |
| ml | ? | ? |
| mn | ? | ? |
| mr | ? | ? |
| ms | ? | ? |
| mt | ? | ? |
| my | ? | ? |
| ne | ? | ? |
| nl | ? | ? |
| no | ? | ? |
| ny | ? | ? |
| pa | ? | ? |
| pl | ? | ? |
| ps | ? | ? |
| pt | ? | ? |
| ro | ? | ? |
| ru | ? | ? |
| ru-Latn | ? | ? |
| sd | ? | ? |
| si | ? | ? |
| sk | ? | ? |
| sl | ? | ? |
| sm | ? | ? |
| sn | ? | ? |
| so | ? | ? |
| sq | ? | ? |
| sr | ? | ? |
| st | ? | ? |
| su | ? | ? |
| sv | ? | ? |
| sw | ? | ? |
| ta | ? | ? |
| te | ? | ? |
| tg | ? | ? |
| th | ? | ? |
| tr | ? | ? |
| uk | ? | ? |
| und | ? | ? |
| ur | ? | ? |
| uz | ? | ? |
| vi | ? | ? |
| xh | ? | ? |
| yi | ? | ? |
| yo | ? | ? |
| zh | ? | ? |
| zh-Latn | ? | ? |
| zu | ? | ? |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
mc_taco | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mc-taco
pretty_name: MC-TACO
dataset_info:
features:
- name: sentence
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
- name: category
dtype:
class_label:
names:
'0': Event Duration
'1': Event Ordering
'2': Frequency
'3': Typical Time
'4': Stationarity
config_name: plain_text
splits:
- name: test
num_bytes: 1785553
num_examples: 9442
- name: validation
num_bytes: 713023
num_examples: 3783
download_size: 2385137
dataset_size: 2498576
---
# Dataset Card for MC-TACO
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MC-TACO](https://cogcomp.seas.upenn.edu/page/resource_view/125)
- **Repository:** [Github repository](https://github.com/CogComp/MCTACO)
- **Paper:** ["Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding](https://arxiv.org/abs/1909.03065)
- **Leaderboard:** [AI2 Leaderboard](https://leaderboard.allenai.org/mctaco)
### Dataset Summary
MC-TACO (Multiple Choice TemporAl COmmonsense) is a dataset of 13k question-answer pairs that require temporal commonsense comprehension. A system receives a sentence providing context information, a question designed to require temporal commonsense knowledge, and multiple candidate answers. More than one candidate answer can be plausible.
### Supported Tasks and Leaderboards
The task is framed as binary classification: given the context, the question, and the candidate answer, the task is to determine whether the candidate answer is plausible ("yes") or not ("no").
Performance is measured using two metrics:
- Exact Match (EM) -- the proportion of questions for which all the candidate answers are predicted correctly.
- F1 -- slightly more relaxed than EM. It measures the overlap between the predictions and the ground truth as the harmonic mean of precision and recall.
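A rough sketch of the two metrics (not the official evaluation script; it assumes per-question lists of gold and predicted labels):

```python
def exact_match(gold, pred):
    """Fraction of questions for which ALL candidate answers are predicted correctly.

    gold, pred: lists of per-question label lists, e.g. [["yes", "no"], ...]
    """
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1(gold, pred):
    """F1 over the "yes" (plausible) class, flattened across all candidates."""
    g = [label for question in gold for label in question]
    p = [label for question in pred for label in question]
    tp = sum(1 for gl, pl in zip(g, p) if gl == pl == "yes")
    pred_yes = sum(1 for pl in p if pl == "yes")
    gold_yes = sum(1 for gl in g if gl == "yes")
    if tp == 0:
        return 0.0
    precision, recall = tp / pred_yes, tp / gold_yes
    return 2 * precision * recall / (precision + recall)

gold = [["yes", "no"], ["no", "no"]]
pred = [["yes", "no"], ["yes", "no"]]
print(exact_match(gold, pred))  # 0.5
print(round(f1(gold, pred), 3))  # 0.667
```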
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this:
```
{
"sentence": "However, more recently, it has been suggested that it may date from earlier than Abdalonymus' death.",
"question": "How often did Abdalonymus die?",
"answer": "every two years",
"label": "no",
"category": "Frequency",
}
```
### Data Fields
All fields are strings:
- `sentence`: a sentence (or context) on which the question is based
- `question`: a question querying some temporal commonsense knowledge
- `answer`: a potential answer to the question (all lowercased)
- `label`: whether the answer is correct. "yes" indicates the answer is plausible, "no" otherwise
- `category`: the temporal category the question belongs to (among "Event Ordering", "Event Duration", "Frequency", "Stationarity", and "Typical Time")
### Data Splits
The development set contains 561 questions and 3,783 candidate answers. The test set contains 1,332 questions and 9,442 candidate answers.
From the original repository:
*Note that there is no training data, and we provide the dev set as the only source of supervision. The rationale is that we believe a successful system has to bring in a huge amount of world knowledge and derive commonsense understandings prior to the current task evaluation. We therefore believe that it is not reasonable to expect a system to be trained solely on this data, and we think of the development data as only providing a definition of the task.*
## Dataset Creation
### Curation Rationale
MC-TACO is used as a testbed to study the temporal commonsense understanding on NLP systems.
### Source Data
From the original paper:
*The context sentences are randomly selected from [MultiRC](https://www.aclweb.org/anthology/N18-1023/) (from each of its 9 domains). For each sentence, we use crowdsourcing on Amazon Mechanical Turk to collect questions and candidate answers (both correct and wrong ones).*
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
From the original paper:
*To ensure the quality of the results, we limit the annotations to native speakers and use qualification tryouts.*
#### Annotation process
The crowdsourced construction/annotation of the dataset follows 4 steps described in Section 3 of the [paper](https://arxiv.org/abs/1909.03065): question generation, question verification, candidate answer expansion and answer labeling.
#### Who are the annotators?
Paid crowdworkers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{ZKNR19,
    author = {Ben Zhou and Daniel Khashabi and Qiang Ning and Dan Roth},
    title = {"Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding},
booktitle = {EMNLP},
year = {2019},
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
md_gender_bias | ---
annotations_creators:
- crowdsourced
- found
- machine-generated
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- extended|other-convai2
- extended|other-light
- extended|other-opensubtitles
- extended|other-yelp
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: md-gender
pretty_name: Multi-Dimensional Gender Bias Classification
configs:
- convai2_inferred
- funpedia
- gendered_words
- image_chat
- light_inferred
- name_genders
- new_data
- opensubtitles_inferred
- wizard
- yelp_inferred
tags:
- gender-bias
dataset_info:
- config_name: gendered_words
features:
- name: word_masculine
dtype: string
- name: word_feminine
dtype: string
splits:
- name: train
num_bytes: 4988
num_examples: 222
download_size: 232629010
dataset_size: 4988
- config_name: name_genders
features:
- name: name
dtype: string
- name: assigned_gender
dtype:
class_label:
names:
'0': M
'1': F
- name: count
dtype: int32
splits:
- name: yob1880
num_bytes: 43404
num_examples: 2000
- name: yob1881
num_bytes: 41944
num_examples: 1935
- name: yob1882
num_bytes: 46211
num_examples: 2127
- name: yob1883
num_bytes: 45221
num_examples: 2084
- name: yob1884
num_bytes: 49886
num_examples: 2297
- name: yob1885
num_bytes: 49810
num_examples: 2294
- name: yob1886
num_bytes: 51935
num_examples: 2392
- name: yob1887
num_bytes: 51458
num_examples: 2373
- name: yob1888
num_bytes: 57531
num_examples: 2651
- name: yob1889
num_bytes: 56177
num_examples: 2590
- name: yob1890
num_bytes: 58509
num_examples: 2695
- name: yob1891
num_bytes: 57767
num_examples: 2660
- name: yob1892
num_bytes: 63493
num_examples: 2921
- name: yob1893
num_bytes: 61525
num_examples: 2831
- name: yob1894
num_bytes: 63927
num_examples: 2941
- name: yob1895
num_bytes: 66346
num_examples: 3049
- name: yob1896
num_bytes: 67224
num_examples: 3091
- name: yob1897
num_bytes: 65886
num_examples: 3028
- name: yob1898
num_bytes: 71088
num_examples: 3264
- name: yob1899
num_bytes: 66225
num_examples: 3042
- name: yob1900
num_bytes: 81305
num_examples: 3730
- name: yob1901
num_bytes: 68723
num_examples: 3153
- name: yob1902
num_bytes: 73321
num_examples: 3362
- name: yob1903
num_bytes: 74019
num_examples: 3389
- name: yob1904
num_bytes: 77751
num_examples: 3560
- name: yob1905
num_bytes: 79802
num_examples: 3655
- name: yob1906
num_bytes: 79392
num_examples: 3633
- name: yob1907
num_bytes: 86342
num_examples: 3948
- name: yob1908
num_bytes: 87965
num_examples: 4018
- name: yob1909
num_bytes: 92591
num_examples: 4227
- name: yob1910
num_bytes: 101491
num_examples: 4629
- name: yob1911
num_bytes: 106787
num_examples: 4867
- name: yob1912
num_bytes: 139448
num_examples: 6351
- name: yob1913
num_bytes: 153110
num_examples: 6968
- name: yob1914
num_bytes: 175167
num_examples: 7965
- name: yob1915
num_bytes: 205921
num_examples: 9357
- name: yob1916
num_bytes: 213468
num_examples: 9696
- name: yob1917
num_bytes: 218446
num_examples: 9913
- name: yob1918
num_bytes: 229209
num_examples: 10398
- name: yob1919
num_bytes: 228656
num_examples: 10369
- name: yob1920
num_bytes: 237286
num_examples: 10756
- name: yob1921
num_bytes: 239616
num_examples: 10857
- name: yob1922
num_bytes: 237569
num_examples: 10756
- name: yob1923
num_bytes: 235046
num_examples: 10643
- name: yob1924
num_bytes: 240113
num_examples: 10869
- name: yob1925
num_bytes: 235098
num_examples: 10638
- name: yob1926
num_bytes: 230970
num_examples: 10458
- name: yob1927
num_bytes: 230004
num_examples: 10406
- name: yob1928
num_bytes: 224583
num_examples: 10159
- name: yob1929
num_bytes: 217057
num_examples: 9820
- name: yob1930
num_bytes: 216352
num_examples: 9791
- name: yob1931
num_bytes: 205361
num_examples: 9298
- name: yob1932
num_bytes: 207268
num_examples: 9381
- name: yob1933
num_bytes: 199031
num_examples: 9013
- name: yob1934
num_bytes: 202758
num_examples: 9180
- name: yob1935
num_bytes: 199614
num_examples: 9037
- name: yob1936
num_bytes: 196379
num_examples: 8894
- name: yob1937
num_bytes: 197757
num_examples: 8946
- name: yob1938
num_bytes: 199603
num_examples: 9032
- name: yob1939
num_bytes: 196979
num_examples: 8918
- name: yob1940
num_bytes: 198141
num_examples: 8961
- name: yob1941
num_bytes: 200858
num_examples: 9085
- name: yob1942
num_bytes: 208363
num_examples: 9425
- name: yob1943
num_bytes: 207940
num_examples: 9408
- name: yob1944
num_bytes: 202227
num_examples: 9152
- name: yob1945
num_bytes: 199478
num_examples: 9025
- name: yob1946
num_bytes: 214614
num_examples: 9705
- name: yob1947
num_bytes: 229327
num_examples: 10371
- name: yob1948
num_bytes: 226615
num_examples: 10241
- name: yob1949
num_bytes: 227278
num_examples: 10269
- name: yob1950
num_bytes: 227946
num_examples: 10303
- name: yob1951
num_bytes: 231613
num_examples: 10462
- name: yob1952
num_bytes: 235483
num_examples: 10646
- name: yob1953
num_bytes: 239654
num_examples: 10837
- name: yob1954
num_bytes: 242389
num_examples: 10968
- name: yob1955
num_bytes: 245652
num_examples: 11115
- name: yob1956
num_bytes: 250674
num_examples: 11340
- name: yob1957
num_bytes: 255370
num_examples: 11564
- name: yob1958
num_bytes: 254520
num_examples: 11522
- name: yob1959
num_bytes: 260051
num_examples: 11767
- name: yob1960
num_bytes: 263474
num_examples: 11921
- name: yob1961
num_bytes: 269493
num_examples: 12182
- name: yob1962
num_bytes: 270244
num_examples: 12209
- name: yob1963
num_bytes: 271872
num_examples: 12282
- name: yob1964
num_bytes: 274590
num_examples: 12397
- name: yob1965
num_bytes: 264889
num_examples: 11952
- name: yob1966
num_bytes: 269321
num_examples: 12151
- name: yob1967
num_bytes: 274867
num_examples: 12397
- name: yob1968
num_bytes: 286774
num_examples: 12936
- name: yob1969
num_bytes: 304909
num_examples: 13749
- name: yob1970
num_bytes: 328047
num_examples: 14779
- name: yob1971
num_bytes: 339657
num_examples: 15295
- name: yob1972
num_bytes: 342321
num_examples: 15412
- name: yob1973
num_bytes: 348414
num_examples: 15682
- name: yob1974
num_bytes: 361188
num_examples: 16249
- name: yob1975
num_bytes: 376491
num_examples: 16944
- name: yob1976
num_bytes: 386565
num_examples: 17391
- name: yob1977
num_bytes: 403994
num_examples: 18175
- name: yob1978
num_bytes: 405430
num_examples: 18231
- name: yob1979
num_bytes: 423423
num_examples: 19039
- name: yob1980
num_bytes: 432317
num_examples: 19452
- name: yob1981
num_bytes: 432980
num_examples: 19475
- name: yob1982
num_bytes: 437986
num_examples: 19694
- name: yob1983
num_bytes: 431531
num_examples: 19407
- name: yob1984
num_bytes: 434085
num_examples: 19506
- name: yob1985
num_bytes: 447113
num_examples: 20085
- name: yob1986
num_bytes: 460315
num_examples: 20657
- name: yob1987
num_bytes: 477677
num_examples: 21406
- name: yob1988
num_bytes: 499347
num_examples: 22367
- name: yob1989
num_bytes: 531020
num_examples: 23775
- name: yob1990
num_bytes: 552114
num_examples: 24716
- name: yob1991
num_bytes: 560932
num_examples: 25109
- name: yob1992
num_bytes: 568151
num_examples: 25427
- name: yob1993
num_bytes: 579778
num_examples: 25966
- name: yob1994
num_bytes: 580223
num_examples: 25997
- name: yob1995
num_bytes: 581949
num_examples: 26080
- name: yob1996
num_bytes: 589131
num_examples: 26423
- name: yob1997
num_bytes: 601284
num_examples: 26970
- name: yob1998
num_bytes: 621587
num_examples: 27902
- name: yob1999
num_bytes: 635355
num_examples: 28552
- name: yob2000
num_bytes: 662398
num_examples: 29772
- name: yob2001
num_bytes: 673111
num_examples: 30274
- name: yob2002
num_bytes: 679392
num_examples: 30564
- name: yob2003
num_bytes: 692931
num_examples: 31185
- name: yob2004
num_bytes: 711776
num_examples: 32048
- name: yob2005
num_bytes: 723065
num_examples: 32549
- name: yob2006
num_bytes: 757620
num_examples: 34088
- name: yob2007
num_bytes: 776893
num_examples: 34961
- name: yob2008
num_bytes: 779403
num_examples: 35079
- name: yob2009
num_bytes: 771032
num_examples: 34709
- name: yob2010
num_bytes: 756717
num_examples: 34073
- name: yob2011
num_bytes: 752804
num_examples: 33908
- name: yob2012
num_bytes: 748915
num_examples: 33747
- name: yob2013
num_bytes: 738288
num_examples: 33282
- name: yob2014
num_bytes: 737219
num_examples: 33243
- name: yob2015
num_bytes: 734183
num_examples: 33121
- name: yob2016
num_bytes: 731291
num_examples: 33010
- name: yob2017
num_bytes: 721444
num_examples: 32590
- name: yob2018
num_bytes: 708657
num_examples: 32033
download_size: 232629010
dataset_size: 43393095
- config_name: new_data
features:
- name: text
dtype: string
- name: original
dtype: string
- name: labels
list:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
'2': PARTNER:female
'3': PARTNER:male
'4': SELF:female
'5': SELF:male
- name: class_type
dtype:
class_label:
names:
'0': about
'1': partner
'2': self
- name: turker_gender
dtype:
class_label:
names:
'0': man
'1': woman
'2': nonbinary
'3': prefer not to say
'4': no answer
- name: episode_done
dtype: bool_
- name: confidence
dtype: string
splits:
- name: train
num_bytes: 369753
num_examples: 2345
download_size: 232629010
dataset_size: 369753
- config_name: funpedia
features:
- name: text
dtype: string
- name: title
dtype: string
- name: persona
dtype: string
- name: gender
dtype:
class_label:
names:
'0': gender-neutral
'1': female
'2': male
splits:
- name: train
num_bytes: 3225542
num_examples: 23897
- name: validation
num_bytes: 402205
num_examples: 2984
- name: test
num_bytes: 396417
num_examples: 2938
download_size: 232629010
dataset_size: 4024164
- config_name: image_chat
features:
- name: caption
dtype: string
- name: id
dtype: string
- name: male
dtype: bool_
- name: female
dtype: bool_
splits:
- name: train
num_bytes: 1061285
num_examples: 9997
- name: validation
num_bytes: 35868670
num_examples: 338180
- name: test
num_bytes: 530126
num_examples: 5000
download_size: 232629010
dataset_size: 37460081
- config_name: wizard
features:
- name: text
dtype: string
- name: chosen_topic
dtype: string
- name: gender
dtype:
class_label:
names:
'0': gender-neutral
'1': female
'2': male
splits:
- name: train
num_bytes: 1158785
num_examples: 10449
- name: validation
num_bytes: 57824
num_examples: 537
- name: test
num_bytes: 53126
num_examples: 470
download_size: 232629010
dataset_size: 1269735
- config_name: convai2_inferred
features:
- name: text
dtype: string
- name: binary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
- name: binary_score
dtype: float32
- name: ternary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
'2': ABOUT:gender-neutral
- name: ternary_score
dtype: float32
splits:
- name: train
num_bytes: 9853669
num_examples: 131438
- name: validation
num_bytes: 608046
num_examples: 7801
- name: test
num_bytes: 608046
num_examples: 7801
download_size: 232629010
dataset_size: 11069761
- config_name: light_inferred
features:
- name: text
dtype: string
- name: binary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
- name: binary_score
dtype: float32
- name: ternary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
'2': ABOUT:gender-neutral
- name: ternary_score
dtype: float32
splits:
- name: train
num_bytes: 10931355
num_examples: 106122
- name: validation
num_bytes: 679692
num_examples: 6362
- name: test
num_bytes: 1375745
num_examples: 12765
download_size: 232629010
dataset_size: 12986792
- config_name: opensubtitles_inferred
features:
- name: text
dtype: string
- name: binary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
- name: binary_score
dtype: float32
- name: ternary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
'2': ABOUT:gender-neutral
- name: ternary_score
dtype: float32
splits:
- name: train
num_bytes: 27966476
num_examples: 351036
- name: validation
num_bytes: 3363802
num_examples: 41957
- name: test
num_bytes: 3830528
num_examples: 49108
download_size: 232629010
dataset_size: 35160806
- config_name: yelp_inferred
features:
- name: text
dtype: string
- name: binary_label
dtype:
class_label:
names:
'0': ABOUT:female
'1': ABOUT:male
- name: binary_score
dtype: float32
splits:
- name: train
num_bytes: 260582945
num_examples: 2577862
- name: validation
num_bytes: 324349
num_examples: 4492
- name: test
num_bytes: 53887700
num_examples: 534460
download_size: 232629010
dataset_size: 314794994
---
# Dataset Card for Multi-Dimensional Gender Bias Classification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ParlAI MD Gender Project Page](https://parl.ai/projects/md_gender/)
- **Repository:** [ParlAI Github MD Gender Repository](https://github.com/facebookresearch/ParlAI/tree/master/projects/md_gender)
- **Paper:** [Multi-Dimensional Gender Bias Classification](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** edinan@fb.com
### Dataset Summary
The Multi-Dimensional Gender Bias Classification dataset is based on a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. It contains seven large-scale datasets automatically annotated for gender information (the original project includes eight, but the Wikipedia set is not part of the HuggingFace distribution), one crowdsourced evaluation benchmark of utterance-level gender rewrites, a list of gendered names, and a list of gendered words in English.
### Supported Tasks and Leaderboards
- `text-classification-other-gender-bias`: The dataset can be used to train a model for classification of various kinds of gender bias. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. Dinan et al's (2020) Transformer model achieved an average of 67.13% accuracy in binary gender prediction across the ABOUT, TO, and AS tasks. See the paper for more results.
### Languages
The data is in English as spoken on the various sites where the data was collected. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
The following are examples of data instances from the various configs in the dataset. See the [MD Gender Bias dataset viewer](https://huggingface.co/datasets/viewer/?dataset=md_gender_bias) to explore more examples.
An example from the `new_data` config:
```
{'class_type': 0,
'confidence': 'certain',
'episode_done': True,
'labels': [1],
'original': 'She designed monumental Loviisa war cemetery in 1920',
'text': 'He designed monumental Lovissa War Cemetery in 1920.',
'turker_gender': 4}
```
An example from the `funpedia` config:
```
{'gender': 2,
'persona': 'Humorous',
'text': 'Max Landis is a comic book writer who wrote Chronicle, American Ultra, and Victor Frankestein.',
'title': 'Max Landis'}
```
An example from the `image_chat` config:
```
{'caption': '<start> a young girl is holding a pink umbrella in her hand <eos>',
'female': True,
'id': '2923e28b6f588aff2d469ab2cccfac57',
'male': False}
```
An example from the `wizard` config:
```
{'chosen_topic': 'Krav Maga',
'gender': 2,
'text': 'Hello. I hope you might enjoy or know something about Krav Maga?'}
```
An example from the `convai2_inferred` config (the other `_inferred` configs have the same fields, with the exception of `yelp_inferred`, which does not have the `ternary_label` or `ternary_score` fields):
```
{'binary_label': 1,
'binary_score': 0.6521999835968018,
'ternary_label': 2,
'ternary_score': 0.4496000111103058,
'text': "hi , how are you doing ? i'm getting ready to do some cheetah chasing to stay in shape ."}
```
An example from the `gendered_words` config:
```
{'word_feminine': 'countrywoman',
'word_masculine': 'countryman'}
```
An example from the `name_genders` config:
```
{'assigned_gender': 1,
'count': 7065,
'name': 'Mary'}
```
### Data Fields
The following are the features for each of the configs.
For the `new_data` config:
- `text`: the text to be classified
- `original`: the text before reformulation
- `labels`: a `list` of classification labels, with possible values including `ABOUT:female` (0), `ABOUT:male` (1), `PARTNER:female` (2), `PARTNER:male` (3), `SELF:female` (4), `SELF:male` (5).
- `class_type`: a classification label, with possible values including `about` (0), `partner` (1), `self` (2).
- `turker_gender`: a classification label, with possible values including `man` (0), `woman` (1), `nonbinary` (2), `prefer not to say` (3), `no answer` (4).
- `episode_done`: a boolean indicating whether the conversation was completed.
- `confidence`: a string indicating the confidence of the annotator in response to the instance label being ABOUT/TO/AS a man or woman. Possible values are `certain`, `pretty sure`, and `unsure`.
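As a small illustration, the integer-encoded fields of a `new_data` example can be decoded back to their string names (the helper below is hypothetical; its lookup tables mirror the encodings listed above):

```python
# Hypothetical decoding tables mirroring the class_label encodings above
NEW_DATA_LABELS = [
    "ABOUT:female", "ABOUT:male",
    "PARTNER:female", "PARTNER:male",
    "SELF:female", "SELF:male",
]
CLASS_TYPES = ["about", "partner", "self"]

def decode_new_data(example):
    """Map the integer-encoded fields of a `new_data` example to strings."""
    return {
        "labels": [NEW_DATA_LABELS[i] for i in example["labels"]],
        "class_type": CLASS_TYPES[example["class_type"]],
    }

example = {"labels": [1], "class_type": 0}
print(decode_new_data(example))  # {'labels': ['ABOUT:male'], 'class_type': 'about'}
```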
For the `funpedia` config:
- `text`: the text to be classified.
- `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about.
- `persona`: a string describing the persona assigned to the user when talking about the entity.
- `title`: a string naming the entity the text is about.
For the `image_chat` config:
- `caption`: a string description of the contents of the original image.
- `female`: a boolean indicating whether the gender of the person being talked about is female, if the image contains a person.
- `id`: a string indicating the id of the image.
- `male`: a boolean indicating whether the gender of the person being talked about is male, if the image contains a person.
For the `wizard` config:
- `text`: the text to be classified.
- `chosen_topic`: a string indicating the topic of the text.
- `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about.
For the `_inferred` configurations (again, except the `yelp_inferred` split, which does not have the `ternary_label` or `ternary_score` fields):
- `text`: the text to be classified.
- `binary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`.
- `binary_score`: a float indicating a score between 0 and 1.
- `ternary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`, `ABOUT:gender-neutral`.
- `ternary_score`: a float indicating a score between 0 and 1.
For the word list:
- `word_masculine`: a string indicating the masculine version of the word.
- `word_feminine`: a string indicating the feminine version of the word.
For the gendered name list:
- `assigned_gender`: an integer, 1 for female, 0 for male.
- `count`: an integer.
- `name`: a string of the name.
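For example, the female share of name occurrences in a year split could be tallied like this (the records are illustrative; only the first `count` is taken from the example instance, and `1` means female per the field description above):

```python
# Illustrative records in the name_genders format (assigned_gender: 1 = female, 0 = male)
records = [
    {"name": "Mary", "assigned_gender": 1, "count": 7065},
    {"name": "John", "assigned_gender": 0, "count": 9655},  # hypothetical count
]

female_total = sum(r["count"] for r in records if r["assigned_gender"] == 1)
total = sum(r["count"] for r in records)
print(f"{female_total / total:.3f}")  # 0.423
```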
### Data Splits
The different parts of the data can be accessed through the different configurations:
- `gendered_words`: A list of common nouns with a masculine and feminine variant.
- `new_data`: Sentences reformulated and annotated along all three axes.
- `funpedia`, `wizard`: Sentences from Funpedia and Wizard of Wikipedia annotated with ABOUT gender labels based on entity gender information.
- `image_chat`: Sentences about images annotated with ABOUT gender labels based on gender information from the entities in the image.
- `convai2_inferred`, `light_inferred`, `opensubtitles_inferred`, `yelp_inferred`: Data from several source datasets with ABOUT annotations inferred by a trained classifier.
| Split | M | F | N | U | Dimension |
| ---------- | ---- | --- | ---- | ---- | --------- |
| Image Chat | 39K | 15K | 154K | - | ABOUT |
| Funpedia | 19K | 3K | 1K | - | ABOUT |
| Wizard | 6K | 1K | 1K | - | ABOUT |
| Yelp | 1M | 1M | - | - | AS |
| ConvAI2 | 22K | 22K | - | 86K | AS |
| ConvAI2 | 22K | 22K | - | 86K | TO |
| OpenSub | 149K | 69K | - | 131K | AS |
| OpenSub | 95K | 45K | - | 209K | TO |
| LIGHT | 13K | 8K | - | 83K | AS |
| LIGHT | 13K | 8K | - | 83K | TO |
| ---------- | ---- | --- | ---- | ---- | --------- |
| MDGender | 384 | 401 | - | - | ABOUT |
| MDGender | 396 | 371 | - | - | AS |
| MDGender | 411 | 382 | - | - | TO |
## Dataset Creation
### Curation Rationale
The curators chose to annotate the existing corpora to make their classifiers reliable on all dimensions (ABOUT/TO/AS) and across multiple domains. However, none of the existing datasets cover all three dimensions at the same time, and many of the gender labels are noisy. To enable reliable evaluation, the curators collected a specialized corpus, found in the `new_data` config, which acts as a gold-labeled dataset for the masculine and feminine classes.
### Source Data
#### Initial Data Collection and Normalization
For the `new_data` config, the curators collected conversations between two speakers. Each speaker was provided with a persona description containing gender information, then tasked with adopting that persona and having a conversation. They were also provided with small sections of a biography from Wikipedia as the conversation topic in order to encourage crowdworkers to discuss ABOUT/TO/AS gender information. To ensure there is ABOUT/TO/AS gender information contained in each utterance, the curators asked a second set of annotators to rewrite each utterance to make it very clear that they are speaking ABOUT a man or a woman, speaking AS a man or a woman, and speaking TO a man or a woman.
#### Who are the source language producers?
This dataset was collected from crowdworkers from Amazon’s Mechanical Turk. All workers are English-speaking and located in the United States.
| Reported Gender | Percent of Total |
| ----------------- | ---------------- |
| Man | 67.38 |
| Woman | 18.34 |
| Non-binary | 0.21 |
| Prefer not to say | 14.07 |
### Annotations
#### Annotation process
For the `new_data` config, annotators were asked to label how confident they are that someone else could predict the given gender label, allowing for flexibility between explicit genderedness (like the use of "he" or "she") and statistical genderedness.
Many of the annotated datasets contain cases where the ABOUT, AS, TO labels are not provided (i.e. unknown). In such instances, the curators applied one of two strategies: for data with an unknown ABOUT label, they imputed it using a classifier trained only on other Wikipedia data for which this label is provided; data without a TO or AS label was assigned one at random, choosing between masculine and feminine with equal probability. Details of how each of the eight training datasets was annotated are as follows:
1. Wikipedia- to annotate ABOUT, the curators used a Wikipedia dump and extracted biography pages using named entity recognition. They labeled pages with a gender based on the number of gendered pronouns (he vs. she vs. they) and labeled each paragraph in the page with this label for the ABOUT dimension.
2. Funpedia- Funpedia ([Miller et al., 2017](https://www.aclweb.org/anthology/D17-2014/)) contains Wikipedia sentences rephrased in a more conversational way. The curators retained only biography-related sentences and annotated them as for Wikipedia to give ABOUT labels.
3. Wizard of Wikipedia- [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) contains two people discussing a topic in Wikipedia. The curators retained only the conversations on Wikipedia biographies and annotated them to create ABOUT labels.
4. ImageChat- [ImageChat](https://klshuster.github.io/image_chat/) contains conversations discussing the contents of an image. The curators used the [Xu et al. image captioning system](https://github.com/AaronCCWong/Show-Attend-and-Tell) to identify the contents of an image and select gendered examples.
5. Yelp- The curators used the Yelp reviewer gender predictor developed by [Subramanian et al., 2018](https://arxiv.org/pdf/1811.00552.pdf) and retained reviews for which the classifier is very confident – this creates labels for the content creator of the review (AS). They imputed ABOUT labels on this dataset using a classifier trained on the datasets 1-4.
6. ConvAI2- [ConvAI2](https://parl.ai/projects/convai2/) contains persona-based conversations. Many personas contain sentences such as 'I am a old woman' or 'My name is Bob' which allows annotators to annotate the gender of the speaker (AS) and addressee (TO) with some confidence. Many of the personas have unknown gender. The curators impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4.
7. OpenSubtitles- [OpenSubtitles](http://www.opensubtitles.org/) contains subtitles for movies in different languages. The curators retained English subtitles that contain a character name or identity. They annotated each character’s gender using gendered kinship terms (such as "daughter") and a gender probability distribution calculated from counts of masculine and feminine baby names in the United States. Using the character’s gender, they produced labels for the AS dimension. They produced labels for the TO dimension by taking the gender of the next character to speak if there is another utterance in the conversation; otherwise, they take the gender of the last character to speak. They imputed ABOUT labels on this dataset using a classifier trained on the datasets 1-4.
8. LIGHT- [LIGHT](https://parl.ai/projects/light/) contains persona-based conversation. Similarly to ConvAI2, annotators labeled the gender of each persona, giving labels for the speaker (AS) and speaking partner (TO). The curators impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4.
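The pronoun-counting heuristic from step 1 can be sketched as follows. This is a rough illustration, not the authors' code: the exact pronoun lists, tokenization, and tie-breaking are assumptions.

```python
import re
from collections import Counter

# Assumed pronoun groups for guessing the ABOUT gender of a biography paragraph.
PRONOUNS = {
    "masculine": {"he", "him", "his"},
    "feminine": {"she", "her", "hers"},
    "neutral": {"they", "them", "their"},
}

def label_about(paragraph: str) -> str:
    """Label a paragraph by whichever pronoun group occurs most often."""
    tokens = re.findall(r"[a-z']+", paragraph.lower())
    counts = Counter()
    for gender, pronouns in PRONOUNS.items():
        counts[gender] = sum(1 for token in tokens if token in pronouns)
    if not any(counts.values()):
        return "unknown"  # no gendered pronouns found at all
    return counts.most_common(1)[0][0]

print(label_about("She wrote her memoir and she toured."))  # → feminine
```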
#### Who are the annotators?
This dataset was annotated by crowdworkers from Amazon’s Mechanical Turk. All workers are English-speaking and located in the United States.
### Personal and Sensitive Information
For privacy reasons the curators did not associate the self-reported gender of the annotator with the labeled examples in the dataset and only report these statistics in aggregate.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for applications such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness.
### Discussion of Biases
Over two thirds of annotators identified as men, which may introduce biases into the dataset.
Wikipedia is also well known to have gender bias in equity of biographical coverage and lexical bias in noun references to women (see the paper's appendix for citations).
### Other Known Limitations
The limitations of the Multi-Dimensional Gender Bias Classification dataset have not yet been investigated, but the curators acknowledge that more work is required to address the intersectionality of gender identities, i.e., when gender non-additively interacts with other identity characteristics. The curators point out that negative gender stereotyping is known to be alternatively weakened or reinforced by the presence of social attributes like dialect, class and race and that these differences have been found to affect gender classification in images and sentences encoders. See the paper for references.
## Additional Information
### Dataset Curators
Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams at Facebook AI Research. Angela Fan is also affiliated with Laboratoire Lorrain d’Informatique et Applications (LORIA).
### Licensing Information
The Multi-Dimensional Gender Bias Classification dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{dinan-etal-2020-multi,
title = "Multi-Dimensional Gender Bias Classification",
author = "Dinan, Emily and
Fan, Angela and
Wu, Ledell and
Weston, Jason and
Kiela, Douwe and
Williams, Adina",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.23",
doi = "10.18653/v1/2020.emnlp-main.23",
pages = "314--331",
abstract = "Machine learning models are trained to find patterns in data. NLP models can inadvertently learn socially undesirable patterns when training on gender biased text. In this work, we propose a novel, general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information. In addition, we collect a new, crowdsourced evaluation benchmark. Distinguishing between gender bias along multiple dimensions enables us to train better and more fine-grained gender bias classifiers. We show our classifiers are valuable for a variety of applications, like controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness.",
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset. |
mdd | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: mdd
pretty_name: Movie Dialog dataset (MDD)
configs:
- task1_qa
- task2_recs
- task3_qarecs
- task4_reddit
dataset_info:
- config_name: task1_qa
features:
- name: dialogue_turns
sequence:
- name: speaker
dtype: int32
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 8621120
num_examples: 96185
- name: test
num_bytes: 894590
num_examples: 9952
- name: validation
num_bytes: 892540
num_examples: 9968
download_size: 135614957
dataset_size: 10408250
- config_name: task2_recs
features:
- name: dialogue_turns
sequence:
- name: speaker
dtype: int32
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 205936579
num_examples: 1000000
- name: test
num_bytes: 2064509
num_examples: 10000
- name: validation
num_bytes: 2057290
num_examples: 10000
download_size: 135614957
dataset_size: 210058378
- config_name: task3_qarecs
features:
- name: dialogue_turns
sequence:
- name: speaker
dtype: int32
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 356789364
num_examples: 952125
- name: test
num_bytes: 1730291
num_examples: 4915
- name: validation
num_bytes: 1776506
num_examples: 5052
download_size: 135614957
dataset_size: 360296161
- config_name: task4_reddit
features:
- name: dialogue_turns
sequence:
- name: speaker
dtype: int32
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 497864160
num_examples: 945198
- name: test
num_bytes: 5220295
num_examples: 10000
- name: validation
num_bytes: 5372702
num_examples: 10000
- name: cand_valid
num_bytes: 1521633
num_examples: 10000
- name: cand_test
num_bytes: 1567235
num_examples: 10000
download_size: 192209920
dataset_size: 511546025
---
# Dataset Card for MDD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The bAbI project](https://research.fb.com/downloads/babi/)
- **Repository:**
- **Paper:** [arXiv Paper](https://arxiv.org/pdf/1511.06931.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Movie Dialog dataset (MDD) is designed to measure how well models can perform goal- and non-goal-oriented dialog centered around the topic of movies (question answering, recommendation and discussion), drawing from various movie review sources such as MovieLens and OMDb.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The data is in English, as written by users on the OMDb and MovieLens websites.
## Dataset Structure
### Data Instances
An instance from the `task3_qarecs` config's `train` split:
```
{'dialogue_turns': {'speaker': [0, 1, 0, 1, 0, 1], 'utterance': ["I really like Jaws, Bottle Rocket, Saving Private Ryan, Tommy Boy, The Muppet Movie, Face/Off, and Cool Hand Luke. I'm looking for a Documentary movie.", 'Beyond the Mat', 'Who is that directed by?', 'Barry W. Blaustein', 'I like Jon Fauer movies more. Do you know anything else?', 'Cinematographer Style']}}
```
An instance from the `task4_reddit` config's `cand_valid` split:
```
{'dialogue_turns': {'speaker': [0], 'utterance': ['MORTAL KOMBAT !']}}
```
### Data Fields
For all configurations:
- `dialogue_turns`: a dictionary feature containing:
- `speaker`: an integer with possible values including `0`, `1`, indicating which speaker wrote the utterance.
- `utterance`: a `string` feature containing the text utterance.
### Data Splits
The splits and corresponding sizes are:
|config |train |test |validation|cand_valid|cand_test|
|:--|------:|----:|---------:|----:|----:|
|task1_qa|96185|9952|9968|-|-|
|task2_recs|1000000|10000|10000|-|-|
|task3_qarecs|952125|4915|5052|-|-|
|task4_reddit|945198|10000|10000|10000|10000|
The `cand_valid` and `cand_test` splits contain negative candidates for the `task4_reddit` configuration: the true positive is ranked against these candidates, and hits@k (or another ranking metric) is reported (see the paper).
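Each instance stores its turns as parallel `speaker`/`utterance` sequences; zipping them back together gives a readable transcript. A minimal sketch (the `format_dialogue` helper and the "Speaker 0"/"Speaker 1" names are illustrative — the data itself only stores integer ids):

```python
def format_dialogue(instance: dict, names=("Speaker 0", "Speaker 1")) -> str:
    """Zip the parallel speaker/utterance lists into one transcript string."""
    turns = instance["dialogue_turns"]
    lines = [
        f"{names[speaker]}: {utterance}"
        for speaker, utterance in zip(turns["speaker"], turns["utterance"])
    ]
    return "\n".join(lines)

# A toy instance in the same shape as the examples above.
example = {
    "dialogue_turns": {
        "speaker": [0, 1],
        "utterance": ["Who directed Jaws?", "Steven Spielberg"],
    }
}
print(format_dialogue(example))
# Speaker 0: Who directed Jaws?
# Speaker 1: Steven Spielberg
```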
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The construction of the tasks depended on some existing datasets:
1) MovieLens. The data was downloaded from: http://grouplens.org/datasets/movielens/20m/ on May 27th, 2015.
2) OMDB. The data was downloaded from: http://beforethecode.com/projects/omdb/download.aspx on May 28th, 2015.
3) For `task4_reddit`, the data is a processed subset (movie subreddit only) of the data available at:
https://www.reddit.com/r/datasets/comments/3bxlg7
#### Who are the source language producers?
Users of the MovieLens, OMDb and Reddit websites, among others.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston (at Facebook Research).
### Licensing Information
```
Creative Commons Attribution 3.0 License
```
### Citation Information
```
@misc{dodge2016evaluating,
title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},
author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},
year={2016},
eprint={1511.06931},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
med_hop | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: medhop
pretty_name: MedHop
tags:
- multi-hop
dataset_info:
- config_name: original
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: supports
sequence: string
splits:
- name: train
num_bytes: 93937322
num_examples: 1620
- name: validation
num_bytes: 16461640
num_examples: 342
download_size: 339843061
dataset_size: 110398962
- config_name: masked
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: supports
sequence: string
splits:
- name: train
num_bytes: 95813584
num_examples: 1620
- name: validation
num_bytes: 16800570
num_examples: 342
download_size: 339843061
dataset_size: 112614154
---
# Dataset Card for MedHop
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [QAngaroo](http://qangaroo.cs.ucl.ac.uk/)
- **Repository:**
- **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481)
- **Leaderboard:** [leaderboard](http://qangaroo.cs.ucl.ac.uk/leaderboard.html)
- **Point of Contact:** [Johannes Welbl](j.welbl@cs.ucl.ac.uk)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
medal | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: medal
pretty_name: MeDAL
tags:
- disambiguation
dataset_info:
features:
- name: abstract_id
dtype: int32
- name: text
dtype: string
- name: location
sequence: int32
- name: label
sequence: string
splits:
- name: train
num_bytes: 3573399948
num_examples: 3000000
- name: test
num_bytes: 1190766821
num_examples: 1000000
- name: validation
num_bytes: 1191410723
num_examples: 1000000
- name: full
num_bytes: 15536883723
num_examples: 14393619
download_size: 21060929078
dataset_size: 21492461215
---
# Dataset Card for the MeDAL dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/BruceWen120/medal](https://github.com/BruceWen120/medal)
- **Paper:** [https://www.aclweb.org/anthology/2020.clinicalnlp-1.15/](https://www.aclweb.org/anthology/2020.clinicalnlp-1.15/)
- **Dataset (Kaggle):** [https://www.kaggle.com/xhlulu/medal-emnlp](https://www.kaggle.com/xhlulu/medal-emnlp)
- **Dataset (Zenodo):** [https://zenodo.org/record/4265632](https://zenodo.org/record/4265632)
- **Pretrained model:** [https://huggingface.co/xhlu/electra-medal](https://huggingface.co/xhlu/electra-medal)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A large medical text dataset (14 GB in full, curated down to 4 GB) for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example, DHF can be disambiguated to dihydrofolate, diastolic heart failure, dengue hemorrhagic fever or dihydroxyfumarate.
### Supported Tasks and Leaderboards
Medical abbreviation disambiguation
### Languages
English (en)
## Dataset Structure
Each file is a table consisting of three columns:
* text: The normalized content of an abstract
* location: The location (index) of each abbreviation that was substituted
* label: The word that was substituted at the given location
### Data Instances
An example from the train split is:
```
{'abstract_id': 14145090,
'text': 'velvet antlers vas are commonly used in traditional chinese medicine and invigorant and contain many PET components for health promotion the velvet antler peptide svap is one of active components in vas based on structural study the svap interacts with tgfβ receptors and disrupts the tgfβ pathway we hypothesized that svap prevents cardiac fibrosis from pressure overload by blocking tgfβ signaling SDRs underwent TAC tac or a sham operation T3 one month rats received either svap mgkgday or vehicle for an additional one month tac surgery induced significant cardiac dysfunction FB activation and fibrosis these effects were improved by treatment with svap in the heart tissue tac remarkably increased the expression of tgfβ and connective tissue growth factor ctgf ROS species C2 and the phosphorylation C2 of smad and ERK kinases erk svap inhibited the increases in reactive oxygen species C2 ctgf expression and the phosphorylation of smad and erk but not tgfβ expression in cultured cardiac fibroblasts angiotensin ii ang ii had similar effects compared to tac surgery such as increases in αsmapositive CFs and collagen synthesis svap eliminated these effects by disrupting tgfβ IB to its receptors and blocking ang iitgfβ downstream signaling these results demonstrated that svap has antifibrotic effects by blocking the tgfβ pathway in CFs',
'location': [63],
'label': ['transverse aortic constriction']}
```
### Data Fields
The column types are:
* text: content of the abstract as a string
* location: index of each substitution, as a sequence of integers
* label: substituted word at each location, as a sequence of strings
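Assuming `location` holds word indices into the whitespace-tokenized abstract, the original expansions can be restored by splicing each label back in. A rough sketch (not the authors' preprocessing code; the tokenization convention is an assumption):

```python
def expand_abbreviations(text: str, locations, labels) -> str:
    """Replace the abbreviation token at each word index with its expansion."""
    tokens = text.split()
    for index, expansion in zip(locations, labels):
        tokens[index] = expansion
    return " ".join(tokens)

# Toy instance in the same shape as the dataset rows above.
sample = {
    "text": "the patient was diagnosed with DHF last year",
    "location": [5],
    "label": ["diastolic heart failure"],
}
print(expand_abbreviations(sample["text"], sample["location"], sample["label"]))
# the patient was diagnosed with diastolic heart failure last year
```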
### Data Splits
The following files are present:
* `full_data.csv`: The full dataset with all 14M abstracts.
* `train.csv`: The subset used to train the baseline and proposed models.
* `valid.csv`: The subset used to validate the model during training for hyperparameter selection.
* `test.csv`: The subset used to evaluate the model and report the results in the tables.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The original dataset was retrieved and modified from the [NLM website](https://www.nlm.nih.gov/databases/download/pubmed_medline.html).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
Details on how the abbreviations were created can be found in section 2.2 (Dataset Creation) of the [ACL ClinicalNLP paper](https://aclanthology.org/2020.clinicalnlp-1.15.pdf).
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Since the abstracts are written in English, the data is biased towards anglo-centric medical research. If you plan to use a model pre-trained on this dataset for a predominantly non-English community, it is important to verify whether there are negative biases present in your model, and ensure that they are correctly mitigated. For instance, you could fine-tune your dataset on a multilingual medical disambiguation dataset, or collect a dataset specific to your use case.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The ELECTRA model is licensed under [Apache 2.0](https://github.com/google-research/electra/blob/master/LICENSE). The licenses for the libraries used in this project (`transformers`, `pytorch`, etc.) can be found in their respective GitHub repositories. Our model is released under an MIT license.
The original dataset was retrieved and modified from the [NLM website](https://www.nlm.nih.gov/databases/download/pubmed_medline.html). By using this dataset, you are bound by the [terms and conditions](https://www.nlm.nih.gov/databases/download/terms_and_conditions_pubmed.html) specified by NLM:
> INTRODUCTION
>
> Downloading data from the National Library of Medicine FTP servers indicates your acceptance of the following Terms and Conditions: No charges, usage fees or royalties are paid to NLM for this data.
>
> MEDLINE/PUBMED SPECIFIC TERMS
>
> NLM freely provides PubMed/MEDLINE data. Please note some PubMed/MEDLINE abstracts may be protected by copyright.
>
> GENERAL TERMS AND CONDITIONS
>
> * Users of the data agree to:
> * acknowledge NLM as the source of the data by including the phrase "Courtesy of the U.S. National Library of Medicine" in a clear and conspicuous manner,
> * properly use registration and/or trademark symbols when referring to NLM products, and
> * not indicate or imply that NLM has endorsed its products/services/applications.
>
> * Users who republish or redistribute the data (services, products or raw data) agree to:
> * maintain the most current version of all distributed data, or
> * make known in a clear and conspicuous manner that the products/services/applications do not reflect the most current/accurate data available from NLM.
>
> * These data are produced with a reasonable standard of care, but NLM makes no warranties express or implied, including no warranty of merchantability or fitness for particular purpose, regarding the accuracy or completeness of the data. Users agree to hold NLM and the U.S. Government harmless from any liability resulting from errors in the data. NLM disclaims any liability for any consequences due to use, misuse, or interpretation of information contained or not contained in the data.
>
> * NLM does not provide legal advice regarding copyright, fair use, or other aspects of intellectual property rights. See the NLM Copyright page.
>
> * NLM reserves the right to change the type and format of its machine-readable data. NLM will take reasonable steps to inform users of any changes to the format of the data before the data are distributed via the announcement section or subscription to email and RSS updates.
### Citation Information
```
@inproceedings{wen-etal-2020-medal,
title = "{M}e{DAL}: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining",
author = "Wen, Zhi and
Lu, Xing Han and
Reddy, Siva",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.15",
pages = "130--135",
abstract = "One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.",
}
```
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) and [@xhlulu](https://github.com/xhlulu) for adding this dataset. |
medical_dialog | ---
annotations_creators:
- found
language_creators:
- expert-generated
- found
language:
- en
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
pretty_name: MedDialog
configs:
- en
- zh
dataset_info:
- config_name: en
features:
- name: file_name
dtype: string
- name: dialogue_id
dtype: int32
- name: dialogue_url
dtype: string
- name: dialogue_turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': Patient
'1': Doctor
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 290274759
num_examples: 229674
download_size: 0
dataset_size: 290274759
- config_name: zh
features:
- name: file_name
dtype: string
- name: dialogue_id
dtype: int32
- name: dialogue_url
dtype: string
- name: dialogue_turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': 病人
'1': 医生
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 1092063621
num_examples: 1921127
download_size: 0
dataset_size: 1092063621
- config_name: processed.en
features:
- name: description
dtype: string
- name: utterances
sequence: string
splits:
- name: train
num_bytes: 370745
num_examples: 482
- name: validation
num_bytes: 52145
num_examples: 60
- name: test
num_bytes: 46514
num_examples: 61
download_size: 524214
dataset_size: 469404
- config_name: processed.zh
features:
- name: utterances
sequence: string
splits:
- name: train
num_bytes: 1571262099
num_examples: 2725989
- name: validation
num_bytes: 197117565
num_examples: 340748
- name: test
num_bytes: 196526738
num_examples: 340754
download_size: 2082354155
dataset_size: 1964906402
---
# Dataset Card for MedDialog
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
[//]: # (- **Homepage:** )
- **Repository:** https://github.com/UCSD-AI4H/Medical-Dialogue-System
- **Paper:** [MedDialog: Two Large-scale Medical Dialogue Datasets](https://arxiv.org/abs/2004.03329)
[//]: # (- **Leaderboard:** )
[//]: # (- **Point of Contact:** )
### Dataset Summary
The MedDialog dataset (Chinese) contains conversations (in Chinese) between doctors and patients. It has 1.1 million dialogues and 4 million utterances. The data is continuously growing and more dialogues will be added. The raw dialogues are from haodf.com. All copyrights of the data belong to haodf.com.
The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com. All copyrights of the data belong to healthcaremagic.com and icliniq.com.
Directions for using the pre-trained BERT-based model with PyTorch are available in the repository linked above.
### Supported Tasks and Leaderboards
Closed-domain question answering.
### Languages
Each configuration is monolingual. The datasets are in English (EN) and Chinese (ZH).
## Dataset Structure
### Data Instances
There are 4 configurations:
- Raw data:
- en
- zh
- Processed data:
- processed.en
- processed.zh
#### en
Each consultation consists of the below:
- ID
- URL
- Description of patient’s medical condition
- Dialogue
The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites.
#### zh
Each consultation consists of the below:
- ID
- URL
- Description of patient’s medical condition
- Dialogue
- (Optional) Diagnosis and suggestions.
The dataset is built from [Haodf.com](https://www.haodf.com/) and all copyrights of the data belong to [Haodf.com](https://www.haodf.com/).
An example for the Chinese configuration:
```
{'dialogue_id': 2,
 'dialogue_turns': [{'speaker': '病人',
    'utterance': '孩子哭闹时,鸡鸡旁边会肿起,情绪平静时肿块会消失,去一个私人诊所看过,说是疝气.如果确定是疝气,是不是一定要手术治疗?我孩子只有1岁10月,自愈的可能性大吗?如果一定要手术,这么小的孩子风险大吗?术后的恢复困难吗?谢谢.'},
   {'speaker': '医生', 'utterance': '南方医的B超说得不清楚,可能是鞘膜积液,可到我医院复查一个B超。'}],
 'dialogue_url': 'https://www.haodf.com/doctorteam/flow_team_6477251152.htm',
 'file_name': '2020.txt'}
```
#### processed.en
```
{
'description': 'throat a bit sore and want to get a good imune booster, especially in light of the virus. please advise. have not been in contact with nyone with the virus.',
'utterances': [
'patient: throat a bit sore and want to get a good imune booster, especially in light of the virus. please advise. have not been in contact with nyone with the virus.',
"doctor: during this pandemic. throat pain can be from a strep throat infection (antibiotics needed), a cold or influenza or other virus, or from some other cause such as allergies or irritants. usually, a person sees the doctor (call first) if the sore throat is bothersome, recurrent, or doesn't go away quickly. covid-19 infections tend to have cough, whereas strep throat usually lacks cough but has more throat pain. (3/21/20)"
]
}
```
#### processed.zh
```
{
'utterances': [
'病人:强制性脊柱炎,晚上睡觉翻身时腰骶骨区域疼痛,其他身体任何部位均不疼痛。',
'医生:应该没有问题,但最好把图像上传看看。'
]
}
```
### Data Fields
For generating the QA data, only the fields below have been considered:
- ID: Consultation identifier (restarts for each file)
- URL: The URL of the extracted conversation
- Dialogue: The conversation between the doctor and the patient.
In the prepared dataset, these are arranged as below; each item is represented with the following parameters.
- "file_name": string - signifies the file from which the conversation was extracted
- "dialogue_id": int32 - the dialogue id
- "dialogue_url": string - url of the conversation
- "dialogue_turns": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=["病人", "医生"]), and "utterance"(string) for each turn. (ClassLable(names=["Patient", "Doctor"]) for english)
#### processed.en
- `description` (str): Description of the dialog.
- `utterances` (list of str): Dialog utterances between patient and doctor.
#### processed.zh
- `utterances` (list of str): Dialog utterances between patient and doctor.
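The processed configurations store the speaker inside each utterance string. A small helper can split these back into (speaker, text) pairs; this is a sketch based only on the prefixes shown in the examples above, and it assumes no other speaker labels occur:

```python
# Split a processed MedDialog utterance of the form "<speaker>: <text>"
# into a (speaker, text) pair. The English examples use a half-width
# colon ("patient: ..."), the Chinese ones a full-width colon ("病人:...").
SPEAKER_PREFIXES = {"patient", "doctor", "病人", "医生"}

def split_utterance(utterance: str) -> tuple[str, str]:
    for colon in (":", ":"):  # half-width first, then full-width
        speaker, found, text = utterance.partition(colon)
        if found and speaker.strip() in SPEAKER_PREFIXES:
            return speaker.strip(), text.strip()
    raise ValueError(f"unrecognized speaker prefix: {utterance[:20]!r}")
```

This keeps the raw strings untouched in the dataset itself; the split is done on the fly when building model inputs.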
### Data Splits
There are no data splits on the original raw data. The "train" split for each language contains:
- en: 229674 examples
- zh: 1921127 examples
For processed configurations, data is split into train, validation and test, with the following number of examples:
| | train | validation | test |
|--------------|--------:|-----------:|-------:|
| processed.en | 482 | 60 | 61 |
| processed.zh | 2725989 | 340748 | 340754 |
## Dataset Creation
### Curation Rationale
Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```
@article{chen2020meddiag,
title={MedDialog: a large-scale medical dialogue dataset},
author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
journal={arXiv preprint arXiv:2004.03329},
year={2020}
}
```
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset. |
medical_questions_pairs | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: MedicalQuestionsPairs
dataset_info:
features:
- name: dr_id
dtype: int32
- name: question_1
dtype: string
- name: question_2
dtype: string
- name: label
dtype:
class_label:
names:
'0': 0
'1': 1
splits:
- name: train
num_bytes: 701650
num_examples: 3048
download_size: 665688
dataset_size: 701650
---
# Dataset Card for [medical_questions_pairs]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Medical questions pairs repository](https://github.com/curai/medical-question-pair-dataset)
- **Paper:** [Effective Transfer Learning for Identifying Similar Questions:Matching User Questions to COVID-19 FAQs](https://arxiv.org/abs/2008.13546)
### Dataset Summary
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Supported Tasks and Leaderboards
- `text-classification`: The dataset can be used to train a model to identify similar and dissimilar medical question pairs.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
The dataset contains dr_id, question_1, question_2, label. 11 different doctors were used for this task so dr_id ranges from 1 to 11. The label is 1 if the question pair is similar and 0 otherwise.
### Data Fields
- `dr_id`: 11 different doctors were used for this task so dr_id ranges from 1 to 11
- `question_1`: Original Question
- `question_2`: Paired question, either a rewrite of the original question with the same intent (label 1) or a related but dissimilar question (label 0)
- `label`: The label is 1 if the question pair is similar and 0 otherwise.
### Data Splits
The dataset currently consists of a single split (train), but it can be split further as required.
| | train |
|----------------------------|------:|
| Non similar Question Pairs | 1524 |
| Similar Question Pairs | 1524 |
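Because only a train split ships with the dataset, any held-out set must be carved out downstream. One possible sketch (the function name and split ratio are illustrative, not part of the dataset) stratifies on the binary `label` so both classes stay balanced:

```python
import random

def stratified_split(rows, test_fraction=0.2, seed=0):
    """Split rows (dicts with a binary 'label' field) into train/test,
    keeping the label ratio roughly equal in both parts."""
    rng = random.Random(seed)
    by_label = {0: [], 1: []}
    for row in rows:
        by_label[row["label"]].append(row)
    train, test = [], []
    for rows_for_label in by_label.values():
        rng.shuffle(rows_for_label)
        cut = int(len(rows_for_label) * test_fraction)
        test.extend(rows_for_label[:cut])
        train.extend(rows_for_label[cut:])
    return train, test
```

Since each doctor generated both members of every pair, splitting by `dr_id` instead is another reasonable choice if leakage across annotators is a concern.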
## Dataset Creation
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Curation Rationale
[More Information Needed]
### Source Data
1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
#### Who are the annotators?
**Curai's doctors**
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{mccreery2020effective,
title={Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
author={Clara H. McCreery and Namit Katariya and Anitha Kannan and Manish Chablani and Xavier Amatriain},
year={2020},
eprint={2008.13546},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
menyo20k_mt | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- en
- yo
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: menyo-20k
pretty_name: MENYO-20k
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
config_name: menyo20k_mt
splits:
- name: train
num_bytes: 2551345
num_examples: 10070
- name: validation
num_bytes: 870011
num_examples: 3397
- name: test
num_bytes: 1905432
num_examples: 6633
download_size: 5206234
dataset_size: 5326788
---
# Dataset Card for MENYO-20k
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages are English (`en`) and Yoruba (`yo`).
## Dataset Structure
### Data Instances
An instance example:
```
{'translation':
{'en': 'Unit 1: What is Creative Commons?',
'yo': 'Ìdá 1: Kín ni Creative Commons?'
}
}
```
### Data Fields
- `translation`:
- `en`: English sentence.
- `yo`: Yoruba sentence.
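Records in the `translation` format above can be unpacked into aligned sentence lists, e.g. to feed a tokenizer or an MT training loop; a minimal sketch (the helper name is illustrative):

```python
def to_parallel_lists(examples):
    """Turn MENYO-20k records of the form
    {"translation": {"en": ..., "yo": ...}} into two aligned lists of
    source and target sentences."""
    en = [ex["translation"]["en"] for ex in examples]
    yo = [ex["translation"]["yo"] for ex in examples]
    return en, yo
```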
### Data Splits
Training, validation and test splits are available.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is open, but for non-commercial use only, because some data sources like TED talks and JW news require permission for commercial use.
The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
### Citation Information
If you use this dataset, please cite this paper:
```
@inproceedings{adelani-etal-2021-effect,
title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
author = "Adelani, David and
Ruiter, Dana and
Alabi, Jesujoba and
Adebonojo, Damilola and
Ayeni, Adesina and
Adeyemi, Mofe and
Awokoya, Ayodele Esther and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
month = aug,
year = "2021",
address = "Virtual",
publisher = "Association for Machine Translation in the Americas",
url = "https://aclanthology.org/2021.mtsummit-research.6",
pages = "61--75",
abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities and including zero and few-shot translation between low-resource language pairs. However and these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper and we present MENYO-20k and the first multi-domain parallel corpus with a especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality and we also analyze the effect of diacritics and a major characteristic of Yoruba and in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability.Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba and setting a high quality benchmark for future research.",
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
|
meta_woz | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
license_details: Microsoft Research Data License Agreement
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: metalwoz
pretty_name: Meta-Learning Wizard-of-Oz
dataset_info:
- config_name: dialogues
features:
- name: id
dtype: string
- name: user_id
dtype: string
- name: bot_id
dtype: string
- name: domain
dtype: string
- name: task_id
dtype: string
- name: turns
sequence: string
splits:
- name: train
num_bytes: 19999218
num_examples: 37884
- name: test
num_bytes: 1284287
num_examples: 2319
download_size: 8629863
dataset_size: 21283505
- config_name: tasks
features:
- name: task_id
dtype: string
- name: domain
dtype: string
- name: bot_prompt
dtype: string
- name: bot_role
dtype: string
- name: user_prompt
dtype: string
- name: user_role
dtype: string
splits:
- name: train
num_bytes: 73768
num_examples: 227
- name: test
num_bytes: 4351
num_examples: 14
download_size: 8629863
dataset_size: 78119
---
# Dataset Card for MetaLWOz
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MetaLWOz Project Website](https://www.microsoft.com/en-us/research/project/metalwoz/)
- **Paper:** [Fast Domain Adaptation for Goal-Oriented Dialogue Using a Hybrid Generative-Retrieval Transformer](https://ieeexplore.ieee.org/abstract/document/9053599), and [Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation](https://arxiv.org/pdf/2003.01680.pdf)
- **Point of Contact:** [Hannes Schulz](https://www.microsoft.com/en-us/research/people/haschulz/)
### Dataset Summary
MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models.
We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for
conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to
quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas
of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two
human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human
user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a
particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total.
Dialogues are a minimum of 10 turns long.
### Supported Tasks and Leaderboards
This dataset supports a range of tasks.
- **Generative dialogue modeling** or `dialogue-modeling`: This data can be used to train task-oriented dialogue
models, specifically to develop methods to quickly simulate user responses with a small amount of data. Such
fast-adaptation models fall into the research areas of transfer learning and meta learning. The text of the dialogues
can be used to train a sequence model on the utterances.
Example of sample input/output is given in section [Data Instances](#data-instances)
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
A data instance is a full multi-turn dialogue between two crowd-workers, one had the role of being a `bot`, and the other one was the `user`. Both were
given a `domain` and a `task`. Each turn has a single utterance, e.g.:
```
Domain: Ski
User Task: You want to know if there are good ski hills an
hour’s drive from your current location.
Bot Task: Tell the user that there are no ski hills in their
immediate location.
Bot: Hello how may I help you?
User: Is there any good ski hills an hour’s drive from my
current location?
Bot: I’m sorry to inform you that there are no ski hills in your
immediate location
User: Can you help me find the nearest?
Bot: Absolutely! It looks like you’re about 3 hours away from
Bear Mountain. That seems to be the closest.
User: Hmm.. sounds good
Bot: Alright! I can help you get your lift tickets now!When
will you be going?
User: Awesome! please get me a ticket for 10pax
Bot: You’ve got it. Anything else I can help you with?
User: None. Thanks again!
Bot: No problem!
```
Example of input/output for this dialog:
```
Input: dialog history = Hello how may I help you?; Is there
any good ski hills an hour’s drive from my current location?;
I’m sorry to inform you that there are no ski hills in your
immediate location
Output: user response = Can you help me find the nearest?
```
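Pairs like the one above can be generated mechanically from the `turns` field; a minimal sketch (the helper name and the `"; "` separator are illustrative), assuming turns alternate bot/user and start with the bot prompt as documented in the data fields:

```python
def user_response_pairs(turns, history_sep="; "):
    """Build (dialog_history, user_response) training pairs from a
    MetaLWOz `turns` list. Turns alternate bot/user starting with the
    bot, so odd indices hold the user responses to predict."""
    pairs = []
    for i in range(1, len(turns), 2):  # user turns
        history = history_sep.join(turns[:i])
        pairs.append((history, turns[i]))
    return pairs
```

Each dialogue of at least 10 turns therefore yields at least 5 such training pairs.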
### Data Fields
Each dialogue instance has the following fields:
- `id`: a unique ID identifying the dialog.
- `user_id`: a unique ID identifying the user.
- `bot_id`: a unique ID identifying the bot.
- `domain`: a unique ID identifying the domain. Provides a mapping to tasks dataset.
- `task_id`: a unique ID identifying the task. Provides a mapping to tasks dataset.
- `turns`: the sequence of utterances alternating between `bot` and `user`, starting with a prompt from `bot`.
Each task instance has following fields:
- `task_id`: a unique ID identifying the task.
- `domain`: a unique ID identifying the domain.
- `bot_prompt`: The task specification for bot.
- `bot_role`: The domain oriented role of bot.
- `user_prompt`: The task specification for user.
- `user_role`: The domain oriented role of user.
### Data Splits
The dataset is split into a `train` and `test` split with the following sizes:
| | Training MetaLWOz | Evaluation MetaLWOz | Combined |
| ----- | ------ | ----- | ---- |
| Total Domains | 47 | 4 | 51 |
| Total Tasks | 226 | 14 | 240 |
| Total Dialogs | 37884 | 2319 | 40203 |
Below are the various statistics of the dataset:
| Statistic | Mean | Minimum | Maximum |
| ----- | ------ | ----- | ---- |
| Number of tasks per domain | 4.8 | 3 | 11 |
| Number of dialogs per domain | 806.0 | 288 | 1990 |
| Number of dialogs per task | 167.6 | 32 | 285 |
| Number of turns per dialog | 11.4 | 10 | 46 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Version 1 of the dataset was created by a team of researchers from Microsoft Research (Montreal, Canada).
### Licensing Information
The dataset is released under [Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view)
### Citation Information
You can cite the following for the various versions of MetaLWOz:
Version 1.0
```
@InProceedings{shalyminov2020fast,
author = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},
title = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},
booktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year = {2020},
month = {April},
  url = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},
}
```
### Contributions
Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset. |
metooma | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: metooma
pretty_name: '#MeTooMA dataset'
dataset_info:
features:
- name: TweetId
dtype: string
- name: Text_Only_Informative
dtype:
class_label:
names:
'0': Text Non Informative
'1': Text Informative
- name: Image_Only_Informative
dtype:
class_label:
names:
'0': Image Non Informative
'1': Image Informative
- name: Directed_Hate
dtype:
class_label:
names:
'0': Directed Hate Absent
'1': Directed Hate Present
- name: Generalized_Hate
dtype:
class_label:
names:
'0': Generalized Hate Absent
'1': Generalized Hate Present
- name: Sarcasm
dtype:
class_label:
names:
'0': Sarcasm Absent
'1': Sarcasm Present
- name: Allegation
dtype:
class_label:
names:
'0': Allegation Absent
'1': Allegation Present
- name: Justification
dtype:
class_label:
names:
'0': Justification Absent
'1': Justification Present
- name: Refutation
dtype:
class_label:
names:
'0': Refutation Absent
'1': Refutation Present
- name: Support
dtype:
class_label:
names:
'0': Support Absent
'1': Support Present
- name: Oppose
dtype:
class_label:
names:
'0': Oppose Absent
'1': Oppose Present
splits:
- name: train
num_bytes: 821738
num_examples: 7978
- name: test
num_bytes: 205489
num_examples: 1995
download_size: 408889
dataset_size: 1027227
---
# Dataset Card for #MeTooMA dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
- **Repository:** https://github.com/midas-research/MeTooMA
- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
- The dataset consists of tweets belonging to the #MeToo movement on Twitter, labelled into different categories.
- This dataset includes more data points and has more labels than any of the previous datasets that contain social media
posts about sexual abuse disclosures. Please refer to the Related Datasets section of the publication for detailed information about this.
- Due to Twitter's developer policies, the authors provide only the tweet IDs and corresponding labels;
other data can be fetched via the Twitter API.
- The data has been labelled by experts, with the majority vote taken into account for deciding the final label.
- The authors provide these labels for each of the tweets.
- Relevance
- Directed Hate
- Generalized Hate
- Sarcasm
- Allegation
- Justification
- Refutation
- Support
- Oppose
- The definitions for each task/label is in the main publication.
- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for a statistical analysis of the textual data
extracted from this dataset.
- The language of all the tweets in this dataset is English
- Time period: October 2018 - December 2018
- Suggested Use Cases of this dataset:
  - Evaluating the usage of linguistic acts such as hate speech and sarcasm in the context of public sexual abuse disclosures.
- Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.
  - Identifying how influential people were portrayed on public platforms in the
events of mass social movements.
- Polarization analysis based on graph simulations of social nodes of users involved
in the #MeToo movement.
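Since the labels above are not mutually exclusive, each tweet is naturally represented as a binary indicator vector over the nine labels. A minimal sketch (the label names are taken from the list above; the helper function is illustrative and not part of the dataset itself):

```python
# The nine labels provided for each tweet, as listed in this card.
LABELS = [
    "Relevance", "Directed Hate", "Generalized Hate", "Sarcasm",
    "Allegation", "Justification", "Refutation", "Support", "Oppose",
]

def to_indicator_vector(active_labels):
    """Encode a tweet's (possibly multiple) labels as a binary indicator vector."""
    return [int(label in active_labels) for label in LABELS]

# A tweet can carry several labels at once.
vec = to_indicator_vector({"Relevance", "Allegation"})
```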
### Supported Tasks and Leaderboards
Multi-Label and Multi-Class Classification
### Languages
English
## Dataset Structure
- The dataset is in CSV format, with the Tweet ID and its accompanying labels.
- The train and test sets are split into respective files.
### Data Instances
Tweet ID and the appropriate labels
### Data Fields
Each record contains a Tweet ID and the applicable labels; a binary value indicates whether each label applies, and multiple labels can apply to the same Tweet ID.
### Data Splits
- Train: 7979
- Test: 1996
## Dataset Creation
### Curation Rationale
- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.
- People expressed their opinions over issues which were previously missing from the social media space.
- This provides an opportunity to study the linguistic behaviour of social media users in an informal setting,
which is why the authors decided to curate this annotated dataset.
- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.
- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.
### Source Data
- The source of all data points in this dataset is the Twitter social media platform.
#### Initial Data Collection and Normalization
- All the tweets were mined from Twitter, with initial search parameters identified using keywords from the #MeToo movement.
- Redundant keywords were removed based on manual inspection.
- Public streaming APIs of Twitter were used for querying with the selected keywords.
- Based on text de-duplication and cosine similarity scores, the set of tweets was pruned.
- Non-English tweets were removed.
- The final set was labelled by experts, with the majority label taken into account when deciding the final label.
- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
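The de-duplication step above can be sketched with a simple bag-of-words cosine similarity. This is a toy stand-in for the authors' actual pipeline; the tokenization and threshold are illustrative assumptions:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def prune_near_duplicates(texts, threshold=0.9):
    """Drop texts whose similarity to an earlier kept text exceeds the threshold."""
    bags = [Counter(t.lower().split()) for t in texts]
    kept = []
    for i, bag in enumerate(bags):
        if all(cosine(bag, bags[j]) <= threshold for j in kept):
            kept.append(i)
    return [texts[i] for i in kept]

tweets = ["She spoke out. #MeToo", "She spoke out. #MeToo", "An unrelated tweet"]
pruned = prune_near_duplicates(tweets)  # exact duplicate is dropped
```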
#### Who are the source language producers?
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
### Annotations
#### Annotation process
- The authors chose not to crowdsource the labeling of this dataset due to its highly sensitive nature.
- The annotators are domain experts holding degrees in advanced clinical psychology and gender studies.
- They were provided with a guidelines document containing instructions about each task and its definitions, labels, and examples.
- They studied the document and worked through a few examples to get used to the annotation task.
- They also provided feedback for improving the class definitions.
- The annotation process is not mutually exclusive: the presence of one label does not imply the
absence of another.
#### Who are the annotators?
- The annotators are domain experts having a degree in clinical psychology and gender studies.
- Please refer to the accompanying paper for a detailed annotation process.
### Personal and Sensitive Information
- Considering Twitter's policy for distribution of data, only the Tweet IDs and applicable labels are shared for public use.
- It is highly encouraged to use this dataset for scientific purposes only.
- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.
## Considerations for Using the Data
### Social Impact of Dataset
- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.
- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention;
they should instead be used to assist existing human intervention tools and therapies.
- Enough care has been taken to ensure that this work does not come across as trying to target a specific person for their
personal stance on issues pertaining to the #MeToo movement.
- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset
and social impact of this work.
### Discussion of Biases
- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of
community affected by sexual abuse.
- Any work undertaken on this dataset should aim to minimize bias against minority groups, which
might be amplified in cases of sudden outbursts of public reactions over sensitive social media discussions.
### Other Known Limitations
- Considering privacy concerns, social media practitioners should be cautious about making automated interventions
to aid victims of sexual abuse, as some people might prefer not to disclose such experiences.
- Concerned social media users might also withdraw their social information if they find out that it is
being used for computational purposes; it is therefore important to seek individual consent
before profiling authors involved in online discussions, to uphold personal privacy.
## Additional Information
Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
### Dataset Curators
- If you use the corpus in a product or application, please credit the authors
and the [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know how you have used the corpus.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications.
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
```
@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }
```
### Contributions
Thanks to [@akash418](https://github.com/akash418) for adding this dataset. |
metrec | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: metrec
pretty_name: MetRec
tags:
- poetry-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': saree
'1': kamel
'2': mutakareb
'3': mutadarak
'4': munsareh
'5': madeed
'6': mujtath
'7': ramal
'8': baseet
'9': khafeef
'10': taweel
'11': wafer
'12': hazaj
'13': rajaz
config_name: plain_text
splits:
- name: train
num_bytes: 5874919
num_examples: 47124
- name: test
num_bytes: 1037577
num_examples: 8316
download_size: 2267882
dataset_size: 6912496
---
# Dataset Card for MetRec
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Metrec](https://github.com/zaidalyafeai/MetRec)
- **Repository:** [Metrec repository](https://github.com/zaidalyafeai/MetRec)
- **Paper:** [MetRec: A dataset for meter classification of arabic poetry](https://www.sciencedirect.com/science/article/pii/S2352340920313792)
- **Point of Contact:** [Zaid Alyafeai](mailto:alyafey22@gmail.com)
### Dataset Summary
The dataset contains the verses and their corresponding meter classes.
Meter classes are represented as numbers from 0 to 13.
The dataset can be highly useful for further research in order to improve the field of Arabic poems’ meter classification.
The train dataset contains 47,124 records and the test dataset contains 8,316 records.
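The integer labels can be mapped back to meter names using the class list from this card's metadata. A minimal sketch:

```python
# Meter class names as listed in this card's metadata, indexed 0-13.
METERS = [
    "saree", "kamel", "mutakareb", "mutadarak", "munsareh", "madeed",
    "mujtath", "ramal", "baseet", "khafeef", "taweel", "wafer",
    "hazaj", "rajaz",
]

def label_to_meter(label):
    """Map an integer label from the dataset to its meter name."""
    return METERS[label]
```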
### Supported Tasks and Leaderboards
The dataset was published in this [paper](https://www.sciencedirect.com/science/article/pii/S2352340920313792). A benchmark was achieved in this [paper](https://www.sciencedirect.com/science/article/pii/S016786552030204X).
### Languages
The dataset is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises a verse (part of a poem) and a label, which is one of 14 classes.
### Data Fields
- `text`: the verse text as a string.
- `label`: the meter class, an integer from 0 to 13.
### Data Splits
The data is split into training and test sets. The split is organized as follows:
| | train | test |
|------------|-------:|------:|
| data split | 47,124 | 8,316 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
The dataset was collected from [Aldiwan](https://www.aldiwan.net/).
#### Who are the source language producers?
The poems are from different poets.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
```
@article{metrec2020,
title={MetRec: A dataset for meter classification of arabic poetry},
author={Al-shaibani, Maged S and Alyafeai, Zaid and Ahmad, Irfan},
journal={Data in Brief},
year={2020},
publisher={Elsevier}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset. |
miam | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- de
- en
- es
- fr
- it
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- dialogue-modeling
- language-modeling
- masked-language-modeling
pretty_name: MIAM
configs:
- dihana
- ilisten
- loria
- maptask
- vm2
tags:
- dialogue-act-classification
dataset_info:
- config_name: dihana
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Dialogue_ID
dtype: string
- name: File_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': Afirmacion
'1': Apertura
'2': Cierre
'3': Confirmacion
'4': Espera
'5': Indefinida
'6': Negacion
'7': No_entendido
'8': Nueva_consulta
'9': Pregunta
'10': Respuesta
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 1946735
num_examples: 19063
- name: validation
num_bytes: 216498
num_examples: 2123
- name: test
num_bytes: 238446
num_examples: 2361
download_size: 1777267
dataset_size: 2401679
- config_name: ilisten
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Dialogue_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': AGREE
'1': ANSWER
'2': CLOSING
'3': ENCOURAGE-SORRY
'4': GENERIC-ANSWER
'5': INFO-REQUEST
'6': KIND-ATTITUDE_SMALL-TALK
'7': OFFER-GIVE-INFO
'8': OPENING
'9': PERSUASION-SUGGEST
'10': QUESTION
'11': REJECT
'12': SOLICITATION-REQ_CLARIFICATION
'13': STATEMENT
'14': TALK-ABOUT-SELF
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 244336
num_examples: 1986
- name: validation
num_bytes: 33988
num_examples: 230
- name: test
num_bytes: 145376
num_examples: 971
download_size: 349993
dataset_size: 423700
- config_name: loria
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Dialogue_ID
dtype: string
- name: File_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': ack
'1': ask
'2': find_mold
'3': find_plans
'4': first_step
'5': greet
'6': help
'7': inform
'8': inform_engine
'9': inform_job
'10': inform_material_space
'11': informer_conditioner
'12': informer_decoration
'13': informer_elcomps
'14': informer_end_manufacturing
'15': kindAtt
'16': manufacturing_reqs
'17': next_step
'18': 'no'
'19': other
'20': quality_control
'21': quit
'22': reqRep
'23': security_policies
'24': staff_enterprise
'25': staff_job
'26': studies_enterprise
'27': studies_job
'28': todo_failure
'29': todo_irreparable
'30': 'yes'
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 1208730
num_examples: 8465
- name: validation
num_bytes: 133829
num_examples: 942
- name: test
num_bytes: 149855
num_examples: 1047
download_size: 1221132
dataset_size: 1492414
- config_name: maptask
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Dialogue_ID
dtype: string
- name: File_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': acknowledge
'1': align
'2': check
'3': clarify
'4': explain
'5': instruct
'6': query_w
'7': query_yn
'8': ready
'9': reply_n
'10': reply_w
'11': reply_y
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 1910120
num_examples: 25382
- name: validation
num_bytes: 389879
num_examples: 5221
- name: test
num_bytes: 396947
num_examples: 5335
download_size: 1729021
dataset_size: 2696946
- config_name: vm2
features:
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Speaker
dtype: string
- name: Dialogue_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': ACCEPT
'1': BACKCHANNEL
'2': BYE
'3': CLARIFY
'4': CLOSE
'5': COMMIT
'6': CONFIRM
'7': DEFER
'8': DELIBERATE
'9': DEVIATE_SCENARIO
'10': EXCLUDE
'11': EXPLAINED_REJECT
'12': FEEDBACK
'13': FEEDBACK_NEGATIVE
'14': FEEDBACK_POSITIVE
'15': GIVE_REASON
'16': GREET
'17': INFORM
'18': INIT
'19': INTRODUCE
'20': NOT_CLASSIFIABLE
'21': OFFER
'22': POLITENESS_FORMULA
'23': REJECT
'24': REQUEST
'25': REQUEST_CLARIFY
'26': REQUEST_COMMENT
'27': REQUEST_COMMIT
'28': REQUEST_SUGGEST
'29': SUGGEST
'30': THANK
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 1869254
num_examples: 25060
- name: validation
num_bytes: 209390
num_examples: 2860
- name: test
num_bytes: 209032
num_examples: 2855
download_size: 1641453
dataset_size: 2287676
---
# Dataset Card for MIAM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [N/A]
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [N/A]
### Dataset Summary
Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and
analyzing natural language understanding systems specifically designed for spoken language. Datasets
are in English, French, German, Italian and Spanish. They cover a variety of domains including
spontaneous speech, scripted scenarios, and joint task completion. All datasets contain dialogue act
labels.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English, French, German, Italian, Spanish.
## Dataset Structure
### Data Instances
#### Dihana Corpus
For the `dihana` configuration one example from the dataset is:
```
{
'Speaker': 'U',
'Utterance': 'Hola , quería obtener el horario para ir a Valencia',
'Dialogue_Act': 9, # 'Pregunta' ('Request')
'Dialogue_ID': '0',
'File_ID': 'B209_BA5c3',
}
```
#### iLISTEN Corpus
For the `ilisten` configuration one example from the dataset is:
```
{
'Speaker': 'T_11_U11',
'Utterance': 'ok, grazie per le informazioni',
'Dialogue_Act': 6, # 'KIND-ATTITUDE_SMALL-TALK'
'Dialogue_ID': '0',
}
```
#### LORIA Corpus
For the `loria` configuration one example from the dataset is:
```
{
'Speaker': 'Samir',
'Utterance': 'Merci de votre visite, bonne chance, et à la prochaine !',
'Dialogue_Act': 21, # 'quit'
'Dialogue_ID': '5',
'File_ID': 'Dial_20111128_113927',
}
```
#### HCRC MapTask Corpus
For the `maptask` configuration one example from the dataset is:
```
{
'Speaker': 'f',
'Utterance': 'is it underneath the rope bridge or to the left',
'Dialogue_Act': 6, # 'query_w'
'Dialogue_ID': '0',
'File_ID': 'q4ec1',
}
```
#### VERBMOBIL
For the `vm2` configuration one example from the dataset is:
```
{
'Utterance': 'ja was sind viereinhalb Stunden Bahngerüttel gegen siebzig Minuten Turbulenzen im Flugzeug',
'Dialogue_Act': 17, # 'INFORM'
'Speaker': 'A',
'Dialogue_ID': '66',
}
```
### Data Fields
For the `dihana` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of 'Afirmacion' (0) [Feedback_positive], 'Apertura' (1) [Opening], 'Cierre' (2) [Closing], 'Confirmacion' (3) [Acknowledge], 'Espera' (4) [Hold], 'Indefinida' (5) [Undefined], 'Negacion' (6) [Feedback_negative], 'No_entendido' (7) [Request_clarify], 'Nueva_consulta' (8) [New_request], 'Pregunta' (9) [Request] or 'Respuesta' (10) [Reply].
- `Dialogue_ID`: identifier of the dialogue as a string.
- `File_ID`: identifier of the source file as a string.
For the `ilisten` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of 'AGREE' (0), 'ANSWER' (1), 'CLOSING' (2), 'ENCOURAGE-SORRY' (3), 'GENERIC-ANSWER' (4), 'INFO-REQUEST' (5), 'KIND-ATTITUDE_SMALL-TALK' (6), 'OFFER-GIVE-INFO' (7), 'OPENING' (8), 'PERSUASION-SUGGEST' (9), 'QUESTION' (10), 'REJECT' (11), 'SOLICITATION-REQ_CLARIFICATION' (12), 'STATEMENT' (13) or 'TALK-ABOUT-SELF' (14).
- `Dialogue_ID`: identifier of the dialogue as a string.
For the `loria` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of 'ack' (0), 'ask' (1), 'find_mold' (2), 'find_plans' (3), 'first_step' (4), 'greet' (5), 'help' (6), 'inform' (7), 'inform_engine' (8), 'inform_job' (9), 'inform_material_space' (10), 'informer_conditioner' (11), 'informer_decoration' (12), 'informer_elcomps' (13), 'informer_end_manufacturing' (14), 'kindAtt' (15), 'manufacturing_reqs' (16), 'next_step' (17), 'no' (18), 'other' (19), 'quality_control' (20), 'quit' (21), 'reqRep' (22), 'security_policies' (23), 'staff_enterprise' (24), 'staff_job' (25), 'studies_enterprise' (26), 'studies_job' (27), 'todo_failure' (28), 'todo_irreparable' (29), 'yes' (30)
- `Dialogue_ID`: identifier of the dialogue as a string.
- `File_ID`: identifier of the source file as a string.
For the `maptask` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of 'acknowledge' (0), 'align' (1), 'check' (2), 'clarify' (3), 'explain' (4), 'instruct' (5), 'query_w' (6), 'query_yn' (7), 'ready' (8), 'reply_n' (9), 'reply_w' (10) or 'reply_y' (11).
- `Dialogue_ID`: identifier of the dialogue as a string.
- `File_ID`: identifier of the source file as a string.
For the `vm2` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialogue act label of the utterance. It can be one of 'ACCEPT' (0), 'BACKCHANNEL' (1), 'BYE' (2), 'CLARIFY' (3), 'CLOSE' (4), 'COMMIT' (5), 'CONFIRM' (6), 'DEFER' (7), 'DELIBERATE' (8), 'DEVIATE_SCENARIO' (9), 'EXCLUDE' (10), 'EXPLAINED_REJECT' (11), 'FEEDBACK' (12), 'FEEDBACK_NEGATIVE' (13), 'FEEDBACK_POSITIVE' (14), 'GIVE_REASON' (15), 'GREET' (16), 'INFORM' (17), 'INIT' (18), 'INTRODUCE' (19), 'NOT_CLASSIFIABLE' (20), 'OFFER' (21), 'POLITENESS_FORMULA' (22), 'REJECT' (23), 'REQUEST' (24), 'REQUEST_CLARIFY' (25), 'REQUEST_COMMENT' (26), 'REQUEST_COMMIT' (27), 'REQUEST_SUGGEST' (28), 'SUGGEST' (29), 'THANK' (30).
- `Speaker`: Speaker as a string.
- `Dialogue_ID`: identifier of the dialogue as a string.
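The integer `Label` fields map back to dialogue-act names via the class lists above. For the `maptask` configuration this is a plain lookup (equivalent to what `datasets.ClassLabel.int2str` provides when loading with the `datasets` library):

```python
# Dialogue-act names for the `maptask` configuration, in label order (from this card).
MAPTASK_ACTS = [
    "acknowledge", "align", "check", "clarify", "explain", "instruct",
    "query_w", "query_yn", "ready", "reply_n", "reply_w", "reply_y",
]

def label_to_act(label):
    """Map a `maptask` integer Label to its dialogue-act name."""
    return MAPTASK_ACTS[label]
```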
### Data Splits
| Dataset name | Train | Valid | Test |
| ------------ | ----- | ----- | ---- |
| dihana | 19063 | 2123 | 2361 |
| ilisten | 1986 | 230 | 971 |
| loria | 8465 | 942 | 1047 |
| maptask | 25382 | 5221 | 5335 |
| vm2 | 25060 | 2860 | 2855 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Anonymous.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@inproceedings{colombo-etal-2021-code,
title = "Code-switched inspired losses for spoken dialog representations",
author = "Colombo, Pierre and
Chapuis, Emile and
Labeau, Matthieu and
Clavel, Chlo{\'e}",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.656",
doi = "10.18653/v1/2021.emnlp-main.656",
pages = "8320--8337",
abstract = "Spoken dialogue systems need to be able to handle both multiple languages and multilinguality inside a conversation (\textit{e.g} in case of code-switching). In this work, we introduce new pretraining losses tailored to learn generic multilingual spoken dialogue representations. The goal of these losses is to expose the model to code-switched language. In order to scale up training, we automatically build a pretraining corpus composed of multilingual conversations in five different languages (French, Italian, English, German and Spanish) from OpenSubtitles, a huge multilingual corpus composed of 24.3G tokens. We test the generic representations on MIAM, a new benchmark composed of five dialogue act corpora on the same aforementioned languages as well as on two novel multilingual tasks (\textit{i.e} multilingual mask utterance retrieval and multilingual inconsistency identification). Our experiments show that our new losses achieve a better performance in both monolingual and multilingual settings.",
}
```
### Contributions
Thanks to [@eusip](https://github.com/eusip) and [@PierreColombo](https://github.com/PierreColombo) for adding this dataset. |
mkb | ---
task_categories:
- text-generation
- fill-mask
multilinguality:
- translation
task_ids:
- language-modeling
- masked-language-modeling
language:
- bn
- en
- gu
- hi
- ml
- mr
- or
- pa
- ta
- te
- ur
annotations_creators:
- no-annotation
source_datasets:
- original
size_categories:
- 1K<n<10K
- n<1K
license:
- cc-by-4.0
paperswithcode_id: null
pretty_name: CVIT MKB
configs:
- bn-en
- bn-gu
- bn-hi
- bn-ml
- bn-mr
- bn-or
- bn-ta
- bn-te
- bn-ur
- en-gu
- en-hi
- en-ml
- en-mr
- en-or
- en-ta
- en-te
- en-ur
- gu-hi
- gu-ml
- gu-mr
- gu-or
- gu-ta
- gu-te
- gu-ur
- hi-ml
- hi-mr
- hi-or
- hi-ta
- hi-te
- hi-ur
- ml-mr
- ml-or
- ml-ta
- ml-te
- ml-ur
- mr-or
- mr-ta
- mr-te
- mr-ur
- or-ta
- or-te
- or-ur
- ta-te
- ta-ur
- te-ur
dataset_info:
- config_name: or-ur
features:
- name: translation
dtype:
translation:
languages:
- or
- ur
splits:
- name: train
num_bytes: 39336
num_examples: 98
download_size: 52428800
dataset_size: 39336
- config_name: ml-or
features:
- name: translation
dtype:
translation:
languages:
- ml
- or
splits:
- name: train
num_bytes: 224084
num_examples: 427
download_size: 52428800
dataset_size: 224084
- config_name: bn-ta
features:
- name: translation
dtype:
translation:
languages:
- bn
- ta
splits:
- name: train
num_bytes: 2020506
num_examples: 3460
download_size: 52428800
dataset_size: 2020506
- config_name: gu-mr
features:
- name: translation
dtype:
translation:
languages:
- gu
- mr
splits:
- name: train
num_bytes: 1818018
num_examples: 3658
download_size: 52428800
dataset_size: 1818018
- config_name: hi-or
features:
- name: translation
dtype:
translation:
languages:
- hi
- or
splits:
- name: train
num_bytes: 188779
num_examples: 389
download_size: 52428800
dataset_size: 188779
- config_name: en-or
features:
- name: translation
dtype:
translation:
languages:
- en
- or
splits:
- name: train
num_bytes: 276520
num_examples: 768
download_size: 52428800
dataset_size: 276520
- config_name: mr-ur
features:
- name: translation
dtype:
translation:
languages:
- mr
- ur
splits:
- name: train
num_bytes: 225305
num_examples: 490
download_size: 52428800
dataset_size: 225305
- config_name: en-ta
features:
- name: translation
dtype:
translation:
languages:
- en
- ta
splits:
- name: train
num_bytes: 2578828
num_examples: 5744
download_size: 52428800
dataset_size: 2578828
- config_name: hi-ta
features:
- name: translation
dtype:
translation:
languages:
- hi
- ta
splits:
- name: train
num_bytes: 1583237
num_examples: 2761
download_size: 52428800
dataset_size: 1583237
- config_name: bn-en
features:
- name: translation
dtype:
translation:
languages:
- bn
- en
splits:
- name: train
num_bytes: 2001834
num_examples: 5634
download_size: 52428800
dataset_size: 2001834
- config_name: bn-or
features:
- name: translation
dtype:
translation:
languages:
- bn
- or
splits:
- name: train
num_bytes: 220893
num_examples: 447
download_size: 52428800
dataset_size: 220893
- config_name: ml-ta
features:
- name: translation
dtype:
translation:
languages:
- ml
- ta
splits:
- name: train
num_bytes: 1958818
num_examples: 3124
download_size: 52428800
dataset_size: 1958818
- config_name: gu-ur
features:
- name: translation
dtype:
translation:
languages:
- gu
- ur
splits:
- name: train
num_bytes: 311082
num_examples: 749
download_size: 52428800
dataset_size: 311082
- config_name: bn-ml
features:
- name: translation
dtype:
translation:
languages:
- bn
- ml
splits:
- name: train
num_bytes: 1587528
num_examples: 2938
download_size: 52428800
dataset_size: 1587528
- config_name: bn-hi
features:
- name: translation
dtype:
translation:
languages:
- bn
- hi
splits:
- name: train
num_bytes: 1298611
num_examples: 2706
download_size: 52428800
dataset_size: 1298611
- config_name: gu-te
features:
- name: translation
dtype:
translation:
languages:
- gu
- te
splits:
- name: train
num_bytes: 1669386
num_examples: 3528
download_size: 52428800
dataset_size: 1669386
- config_name: hi-ml
features:
- name: translation
dtype:
translation:
languages:
- hi
- ml
splits:
- name: train
num_bytes: 1208956
num_examples: 2305
download_size: 52428800
dataset_size: 1208956
- config_name: or-te
features:
- name: translation
dtype:
translation:
languages:
- or
- te
splits:
- name: train
num_bytes: 209457
num_examples: 440
download_size: 52428800
dataset_size: 209457
- config_name: en-ml
features:
- name: translation
dtype:
translation:
languages:
- en
- ml
splits:
- name: train
num_bytes: 2007061
num_examples: 5017
download_size: 52428800
dataset_size: 2007061
- config_name: en-hi
features:
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 1865430
num_examples: 5272
download_size: 52428800
dataset_size: 1865430
- config_name: mr-te
features:
- name: translation
dtype:
translation:
languages:
- mr
- te
splits:
- name: train
num_bytes: 1434444
num_examples: 2839
download_size: 52428800
dataset_size: 1434444
- config_name: bn-te
features:
- name: translation
dtype:
translation:
languages:
- bn
- te
splits:
- name: train
num_bytes: 1431096
num_examples: 2939
download_size: 52428800
dataset_size: 1431096
- config_name: gu-hi
features:
- name: translation
dtype:
translation:
languages:
- gu
- hi
splits:
- name: train
num_bytes: 1521174
num_examples: 3213
download_size: 52428800
dataset_size: 1521174
- config_name: ta-ur
features:
- name: translation
dtype:
translation:
languages:
- ta
- ur
splits:
- name: train
num_bytes: 329809
num_examples: 637
download_size: 52428800
dataset_size: 329809
- config_name: te-ur
features:
- name: translation
dtype:
translation:
languages:
- te
- ur
splits:
- name: train
num_bytes: 254581
num_examples: 599
download_size: 52428800
dataset_size: 254581
- config_name: gu-ml
features:
- name: translation
dtype:
translation:
languages:
- gu
- ml
splits:
- name: train
num_bytes: 1822865
num_examples: 3469
download_size: 52428800
dataset_size: 1822865
- config_name: hi-te
features:
- name: translation
dtype:
translation:
languages:
- hi
- te
splits:
- name: train
num_bytes: 1078371
num_examples: 2289
download_size: 52428800
dataset_size: 1078371
- config_name: en-te
features:
- name: translation
dtype:
translation:
languages:
- en
- te
splits:
- name: train
num_bytes: 1784517
num_examples: 5177
download_size: 52428800
dataset_size: 1784517
- config_name: ml-te
features:
- name: translation
dtype:
translation:
languages:
- ml
- te
splits:
- name: train
num_bytes: 1556164
num_examples: 2898
download_size: 52428800
dataset_size: 1556164
- config_name: hi-ur
features:
- name: translation
dtype:
translation:
languages:
- hi
- ur
splits:
- name: train
num_bytes: 313360
num_examples: 742
download_size: 52428800
dataset_size: 313360
- config_name: mr-or
features:
- name: translation
dtype:
translation:
languages:
- mr
- or
splits:
- name: train
num_bytes: 219193
num_examples: 432
download_size: 52428800
dataset_size: 219193
- config_name: en-ur
features:
- name: translation
dtype:
translation:
languages:
- en
- ur
splits:
- name: train
num_bytes: 289419
num_examples: 1019
download_size: 52428800
dataset_size: 289419
- config_name: ml-ur
features:
- name: translation
dtype:
translation:
languages:
- ml
- ur
splits:
- name: train
num_bytes: 295806
num_examples: 624
download_size: 52428800
dataset_size: 295806
- config_name: bn-mr
features:
- name: translation
dtype:
translation:
languages:
- bn
- mr
splits:
- name: train
num_bytes: 1554154
num_examples: 3054
download_size: 52428800
dataset_size: 1554154
- config_name: gu-ta
features:
- name: translation
dtype:
translation:
languages:
- gu
- ta
splits:
- name: train
num_bytes: 2284643
num_examples: 3998
download_size: 52428800
dataset_size: 2284643
- config_name: bn-gu
features:
- name: translation
dtype:
translation:
languages:
- bn
- gu
splits:
- name: train
num_bytes: 1840059
num_examples: 3810
download_size: 52428800
dataset_size: 1840059
- config_name: bn-ur
features:
- name: translation
dtype:
translation:
languages:
- bn
- ur
splits:
- name: train
num_bytes: 234561
num_examples: 559
download_size: 52428800
dataset_size: 234561
- config_name: ml-mr
features:
- name: translation
dtype:
translation:
languages:
- ml
- mr
splits:
- name: train
num_bytes: 1568672
num_examples: 2803
download_size: 52428800
dataset_size: 1568672
- config_name: or-ta
features:
- name: translation
dtype:
translation:
languages:
- or
- ta
splits:
- name: train
num_bytes: 267193
num_examples: 470
download_size: 52428800
dataset_size: 267193
- config_name: ta-te
features:
- name: translation
dtype:
translation:
languages:
- ta
- te
splits:
- name: train
num_bytes: 1773728
num_examples: 3100
download_size: 52428800
dataset_size: 1773728
- config_name: gu-or
features:
- name: translation
dtype:
translation:
languages:
- gu
- or
splits:
- name: train
num_bytes: 256362
num_examples: 541
download_size: 52428800
dataset_size: 256362
- config_name: en-gu
features:
- name: translation
dtype:
translation:
languages:
- en
- gu
splits:
- name: train
num_bytes: 2318080
num_examples: 6615
download_size: 52428800
dataset_size: 2318080
- config_name: hi-mr
features:
- name: translation
dtype:
translation:
languages:
- hi
- mr
splits:
- name: train
num_bytes: 1243583
num_examples: 2491
download_size: 52428800
dataset_size: 1243583
- config_name: mr-ta
features:
- name: translation
dtype:
translation:
languages:
- mr
- ta
splits:
- name: train
num_bytes: 1906073
num_examples: 3175
download_size: 52428800
dataset_size: 1906073
- config_name: en-mr
features:
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: train
num_bytes: 2140298
num_examples: 5867
download_size: 52428800
dataset_size: 2140298
---
# Dataset Card for CVIT MKB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Link](http://preon.iiit.ac.in/~jerin/bhasha/)
- **Repository:**
- **Paper:** [ARXIV](https://arxiv.org/abs/2007.07691)
- **Leaderboard:**
- **Point of Contact:** [email](cvit-bhasha@googlegroups.com)
### Dataset Summary
Translations of the Indian Prime Minister's Mann Ki Baat speeches, broadcast on All India Radio, into many Indian languages.
### Supported Tasks and Leaderboards
[MORE INFORMATION NEEDED]
### Languages
Hindi, Telugu, Tamil, Malayalam, Gujarati, Urdu, Bengali, Oriya, Marathi, Punjabi, and English
## Dataset Structure
### Data Instances
[MORE INFORMATION NEEDED]
### Data Fields
- `src_tag`: `string`, text in the source language
- `tgt_tag`: `string`, translation of the source text in the target language
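As the config list above shows, each config pairs two language codes under a `translation` feature. The sketch below illustrates the assumed record shape and how the source/target fields map onto it; the record itself is illustrative, not taken from the corpus.

```python
# A minimal sketch of one record in an "en-hi" style config, assuming the
# `translation` feature maps language codes to text. The sentence pair below
# is a made-up illustration, not an actual corpus example.
record = {
    "translation": {
        "en": "My dear countrymen, greetings.",
        "hi": "मेरे प्यारे देशवासियो, नमस्कार।",
    }
}

src_lang, tgt_lang = "en", "hi"
src_text = record["translation"][src_lang]  # text in the source language
tgt_text = record["translation"][tgt_lang]  # its translation in the target language
print(src_text, "->", tgt_text)
```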
### Data Splits
[MORE INFORMATION NEEDED]
## Dataset Creation
### Curation Rationale
[MORE INFORMATION NEEDED]
### Source Data
[MORE INFORMATION NEEDED]
#### Initial Data Collection and Normalization
[MORE INFORMATION NEEDED]
#### Who are the source language producers?
[MORE INFORMATION NEEDED]
### Annotations
#### Annotation process
[MORE INFORMATION NEEDED]
#### Who are the annotators?
[MORE INFORMATION NEEDED]
### Personal and Sensitive Information
[MORE INFORMATION NEEDED]
## Considerations for Using the Data
### Social Impact of Dataset
[MORE INFORMATION NEEDED]
### Discussion of Biases
[MORE INFORMATION NEEDED]
### Other Known Limitations
[MORE INFORMATION NEEDED]
## Additional Information
### Dataset Curators
[MORE INFORMATION NEEDED]
### Licensing Information
The datasets and pretrained models provided here are licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
### Citation Information
```
@misc{siripragada2020multilingual,
title={A Multilingual Parallel Corpora Collection Effort for Indian Languages},
author={Shashank Siripragada and Jerin Philip and Vinay P. Namboodiri and C V Jawahar},
year={2020},
eprint={2007.07691},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset. |
mkqa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ar
- da
- de
- en
- es
- fi
- fr
- he
- hu
- it
- ja
- km
- ko
- ms
- nl
- 'no'
- pl
- pt
- ru
- sv
- th
- tr
- vi
- zh
license:
- cc-by-3.0
multilinguality:
- multilingual
- translation
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: mkqa
pretty_name: Multilingual Knowledge Questions and Answers
dataset_info:
features:
- name: example_id
dtype: string
- name: queries
struct:
- name: ar
dtype: string
- name: da
dtype: string
- name: de
dtype: string
- name: en
dtype: string
- name: es
dtype: string
- name: fi
dtype: string
- name: fr
dtype: string
- name: he
dtype: string
- name: hu
dtype: string
- name: it
dtype: string
- name: ja
dtype: string
- name: ko
dtype: string
- name: km
dtype: string
- name: ms
dtype: string
- name: nl
dtype: string
- name: 'no'
dtype: string
- name: pl
dtype: string
- name: pt
dtype: string
- name: ru
dtype: string
- name: sv
dtype: string
- name: th
dtype: string
- name: tr
dtype: string
- name: vi
dtype: string
- name: zh_cn
dtype: string
- name: zh_hk
dtype: string
- name: zh_tw
dtype: string
- name: query
dtype: string
- name: answers
struct:
- name: ar
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: da
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: de
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: en
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: es
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: fi
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: fr
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: he
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: hu
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: it
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ja
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ko
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: km
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ms
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: nl
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: 'no'
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: pl
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: pt
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ru
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: sv
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: th
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: tr
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: vi
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_cn
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_hk
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_tw
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
config_name: mkqa
splits:
- name: train
num_bytes: 36005650
num_examples: 10000
download_size: 11903948
dataset_size: 36005650
---
# Dataset Card for MKQA: Multilingual Knowledge Questions & Answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [apple/ml-mkqa](https://github.com/apple/ml-mkqa/)
- **Paper:** [arXiv](https://arxiv.org/abs/2007.15207)
### Dataset Summary
MKQA contains 10,000 queries sampled from the [Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions).
For each query we collect new passage-independent answers.
These queries and answers are then human-translated into 25 non-English languages.
### Supported Tasks and Leaderboards
`question-answering`
### Languages
| Language code | Language name |
|---------------|---------------|
| `ar` | `Arabic` |
| `da` | `Danish` |
| `de` | `German` |
| `en` | `English` |
| `es` | `Spanish` |
| `fi` | `Finnish` |
| `fr` | `French` |
| `he` | `Hebrew` |
| `hu` | `Hungarian` |
| `it` | `Italian` |
| `ja` | `Japanese` |
| `ko` | `Korean` |
| `km` | `Khmer` |
| `ms` | `Malay` |
| `nl` | `Dutch` |
| `no` | `Norwegian` |
| `pl` | `Polish` |
| `pt` | `Portuguese` |
| `ru` | `Russian` |
| `sv` | `Swedish` |
| `th` | `Thai` |
| `tr` | `Turkish` |
| `vi` | `Vietnamese` |
| `zh_cn` | `Chinese (Simplified)` |
| `zh_hk` | `Chinese (Hong Kong)` |
| `zh_tw` | `Chinese (Traditional)` |
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
'example_id': 563260143484355911,
'queries': {
'en': "who sings i hear you knocking but you can't come in",
'ru': "кто поет i hear you knocking but you can't come in",
'ja': '「 I hear you knocking」は誰が歌っていますか',
'zh_cn': "《i hear you knocking but you can't come in》是谁演唱的",
...
},
'query': "who sings i hear you knocking but you can't come in",
'answers': {'en': [{'type': 'entity',
'entity': 'Q545186',
'text': 'Dave Edmunds',
'aliases': []}],
'ru': [{'type': 'entity',
'entity': 'Q545186',
'text': 'Эдмундс, Дэйв',
'aliases': ['Эдмундс', 'Дэйв Эдмундс', 'Эдмундс Дэйв', 'Dave Edmunds']}],
'ja': [{'type': 'entity',
'entity': 'Q545186',
'text': 'デイヴ・エドモンズ',
'aliases': ['デーブ・エドモンズ', 'デイブ・エドモンズ']}],
'zh_cn': [{'type': 'entity', 'text': '戴维·埃德蒙兹 ', 'entity': 'Q545186'}],
...
},
}
```
### Data Fields
Each example in the dataset contains the unique Natural Questions `example_id`, the original English `query`, and then `queries` and `answers` in 26 languages.
Each answer is labelled with an answer type. The breakdown is:
| Answer Type | Occurrence |
|---------------|---------------|
| `entity` | `4221` |
| `long_answer` | `1815` |
| `unanswerable` | `1427` |
| `date` | `1174` |
| `number` | `485` |
| `number_with_unit` | `394` |
| `short_phrase` | `346` |
| `binary` | `138` |
For each language, there can be more than one acceptable textual answer, in order to capture the variety of valid phrasings.
A detailed explanation of the fields is available [here](https://github.com/apple/ml-mkqa/#dataset).
When the `entity` field is not available, it is set to an empty string `''`.
When the `aliases` field is not available, it is set to an empty list `[]`.
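The field conventions above can be put together in a small sketch (not the official evaluation code) that collects every acceptable answer string for one language from an MKQA-style record, treating a missing `entity` as `''` and missing `aliases` as `[]`. The `sample_answers` dict is a trimmed copy of the data instance shown earlier.

```python
# Collect all acceptable textual answers for one language from an
# MKQA-style `answers` struct: each annotated `text` plus its aliases.
sample_answers = {
    "en": [{"type": "entity", "entity": "Q545186",
            "text": "Dave Edmunds", "aliases": []}],
    "ru": [{"type": "entity", "entity": "Q545186",
            "text": "Эдмундс, Дэйв",
            "aliases": ["Эдмундс", "Дэйв Эдмундс", "Эдмундс Дэйв", "Dave Edmunds"]}],
}

def acceptable_texts(answers, lang):
    """Return every valid answer string for `lang`."""
    texts = []
    for answer in answers.get(lang, []):
        if answer.get("text"):          # skip answers with no textual form
            texts.append(answer["text"])
        texts.extend(answer.get("aliases", []))  # [] when no aliases annotated
    return texts

print(acceptable_texts(sample_answers, "ru"))
```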
### Data Splits
- Train: 10000
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-SA 3.0](https://github.com/apple/ml-mkqa#license)
### Citation Information
```
@misc{mkqa,
title = {MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering},
author = {Shayne Longpre and Yi Lu and Joachim Daiber},
year = {2020},
URL = {https://arxiv.org/pdf/2007.15207.pdf}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
mlqa | ---
pretty_name: MLQA (MultiLingual Question Answering)
language:
- en
- de
- es
- ar
- zh
- vi
- hi
license:
- cc-by-sa-3.0
source_datasets:
- original
size_categories:
- 10K<n<100K
language_creators:
- crowdsourced
annotations_creators:
- crowdsourced
multilinguality:
- multilingual
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: mlqa
dataset_info:
- config_name: mlqa-translate-train.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 101227245
num_examples: 78058
- name: validation
num_bytes: 13144332
num_examples: 9512
download_size: 63364123
dataset_size: 114371577
- config_name: mlqa-translate-train.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 77996825
num_examples: 80069
- name: validation
num_bytes: 10322113
num_examples: 9927
download_size: 63364123
dataset_size: 88318938
- config_name: mlqa-translate-train.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97387431
num_examples: 84816
- name: validation
num_bytes: 12731112
num_examples: 10356
download_size: 63364123
dataset_size: 110118543
- config_name: mlqa-translate-train.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55143547
num_examples: 76285
- name: validation
num_bytes: 7418070
num_examples: 9568
download_size: 63364123
dataset_size: 62561617
- config_name: mlqa-translate-train.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 80789653
num_examples: 81810
- name: validation
num_bytes: 10718376
num_examples: 10123
download_size: 63364123
dataset_size: 91508029
- config_name: mlqa-translate-train.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 168117671
num_examples: 82451
- name: validation
num_bytes: 22422152
num_examples: 10253
download_size: 63364123
dataset_size: 190539823
- config_name: mlqa-translate-test.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 5484467
num_examples: 5335
download_size: 10075488
dataset_size: 5484467
- config_name: mlqa-translate-test.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3884332
num_examples: 4517
download_size: 10075488
dataset_size: 3884332
- config_name: mlqa-translate-test.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 5998327
num_examples: 5495
download_size: 10075488
dataset_size: 5998327
- config_name: mlqa-translate-test.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4831704
num_examples: 5137
download_size: 10075488
dataset_size: 4831704
- config_name: mlqa-translate-test.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3916758
num_examples: 5253
download_size: 10075488
dataset_size: 3916758
- config_name: mlqa-translate-test.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4608811
num_examples: 4918
download_size: 10075488
dataset_size: 4608811
- config_name: mlqa.ar.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8216837
num_examples: 5335
- name: validation
num_bytes: 808830
num_examples: 517
download_size: 75719050
dataset_size: 9025667
- config_name: mlqa.ar.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2132247
num_examples: 1649
- name: validation
num_bytes: 358554
num_examples: 207
download_size: 75719050
dataset_size: 2490801
- config_name: mlqa.ar.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3235363
num_examples: 2047
- name: validation
num_bytes: 283834
num_examples: 163
download_size: 75719050
dataset_size: 3519197
- config_name: mlqa.ar.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3175660
num_examples: 1912
- name: validation
num_bytes: 334016
num_examples: 188
download_size: 75719050
dataset_size: 3509676
- config_name: mlqa.ar.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8074057
num_examples: 5335
- name: validation
num_bytes: 794775
num_examples: 517
download_size: 75719050
dataset_size: 8868832
- config_name: mlqa.ar.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2981237
num_examples: 1978
- name: validation
num_bytes: 223188
num_examples: 161
download_size: 75719050
dataset_size: 3204425
- config_name: mlqa.ar.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2993225
num_examples: 1831
- name: validation
num_bytes: 276727
num_examples: 186
download_size: 75719050
dataset_size: 3269952
- config_name: mlqa.de.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1587005
num_examples: 1649
- name: validation
num_bytes: 195822
num_examples: 207
download_size: 75719050
dataset_size: 1782827
- config_name: mlqa.de.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4274496
num_examples: 4517
- name: validation
num_bytes: 477366
num_examples: 512
download_size: 75719050
dataset_size: 4751862
- config_name: mlqa.de.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1654540
num_examples: 1675
- name: validation
num_bytes: 211985
num_examples: 182
download_size: 75719050
dataset_size: 1866525
- config_name: mlqa.de.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1645937
num_examples: 1621
- name: validation
num_bytes: 180114
num_examples: 190
download_size: 75719050
dataset_size: 1826051
- config_name: mlqa.de.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4251153
num_examples: 4517
- name: validation
num_bytes: 474863
num_examples: 512
download_size: 75719050
dataset_size: 4726016
- config_name: mlqa.de.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1678176
num_examples: 1776
- name: validation
num_bytes: 166193
num_examples: 196
download_size: 75719050
dataset_size: 1844369
- config_name: mlqa.de.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1343983
num_examples: 1430
- name: validation
num_bytes: 150679
num_examples: 163
download_size: 75719050
dataset_size: 1494662
- config_name: mlqa.vi.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3164094
num_examples: 2047
- name: validation
num_bytes: 226724
num_examples: 163
download_size: 75719050
dataset_size: 3390818
- config_name: mlqa.vi.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2189315
num_examples: 1675
- name: validation
num_bytes: 272794
num_examples: 182
download_size: 75719050
dataset_size: 2462109
- config_name: mlqa.vi.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 7807045
num_examples: 5495
- name: validation
num_bytes: 715291
num_examples: 511
download_size: 75719050
dataset_size: 8522336
- config_name: mlqa.vi.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2947458
num_examples: 1943
- name: validation
num_bytes: 265154
num_examples: 184
download_size: 75719050
dataset_size: 3212612
- config_name: mlqa.vi.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 7727204
num_examples: 5495
- name: validation
num_bytes: 707925
num_examples: 511
download_size: 75719050
dataset_size: 8435129
- config_name: mlqa.vi.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2822481
num_examples: 2018
- name: validation
num_bytes: 279235
num_examples: 189
download_size: 75719050
dataset_size: 3101716
- config_name: mlqa.vi.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2738045
num_examples: 1947
- name: validation
num_bytes: 251470
num_examples: 177
download_size: 75719050
dataset_size: 2989515
- config_name: mlqa.zh.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1697005
num_examples: 1912
- name: validation
num_bytes: 171743
num_examples: 188
download_size: 75719050
dataset_size: 1868748
- config_name: mlqa.zh.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1356268
num_examples: 1621
- name: validation
num_bytes: 170686
num_examples: 190
download_size: 75719050
dataset_size: 1526954
- config_name: mlqa.zh.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1770535
num_examples: 1943
- name: validation
num_bytes: 169651
num_examples: 184
download_size: 75719050
dataset_size: 1940186
- config_name: mlqa.zh.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4324740
num_examples: 5137
- name: validation
num_bytes: 433960
num_examples: 504
download_size: 75719050
dataset_size: 4758700
- config_name: mlqa.zh.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4353361
num_examples: 5137
- name: validation
num_bytes: 437016
num_examples: 504
download_size: 75719050
dataset_size: 4790377
- config_name: mlqa.zh.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1697983
num_examples: 1947
- name: validation
num_bytes: 134693
num_examples: 161
download_size: 75719050
dataset_size: 1832676
- config_name: mlqa.zh.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1547159
num_examples: 1767
- name: validation
num_bytes: 180928
num_examples: 189
download_size: 75719050
dataset_size: 1728087
- config_name: mlqa.en.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6641971
num_examples: 5335
- name: validation
num_bytes: 621075
num_examples: 517
download_size: 75719050
dataset_size: 7263046
- config_name: mlqa.en.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4966262
num_examples: 4517
- name: validation
num_bytes: 584725
num_examples: 512
download_size: 75719050
dataset_size: 5550987
- config_name: mlqa.en.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6958087
num_examples: 5495
- name: validation
num_bytes: 631268
num_examples: 511
download_size: 75719050
dataset_size: 7589355
- config_name: mlqa.en.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6441614
num_examples: 5137
- name: validation
num_bytes: 598772
num_examples: 504
download_size: 75719050
dataset_size: 7040386
- config_name: mlqa.en.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 13787522
num_examples: 11590
- name: validation
num_bytes: 1307399
num_examples: 1148
download_size: 75719050
dataset_size: 15094921
- config_name: mlqa.en.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6074990
num_examples: 5253
- name: validation
num_bytes: 545657
num_examples: 500
download_size: 75719050
dataset_size: 6620647
- config_name: mlqa.en.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6293785
num_examples: 4918
- name: validation
num_bytes: 614223
num_examples: 507
download_size: 75719050
dataset_size: 6908008
- config_name: mlqa.es.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1696778
num_examples: 1978
- name: validation
num_bytes: 145105
num_examples: 161
download_size: 75719050
dataset_size: 1841883
- config_name: mlqa.es.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1361983
num_examples: 1776
- name: validation
num_bytes: 139968
num_examples: 196
download_size: 75719050
dataset_size: 1501951
- config_name: mlqa.es.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1707141
num_examples: 2018
- name: validation
num_bytes: 172801
num_examples: 189
download_size: 75719050
dataset_size: 1879942
- config_name: mlqa.es.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1635294
num_examples: 1947
- name: validation
num_bytes: 122829
num_examples: 161
download_size: 75719050
dataset_size: 1758123
- config_name: mlqa.es.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4249431
num_examples: 5253
- name: validation
num_bytes: 408169
num_examples: 500
download_size: 75719050
dataset_size: 4657600
- config_name: mlqa.es.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4281273
num_examples: 5253
- name: validation
num_bytes: 411196
num_examples: 500
download_size: 75719050
dataset_size: 4692469
- config_name: mlqa.es.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1489611
num_examples: 1723
- name: validation
num_bytes: 178003
num_examples: 187
download_size: 75719050
dataset_size: 1667614
- config_name: mlqa.hi.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4374373
num_examples: 1831
- name: validation
num_bytes: 402817
num_examples: 186
download_size: 75719050
dataset_size: 4777190
- config_name: mlqa.hi.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2961556
num_examples: 1430
- name: validation
num_bytes: 294325
num_examples: 163
download_size: 75719050
dataset_size: 3255881
- config_name: mlqa.hi.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4664436
num_examples: 1947
- name: validation
num_bytes: 411654
num_examples: 177
download_size: 75719050
dataset_size: 5076090
- config_name: mlqa.hi.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4281309
num_examples: 1767
- name: validation
num_bytes: 416192
num_examples: 189
download_size: 75719050
dataset_size: 4697501
- config_name: mlqa.hi.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 11245629
num_examples: 4918
- name: validation
num_bytes: 1076115
num_examples: 507
download_size: 75719050
dataset_size: 12321744
- config_name: mlqa.hi.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3789337
num_examples: 1723
- name: validation
num_bytes: 412469
num_examples: 187
download_size: 75719050
dataset_size: 4201806
- config_name: mlqa.hi.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 11606982
num_examples: 4918
- name: validation
num_bytes: 1115055
num_examples: 507
download_size: 75719050
dataset_size: 12722037
---
# Dataset Card for "mlqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/MLQA](https://github.com/facebookresearch/MLQA)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.15 GB
- **Size of the generated dataset:** 910.01 MB
- **Total amount of disk used:** 5.06 GB
### Dataset Summary
MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance.
MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic,
German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between
4 different languages on average.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
MLQA contains QA instances in 7 languages: English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese.
## Dataset Structure
### Data Instances
#### mlqa-translate-test.ar
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 5.48 MB
- **Total amount of disk used:** 15.56 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.de
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 3.88 MB
- **Total amount of disk used:** 13.96 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.es
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 3.92 MB
- **Total amount of disk used:** 13.99 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.hi
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 4.61 MB
- **Total amount of disk used:** 14.68 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.vi
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 6.00 MB
- **Total amount of disk used:** 16.07 MB
An example of 'test' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### mlqa-translate-test.ar
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.de
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.es
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.hi
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.vi
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
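Since MLQA follows the SQuAD format, `answer_start` is a character offset into `context`, so the answer text can always be recovered by slicing. A minimal sketch (the field names match the schema above; the sample record itself is invented for illustration):

```python
def extract_span(example):
    """Slice the answer text out of the context using its character offset."""
    context = example["context"]
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    span = context[start:start + len(text)]
    assert span == text, "offset and answer text should agree in SQuAD-format data"
    return span

sample = {
    "context": "MLQA was released by Facebook Research in 2019.",
    "question": "Who released MLQA?",
    "answers": {"answer_start": [21], "text": ["Facebook Research"]},
    "id": "example-0",
}
```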
### Data Splits
| name |test|
|----------------------|---:|
|mlqa-translate-test.ar|5335|
|mlqa-translate-test.de|4517|
|mlqa-translate-test.es|5253|
|mlqa-translate-test.hi|4918|
|mlqa-translate-test.vi|5495|
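The cross-lingual configs listed in the metadata above follow the pattern `mlqa.<context_language>.<question_language>` (e.g. `mlqa.de.en` pairs German contexts with English questions). A small helper to decode such names (the function is ours, not part of the `datasets` API, and it only covers the three-part `mlqa.x.y` configs, not the `mlqa-translate-*` ones):

```python
def parse_config(name):
    """Split an MLQA cross-lingual config name into its two language codes."""
    # "mlqa.de.en" -> contexts in German, questions in English
    _, context_lang, question_lang = name.split(".")
    return context_lang, question_lang
```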
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{lewis2019mlqa,
title = {MLQA: Evaluating Cross-lingual Extractive Question Answering},
author = {Lewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger},
journal = {arXiv preprint arXiv:1910.07475},
year = 2019,
  eid = {arXiv:1910.07475}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@M-Salti](https://github.com/M-Salti), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
mlsum | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- es
- fr
- ru
- tr
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|cnn_dailymail
- original
task_categories:
- summarization
- translation
- text-classification
task_ids:
- news-articles-summarization
- multi-class-classification
- multi-label-classification
- topic-classification
paperswithcode_id: mlsum
pretty_name: MLSUM
configs:
- de
- es
- fr
- ru
- tu
dataset_info:
- config_name: de
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 846959840
num_examples: 220887
- name: validation
num_bytes: 47119541
num_examples: 11394
- name: test
num_bytes: 46847612
num_examples: 10701
download_size: 1005814154
dataset_size: 940926993
- config_name: es
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1214558302
num_examples: 266367
- name: validation
num_bytes: 50643400
num_examples: 10358
- name: test
num_bytes: 71263665
num_examples: 13920
download_size: 1456211154
dataset_size: 1336465367
- config_name: fr
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1471965014
num_examples: 392902
- name: validation
num_bytes: 70413212
num_examples: 16059
- name: test
num_bytes: 69660288
num_examples: 15828
download_size: 1849565564
dataset_size: 1612038514
- config_name: ru
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 257389497
num_examples: 25556
- name: validation
num_bytes: 9128497
num_examples: 750
- name: test
num_bytes: 9656398
num_examples: 757
download_size: 766226107
dataset_size: 276174392
- config_name: tu
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 641622783
num_examples: 249277
- name: validation
num_bytes: 25530661
num_examples: 11565
- name: test
num_bytes: 27830212
num_examples: 12775
download_size: 942308960
dataset_size: 694983656
---
# Dataset Card for MLSUM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/recitalAI/MLSUM
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/
- **Point of Contact:** [email](thomas@recital.ai)
- **Size of downloaded dataset files:** 1.83 GB
- **Size of the generated dataset:** 4.86 GB
- **Total amount of disk used:** 6.69 GB
### Dataset Summary
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily Mail dataset, the collected data form a large-scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### de
- **Size of downloaded dataset files:** 346.58 MB
- **Size of the generated dataset:** 940.93 MB
- **Total amount of disk used:** 1.29 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### es
- **Size of downloaded dataset files:** 513.31 MB
- **Size of the generated dataset:** 1.34 GB
- **Total amount of disk used:** 1.85 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### fr
- **Size of downloaded dataset files:** 619.99 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.23 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### ru
- **Size of downloaded dataset files:** 106.22 MB
- **Size of the generated dataset:** 276.17 MB
- **Total amount of disk used:** 382.39 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### tu
- **Size of downloaded dataset files:** 247.50 MB
- **Size of the generated dataset:** 694.99 MB
- **Total amount of disk used:** 942.48 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
### Data Fields
The data fields are the same among all splits.
#### de
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### es
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### fr
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### ru
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### tu
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
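As a quick sanity check when exploring article/summary pairs with this schema, one can compare summary length to article length. The helper and sample record below are illustrative only, not drawn from the dataset:

```python
def compression_ratio(example):
    """Fraction of the article length retained by the summary (in characters)."""
    return len(example["summary"]) / len(example["text"])

record = {
    "text": "A long news article body " * 8,   # 200 characters
    "summary": "A short abstract.",           # 17 characters
    "topic": "economy",
    "url": "https://example.com/article",
    "title": "Sample",
    "date": "01/01/2010",
}
```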
### Data Splits
|name|train |validation|test |
|----|-----:|---------:|----:|
|de |220887| 11394|10701|
|es |266367| 10358|13920|
|fr |392902| 16059|15828|
|ru | 25556| 750| 757|
|tu |249277| 11565|12775|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders. See https://github.com/recitalAI/MLSUM#mlsum
### Citation Information
```
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2004.14900},
year={2020}
}
```
### Contributions
Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
mnist | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-nist
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: mnist
pretty_name: MNIST
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
config_name: mnist
splits:
- name: train
num_bytes: 17470848
num_examples: 60000
- name: test
num_bytes: 2916440
num_examples: 10000
download_size: 11594722
dataset_size: 20387288
---
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the test dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is decoded automatically. Decoding a large number of image files can take a significant amount of time, so it is important to index the sample before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an integer between 0 and 9 representing the digit.
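The indexing-order advice above can be illustrated with a toy stand-in for a dataset whose image column is decoded on access (the class below is a hypothetical sketch for illustration, not the actual `datasets` implementation):

```python
class LazyImageDataset:
    """Toy model of a dataset whose `image` column is decoded on access."""

    def __init__(self, rows):
        self._rows = rows
        self.decode_count = 0  # how many images have been decoded so far

    def _decode(self, raw):
        self.decode_count += 1
        return f"decoded({raw})"

    def __getitem__(self, key):
        if isinstance(key, int):  # row access: decode only this row's image
            row = dict(self._rows[key])
            row["image"] = self._decode(row["image"])
            return row
        # column access: decodes *every* image before returning the column
        return [self._decode(r["image"]) if key == "image" else r[key]
                for r in self._rows]


ds = LazyImageDataset([{"image": f"png{i}", "label": i % 10} for i in range(1000)])
ds[0]["image"]   # decodes exactly 1 image
ds["image"][0]   # decodes all 1000 images just to return the first one
```

This is why sample-first indexing is the preferred access pattern.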
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have training and test sets following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not labeled in a separate annotation pass after their creation: each image creator annotated their own images with the corresponding digit label at the time of drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. |
mocha | ---
pretty_name: MOCHA
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: mocha
tags:
- generative-reading-comprehension-metric
dataset_info:
features:
- name: constituent_dataset
dtype: string
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: reference
dtype: string
- name: candidate
dtype: string
- name: score
dtype: float32
- name: metadata
struct:
- name: scores
sequence: int32
- name: source
dtype: string
- name: candidate2
dtype: string
- name: score2
dtype: float32
splits:
- name: train
num_bytes: 33292592
num_examples: 31069
- name: validation
num_bytes: 4236883
num_examples: 4009
- name: test
num_bytes: 6767409
num_examples: 6321
- name: minimal_pairs
num_bytes: 193560
num_examples: 200
download_size: 14452311
dataset_size: 44490444
---
# Dataset Card for Mocha
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Mocha](https://allennlp.org/mocha)
- **Repository:** [https://github.com/anthonywchen/MOCHA](https://github.com/anthonywchen/MOCHA)
- **Paper:** [MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics](https://www.aclweb.org/anthology/2020.emnlp-main.528/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations. MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train a Learned Evaluation metric for Reading Comprehension, LERC, to mimic human judgement scores. LERC outperforms baseline metrics by 10 to 36 absolute Pearson points on held-out annotations. When we evaluate robustness on minimal pairs, LERC achieves 80% accuracy, outperforming baselines by 14 to 26 absolute percentage points while leaving significant room for improvement. MOCHA presents a challenging problem for developing accurate and robust generative reading comprehension metrics.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. MOCHA pairs reading comprehension instances, each of which consists of a passage, question, and reference, with candidates and human judgement scores.
### Data Fields
- `constituent_dataset`: the original QA dataset which the data instance came from.
- `id`
- `context`: the passage content.
- `question`: the question related to the passage content.
- `reference`: the correct answer for the question.
- `candidate`: the candidate answer, generated from the `reference` by the model in `source`.
- `score`: the human judgement score for the `candidate`. Not included in the test split; defaults to `-1`.
- `metadata`: not included in the minimal pairs split.
  - `scores`: the list of scores from different judges, averaged to obtain the final `score`; defaults to `[]`.
  - `source`: the generative model that produced the `candidate`.
In the minimal pairs split, each instance includes an additional candidate for robust evaluation:
- `candidate2`
- `score2`
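The relation between `metadata.scores` and `score` can be sketched as a plain mean over the per-judge scores (the exact aggregation is an assumption here for illustration; consult the MOCHA paper for the authors' procedure):

```python
def aggregate_score(judge_scores):
    """Average per-judge scores into a single judgement score.

    Mirrors the `metadata.scores` -> `score` relation described above;
    an empty list maps to the test-split default of -1.
    """
    if not judge_scores:
        return -1.0  # test split: human scores are withheld
    return sum(judge_scores) / len(judge_scores)


aggregate_score([4, 5, 3])  # 4.0
aggregate_score([])         # -1.0
```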
### Data Splits
Dataset Split | Number of Instances in Split
--------------|--------------------------------------------
Train | 31,069
Validation | 4,009
Test | 6,321
Minimal Pairs | 200
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation Information
```bibtex
@inproceedings{Chen2020MOCHAAD,
author={Anthony Chen and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
title={MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics},
booktitle={EMNLP},
year={2020}
}
```
### Contributions
Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset. |
moroco | ---
annotations_creators:
- found
language_creators:
- found
language:
- ro
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: moroco
pretty_name: 'MOROCO: The Moldavian and Romanian Dialectal Corpus'
language_bcp47:
- ro-MD
dataset_info:
features:
- name: id
dtype: string
- name: category
dtype:
class_label:
names:
'0': culture
'1': finance
'2': politics
'3': science
'4': sports
'5': tech
- name: sample
dtype: string
config_name: moroco
splits:
- name: train
num_bytes: 39314292
num_examples: 21719
- name: test
num_bytes: 10877813
num_examples: 5924
- name: validation
num_bytes: 10721304
num_examples: 5921
download_size: 60711985
dataset_size: 60913409
---
# Dataset Card for MOROCO
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Repository:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Paper:** [Arxiv](https://arxiv.org/abs/1901.06543)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email](raducu.ionescu@gmail.com)
### Dataset Summary
Introducing MOROCO - The **Mo**ldavian and **Ro**manian Dialectal **Co**rpus. The MOROCO data set contains Moldavian and Romanian samples of text collected from the news domain. The samples belong to one of the following six topics: (0) culture, (1) finance, (2) politics, (3) science, (4) sports, (5) tech. The corpus features a total of 33,564 samples labelled with one of the six aforementioned categories. We also include a train/validation/test split with 21,719/5,921/5,924 samples, respectively.
### Supported Tasks and Leaderboards
[LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/)
### Languages
The text dataset is in Romanian (`ro`)
## Dataset Structure
### Data Instances
Below we have an example of sample from MOROCO:
```
{'id': '48482',
'category': 2,
'sample': '“$NE$ cum am spus, nu este un sfârşit de drum . Vom continua lupta cu toate instrumentele şi cu toate mijloacele legale, parlamentare şi civice pe care le avem la dispoziţie . Evident că vom contesta la $NE$ această lege, au anunţat şi colegii de la $NE$ o astfel de contestaţie . Practic trebuie utilizat orice instrument pe care îl identificăm pentru a bloca intrarea în vigoare a acestei legi . Bineînţeles, şi preşedintele are punctul său de vedere . ( . . . ) $NE$ legi sunt împănate de motive de neconstituţionalitate . Colegii mei de la departamentul juridic lucrează în prezent pentru a definitiva textul contestaţiei”, a declarat $NE$ $NE$ citat de news . ro . Senatul a adoptat, marţi, în calitate de for decizional, $NE$ privind statutul judecătorilor şi procurorilor, cu 80 de voturi ”pentru” şi niciun vot ”împotrivă”, în condiţiile în care niciun partid din opoziţie nu a fost prezent în sală .',
}
```
where 48482 is the sample ID, followed by the category ground truth label, and then the text representing the actual content to be classified by topic.
Note: The category label has integer values ranging from 0 to 5.
### Data Fields
- `id`: a string, the unique identifier of a sample.
- `category`: an integer in the range [0, 5]; the category assigned to the sample.
- `sample`: a string, the news report to be classified / used in classification.
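The integer labels map to category names as declared in the dataset's class-label metadata; a minimal lookup:

```python
# Category names in label order, taken from the dataset's class_label definition.
MOROCO_CATEGORIES = ["culture", "finance", "politics", "science", "sports", "tech"]


def category_name(label: int) -> str:
    """Map a MOROCO integer label (0-5) to its category name."""
    return MOROCO_CATEGORIES[label]


category_name(2)  # "politics" -- the label of the sample shown above
```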
### Data Splits
The train/validation/test split contains 21,719/5,921/5,924 samples tagged with the category assigned to each sample in the dataset.
## Dataset Creation
### Curation Rationale
The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from taking the decision based on features that are not related to the topics.
For example, named entities that refer to politicians or football players names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543).
### Source Data
#### Initial Data Collection and Normalization
For the data collection, five of the most popular news websites in Romania and the Republic of Moldova were targeted. Given that the data set was obtained through web scraping, all HTML tags were removed and consecutive white spaces were replaced with a single space.
As part of the pre-processing, we remove named entities, such as country names, cities, public figures, etc. The named entities have been replaced with $NE$. The necessity to remove them, comes also from the scope of this dataset: categorization by topic. Thus, the authors decided to remove named entities in order to prevent classifiers from taking the decision based on features that are not truly indicative of the topics.
#### Who are the source language producers?
The original text comes from news websites from Romania and the Republic of Moldova.
### Annotations
#### Annotation process
As mentioned above, MOROCO is composed of text samples from the top five most popular news websites in Romania and the Republic of Moldova, respectively. Since the targeted news websites carry topic tags, the text samples could be automatically labeled with the corresponding category.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The textual data collected for MOROCO consists of news reports freely available on the Internet and of public interest.
To the best of the authors' knowledge, there is no personal or sensitive information in the collected texts that needed to be considered.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures.
In the past three years there has been a growing interest in studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources for this particular language.
### Discussion of Biases
The data included in MOROCO spans a well-defined time frame of a few years. Some of the topics that were of interest in the news landscape then might not show up in news websites nowadays or a few years from now.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Published and managed by Radu Tudor Ionescu and Andrei Butnaru.
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@inproceedings{ Butnaru-ACL-2019,
author = {Andrei M. Butnaru and Radu Tudor Ionescu},
title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}",
booktitle = {Proceedings of ACL},
year = {2019},
pages={688--698},
}
```
### Contributions
Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset. |
movie_rationales | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: MovieRationales
dataset_info:
features:
- name: review
dtype: string
- name: label
dtype:
class_label:
names:
'0': NEG
'1': POS
- name: evidences
sequence: string
splits:
- name: test
num_bytes: 1046377
num_examples: 199
- name: train
num_bytes: 6853624
num_examples: 1600
- name: validation
num_bytes: 830417
num_examples: 200
download_size: 3899487
dataset_size: 8730418
---
# Dataset Card for "movie_rationales"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jayded/eraserbenchmark
- **Paper:** [ERASER: A Benchmark to Evaluate Rationalized NLP Models](https://aclanthology.org/2020.acl-main.408/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.90 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 12.62 MB
### Dataset Summary
The movie rationale dataset contains human-annotated rationales for movie reviews.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.90 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 12.62 MB
An example of 'validation' looks as follows.
```
{
"evidences": ["Fun movie"],
"label": 1,
"review": "Fun movie\n"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `review`: a `string` feature.
- `label`: a classification label, with possible values including `NEG` (0), `POS` (1).
- `evidences`: a `list` of `string` features.
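Since the rationales are excerpts of the review, a quick sanity check is to verify that each evidence string occurs verbatim in the review text (a simple sketch, assuming rationales are verbatim spans; the ERASER benchmark itself works with span offsets):

```python
def evidences_in_review(review: str, evidences: list) -> bool:
    """Check that every annotated rationale occurs verbatim in the review text."""
    return all(evidence in review for evidence in evidences)


# The validation example shown above passes the check:
evidences_in_review("Fun movie\n", ["Fun movie"])  # True
```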
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 1600| 200| 199|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{deyoung-etal-2020-eraser,
title = "{ERASER}: {A} Benchmark to Evaluate Rationalized {NLP} Models",
author = "DeYoung, Jay and
Jain, Sarthak and
Rajani, Nazneen Fatema and
Lehman, Eric and
Xiong, Caiming and
Socher, Richard and
Wallace, Byron C.",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.408",
doi = "10.18653/v1/2020.acl-main.408",
pages = "4443--4458",
}
@InProceedings{zaidan-eisner-piatko-2008:nips,
author = {Omar F. Zaidan and Jason Eisner and Christine Piatko},
title = {Machine Learning with Annotator Rationales to Reduce Annotation Cost},
booktitle = {Proceedings of the NIPS*2008 Workshop on Cost Sensitive Learning},
month = {December},
year = {2008}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
mrqa | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|drop
- extended|hotpot_qa
- extended|natural_questions
- extended|race
- extended|search_qa
- extended|squad
- extended|trivia_qa
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: mrqa-2019
pretty_name: MRQA 2019
dataset_info:
features:
- name: subset
dtype: string
- name: context
dtype: string
- name: context_tokens
sequence:
- name: tokens
dtype: string
- name: offsets
dtype: int32
- name: qid
dtype: string
- name: question
dtype: string
- name: question_tokens
sequence:
- name: tokens
dtype: string
- name: offsets
dtype: int32
- name: detected_answers
sequence:
- name: text
dtype: string
- name: char_spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: token_spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: answers
sequence: string
config_name: plain_text
splits:
- name: train
num_bytes: 4090681873
num_examples: 516819
- name: test
num_bytes: 57712177
num_examples: 9633
- name: validation
num_bytes: 484107026
num_examples: 58221
download_size: 1479518355
dataset_size: 4632501076
---
# Dataset Card for MRQA 2019
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MRQA 2019 Shared Task](https://mrqa.github.io/2019/shared.html)
- **Repository:** [MRQA 2019 Github repository](https://github.com/mrqa/MRQA-Shared-Task-2019)
- **Paper:** [MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
](https://arxiv.org/abs/1910.09753)
- **Leaderboard:** [Shared task](https://mrqa.github.io/2019/shared.html)
- **Point of Contact:** [mrforqa@gmail.com](mrforqa@gmail.com)
### Dataset Summary
The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge.
The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them) converted to the same format (SQuAD format). Among these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
### Supported Tasks and Leaderboards
From the official repository:
*The format of the task is extractive question answering. Given a question and context passage, systems must find the word or phrase in the document that best answers the question. While this format is somewhat restrictive, it allows us to leverage many existing datasets, and its simplicity helps us focus on out-of-domain generalization, instead of other important but orthogonal challenges.*
*We have adapted several existing datasets from their original formats and settings to conform to our unified extractive setting. Most notably:*
- *We provide only a single, length-limited context.*
- *There are no unanswerable or non-span answer questions.*
- *All questions have at least one accepted answer that is found exactly in the context.*
*A span is judged to be an exact match if it matches the answer string after performing normalization consistent with the SQuAD dataset. Specifically:*
- *The text is uncased.*
- *All punctuation is stripped.*
- *All articles `{a, an, the}` are removed.*
- *All consecutive whitespace markers are compressed to just a single normal space `' '`.*
Answers are evaluated using exact match and token-level F1 metrics. One can refer to the [mrqa_official_eval.py](https://github.com/mrqa/MRQA-Shared-Task-2019/blob/master/mrqa_official_eval.py) for evaluation.
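The four normalization steps quoted above can be sketched in a few lines of Python (this mirrors the well-known SQuAD-style `normalize_answer` routine; refer to the official `mrqa_official_eval.py` for the authoritative version):

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    s = s.lower()                                                 # text is uncased
    s = "".join(ch for ch in s if ch not in string.punctuation)   # strip punctuation
    s = re.sub(r"\b(a|an|the)\b", " ", s)                         # remove articles
    return " ".join(s.split())                                    # collapse whitespace


normalize_answer("The Super Bowl!")  # "super bowl"
```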
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this:
```
{
'qid': 'f43c83e38d1e424ea00f8ad3c77ec999',
    'subset': 'SQuAD',
'context': 'CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game. The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively. It was the third-most watched U.S. broadcast ever.',
'context_tokens': {
'offsets': [0, 4, 14, 20, 25, 28, 31, 35, 39, 41, 45, 53, 56, 64, 67, 68, 70, 78, 82, 84, 94, 105, 112, 116, 120, 122, 126, 132, 137, 140, 149, 154, 158, 168, 171, 175, 183, 188, 194, 203, 208, 216, 222, 233, 241, 245, 251, 255, 257, 261, 271, 275, 281, 286, 292, 296, 302, 307, 314, 323, 328, 330, 342, 344, 347, 351, 355, 360, 361, 366, 374, 379, 389, 393],
'tokens': ['CBS', 'broadcast', 'Super', 'Bowl', '50', 'in', 'the', 'U.S.', ',', 'and', 'charged', 'an', 'average', 'of', '$', '5', 'million', 'for', 'a', '30-second', 'commercial', 'during', 'the', 'game', '.', 'The', 'Super', 'Bowl', '50', 'halftime', 'show', 'was', 'headlined', 'by', 'the', 'British', 'rock', 'group', 'Coldplay', 'with', 'special', 'guest', 'performers', 'Beyoncé', 'and', 'Bruno', 'Mars', ',', 'who', 'headlined', 'the', 'Super', 'Bowl', 'XLVII', 'and', 'Super', 'Bowl', 'XLVIII', 'halftime', 'shows', ',', 'respectively', '.', 'It', 'was', 'the', 'third', '-', 'most', 'watched', 'U.S.', 'broadcast', 'ever', '.']
},
'question': "Who was the main performer at this year's halftime show?",
'question_tokens': {
'offsets': [0, 4, 8, 12, 17, 27, 30, 35, 39, 42, 51, 55],
'tokens': ['Who', 'was', 'the', 'main', 'performer', 'at', 'this', 'year', "'s", 'halftime', 'show', '?']
},
'detected_answers': {
'char_spans': [
{
'end': [201],
'start': [194]
}, {
'end': [201],
'start': [194]
}, {
'end': [201],
'start': [194]
}
],
'text': ['Coldplay', 'Coldplay', 'Coldplay'],
'token_spans': [
{
'end': [38],
'start': [38]
}, {
'end': [38],
'start': [38]
}, {
'end': [38],
'start': [38]
}
]
},
'answers': ['Coldplay', 'Coldplay', 'Coldplay'],
}
```
### Data Fields
- `subset`: the source dataset this example comes from.
- `context`: This is the raw text of the supporting passage. Three special token types have been inserted: `[TLE]` precedes document titles, `[DOC]` denotes document breaks, and `[PAR]` denotes paragraph breaks. The maximum length of the context is 800 tokens.
- `context_tokens`: A tokenized version of the supporting passage, using spaCy. Each token is a tuple of the token string and token character offset. The maximum number of tokens is 800.
- `tokens`: list of tokens.
  - `offsets`: list of offsets.
- `qas`: A list of questions for the given context.
- `qid`: A unique identifier for the question. The `qid` is unique across all datasets.
- `question`: The raw text of the question.
- `question_tokens`: A tokenized version of the question. The tokenizer and token format is the same as for the context.
- `tokens`: list of tokens.
  - `offsets`: list of offsets.
- `detected_answers`: A list of answer spans for the given question that index into the context. For some datasets these spans have been automatically detected using searching heuristics. The same answer may appear multiple times in the text --- each of these occurrences is recorded. For example, if `42` is the answer, the context `"The answer is 42. 42 is the answer."`, has two occurrences marked.
- `text`: The raw text of the detected answer.
- `char_spans`: Inclusive (start, end) character spans (indexing into the raw context).
- `start`: start (single element)
- `end`: end (single element)
- `token_spans`: Inclusive (start, end) token spans (indexing into the tokenized context).
- `start`: start (single element)
- `end`: end (single element)
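Since both span types are inclusive on their end index, a Python slice over the raw context needs `end + 1`. A minimal sketch; the shortened context below is the prefix of the example above, so the character offsets (`Super` at 14, `50` ending at 26) agree with its `context_tokens.offsets`:

```python
# Char spans are inclusive on both ends, so a Python slice needs end + 1.
def answer_from_char_span(context: str, start: int, end: int) -> str:
    return context[start : end + 1]

# Prefix of the example context above; "Super Bowl 50" spans chars 14-26.
context = (
    "CBS broadcast Super Bowl 50 in the U.S., and charged an average of "
    "$5 million for a 30-second commercial during the game."
)
print(answer_from_char_span(context, 14, 26))  # Super Bowl 50
```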
### Data Splits
**Training data**
| Dataset | Number of Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250) | 86,588 |
| [NewsQA](https://arxiv.org/abs/1611.09830) | 74,160 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 61,688 |
| [SearchQA](https://arxiv.org/abs/1704.05179)| 117,384 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 72,928 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 104,071 |
**Development data**
This in-domain data may be used for helping develop models.
| Dataset | Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250) | 10,507 |
| [NewsQA](https://arxiv.org/abs/1611.09830) | 4,212 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 7,785|
| [SearchQA](https://arxiv.org/abs/1704.05179)| 16,980 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 5,904 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 12,836 |
**Test data**
The final testing data contains only out-of-domain data.
| Dataset | Examples |
| :-----: | :------: |
| [BioASQ](http://bioasq.org/) | 1,504 |
| [DROP](https://arxiv.org/abs/1903.00161) | 1,503 |
| [DuoRC](https://arxiv.org/abs/1804.07927)| 1,501 |
| [RACE](https://arxiv.org/abs/1704.04683) | 674 |
| [RelationExtraction](https://arxiv.org/abs/1706.04115) | 2,948|
| [TextbookQA](http://ai2-website.s3.amazonaws.com/publications/CVPR17_TQA.pdf)| 1,503 |
From the official repository:
***Note:** As previously mentioned, the out-of-domain dataset have been modified from their original settings to fit the unified MRQA Shared Task paradigm. At a high level, the following two major modifications have been made:*
*1. All QA-context pairs are extractive. That is, the answer is selected from the context and not via, e.g., multiple-choice.*
*2. All contexts are capped at a maximum of `800` tokens. As a result, for longer contexts like Wikipedia articles, we only consider examples where the answer appears in the first `800` tokens.*
*As a result, some splits are harder than the original datasets (e.g., removal of multiple-choice in RACE), while some are easier (e.g., restricted context length in NaturalQuestions --- we use the short answer selection). Thus one should expect different performance ranges if comparing to previous work on these datasets.*
## Dataset Creation
### Curation Rationale
From the official repository:
*Both train and test datasets have the same format described above, but may differ in some of the following ways:*
- *Passage distribution: Test examples may involve passages from different sources (e.g., science, news, novels, medical abstracts, etc) with pronounced syntactic and lexical differences.*
- *Question distribution: Test examples may emphasize different styles of questions (e.g., entity-centric, relational, other tasks reformulated as QA, etc) which may come from different sources (e.g., crowdworkers, domain experts, exam writers, etc.)*
- *Joint distribution: Test examples may vary according to the relationship of the question to the passage (e.g., collected independent vs. dependent of evidence, multi-hop, etc)*
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{fisch2019mrqa,
title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
year={2019},
}
```
### Contributions
Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
ms_marco | ---
language:
- en
paperswithcode_id: ms-marco
pretty_name: Microsoft Machine Reading Comprehension Dataset
dataset_info:
- config_name: v1.1
features:
- name: answers
sequence: string
- name: passages
sequence:
- name: is_selected
dtype: int32
- name: passage_text
dtype: string
- name: url
dtype: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: string
splits:
- name: validation
num_bytes: 42710107
num_examples: 10047
- name: train
num_bytes: 350884446
num_examples: 82326
- name: test
num_bytes: 41020711
num_examples: 9650
download_size: 168698008
dataset_size: 434615264
- config_name: v2.1
features:
- name: answers
sequence: string
- name: passages
sequence:
- name: is_selected
dtype: int32
- name: passage_text
dtype: string
- name: url
dtype: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: string
splits:
- name: validation
num_bytes: 414286005
num_examples: 101093
- name: train
num_bytes: 3466972085
num_examples: 808731
- name: test
num_bytes: 406197152
num_examples: 101092
download_size: 1384271865
dataset_size: 4287455242
---
# Dataset Card for "ms_marco"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://microsoft.github.io/msmarco/](https://microsoft.github.io/msmarco/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 4.72 GB
- **Total amount of disk used:** 6.28 GB
### Dataset Summary
Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.
There have been 277 submissions: 20 KeyPhrase Extraction, 87 passage ranking, 0 document ranking,
73 QnA V2, 82 NLGEN, and 15 QnA V1 submissions.
This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).
The original question answering dataset featured 100,000 examples and was released in 2016. Its leaderboard is now closed, but the data is available below.
The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
is much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.
This repository provides two configurations: `v1.1` and `v2.1`.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### v1.1
- **Size of downloaded dataset files:** 168.69 MB
- **Size of the generated dataset:** 434.61 MB
- **Total amount of disk used:** 603.31 MB
An example of 'train' looks as follows.
```
```
#### v2.1
- **Size of downloaded dataset files:** 1.38 GB
- **Size of the generated dataset:** 4.29 GB
- **Total amount of disk used:** 5.67 GB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### v1.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
- `is_selected`: a `int32` feature.
- `passage_text`: a `string` feature.
- `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: a `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.
#### v2.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
- `is_selected`: a `int32` feature.
- `passage_text`: a `string` feature.
- `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: a `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.
### Data Splits
|name|train |validation| test |
|----|-----:|---------:|-----:|
|v1.1| 82326| 10047| 9650|
|v2.1|808731| 101093|101092|
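Because `passages` is stored as parallel lists, the supporting passage(s) for a query can be recovered by filtering on `is_selected`. A minimal sketch over a made-up record that follows the v1.1 schema described above:

```python
# `passages` is a dict of parallel lists; `is_selected` flags the passage(s)
# that support the answer. `record` is a made-up toy example in the v1.1 shape.
def selected_passages(record: dict) -> list[str]:
    flags = record["passages"]["is_selected"]
    texts = record["passages"]["passage_text"]
    return [text for flag, text in zip(flags, texts) if flag == 1]

record = {
    "query": "what is the capital of france",
    "passages": {
        "is_selected": [0, 1, 0],
        "passage_text": [
            "A passage about geography.",
            "Paris is the capital of France.",
            "Another unrelated passage.",
        ],
        "url": ["u1", "u2", "u3"],
    },
}
print(selected_passages(record))  # ['Paris is the capital of France.']
```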
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/NguyenRSGTMD16,
author = {Tri Nguyen and
Mir Rosenberg and
Xia Song and
Jianfeng Gao and
Saurabh Tiwary and
Rangan Majumder and
Li Deng},
title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
journal = {CoRR},
volume = {abs/1611.09268},
year = {2016},
url = {http://arxiv.org/abs/1611.09268},
archivePrefix = {arXiv},
eprint = {1611.09268},
timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset. |
ms_terms | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bs
- ca
- chr
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fil
- fr
- ga
- gd
- gl
- gu
- guc
- ha
- he
- hi
- hr
- hu
- hy
- id
- ig
- is
- it
- iu
- ja
- ka
- kk
- km
- kn
- knn
- ko
- ku
- ky
- lb
- lo
- lt
- lv
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- nb
- ne
- nl
- nn
- ory
- pa
- pl
- prs
- pst
- pt
- qu
- quc
- ro
- ru
- rw
- sd
- si
- sk
- sl
- sq
- sr
- st
- sv
- swh
- ta
- te
- tg
- th
- ti
- tk
- tn
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yo
- zh
- zu
language_bcp47:
- bn-IN
- bs-Latn
- es-MX
- fr-CA
- ms-BN
- pt-BR
- sr-BH
- sr-Latn
- zh-Hant-HK
- zh-Hant-TW
license:
- ms-pl
multilinguality:
- multilingual
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MsTerms
dataset_info:
features:
- name: entry_id
dtype: string
- name: term_source
dtype: string
- name: pos
dtype: string
- name: definition
dtype: string
- name: term_target
dtype: string
splits:
- name: train
num_bytes: 6995497
num_examples: 33738
download_size: 0
dataset_size: 6995497
---
# Dataset Card for ms_terms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
[Microsoft Terminology Collection](https://www.microsoft.com/en-us/language/terminology)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange.
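Because `.tbx` is an XML-based format, the files can be inspected with standard XML tooling. A toy sketch: the `termEntry`/`langSet`/`tig`/`term` element names below are assumed from the common TBX layout, and the actual Microsoft files may carry additional metadata:

```python
import xml.etree.ElementTree as ET

# Toy .tbx snippet; element names follow the common TBX layout (assumed),
# not necessarily the exact structure of the Microsoft files.
TBX = """<martif type="TBX">
  <text><body>
    <termEntry id="1">
      <langSet xml:lang="en"><tig><term>file</term></tig></langSet>
      <langSet xml:lang="de"><tig><term>Datei</term></tig></langSet>
    </termEntry>
  </body></text>
</martif>"""

# ElementTree expands the predefined xml: prefix to this namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"
pairs = {}
for entry in ET.fromstring(TBX).iter("termEntry"):
    for lang_set in entry.iter("langSet"):
        pairs[lang_set.get(XML_LANG)] = lang_set.findtext(".//term")
print(pairs)  # {'en': 'file', 'de': 'Datei'}
```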
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Nearly 100 languages.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
msr_genomics_kbcomp | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: MsrGenomicsKbcomp
tags:
- genomics-knowledge-base-completion
dataset_info:
features:
- name: GENE1
dtype: string
- name: relation
dtype:
class_label:
names:
'0': Positive_regulation
'1': Negative_regulation
'2': Family
- name: GENE2
dtype: string
splits:
- name: train
num_bytes: 256789
num_examples: 12160
- name: test
num_bytes: 58116
num_examples: 2784
- name: validation
num_bytes: 27457
num_examples: 1315
download_size: 0
dataset_size: 342362
---
# Dataset Card for NCI-PID-PubMed Genomics Knowledge Base Completion
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NCI-PID-PubMed Genomics Knowledge Base Completion Dataset](https://msropendata.com/datasets/80b4f6e8-5d7c-4abc-9c79-2e51dfedd791)
- **Repository:** [NCI-PID-PubMed Genomics Knowledge Base Completion Dataset](https://msropendata.com/datasets/80b4f6e8-5d7c-4abc-9c79-2e51dfedd791)
- **Paper:** [Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text](https://www.aclweb.org/anthology/P16-1136/)
- **Point of Contact:** [Kristina Toutanova](mailto:kristout@google.com)
### Dataset Summary
The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome (Poon et al. 2014). This dataset was used in the paper “Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text” (Toutanova, Lin, Yih, Poon, and Quirk, 2016). More details can be found in the included README.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
NCI-PID-PubMed Genomics Knowledge Base Completion Dataset
This dataset includes a database of regulation relationships among genes and corresponding textual mentions of pairs of genes in PubMed article abstracts.
The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome. This dataset was used in the paper "Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text".
FILE FORMAT DETAILS
The files train.txt, valid.txt, and test.txt contain the training, development, and test set knowledge base (database of regulation relationships) triples used in the paper.
The file text.txt contains the textual triples derived from PubMed via entity linking and processing with Literome. The textual mentions were used for knowledge base completion in the paper.
The separator is a tab character; the relations are Positive_regulation, Negative_regulation, and Family (Family relationships occur only in the training set).
The format is:
| GENE1 | relation | GENE2 |
Example:
ABL1 Positive_regulation CDK2
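Since each line is a plain tab-separated triple, the files can be parsed with the standard `csv` module. A minimal sketch, reading from an in-memory string here; for real files, pass `open(path, newline="")` instead:

```python
import csv
import io

# Each line of train.txt, valid.txt, and test.txt holds one tab-separated
# triple: GENE1 <TAB> relation <TAB> GENE2.
def read_triples(lines):
    return [tuple(row) for row in csv.reader(lines, delimiter="\t")]

sample = io.StringIO("ABL1\tPositive_regulation\tCDK2\n")
print(read_triples(sample))  # [('ABL1', 'Positive_regulation', 'CDK2')]
```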
### Data Instances
[More Information Needed]
### Data Fields
The format is:
| GENE1 | relation | GENE2 |
### Data Splits
|             | train | validation | test |
|-------------|------:|-----------:|-----:|
| N. examples | 12160 |       1315 | 2784 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
The dataset was initially created by Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon and Chris Quirk, during work done at Microsoft Research.
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{toutanova-etal-2016-compositional,
title = "Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text",
author = "Toutanova, Kristina and
Lin, Victoria and
Yih, Wen-tau and
Poon, Hoifung and
Quirk, Chris",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P16-1136",
doi = "10.18653/v1/P16-1136",
pages = "1434--1444",
}
```
### Contributions
Thanks to [@manandey](https://github.com/manandey) for adding this dataset. |
msr_sqa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- ms-pl
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: null
pretty_name: Microsoft Research Sequential Question Answering
dataset_info:
features:
- name: id
dtype: string
- name: annotator
dtype: int32
- name: position
dtype: int32
- name: question
dtype: string
- name: question_and_history
sequence: string
- name: table_file
dtype: string
- name: table_header
sequence: string
- name: table_data
sequence:
sequence: string
- name: answer_coordinates
sequence:
- name: row_index
dtype: int32
- name: column_index
dtype: int32
- name: answer_text
sequence: string
splits:
- name: train
num_bytes: 19732499
num_examples: 12276
- name: validation
num_bytes: 3738331
num_examples: 2265
- name: test
num_bytes: 5105873
num_examples: 3012
download_size: 4796932
dataset_size: 28576703
---
# Dataset Card for Microsoft Research Sequential Question Answering
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Microsoft Research Sequential Question Answering (SQA) Dataset](https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2)
- **Repository:**
- **Paper:** [https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/acl17-dynsp.pdf](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/acl17-dynsp.pdf)
- **Leaderboard:**
- **Point of Contact:**
- Scott Wen-tau Yih scottyih@microsoft.com
- Mohit Iyyer m.iyyer@gmail.com
- Ming-Wei Chang minchang@microsoft.com
### Dataset Summary
Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions.
We created SQA by asking crowdsourced workers to decompose 2,022 questions from WikiTableQuestions (WTQ)*, which contains highly-compositional questions about tables from Wikipedia. We had three workers decompose each WTQ question, resulting in a dataset of 6,066 sequences that contain 17,553 questions in total. Each question is also associated with answers in the form of cell locations in the tables.
- Panupong Pasupat, Percy Liang. "Compositional Semantic Parsing on Semi-Structured Tables" ACL-2015.
[http://www-nlp.stanford.edu/software/sempre/wikitable/](http://www-nlp.stanford.edu/software/sempre/wikitable/)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{'id': 'nt-639',
'annotator': 0,
'position': 0,
'question': 'where are the players from?',
'table_file': 'table_csv/203_149.csv',
'table_header': ['Pick', 'Player', 'Team', 'Position', 'School'],
'table_data': [['1',
'Ben McDonald',
'Baltimore Orioles',
'RHP',
'Louisiana State University'],
['2',
'Tyler Houston',
'Atlanta Braves',
'C',
'"Valley HS (Las Vegas',
' NV)"'],
['3', 'Roger Salkeld', 'Seattle Mariners', 'RHP', 'Saugus (CA) HS'],
['4',
'Jeff Jackson',
'Philadelphia Phillies',
'OF',
'"Simeon HS (Chicago',
' IL)"'],
['5', 'Donald Harris', 'Texas Rangers', 'OF', 'Texas Tech University'],
['6', 'Paul Coleman', 'Saint Louis Cardinals', 'OF', 'Frankston (TX) HS'],
['7', 'Frank Thomas', 'Chicago White Sox', '1B', 'Auburn University'],
['8', 'Earl Cunningham', 'Chicago Cubs', 'OF', 'Lancaster (SC) HS'],
['9',
'Kyle Abbott',
'California Angels',
'LHP',
'Long Beach State University'],
['10',
'Charles Johnson',
'Montreal Expos',
'C',
'"Westwood HS (Fort Pierce',
' FL)"'],
['11',
'Calvin Murray',
'Cleveland Indians',
'3B',
'"W.T. White High School (Dallas',
' TX)"'],
['12', 'Jeff Juden', 'Houston Astros', 'RHP', 'Salem (MA) HS'],
['13', 'Brent Mayne', 'Kansas City Royals', 'C', 'Cal State Fullerton'],
['14',
'Steve Hosey',
'San Francisco Giants',
'OF',
'Fresno State University'],
['15',
'Kiki Jones',
'Los Angeles Dodgers',
'RHP',
'"Hillsborough HS (Tampa',
' FL)"'],
['16', 'Greg Blosser', 'Boston Red Sox', 'OF', 'Sarasota (FL) HS'],
['17', 'Cal Eldred', 'Milwaukee Brewers', 'RHP', 'University of Iowa'],
['18',
'Willie Greene',
'Pittsburgh Pirates',
'SS',
'"Jones County HS (Gray',
' GA)"'],
['19', 'Eddie Zosky', 'Toronto Blue Jays', 'SS', 'Fresno State University'],
['20', 'Scott Bryant', 'Cincinnati Reds', 'OF', 'University of Texas'],
['21', 'Greg Gohr', 'Detroit Tigers', 'RHP', 'Santa Clara University'],
['22',
'Tom Goodwin',
'Los Angeles Dodgers',
'OF',
'Fresno State University'],
['23', 'Mo Vaughn', 'Boston Red Sox', '1B', 'Seton Hall University'],
['24', 'Alan Zinter', 'New York Mets', 'C', 'University of Arizona'],
['25', 'Chuck Knoblauch', 'Minnesota Twins', '2B', 'Texas A&M University'],
['26', 'Scott Burrell', 'Seattle Mariners', 'RHP', 'Hamden (CT) HS']],
'answer_coordinates': {'row_index': [0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25],
'column_index': [4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4,
4]},
'answer_text': ['Louisiana State University',
'Valley HS (Las Vegas, NV)',
'Saugus (CA) HS',
'Simeon HS (Chicago, IL)',
'Texas Tech University',
'Frankston (TX) HS',
'Auburn University',
'Lancaster (SC) HS',
'Long Beach State University',
'Westwood HS (Fort Pierce, FL)',
'W.T. White High School (Dallas, TX)',
'Salem (MA) HS',
'Cal State Fullerton',
'Fresno State University',
'Hillsborough HS (Tampa, FL)',
'Sarasota (FL) HS',
'University of Iowa',
'Jones County HS (Gray, GA)',
'Fresno State University',
'University of Texas',
'Santa Clara University',
'Fresno State University',
'Seton Hall University',
'University of Arizona',
'Texas A&M University',
'Hamden (CT) HS']}
```
### Data Fields
- `id` (`str`): question sequence id (the id is consistent with those in WTQ)
- `annotator` (`int`): `0`, `1`, `2` (the 3 annotators who annotated the question intent)
- `position` (`int`): the position of the question in the sequence
- `question` (`str`): the question given by the annotator
- `table_file` (`str`): the associated table
- `table_header` (`List[str]`): a list of headers in the table
- `table_data` (`List[List[str]]`): 2d array of data in the table
- `answer_coordinates` (`List[Dict]`): the table cell coordinates of the answers (0-based, where 0 is the first row after the table header)
- `row_index`
- `column_index`
- `answer_text` (`List[str]`): the content of the answer cells
Note that some text fields may contain Tab or LF characters and thus start with quotes.
It is recommended to use a CSV parser like the Python CSV package to process the data.
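The coordinate fields above can be paired to recover the answer cells directly from `table_data`. A minimal sketch, using the field names above (the toy table is abbreviated from the instance shown earlier):

```python
# Recover answer_text from table_data via answer_coordinates.
# Toy example mirroring the schema above; real rows come from the dataset.
example = {
    "table_data": [
        ["1", "Ben McDonald", "Baltimore Orioles", "RHP", "Louisiana State University"],
        ["2", "Tyler Houston", "Atlanta Braves", "C", "Valley HS (Las Vegas, NV)"],
    ],
    "answer_coordinates": {"row_index": [0, 1], "column_index": [4, 4]},
}

coords = example["answer_coordinates"]
answers = [
    example["table_data"][r][c]
    for r, c in zip(coords["row_index"], coords["column_index"])
]
print(answers)  # ['Louisiana State University', 'Valley HS (Las Vegas, NV)']
```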
### Data Splits
| | train | test |
|-------------|------:|-----:|
| N. examples | 14541 | 3012 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view).
### Citation Information
```
@inproceedings{iyyer-etal-2017-search,
title = "Search-based Neural Structured Learning for Sequential Question Answering",
author = "Iyyer, Mohit and
Yih, Wen-tau and
Chang, Ming-Wei",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1167",
doi = "10.18653/v1/P17-1167",
pages = "1821--1831",
}
```
### Contributions
Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset. |
msr_text_compression | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
license_details: Microsoft Research Data License Agreement
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-Open-American-National-Corpus-(OANC1)
task_categories:
- summarization
task_ids: []
pretty_name: MsrTextCompression
dataset_info:
features:
- name: source_id
dtype: string
- name: domain
dtype: string
- name: source_text
dtype: string
- name: targets
sequence:
- name: compressed_text
dtype: string
- name: judge_id
dtype: string
- name: num_ratings
dtype: int64
- name: ratings
sequence: int64
splits:
- name: train
num_bytes: 5001312
num_examples: 4936
- name: validation
num_bytes: 449691
num_examples: 447
- name: test
num_bytes: 804536
num_examples: 785
download_size: 0
dataset_size: 6255539
---
# Dataset Card for MsrTextCompression
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563
- **Repository:**
- **Paper:** https://www.microsoft.com/en-us/research/wp-content/uploads/2016/09/Sentence_Compression_final-1.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing.
### Supported Tasks and Leaderboards
Text Summarization
### Languages
English
## Dataset Structure
### Data Instances
It contains approximately 6,000 source texts with multiple compressions (about 26,000 pairs of source and compressed texts), representing business letters, newswire, journals, and technical documents sampled from the Open American National Corpus (OANC1).
- Each source text is accompanied by up to five crowd-sourced rewrites constrained to a preset compression ratio and annotated with quality judgments. Multiple rewrites permit study of the impact of operations on human compression quality and facilitate automatic evaluation.
- This dataset is the first to provide compressions at the multi-sentence (two-sentence paragraph) level, which may present a stepping stone to whole-document summarization.
- Many of these two-sentence paragraphs are compressed both as paragraphs and separately sentence-by-sentence, offering data that may yield insights into the impact of multi-sentence operations on human compression quality.
| Description | Source | Target | Average CPS | Meaning Quality | Grammar Quality |
| :------------- | :----------: | -----------: | -----------: | -----------: | -----------: |
| 1-Sentence | 3764 | 15523 | 4.12 | 2.78 | 2.81 |
| 2-Sentence | 2405 | 10900 | 4.53 | 2.78 | 2.83 |
**Note**: Average CPS = Average Compressions per Source Text
### Data Fields
```
{'domain': 'Newswire',
'source_id': '106',
'source_text': '" Except for this small vocal minority, we have just not gotten a lot of groundswell against this from members, " says APA president Philip G. Zimbardo of Stanford University.',
'targets': {'compressed_text': ['"Except for this small vocal minority, we have not gotten a lot of groundswell against this," says APA president Zimbardo.',
'"Except for a vocal minority, we haven\'t gotten much groundswell from members, " says Philip G. Zimbardo of Stanford University.',
'APA president of Stanford has stated that except for a vocal minority they have not gotten a lot of pushback from members.',
'APA president Philip G. Zimbardo of Stanford says they have not had much opposition against this.'],
'judge_id': ['2', '22', '10', '0'],
'num_ratings': [3, 3, 3, 3],
'ratings': [[6, 6, 6], [11, 6, 6], [6, 11, 6], [6, 11, 11]]}}
```
- source_id: index of article per original dataset
- source_text: uncompressed original text
- domain: source of the article
- targets:
- compressed_text: compressed version of `source_text`
- judge_id: anonymized ids of crowdworkers who proposed compression
- num_ratings: number of ratings available for each proposed compression
- ratings: see table below
Ratings system (excerpted from authors' README):
- 6 = Most important meaning, Flawless language (3 on meaning and 3 on grammar, as per the paper's terminology)
- 7 = Most important meaning, Minor errors (3 on meaning and 2 on grammar)
- 9 = Most important meaning, Disfluent or incomprehensible (3 on meaning and 1 on grammar)
- 11 = Much meaning, Flawless language (2 on meaning and 3 on grammar)
- 12 = Much meaning, Minor errors (2 on meaning and 2 on grammar)
- 14 = Much meaning, Disfluent or incomprehensible (2 on meaning and 1 on grammar)
- 21 = Little or none meaning, Flawless language (1 on meaning and 3 on grammar)
- 22 = Little or none meaning, Minor errors (1 on meaning and 2 on grammar)
- 24 = Little or none meaning, Disfluent or incomprehensible (1 on meaning and 1 on grammar)
See **README.txt** from data archive for additional details.
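The composite codes above can be decoded into separate meaning and grammar scores. A minimal sketch of that mapping (the function name is illustrative, not part of the dataset):

```python
# Decode a composite rating code into (meaning, grammar) scores, per the table above.
RATING_CODES = {
    6: (3, 3), 7: (3, 2), 9: (3, 1),
    11: (2, 3), 12: (2, 2), 14: (2, 1),
    21: (1, 3), 22: (1, 2), 24: (1, 1),
}

def decode_rating(code: int):
    """Return (meaning, grammar) for a composite rating code."""
    return RATING_CODES[code]

# Decode the ratings of the third compression in the instance above: [6, 11, 6]
print([decode_rating(c) for c in [6, 11, 6]])  # [(3, 3), (2, 3), (3, 3)]
```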
### Data Splits
There are 4,936 source texts in the training set, 447 in the development set, and 785 in the test set.
## Dataset Creation
### Annotations
#### Annotation process
Compressions were created using UHRS, an in-house crowd-sourcing system similar to Amazon's Mechanical Turk, in two annotation rounds: one for shortening and a second to rate compression quality.
1. In the first round, five workers were tasked with abridging each source text by at least 25%, while remaining grammatical and fluent, and retaining the meaning of the original.
2. In the second round, 3-5 judges (raters) were asked to evaluate the grammaticality of each compression on a scale from 1 (major errors, disfluent) through 3 (fluent), and again analogously for meaning preservation on a scale from 1 (orthogonal) through 3 (most important meaning-preserving).
## Additional Information
### Licensing Information
Microsoft Research Data License Agreement
### Citation Information
```
@inproceedings{Toutanova2016ADA,
  title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},
  author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},
  booktitle={EMNLP},
  year={2016}
}
```
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. |
msr_zhen_translation_parity | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- ms-pl
multilinguality:
- monolingual
- translation
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-newstest2017
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MsrZhenTranslationParity
dataset_info:
features:
- name: Reference-HT
dtype: string
- name: Reference-PE
dtype: string
- name: Combo-4
dtype: string
- name: Combo-5
dtype: string
- name: Combo-6
dtype: string
- name: Online-A-1710
dtype: string
splits:
- name: train
num_bytes: 1797033
num_examples: 2001
download_size: 0
dataset_size: 1797033
---
# Dataset Card for msr_zhen_translation_parity
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
[Translator Human Parity Data](https://msropendata.com/datasets/93f9aa87-9491-45ac-81c1-6498b6be0d0b)
- **Repository:**
- **Paper:**
[Achieving Human Parity on Automatic Chinese to English News Translation](https://www.microsoft.com/en-us/research/publication/achieving-human-parity-on-automatic-chinese-to-english-news-translation/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> Human evaluation results and translation output for the Translator Human Parity Data release,
> as described in https://blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/
> The Translator Human Parity Data release contains all human evaluation results and translations
> related to our paper "Achieving Human Parity on Automatic Chinese to English News Translation",
> published on March 14, 2018. We have released this data to
> 1) allow external validation of our claim of having achieved human parity
> 2) to foster future research by releasing two additional human references
> for the Reference-WMT test set.
>
The dataset includes:
1) two new references for Chinese-English language pair of WMT17,
one based on human translation from scratch (Reference-HT),
the other based on human post-editing (Reference-PE);
2) human parity translations generated by our research systems Combo-4, Combo-5, and Combo-6,
as well as translation output from online machine translation service Online-A-1710,
collected on October 16, 2017;
The data package provided with the study also includes all data points collected in the human evaluation campaigns, but these are not parsed and exposed as features of this dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset contains six extra English translations for the Chinese-English language pair of WMT17.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
As mentioned in the summary, this dataset provides six extra English translations for the Chinese-English language pair of WMT17.
Data fields are named exactly as in the associated paper for easier cross-referencing.
- `Reference-HT`: human translation from scratch.
- `Reference-PE`: human post-editing.
- `Combo-4`, `Combo-5`, `Combo-6`: three translations by research systems.
- `Online-A-1710`: a translation from an anonymous online machine translation service.
All data fields of a record are translations for the same Chinese source sentence.
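Because every field of a record translates the same source sentence, a system output can be scored directly against the human references. A minimal sketch, using token-level F1 as a stand-in for a proper metric (the record below is hypothetical; the paper itself relies on human evaluation):

```python
# Toy comparison of a system output against a human reference.
def token_f1(hypothesis: str, reference: str) -> float:
    """Token-level F1 between two sentences (illustrative only)."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    common = sum(min(hyp.count(t), ref.count(t)) for t in set(hyp))
    if common == 0:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

record = {  # hypothetical record; real rows come from the dataset
    "Reference-HT": "the new policy takes effect next month",
    "Combo-6": "the new policy will take effect next month",
}
score = token_f1(record["Combo-6"], record["Reference-HT"])
```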
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Citation information is available at [Achieving Human Parity on Automatic Chinese to English News Translation](https://www.microsoft.com/en-us/research/publication/achieving-human-parity-on-automatic-chinese-to-english-news-translation/).
### Contributions
Thanks to [@leoxzhao](https://github.com/leoxzhao) for adding this dataset. |
msra_ner | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MSRA NER
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
config_name: msra_ner
splits:
- name: train
num_bytes: 33323074
num_examples: 45001
- name: test
num_bytes: 2642934
num_examples: 3443
download_size: 15156606
dataset_size: 35966008
train-eval-index:
- config: msra_ner
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for MSRA NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/MSRA)
- **Repository:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
mt_eng_vietnamese | ---
annotations_creators:
- found
language_creators:
- found
multilinguality:
- multilingual
language:
- en
- vi
license:
- unknown
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MtEngVietnamese
dataset_info:
- config_name: iwslt2015-vi-en
features:
- name: translation
dtype:
translation:
languages:
- vi
- en
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
- config_name: iwslt2015-en-vi
features:
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
---
# Dataset Card for mt_eng_vietnamese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English, Vietnamese
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'translation': {
'en': 'In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse of the massive scientific effort behind the bold headlines on climate change , with her team -- one of thousands who contributed -- taking a risky flight over the rainforest in pursuit of data on a key molecule .',
'vi': 'Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu sơ lược về những nỗ lực khoa học miệt mài đằng sau những tiêu đề táo bạo về biến đổi khí hậu , cùng với đoàn nghiên cứu của mình -- hàng ngàn người đã cống hiến cho dự án này -- một chuyến bay mạo hiểm qua rừng già để tìm kiếm thông tin về một phân tử then chốt .'
}
}
```
### Data Fields
- `translation`:
  - `en`: text in English
  - `vi`: text in Vietnamese
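Each record nests both sides of the pair under a single `translation` key, keyed by language code. A minimal access sketch (the sentences below are abbreviated from the instance above):

```python
# Each example nests both sides of the pair under a "translation" dict.
example = {
    "translation": {
        "en": "In 4 minutes, atmospheric chemist Rachel Pike provides a glimpse ...",
        "vi": "Trong 4 phút, chuyên gia hoá học khí quyển Rachel Pike giới thiệu ...",
    }
}

source = example["translation"]["en"]
target = example["translation"]["vi"]
pair = (source, target)  # the (source, target) pair a translation model trains on
```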
### Data Splits
|             | train  | validation | test |
|-------------|-------:|-----------:|-----:|
| N. examples | 133318 | 1269       | 1269 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{Luong-Manning:iwslt15,
    Address = {Da Nang, Vietnam},
Author = {Luong, Minh-Thang and Manning, Christopher D.},
Booktitle = {International Workshop on Spoken Language Translation},
Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},
Year = {2015}}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset. |
muchocine | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Muchocine
dataset_info:
features:
- name: review_body
dtype: string
- name: review_summary
dtype: string
- name: star_rating
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
splits:
- name: train
num_bytes: 11871095
num_examples: 3872
download_size: 55556703
dataset_size: 11871095
---
# Dataset Card for Muchocine
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.lsi.us.es/~fermin/index.php/Datasets
### Dataset Summary
The Muchocine reviews dataset contains 3,872 longform movie reviews in Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used for text classification, more precisely sentiment classification, where the task is to predict the `star_rating` for a `review_body` or a `review_summary`.
### Languages
Spanish.
## Dataset Structure
### Data Instances
An example from the train split:
```
{
'review_body': 'Zoom nos cuenta la historia de Jack Shepard, anteriormente conocido como el Capitán Zoom, Superhéroe que perdió sus poderes y que actualmente vive en el olvido. La llegada de una amenaza para la Tierra hará que la agencia del gobierno que se ocupa de estos temas acuda a él para que entrene a un grupo de jóvenes con poderes para combatir esta amenaza.Zoom es una comedia familiar, con todo lo que eso implica, es decir, guión flojo y previsible, bromas no salidas de tono, historia amorosa de por medio y un desenlace tópico. La gracia está en que los protagonistas son jóvenes con superpoderes, una producción cargada de efectos especiales y unos cuantos guiños frikis. La película además se pasa volando ya que dura poco mas de ochenta minutos y cabe destacar su prologo en forma de dibujos de comics explicando la historia de la cual partimos en la película.Tim Allen protagoniza la cinta al lado de un envejecido Chevy Chase, que hace de doctor encargado del proyecto, un papel bastante gracioso y ridículo, pero sin duda el mejor papel es el de Courteney Cox, en la piel de una científica amante de los comics y de lo más friki. Del grupito de los cuatro niños sin duda la mas graciosa es la niña pequeña con súper fuerza y la que provocara la mayor parte de los gags debido a su poder.Una comedia entretenida y poca cosa más para ver una tarde de domingo. ',
 'review_summary': 'Una comedia entretenida y poca cosa más para ver una tarde de domingo ',
 'star_rating': 2
}
```
### Data Fields
- `review_body` - longform review
- `review_summary` - shorter-form review
- `star_rating` - an integer star rating (1-5)
The original source also includes part-of-speech tagging for body and summary fields.
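Note that `star_rating` is stored as a 0-based `ClassLabel` index whose names are the strings `'1'` through `'5'`, so the integer stored in a record is one less than the star count; the instance above stores `2`, i.e. a 3-star review. A minimal decoding sketch (the function name is illustrative):

```python
# star_rating is a ClassLabel index (0-4); its name is the star count ("1"-"5").
STAR_NAMES = ["1", "2", "3", "4", "5"]

def stars(class_index: int) -> int:
    """Map the stored ClassLabel index to the actual 1-5 star count."""
    return int(STAR_NAMES[class_index])

print(stars(2))  # 3
```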
### Data Splits
One split (train) with 3,872 reviews.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was collected from www.muchocine.net and uploaded by Dr. Fermín L. Cruz Mata
of the Universidad de Sevilla.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The text reviews and star ratings came directly from users, so no additional annotation was needed.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dr. Fermín L. Cruz Mata.
### Licensing Information
[More Information Needed]
### Citation Information
See http://www.lsi.us.es/~fermin/index.php/Datasets
### Contributions
Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset. |
multi_booked | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
- eu
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: multibooked
pretty_name: MultiBooked
configs:
- ca
- eu
dataset_info:
- config_name: ca
features:
- name: text
sequence:
- name: wid
dtype: string
- name: sent
dtype: string
- name: para
dtype: string
- name: word
dtype: string
- name: terms
sequence:
- name: tid
dtype: string
- name: lemma
dtype: string
- name: morphofeat
dtype: string
- name: pos
dtype: string
- name: target
sequence: string
- name: opinions
sequence:
- name: oid
dtype: string
- name: opinion_holder_target
sequence: string
- name: opinion_target_target
sequence: string
- name: opinion_expression_polarity
dtype:
class_label:
names:
'0': StrongNegative
'1': Negative
'2': Positive
'3': StrongPositive
- name: opinion_expression_target
sequence: string
splits:
- name: train
num_bytes: 1952731
num_examples: 567
download_size: 4429415
dataset_size: 1952731
- config_name: eu
features:
- name: text
sequence:
- name: wid
dtype: string
- name: sent
dtype: string
- name: para
dtype: string
- name: word
dtype: string
- name: terms
sequence:
- name: tid
dtype: string
- name: lemma
dtype: string
- name: morphofeat
dtype: string
- name: pos
dtype: string
- name: target
sequence: string
- name: opinions
sequence:
- name: oid
dtype: string
- name: opinion_holder_target
sequence: string
- name: opinion_target_target
sequence: string
- name: opinion_expression_polarity
dtype:
class_label:
names:
'0': StrongNegative
'1': Negative
'2': Positive
'3': StrongPositive
- name: opinion_expression_target
sequence: string
splits:
- name: train
num_bytes: 1175816
num_examples: 343
download_size: 4429415
dataset_size: 1175816
---
# Dataset Card for MultiBooked
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://hdl.handle.net/10230/33928
- **Repository:** https://github.com/jerbarnes/multibooked
- **Paper:** https://arxiv.org/abs/1803.08614
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification.
The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in KAF/NAF format, an
XML-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and
word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two
annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the
guidelines set out in the OpeNER project.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Each sub-dataset is monolingual in the languages:
- ca: Catalan
- eu: Basque
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text`: layer of the original text.
- `wid`: list of word IDs for each word within the example.
- `sent`: list of sentence IDs for each sentence within the example.
- `para`: list of paragraph IDs for each paragraph within the example.
- `word`: list of words.
- `terms`: layer of the terms resulting from the analysis of the original text (lemmatization, morphological,
PoS tagging)
- `tid`: list of term IDs for each term within the example.
- `lemma`: list of lemmas.
- `morphofeat`: list of morphological features.
- `pos`: list of PoS tags.
- `target`: list of sublists of the corresponding word IDs (normally, the sublists contain only one element,
in a one-to-one correspondence between words and terms).
- `opinions`: layer of the opinions in the text.
- `oid`: list of opinion IDs
- `opinion_holder_target`: list of sublists of the corresponding term IDs that span the opinion holder.
- `opinion_target_target`: list of sublists of the corresponding term IDs that span the opinion target.
- `opinion_expression_polarity`: list of the opinion expression polarities. The polarity can take one of the values:
`StrongNegative`, `Negative`, `Positive`, or `StrongPositive`.
- `opinion_expression_target`: list of sublists of the corresponding term IDs that span the opinion expression.
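Because the opinion layers reference term IDs, which in turn reference word IDs, recovering the surface text of an annotated span takes two lookups. A minimal sketch of that resolution over the fields listed above (the tiny `sample` dict is hand-made for illustration, not real corpus data):

```python
def opinion_target_text(sample):
    """Resolve each opinion target's term IDs back to surface words.

    Follows the field layout described above: words carry IDs in `text`,
    terms point to word IDs via `target`, and opinions point to term IDs
    via `opinion_target_target`.
    """
    # word ID -> surface form
    word_by_wid = dict(zip(sample["text"]["wid"], sample["text"]["word"]))
    # term ID -> list of word IDs it spans
    wids_by_tid = dict(zip(sample["terms"]["tid"], sample["terms"]["target"]))

    targets = []
    for term_ids in sample["opinions"]["opinion_target_target"]:
        words = [word_by_wid[wid]
                 for tid in term_ids
                 for wid in wids_by_tid[tid]]
        targets.append(" ".join(words))
    return targets

# Hand-made sample in the documented layout (not real corpus data).
sample = {
    "text": {"wid": ["w1", "w2", "w3"], "word": ["the", "room", "smelled"]},
    "terms": {"tid": ["t1", "t2", "t3"], "target": [["w1"], ["w2"], ["w3"]]},
    "opinions": {"opinion_target_target": [["t1", "t2"]]},
}
print(opinion_target_text(sample))  # ['the room']
```

The same two-step lookup applies to `opinion_holder_target` and `opinion_expression_target`.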
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY 3.0](https://creativecommons.org/licenses/by/3.0/) license.
### Citation Information
```
@inproceedings{Barnes2018multibooked,
author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni},
title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)},
year = {2018},
month = {May},
date = {7-12},
address = {Miyazaki, Japan},
publisher = {European Language Resources Association (ELRA)},
language = {english}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
- topic-classification
pretty_name: MultiEURLEX
dataset_info:
- config_name: en
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 389250183
num_examples: 55000
- name: test
num_bytes: 58966963
num_examples: 5000
- name: validation
num_bytes: 41516165
num_examples: 5000
download_size: 2770050147
dataset_size: 489733311
- config_name: da
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 395774777
num_examples: 55000
- name: test
num_bytes: 60343696
num_examples: 5000
- name: validation
num_bytes: 42366390
num_examples: 5000
download_size: 2770050147
dataset_size: 498484863
- config_name: de
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 425489905
num_examples: 55000
- name: test
num_bytes: 65739074
num_examples: 5000
- name: validation
num_bytes: 46079574
num_examples: 5000
download_size: 2770050147
dataset_size: 537308553
- config_name: nl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 430232783
num_examples: 55000
- name: test
num_bytes: 64728034
num_examples: 5000
- name: validation
num_bytes: 45452550
num_examples: 5000
download_size: 2770050147
dataset_size: 540413367
- config_name: sv
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 329071297
num_examples: 42490
- name: test
num_bytes: 60602026
num_examples: 5000
- name: validation
num_bytes: 42766067
num_examples: 5000
download_size: 2770050147
dataset_size: 432439390
- config_name: bg
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 273160256
num_examples: 15986
- name: test
num_bytes: 109874769
num_examples: 5000
- name: validation
num_bytes: 76892281
num_examples: 5000
download_size: 2770050147
dataset_size: 459927306
- config_name: cs
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 189826410
num_examples: 23187
- name: test
num_bytes: 60702814
num_examples: 5000
- name: validation
num_bytes: 42764243
num_examples: 5000
download_size: 2770050147
dataset_size: 293293467
- config_name: hr
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 80808173
num_examples: 7944
- name: test
num_bytes: 56790830
num_examples: 5000
- name: validation
num_bytes: 23881832
num_examples: 2500
download_size: 2770050147
dataset_size: 161480835
- config_name: pl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 202211478
num_examples: 23197
- name: test
num_bytes: 64654979
num_examples: 5000
- name: validation
num_bytes: 45545517
num_examples: 5000
download_size: 2770050147
dataset_size: 312411974
- config_name: sk
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 188126769
num_examples: 22971
- name: test
num_bytes: 60922686
num_examples: 5000
- name: validation
num_bytes: 42786793
num_examples: 5000
download_size: 2770050147
dataset_size: 291836248
- config_name: sl
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 170800933
num_examples: 23184
- name: test
num_bytes: 54552441
num_examples: 5000
- name: validation
num_bytes: 38286422
num_examples: 5000
download_size: 2770050147
dataset_size: 263639796
- config_name: es
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 433955383
num_examples: 52785
- name: test
num_bytes: 66885004
num_examples: 5000
- name: validation
num_bytes: 47178821
num_examples: 5000
download_size: 2770050147
dataset_size: 548019208
- config_name: fr
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 442358905
num_examples: 55000
- name: test
num_bytes: 68520127
num_examples: 5000
- name: validation
num_bytes: 48408938
num_examples: 5000
download_size: 2770050147
dataset_size: 559287970
- config_name: it
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 429495813
num_examples: 55000
- name: test
num_bytes: 64731770
num_examples: 5000
- name: validation
num_bytes: 45886537
num_examples: 5000
download_size: 2770050147
dataset_size: 540114120
- config_name: pt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 419281927
num_examples: 52370
- name: test
num_bytes: 64771247
num_examples: 5000
- name: validation
num_bytes: 45897231
num_examples: 5000
download_size: 2770050147
dataset_size: 529950405
- config_name: ro
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 164966676
num_examples: 15921
- name: test
num_bytes: 67248472
num_examples: 5000
- name: validation
num_bytes: 46968070
num_examples: 5000
download_size: 2770050147
dataset_size: 279183218
- config_name: et
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 173878703
num_examples: 23126
- name: test
num_bytes: 56535287
num_examples: 5000
- name: validation
num_bytes: 39580866
num_examples: 5000
download_size: 2770050147
dataset_size: 269994856
- config_name: fi
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 336145949
num_examples: 42497
- name: test
num_bytes: 63280920
num_examples: 5000
- name: validation
num_bytes: 44500040
num_examples: 5000
download_size: 2770050147
dataset_size: 443926909
- config_name: hu
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 208805862
num_examples: 22664
- name: test
num_bytes: 68990666
num_examples: 5000
- name: validation
num_bytes: 48101023
num_examples: 5000
download_size: 2770050147
dataset_size: 325897551
- config_name: lt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 185211691
num_examples: 23188
- name: test
num_bytes: 59484711
num_examples: 5000
- name: validation
num_bytes: 41841024
num_examples: 5000
download_size: 2770050147
dataset_size: 286537426
- config_name: lv
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 186396252
num_examples: 23208
- name: test
num_bytes: 59814093
num_examples: 5000
- name: validation
num_bytes: 42002727
num_examples: 5000
download_size: 2770050147
dataset_size: 288213072
- config_name: el
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 768224743
num_examples: 55000
- name: test
num_bytes: 117209312
num_examples: 5000
- name: validation
num_bytes: 81923366
num_examples: 5000
download_size: 2770050147
dataset_size: 967357421
- config_name: mt
features:
- name: celex_id
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 179866781
num_examples: 17521
- name: test
num_bytes: 65831230
num_examples: 5000
- name: validation
num_bytes: 46737914
num_examples: 5000
download_size: 2770050147
dataset_size: 292435925
- config_name: all_languages
features:
- name: celex_id
dtype: string
- name: text
dtype:
translation:
languages:
- en
- da
- de
- nl
- sv
- bg
- cs
- hr
- pl
- sk
- sl
- es
- fr
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
- name: labels
sequence:
class_label:
names:
'0': '100149'
'1': '100160'
'2': '100148'
'3': '100147'
'4': '100152'
'5': '100143'
'6': '100156'
'7': '100158'
'8': '100154'
'9': '100153'
'10': '100142'
'11': '100145'
'12': '100150'
'13': '100162'
'14': '100159'
'15': '100144'
'16': '100151'
'17': '100157'
'18': '100161'
'19': '100146'
'20': '100155'
splits:
- name: train
num_bytes: 6971500859
num_examples: 55000
- name: test
num_bytes: 1536038431
num_examples: 5000
- name: validation
num_bytes: 1062290624
num_examples: 5000
download_size: 2770050147
dataset_size: 9569829914
---
# Dataset Card for "MultiEURLEX"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nlpaueb/MultiEURLEX/
- **Repository:** https://github.com/nlpaueb/MultiEURLEX/
- **Paper:** https://arxiv.org/abs/2109.00904
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Documents**
MultiEURLEX comprises 65k EU laws in 23 official EU languages. Each EU law has been annotated with EUROVOC concepts (labels) by the Publications Office of the EU. Each EUROVOC label ID is associated with a *label descriptor*, e.g., [60, agri-foodstuffs], [6006, plant product], [1115, fruit]. The descriptors are also available in the 23 languages. Chalkidis et al. (2019) published a monolingual (English) version of this dataset, called EUR-LEX, comprising 57k EU laws with the originally assigned gold labels.
**Multi-granular Labeling**
EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
We created three alternative sets of labels per document, by replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively. Thus, we provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents will be mislabeled, if we discard level 3.
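The relabeling described above (replacing each assigned concept by its ancestor at a chosen level) can be sketched by walking up the concept hierarchy. The `parent` map below is a hypothetical three-node fragment built from the descriptor example in the summary; the real EUROVOC hierarchy must be obtained separately:

```python
def ancestor_at_level(concept, parent, level):
    """Return the ancestor of `concept` at the requested hierarchy level.

    `parent` maps each concept ID to its parent ID (None for level-1 roots).
    """
    # Build the chain from the concept up to its level-1 root.
    chain = [concept]
    while parent.get(chain[-1]) is not None:
        chain.append(parent[chain[-1]])
    chain.reverse()  # chain[0] is the level-1 ancestor
    return chain[min(level, len(chain)) - 1]

def relabel(labels, parent, level=1):
    """Replace each assigned concept by its ancestor at `level`,
    deduplicating concepts that share an ancestor."""
    return sorted({ancestor_at_level(c, parent, level) for c in labels})

# Hypothetical fragment: 60 (level 1) -> 6006 (level 2) -> 1115 (level 3),
# following the descriptor example [60, agri-foodstuffs], [6006, plant
# product], [1115, fruit] from the summary.
parent = {"1115": "6006", "6006": "60", "60": None}
print(relabel(["1115"], parent, level=1))  # ['60']
print(relabel(["1115"], parent, level=2))  # ['6006']
```

Deduplication matters here: several gold concepts can collapse to the same coarse ancestor, so the level-1 label set is smaller than the original one.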
**Data Split and Concept Drift**
MultiEURLEX is *chronologically* split in training (55k, 1958-2010), development (5k, 2010-2012), test (5k, 2012-2016) subsets, using the English documents. The test subset contains the same 5k documents in all 23 languages. The development subset also contains the same 5k documents in 23 languages, except Croatian. Croatia is the most recent EU member (2013); older laws are gradually translated.
For the official languages of the seven oldest member countries, the same 55k training documents are available; for the other languages, only a subset of the 55k training documents is available.
Compared to EUR-LEX (Chalkidis et al., 2019), MultiEURLEX is not only larger (8k more documents) and multilingual; it is also more challenging, as the chronological split leads to temporal real-world *concept drift* across the training, development, test subsets, i.e., differences in label distribution and phrasing, representing a realistic *temporal generalization* problem (Huang et al., 2019; Lazaridou et al., 2021). Recently, Søgaard et al. (2021) showed this setup is more realistic, as it does not over-estimate real performance, contrary to random splits (Gorman and Bedrick, 2019).
### Supported Tasks and Leaderboards
Similarly to EUR-LEX (Chalkidis et al., 2019), MultiEURLEX can be used for legal topic classification, a multi-label classification task where legal documents need to be assigned concepts (in our case, from EUROVOC) reflecting their topics. Unlike EUR-LEX, however, MultiEURLEX supports labels from three different granularities (EUROVOC levels). More importantly, apart from monolingual (*one-to-one*) experiments, it can be used to study cross-lingual transfer scenarios, including *one-to-many* (systems trained in one language and used in other languages with no training data), and *many-to-one* or *many-to-many* (systems jointly trained in multiple languages and used in one or more other languages).
The dataset is not yet part of an established benchmark.
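For multi-label topic classification, the label lists are typically converted to multi-hot target vectors before training. A minimal sketch, assuming the 21-class level-1 label set declared in this card's YAML metadata (the helper name is ours):

```python
NUM_CLASSES = 21  # level-1 EUROVOC classes listed in this card's metadata

def binarize(labels, num_classes=NUM_CLASSES):
    """Turn a list of class indices into a multi-hot target vector,
    the usual input format for a multi-label classification loss."""
    vec = [0.0] * num_classes
    for i in labels:
        vec[i] = 1.0
    return vec

targets = binarize([1, 13])
print(sum(targets))  # 2.0
```

A model then predicts one independent probability per class, e.g. with a sigmoid output and binary cross-entropy, rather than a single softmax over classes.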
### Languages
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, unless a new member's language is already included. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages except Irish, for resource-related reasons (read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, spoken by approximately 40 million people, but these are not official EU languages, and EU laws are not translated into them.
## Dataset Structure
### Data Instances
**Multilingual use of the dataset**
When using the dataset in a multilingual setting, select the 'all_languages' configuration:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
```
```json
{
"celex_id": "31979D0509",
"text": {"en": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close 
cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"es": "DECISIÓN DEL CONSEJO de 24 de mayo de 1979 sobre ayuda financiera de la Comunidad para la erradicación de la peste porcina africana en España (79/509/CEE)\nEL CONSEJO DE LAS COMUNIDADES EUROPEAS\nVeniendo en cuenta el Tratado constitutivo de la Comunidad Económica Europea y, en particular, Su artículo 43,\n Vista la propuesta de la Comisión (1),\n Visto el dictamen del Parlamento Europeo (2),\nConsiderando que la Comunidad debe tomar todas las medidas adecuadas para protegerse contra la aparición de la peste porcina africana en su territorio;\nConsiderando a tal fin que la Comunidad ha emprendido y sigue llevando a cabo acciones destinadas a contener los brotes de este tipo de enfermedades lejos de sus fronteras, ayudando a los países afectados a reforzar sus medidas preventivas; que a tal efecto ya se han concedido a España subvenciones comunitarias;\nQue estas medidas han contribuido sin duda alguna a la protección de la ganadería comunitaria, especialmente mediante la creación y mantenimiento de una zona tampón al norte del río Ebro;\nConsiderando, no obstante, , a juicio de las propias autoridades españolas, las medidas implementadas hasta ahora deben reforzarse si se quiere alcanzar el objetivo fundamental de erradicar la enfermedad en todo el país;\nConsiderando que las autoridades españolas han pedido a la Comunidad que contribuya a los gastos necesarios para la ejecución eficaz de un programa de erradicación total;\nConsiderando que conviene dar una respuesta favorable a esta solicitud concediendo una ayuda a España, habida cuenta del compromiso asumido por dicho país de proteger a la Comunidad contra la peste porcina africana y de eliminar completamente esta enfermedad al final de un plan de erradicación de cinco años;\nMientras que este plan de erradicación debe incluir e determinadas medidas que garanticen la eficacia de las acciones emprendidas, debiendo ser posible adaptar estas medidas a la evolución de la situación mediante un procedimiento 
que establezca una estrecha cooperación entre los Estados miembros y la Comisión;\nConsiderando que es necesario mantener el Los Estados miembros informados periódicamente sobre el progreso de las acciones emprendidas.",
"de": "...",
"bg": "..."
},
"labels": [
1,
13,
47
]
}
```
**Monolingual use of the dataset**
When using the dataset in a monolingual setting, select the ISO language code of one of the 23 supported languages. For example:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'en')
```
```json
{
"celex_id": "31979D0509",
"text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation 
between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"labels": [
1,
13,
47
]
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (dict[**str**]) A dictionary with the 23 languages as keys and the full content of each document as values.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (**str**) The full content of each document in the selected language.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
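Because the two uses differ only in the type of the `text` field (a plain string versus a language-keyed dict), a small accessor lets downstream code handle either one. This is a minimal sketch; the sample dicts below are illustrative stand-ins, not actual dataset rows:

```python
def get_text(sample, lang='en'):
    """Return the document text for `lang`, whether `text` is a plain string
    (monolingual use) or a dict keyed by language code (multilingual use)."""
    text = sample['text']
    if isinstance(text, dict):
        return text.get(lang)
    return text

# Monolingual row: `text` is already a string
assert get_text({'text': 'COUNCIL DECISION ...'}) == 'COUNCIL DECISION ...'
# Multilingual row: `text` maps language codes to document contents
assert get_text({'text': {'en': 'COUNCIL DECISION ...',
                          'de': 'BESCHLUSS ...'}}, 'de') == 'BESCHLUSS ...'
```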
If you want to use the descriptors of the EUROVOC concepts, similar to [Chalkidis et al. (2020)](https://aclanthology.org/2020.emnlp-main.607/), please download the relevant JSON file [here](https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json).
Then you may load it and use it:
```python
import json
from datasets import load_dataset

# Load the English part of the dataset
dataset = load_dataset('multi_eurlex', 'en', split='train')

# Load (label_id, descriptor) mapping
with open('./eurovoc_descriptors.json') as json_file:
    eurovoc_concepts = json.load(json_file)

# Get feature map info
classlabel = dataset.features["labels"].feature

# Retrieve IDs and descriptors from dataset
for sample in dataset:
    print(f'DOCUMENT: {sample["celex_id"]}')
    # DOCUMENT: 32006D0213
    for label_id in sample['labels']:
        print(f'LABEL: id:{label_id}, eurovoc_id: {classlabel.int2str(label_id)}, '
              f'eurovoc_desc: {eurovoc_concepts[classlabel.int2str(label_id)]}')
        # LABEL: id: 1, eurovoc_id: '100160', eurovoc_desc: 'industry'
```
### Data Splits
<table>
<tr><td> Language </td> <td> ISO code </td> <td> Member Countries where official </td> <td> EU Speakers [1] </td> <td> Number of Documents [2] </td> </tr>
<tr><td> English </td> <td> <b>en</b> </td> <td> United Kingdom (1973-2020), Ireland (1973), Malta (2004) </td> <td> 13/51% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> Germany (1958), Belgium (1958), Luxembourg (1958) </td> <td> 16/32% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> France (1958), Belgium(1958), Luxembourg (1958) </td> <td> 12/26% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Italian </td> <td> <b>it</b> </td> <td> Italy (1958) </td> <td> 13/16% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Spanish </td> <td> <b>es</b> </td> <td> Spain (1986) </td> <td> 8/15% </td> <td> 52,785 / 5,000 / 5,000 </td> </tr>
<tr><td> Polish </td> <td> <b>pl</b> </td> <td> Poland (2004) </td> <td> 8/9% </td> <td> 23,197 / 5,000 / 5,000 </td> </tr>
<tr><td> Romanian </td> <td> <b>ro</b> </td> <td> Romania (2007) </td> <td> 5/5% </td> <td> 15,921 / 5,000 / 5,000 </td> </tr>
<tr><td> Dutch </td> <td> <b>nl</b> </td> <td> Netherlands (1958), Belgium (1958) </td> <td> 4/5% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> Greece (1981), Cyprus (2008) </td> <td> 3/4% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> Hungary (2004) </td> <td> 3/3% </td> <td> 22,664 / 5,000 / 5,000 </td> </tr>
<tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> Portugal (1986) </td> <td> 2/3% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Czech </td> <td> <b>cs</b> </td> <td> Czech Republic (2004) </td> <td> 2/3% </td> <td> 23,187 / 5,000 / 5,000 </td> </tr>
<tr><td> Swedish </td> <td> <b>sv</b> </td> <td> Sweden (1995) </td> <td> 2/3% </td> <td> 42,490 / 5,000 / 5,000 </td> </tr>
<tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> Bulgaria (2007) </td> <td> 2/2% </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Danish </td> <td> <b>da</b> </td> <td> Denmark (1973) </td> <td> 1/1% </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Finnish </td> <td> <b>fi</b> </td> <td> Finland (1995) </td> <td> 1/1% </td> <td> 42,497 / 5,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> Slovakia (2004) </td> <td> 1/1% </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> Lithuania (2004) </td> <td> 1/1% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Croatian </td> <td> <b>hr</b> </td> <td> Croatia (2013) </td> <td> 1/1% </td> <td> 7,944 / 2,500 / 5,000 </td> </tr>
<tr><td> Slovene </td> <td> <b>sl</b> </td> <td> Slovenia (2004) </td> <td> <1/<1% </td> <td> 23,184 / 5,000 / 5,000 </td> </tr>
<tr><td> Estonian </td> <td> <b>et</b> </td> <td> Estonia (2004) </td> <td> <1/<1% </td> <td> 23,126 / 5,000 / 5,000 </td> </tr>
<tr><td> Latvian </td> <td> <b>lv</b> </td> <td> Latvia (2004) </td> <td> <1/<1% </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Maltese </td> <td> <b>mt</b> </td> <td> Malta (2004) </td> <td> <1/<1% </td> <td> 17,521 / 5,000 / 5,000 </td> </tr>
</table>
[1] Native and Total EU speakers percentage (%) \
[2] Training / Development / Test Splits
## Dataset Creation
### Curation Rationale
The dataset was curated by Chalkidis et al. (2021).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the EUR-LEX portal (https://eur-lex.europa.eu) in unprocessed formats (HTML, XML, RDF). The documents were downloaded from the EUR-LEX portal in HTML. The relevant EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
We stripped HTML mark-up to provide the documents in plain text format.
We inferred the labels for EUROVOC levels 1--3 by backtracking the EUROVOC hierarchy branches, from the originally assigned labels to their ancestors in levels 1--3, respectively.
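The backtracking step amounts to walking each assigned concept up the parent links of the hierarchy until it reaches the target level. A minimal sketch; the concept IDs and parent table below are made up for illustration, the real ones come from the EUROVOC hierarchy:

```python
# Hypothetical fragment of a concept hierarchy: child -> parent links
# and the level each concept sits at. Real EUROVOC IDs differ.
PARENT = {'c_level4': 'c_level3', 'c_level3': 'c_level2', 'c_level2': 'c_level1'}
LEVEL = {'c_level4': 4, 'c_level3': 3, 'c_level2': 2, 'c_level1': 1}

def ancestor_at_level(concept, target_level):
    """Walk up the parent links until the concept sits at `target_level`."""
    while LEVEL[concept] > target_level:
        concept = PARENT[concept]
    return concept

# A level-4 gold concept maps to its level-2 ancestor
assert ancestor_at_level('c_level4', 2) == 'c_level2'
# A concept already at the target level is returned unchanged
assert ancestor_at_level('c_level2', 2) == 'c_level2'
```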
#### Who are the source language producers?
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, unless the new members' languages are already included. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages except Irish, for resource-related reasons (read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, spoken by approximately 40 million people, but these additional languages are not official at the EU level, and EU laws are not translated into them.
### Annotations
#### Annotation process
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
We augmented the annotation with three alternative sets of labels per document, replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively.
Thus, we provide four sets of gold labels per document: one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents would be mislabeled if we discarded level 3.
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset contains publicly available EU laws that do not include personal or sensitive information, with the exception of trivial information presented by consent, e.g., the names of the current presidents of the European Parliament and the European Council, and of other administrative bodies.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). Other languages are also spoken in EU countries, but EU laws are not translated into them (https://europa.eu/european-union/about-eu/eu-languages_en).
## Additional Information
### Dataset Curators
Chalkidis et al. (2021)
### Licensing Information
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos.*
*MultiEURLEX - A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer.*
*Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Punta Cana, Dominican Republic. 2021*
```
@InProceedings{chalkidis-etal-2021-multieurlex,
author = {Chalkidis, Ilias
and Fergadiotis, Manos
and Androutsopoulos, Ion},
title = {MultiEURLEX -- A multi-lingual and multi-label legal document
classification dataset for zero-shot cross-lingual transfer},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods
in Natural Language Processing},
year = {2021},
publisher = {Association for Computational Linguistics},
location = {Punta Cana, Dominican Republic},
url = {https://arxiv.org/abs/2109.00904}
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
multi_news | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 558392265
num_examples: 44972
- name: validation
num_bytes: 68272432
num_examples: 5622
- name: test
num_bytes: 70032124
num_examples: 5622
download_size: 756785627
dataset_size: 696696821
---
# Dataset Card for Multi-News
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/Alex-Fabbri/Multi-News](https://github.com/Alex-Fabbri/Multi-News)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 256.96 MB
- **Size of the generated dataset:** 700.18 MB
- **Total amount of disk used:** 957.14 MB
### Dataset Summary
Multi-News consists of news articles and human-written summaries
of these articles from the site newser.com.
Each summary is professionally written by editors and
includes links to the original articles cited.
There are two features:
- document: text of the news articles, separated by the special token "|||||".
- summary: the news summary.
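Because the source articles are concatenated into a single `document` string, recovering the individual articles is a split on the separator token. A minimal sketch; the sample row below is illustrative, not an actual dataset example:

```python
SEPARATOR = '|||||'

def split_articles(document):
    """Split the concatenated `document` field back into individual articles."""
    return [part.strip() for part in document.split(SEPARATOR) if part.strip()]

# Illustrative row standing in for a real dataset sample
sample = {'document': 'first article text ||||| second article text',
          'summary': 'joint summary'}
assert split_articles(sample['document']) == ['first article text',
                                              'second article text']
```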
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 256.96 MB
- **Size of the generated dataset:** 700.18 MB
- **Total amount of disk used:** 957.14 MB
An example of 'validation' looks as follows.
```
{
"document": "some line val \n another line",
"summary": "target val line"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|44972| 5622|5622|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
```
This Dataset Usage Agreement ("Agreement") is a legal agreement with LILY LAB for the Dataset made available to the individual or entity ("Researcher") exercising rights under this Agreement. "Dataset" includes all text, data, information, source code, and any related materials, documentation, files, media, updates or revisions.
The Dataset is intended for non-commercial research and educational purposes only, and is made available free of charge without extending any license or other intellectual property rights. By downloading or using the Dataset, the Researcher acknowledges that they agree to the terms in this Agreement, and represent and warrant that they have authority to do so on behalf of any entity exercising rights under this Agreement. The Researcher accepts and agrees to be bound by the terms and conditions of this Agreement. If the Researcher does not agree to this Agreement, they may not download or use the Dataset.
By sharing content with m, such as by submitting content to this site or by corresponding with LILY LAB contributors, the Researcher grants LILY LAB the right to use, reproduce, display, perform, adapt, modify, distribute, have distributed, and promote the content in any form, anywhere and for any purpose, such as for evaluating and comparing summarization systems. Nothing in this Agreement shall obligate LILY LAB to provide any support for the Dataset. Any feedback, suggestions, ideas, comments, improvements given by the Researcher related to the Dataset is voluntarily given, and may be used by LILY LAB without obligation or restriction of any kind.
The Researcher accepts full responsibility for their use of the Dataset and shall defend indemnify, and hold harmless m, including their employees, trustees, officers, and agents, against any and all claims arising from the Researcher's use of the Dataset. The Researcher agrees to comply with all laws and regulations as they relate to access to and use of the Dataset and Service including U.S. export jurisdiction and other U.S. and international regulations.
THE DATASET IS PROVIDED "AS IS." LILY LAB DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WITHOUT LIMITATION OF THE ABOVE, LILY LAB DISCLAIMS ANY WARRANTY THAT DATASET IS BUG OR ERROR-FREE, AND GRANTS NO WARRANTY REGARDING ITS USE OR THE RESULTS THEREFROM INCLUDING, WITHOUT LIMITATION, ITS CORRECTNESS, ACCURACY, OR RELIABILITY. THE DATASET IS NOT WARRANTIED TO FULFILL ANY PARTICULAR PURPOSES OR NEEDS.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT SHALL LILY LAB BE LIABLE FOR ANY LOSS, DAMAGE OR INJURY, DIRECT AND INDIRECT, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER FOR BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE) OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, INCLUDING BUT NOT LIMITED TO LOSS OF PROFITS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY LIMITED REMEDY.
This Agreement is effective until terminated. LILY LAB reserves the right to terminate the Researcher's access to the Dataset at any time. If the Researcher breaches this Agreement, the Researcher's rights to use the Dataset shall terminate automatically. The Researcher will immediately cease all use and distribution of the Dataset and destroy any copies or portions of the Dataset in their possession.
This Agreement is governed by the laws of the SOME_PLACE, without regard to conflict of law principles. All terms and provisions of this Agreement shall, if possible, be construed in a manner which makes them valid, but in the event any term or provision of this Agreement is found by a court of competent jurisdiction to be illegal or unenforceable, the validity or enforceability of the remainder of this Agreement shall not be affected.
This Agreement is the complete and exclusive agreement between the parties with respect to its subject matter and supersedes all prior or contemporaneous oral or written agreements or understandings relating to the subject matter.
```
### Citation Information
```
@misc{alex2019multinews,
title={Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model},
author={Alexander R. Fabbri and Irene Li and Tianwei She and Suyi Li and Dragomir R. Radev},
year={2019},
eprint={1906.01749},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
multi_nli | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-3.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: multinli
pretty_name: Multi-Genre Natural Language Inference
license_details: Open Portion of the American National Corpus
dataset_info:
features:
- name: promptID
dtype: int32
- name: pairID
dtype: string
- name: premise
dtype: string
- name: premise_binary_parse
dtype: string
- name: premise_parse
dtype: string
- name: hypothesis
dtype: string
- name: hypothesis_binary_parse
dtype: string
- name: hypothesis_parse
dtype: string
- name: genre
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 410211586
num_examples: 392702
- name: validation_matched
num_bytes: 10063939
num_examples: 9815
- name: validation_mismatched
num_bytes: 10610221
num_examples: 9832
download_size: 226850426
dataset_size: 430885746
---
# Dataset Card for Multi-Genre Natural Language Inference (MultiNLI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
### Dataset Summary
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
Example of a data instance:
```
{
"promptID": 31193,
"pairID": "31193n",
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"premise_binary_parse": "( ( Conceptually ( cream skimming ) ) ( ( has ( ( ( two ( basic dimensions ) ) - ) ( ( product and ) geography ) ) ) . ) )",
"premise_parse": "(ROOT (S (NP (JJ Conceptually) (NN cream) (NN skimming)) (VP (VBZ has) (NP (NP (CD two) (JJ basic) (NNS dimensions)) (: -) (NP (NN product) (CC and) (NN geography)))) (. .)))",
"hypothesis": "Product and geography are what make cream skimming work. ",
"hypothesis_binary_parse": "( ( ( Product and ) geography ) ( ( are ( what ( make ( cream ( skimming work ) ) ) ) ) . ) )",
"hypothesis_parse": "(ROOT (S (NP (NN Product) (CC and) (NN geography)) (VP (VBP are) (SBAR (WHNP (WP what)) (S (VP (VBP make) (NP (NP (NN cream)) (VP (VBG skimming) (NP (NN work)))))))) (. .)))",
"genre": "government",
"label": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `promptID`: Unique identifier for prompt
- `pairID`: Unique identifier for pair
- `premise`, `hypothesis`: the two sentences of the pair, each a `string` feature
- `premise_parse`, `hypothesis_parse`: each sentence as parsed by the Stanford PCFG Parser 3.5.2
- `premise_binary_parse`, `hypothesis_binary_parse`: the parses in unlabeled binary-branching format
- `genre`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). Dataset instances which don't have any gold label are marked with -1 label. Make sure you filter them before starting the training using `datasets.Dataset.filter`.
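A minimal sketch of that filtering step; the commented `load_dataset` call shows the usual `datasets` usage, and the toy rows below stand in for real examples:

```python
# Drop pairs without a gold label (marked -1) before training.
# With the `datasets` library this is typically:
#   from datasets import load_dataset
#   mnli = load_dataset('multi_nli', split='train')
#   mnli = mnli.filter(lambda example: example['label'] != -1)
def keep_labeled(example):
    return example['label'] != -1

# Toy rows standing in for dataset examples
rows = [{'label': 1}, {'label': -1}, {'label': 0}]
assert [r for r in rows if keep_labeled(r)] == [{'label': 1}, {'label': 0}]
```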
### Data Splits
|train |validation_matched|validation_mismatched|
|-----:|-----------------:|--------------------:|
|392702| 9815| 9832|
## Dataset Creation
### Curation Rationale
They constructed MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.
### Source Data
#### Initial Data Collection and Normalization
They created each sentence pair by selecting a premise sentence from a preexisting text source and asked a human annotator to compose a novel sentence to pair with it as a hypothesis.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
### Citation Information
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
multi_nli_mismatch | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-3.0
- cc-by-sa-3.0
- mit
- other
license_details: Open Portion of the American National Corpus
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: multinli
pretty_name: Multi-Genre Natural Language Inference
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 75601459
num_examples: 392702
- name: validation
num_bytes: 2009444
num_examples: 10000
download_size: 226850426
dataset_size: 77610903
---
# Dataset Card for Multi-Genre Natural Language Inference (Mismatched only)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 77.62 MB
- **Total amount of disk used:** 304.46 MB
### Dataset Summary
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 77.62 MB
- **Total amount of disk used:** 304.46 MB
An example of 'train' looks as follows.
```
{
"hypothesis": "independence",
"label": "contradiction",
"premise": "correlation"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a `string` feature.
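Since `label` is stored as a `string` in this configuration, a typical preprocessing step maps it to an integer class id before training a classifier. A minimal sketch; the three-way label set and its ordering follow the common NLI convention and are an assumption here:

```python
# Map MultiNLI string labels to integer class ids.
# The label ordering below is an assumed convention, not part of the dataset.
LABEL2ID = {"entailment": 0, "neutral": 1, "contradiction": 2}

def encode_example(example):
    """Turn a raw example into a ((premise, hypothesis), class_id) pair."""
    return (example["premise"], example["hypothesis"]), LABEL2ID[example["label"]]

pair, class_id = encode_example(
    {"premise": "correlation", "hypothesis": "independence", "label": "contradiction"}
)
print(pair, class_id)  # ('correlation', 'independence') 2
```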
### Data Splits
| name |train |validation|
|----------|-----:|---------:|
|plain_text|392702| 10000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
multi_para_crawl | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- ca
- cs
- da
- de
- el
- es
- et
- eu
- fi
- fr
- ga
- gl
- ha
- hr
- hu
- ig
- is
- it
- km
- lt
- lv
- mt
- my
- nb
- ne
- nl
- nn
- pl
- ps
- pt
- ro
- ru
- si
- sk
- sl
- so
- sv
- sw
- tl
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MultiParaCrawl
dataset_info:
- config_name: cs-is
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- is
splits:
- name: train
num_bytes: 148967967
num_examples: 691006
download_size: 61609317
dataset_size: 148967967
- config_name: ga-sk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ga
- sk
splits:
- name: train
num_bytes: 92802332
num_examples: 390327
download_size: 39574554
dataset_size: 92802332
- config_name: lv-mt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- lv
- mt
splits:
- name: train
num_bytes: 116533998
num_examples: 464160
download_size: 49770574
dataset_size: 116533998
- config_name: nb-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nb
- ru
splits:
- name: train
num_bytes: 116899303
num_examples: 399050
download_size: 40932849
dataset_size: 116899303
- config_name: de-tl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- tl
splits:
- name: train
num_bytes: 30880849
num_examples: 98156
download_size: 12116471
dataset_size: 30880849
---
# Dataset Card for MultiParaCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/MultiParaCrawl.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't among the predefined configs, specify the two language codes as configuration arguments.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/MultiParaCrawl.php
E.g.
`dataset = load_dataset("multi_para_crawl", lang1="en", lang2="nl")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
multi_re_qa | ---
annotations_creators:
- expert-generated
- found
language_creators:
- expert-generated
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
source_datasets:
- extended|other-BioASQ
- extended|other-DuoRC
- extended|other-HotpotQA
- extended|other-Natural-Questions
- extended|other-Relation-Extraction
- extended|other-SQuAD
- extended|other-SearchQA
- extended|other-TextbookQA
- extended|other-TriviaQA
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: multireqa
pretty_name: MultiReQA
configs:
- BioASQ
- DuoRC
- HotpotQA
- NaturalQuestions
- RelationExtraction
- SQuAD
- SearchQA
- TextbookQA
- TriviaQA
dataset_info:
- config_name: SearchQA
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: train
num_bytes: 183902877
num_examples: 3163801
- name: validation
num_bytes: 26439174
num_examples: 454836
download_size: 36991959
dataset_size: 210342051
- config_name: TriviaQA
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: train
num_bytes: 107326326
num_examples: 1893674
- name: validation
num_bytes: 13508062
num_examples: 238339
download_size: 21750402
dataset_size: 120834388
- config_name: HotpotQA
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: train
num_bytes: 29516866
num_examples: 508879
- name: validation
num_bytes: 3027229
num_examples: 52191
download_size: 6343389
dataset_size: 32544095
- config_name: SQuAD
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: train
num_bytes: 16828974
num_examples: 95659
- name: validation
num_bytes: 2012997
num_examples: 10642
download_size: 3003646
dataset_size: 18841971
- config_name: NaturalQuestions
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: train
num_bytes: 28732767
num_examples: 448355
- name: validation
num_bytes: 1418124
num_examples: 22118
download_size: 6124487
dataset_size: 30150891
- config_name: BioASQ
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: test
num_bytes: 766190
num_examples: 14158
download_size: 156649
dataset_size: 766190
- config_name: RelationExtraction
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: test
num_bytes: 217870
num_examples: 3301
download_size: 73019
dataset_size: 217870
- config_name: TextbookQA
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: test
num_bytes: 4182675
num_examples: 71147
download_size: 704602
dataset_size: 4182675
- config_name: DuoRC
features:
- name: candidate_id
dtype: string
- name: response_start
dtype: int32
- name: response_end
dtype: int32
splits:
- name: test
num_bytes: 1483518
num_examples: 5525
download_size: 97625
dataset_size: 1483518
---
# Dataset Card for MultiReQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/MultiReQA
- **Repository:** https://github.com/google-research-datasets/MultiReQA
- **Paper:** https://arxiv.org/pdf/2005.02507.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MultiReQA contains sentence boundary annotations for eight publicly available QA datasets: SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these (SearchQA, TriviaQA, HotpotQA, NaturalQuestions, and SQuAD) contain both training and test data, and three (BioASQ, RelationExtraction, and TextbookQA) contain only test data. (This release also includes DuoRC, which is not listed in the official documentation.)
### Supported Tasks and Leaderboards
- Question answering (QA)
- Retrieval question answering (ReQA)
### Languages
Sentence boundary annotation for SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, TextbookQA and DuoRC
## Dataset Structure
### Data Instances
The general format is:
```
{
  "candidate_id": <candidate_id>,
  "response_start": <response_start>,
  "response_end": <response_end>
}
...
```
An example from SearchQA:
```
{
  "candidate_id": "SearchQA_000077f3912049dfb4511db271697bad/_0_1",
  "response_start": 243,
  "response_end": 306
}
```
### Data Fields
```
{
  "candidate_id": <STRING>,
  "response_start": <INT>,
  "response_end": <INT>
}
...
```
- **candidate_id:** The candidate id of the candidate sentence. It consists of the original qid from the MRQA shared task.
- **response_start:** The start index of the sentence with respect to its original context.
- **response_end:** The end index of the sentence with respect to its original context.
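Based on the SearchQA example above, `candidate_id` appears to concatenate the source dataset name, the original MRQA qid, and a per-context sentence suffix; that layout is inferred from the example, not officially documented. A small sketch pulling the pieces apart, plus the slice that recovers a candidate sentence from its context (the context string here is a made-up illustration):

```python
# Split a candidate_id into its apparent parts. The "<dataset>_<qid>/<suffix>"
# layout is inferred from the SearchQA example above, not from official docs.
def parse_candidate_id(candidate_id):
    prefix, suffix = candidate_id.split("/", 1)
    dataset, qid = prefix.split("_", 1)
    return dataset, qid, suffix

print(parse_candidate_id("SearchQA_000077f3912049dfb4511db271697bad/_0_1"))

# response_start/response_end are character offsets into the original context.
context = "Paris is the capital of France. It hosts the Louvre."
response_start, response_end = 0, 31
print(context[response_start:response_end])  # Paris is the capital of France.
```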
### Data Splits
Train and Dev splits are available only for the following datasets:
- SearchQA
- TriviaQA
- HotpotQA
- SQuAD
- NaturalQuestions
Test splits are available only for the following datasets:
- BioASQ
- RelationExtraction
- TextbookQA
The number of candidate sentences for each dataset is shown in the table below.
| Dataset            |   train |    test |
|--------------------|--------:|--------:|
| SearchQA | 629,160 | 454,836 |
| TriviaQA | 335,659 | 238,339 |
| HotpotQA | 104,973 | 52,191 |
| SQuAD | 87,133 | 10,642 |
| NaturalQuestions | 106,521 | 22,118 |
| BioASQ | - | 14,158 |
| RelationExtraction | - | 3,301 |
| TextbookQA         | -         | 71,147  |
## Dataset Creation
### Curation Rationale
MultiReQA is a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets from the [MRQA shared task](https://mrqa.github.io/). The dataset was curated by converting existing QA datasets from [MRQA shared task](https://mrqa.github.io/) to the format of MultiReQA benchmark.
### Source Data
#### Initial Data Collection and Normalization
The Initial data collection was performed by converting existing QA datasets from MRQA shared task to the format of MultiReQA benchmark.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotators/curators of the dataset are [mandyguo-xyguo](https://github.com/mandyguo-xyguo) and [mwurts4google](https://github.com/mwurts4google), the contributors of the official MultiReQA github repository
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotators/curators of the dataset are [mandyguo-xyguo](https://github.com/mandyguo-xyguo) and [mwurts4google](https://github.com/mwurts4google), the contributors of the official MultiReQA github repository
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{m2020multireqa,
title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},
author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},
year={2020},
eprint={2005.02507},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset. |
multi_woz_v22 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- dialogue-modeling
- multi-class-classification
- parsing
paperswithcode_id: multiwoz
pretty_name: Multi-domain Wizard-of-Oz
dataset_info:
- config_name: v2.2
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: turn_id
dtype: string
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slots_values
sequence:
- name: slots_values_name
dtype: string
- name: slots_values_list
sequence: string
- name: slots
sequence:
- name: slot
dtype: string
- name: value
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: copy_from
dtype: string
- name: copy_from_value
sequence: string
- name: dialogue_acts
struct:
- name: dialog_act
sequence:
- name: act_type
dtype: string
- name: act_slots
sequence:
- name: slot_name
dtype: string
- name: slot_value
dtype: string
- name: span_info
sequence:
- name: act_type
dtype: string
- name: act_slot_name
dtype: string
- name: act_slot_value
dtype: string
- name: span_start
dtype: int32
- name: span_end
dtype: int32
splits:
- name: train
num_bytes: 68222649
num_examples: 8437
- name: validation
num_bytes: 8990945
num_examples: 1000
- name: test
num_bytes: 9027095
num_examples: 1000
download_size: 276592909
dataset_size: 86240689
- config_name: v2.2_active_only
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: turn_id
dtype: string
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slots_values
sequence:
- name: slots_values_name
dtype: string
- name: slots_values_list
sequence: string
- name: slots
sequence:
- name: slot
dtype: string
- name: value
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: copy_from
dtype: string
- name: copy_from_value
sequence: string
- name: dialogue_acts
struct:
- name: dialog_act
sequence:
- name: act_type
dtype: string
- name: act_slots
sequence:
- name: slot_name
dtype: string
- name: slot_value
dtype: string
- name: span_info
sequence:
- name: act_type
dtype: string
- name: act_slot_name
dtype: string
- name: act_slot_value
dtype: string
- name: span_start
dtype: int32
- name: span_end
dtype: int32
splits:
- name: train
num_bytes: 40937577
num_examples: 8437
- name: validation
num_bytes: 5377939
num_examples: 1000
- name: test
num_bytes: 5410819
num_examples: 1000
download_size: 276592909
dataset_size: 51726335
---
# Dataset Card for MultiWOZ
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MultiWOZ 2.2 github repository](https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2)
- **Paper:** [MultiWOZ v2](https://arxiv.org/abs/1810.00278), and [MultiWOZ v2.2](https://www.aclweb.org/anthology/2020.nlp4convai-1.13.pdf)
- **Point of Contact:** [Paweł Budzianowski](pfb30@cam.ac.uk)
### Dataset Summary
The Multi-Domain Wizard-of-Oz dataset (MultiWOZ) is a fully-labeled collection of human-human written conversations spanning multiple domains and topics.
MultiWOZ 2.1 (Eric et al., 2019) identified and fixed many erroneous annotations and user utterances in the original version, resulting in an
improved version of the dataset. MultiWOZ 2.2 is yet another improved version, which identifies and fixes dialogue state annotation errors
across 17.3% of the utterances on top of MultiWOZ 2.1 and redefines the ontology by disallowing vocabularies of slots with a large number of possible values
(e.g., restaurant name, time of booking) and introducing standardized slot span annotations for these slots.
### Supported Tasks and Leaderboards
This dataset supports a range of tasks.
- **Generative dialogue modeling** or `dialogue-modeling`: the text of the dialogues can be used to train a sequence model on the utterances. Performance on this task is typically evaluated with delexicalized-[BLEU](https://huggingface.co/metrics/bleu), inform rate and request success.
- **Intent state tracking**, a `multi-class-classification` task: predict the belief state of the user side of the conversation, performance is measured by [F1](https://huggingface.co/metrics/f1).
- **Dialog act prediction**, a `parsing` task: parse an utterance into the corresponding dialog acts for the system to use. [F1](https://huggingface.co/metrics/f1) is typically reported.
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
A data instance is a full multi-turn dialogue between a `USER` and a `SYSTEM`. Each turn has a single utterance, e.g.:
```
['What fun places can I visit in the East?',
'We have five spots which include boating, museums and entertainment. Any preferences that you have?']
```
The utterances of the `USER` are also annotated with frames denoting their intent and belief state:
```
[{'service': ['attraction'],
'slots': [{'copy_from': [],
'copy_from_value': [],
'exclusive_end': [],
'slot': [],
'start': [],
'value': []}],
'state': [{'active_intent': 'find_attraction',
'requested_slots': [],
'slots_values': {'slots_values_list': [['east']],
'slots_values_name': ['attraction-area']}}]},
{'service': [], 'slots': [], 'state': []}]
```
Finally, each of the utterances is annotated with dialog acts which provide a structured representation of what the `USER` or `SYSTEM` is inquiring or giving information about.
```
[{'dialog_act': {'act_slots': [{'slot_name': ['east'],
'slot_value': ['area']}],
'act_type': ['Attraction-Inform']},
'span_info': {'act_slot_name': ['area'],
'act_slot_value': ['east'],
'act_type': ['Attraction-Inform'],
'span_end': [39],
'span_start': [35]}},
{'dialog_act': {'act_slots': [{'slot_name': ['none'], 'slot_value': ['none']},
{'slot_name': ['boating', 'museums', 'entertainment', 'five'],
'slot_value': ['type', 'type', 'type', 'choice']}],
'act_type': ['Attraction-Select', 'Attraction-Inform']},
'span_info': {'act_slot_name': ['type', 'type', 'type', 'choice'],
'act_slot_value': ['boating', 'museums', 'entertainment', 'five'],
'act_type': ['Attraction-Inform',
'Attraction-Inform',
'Attraction-Inform',
'Attraction-Inform'],
'span_end': [40, 49, 67, 12],
'span_start': [33, 42, 54, 8]}}]
```
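The `span_info` offsets are plain character indices into the utterance, so each annotated value can be recovered with a string slice. A quick check against the system turn shown above:

```python
# Recover dialog-act slot values from character-offset span annotations.
utterance = "We have five spots which include boating, museums and entertainment. Any preferences that you have?"
span_info = {
    "act_slot_value": ["boating", "museums", "entertainment", "five"],
    "span_start": [33, 42, 54, 8],
    "span_end": [40, 49, 67, 12],
}

# Slicing utterance[start:end] reproduces each annotated slot value.
values = [utterance[s:e] for s, e in zip(span_info["span_start"], span_info["span_end"])]
print(values)  # ['boating', 'museums', 'entertainment', 'five']
```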
### Data Fields
Each dialogue instance has the following fields:
- `dialogue_id`: a unique ID identifying the dialog. The MUL and PMUL names refer to strictly multi domain dialogues (at least 2 main domains are involved) while the SNG, SSNG and WOZ names refer to single domain dialogues with potentially sub-domains like booking.
- `services`: a list of services mentioned in the dialog, such as `train` or `hospitals`.
- `turns`: the sequence of utterances with their annotations, including:
- `turn_id`: a turn identifier, unique per dialog.
- `speaker`: either the `USER` or `SYSTEM`.
- `utterance`: the text of the utterance.
- `dialogue_acts`: the structured parse of the utterance into dialog acts in the system's grammar:
  - `act_type`: the type of act, e.g. `Attraction-Inform` to seek or provide information about an `attraction`
- `act_slots`: provide more details about the action
- `span_info`: maps these `act_slots` to the `utterance` text.
- `frames`: only for `USER` utterances, tracks the user's belief state, i.e. a structured representation of what they are trying to achieve in the dialog. This decomposes into:
- `service`: the service they are interested in
- `state`: their belief state including their `active_intent` and further information expressed in `requested_slots`
- `slots`: a mapping of the `requested_slots` to where they are mentioned in the text. It takes one of two forms, detailed next:
The first type are span annotations that identify the location where slot values have been mentioned in the utterances for non-categorical slots. These span annotations are represented as follows:
```
{
"slots": [
{
"slot": String of slot name.
"start": Int denoting the index of the starting character in the utterance corresponding to the slot value.
"exclusive_end": Int denoting the index of the character just after the last character corresponding to the slot value in the utterance. In python, utterance[start:exclusive_end] gives the slot value.
"value": String of value. It equals to utterance[start:exclusive_end], where utterance is the current utterance in string.
}
]
}
```
There are also some non-categorical slots whose values are carried over from another slot in the dialogue state. Their values don't explicitly appear in the utterances. For example, a user utterance can be "I also need a taxi from the restaurant to the hotel.", in which the state values of "taxi-departure" and "taxi-destination" are respectively carried over from those of "restaurant-name" and "hotel-name". For these slots, instead of annotating them as spans, a "copy from" annotation identifies the slot they copy their value from. This annotation is formatted as follows:
```
{
"slots": [
{
"slot": Slot name string.
"copy_from": The slot to copy from.
"value": A list of slot values being . It corresponds to the state values of the "copy_from" slot.
}
]
}
```
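Taken together, the two annotation types can be resolved with a small helper: span slots slice the utterance with `[start:exclusive_end]`, while `copy_from` slots inherit another slot's state values. The slot names, offsets, and state values below are hypothetical illustrations, not examples from the dataset:

```python
def resolve_slot(slot, utterance, state_values):
    """Resolve one slot annotation to its value.

    Span-annotated slots slice the utterance with [start:exclusive_end];
    "copy_from" slots take the state values of the slot they copy from.
    """
    if slot.get("copy_from"):
        return state_values[slot["copy_from"]]
    return utterance[slot["start"]:slot["exclusive_end"]]

state_values = {"restaurant-name": ["curry garden"]}  # hypothetical state
utterance = "I want to leave after 17:15."

span_slot = {"slot": "taxi-leaveat", "start": 22, "exclusive_end": 27}
copy_slot = {"slot": "taxi-departure", "copy_from": "restaurant-name"}

print(resolve_slot(span_slot, utterance, state_values))  # 17:15
print(resolve_slot(copy_slot, utterance, state_values))  # ['curry garden']
```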
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | train | validation | test |
|---------------------|------:|-----------:|-----:|
| Number of dialogues | 8438 | 1000 | 1000 |
| Number of turns | 42190 | 5000 | 5000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The initial dataset (Versions 1.0 and 2.0) was created by a team of researchers from the [Cambridge Dialogue Systems Group](https://mi.eng.cam.ac.uk/research/dialogue/corpora/). Version 2.1 was developed on top of v2.0 by a team from Amazon, and v2.2 was developed by a team of Google researchers.
### Licensing Information
The dataset is released under the Apache License 2.0.
### Citation Information
You can cite the following for the various versions of MultiWOZ:
Version 1.0
```
@inproceedings{ramadan2018large,
title={Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing},
author={Ramadan, Osman and Budzianowski, Pawe{\l} and Gasic, Milica},
booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics},
volume={2},
pages={432--437},
year={2018}
}
```
Version 2.0
```
@inproceedings{budzianowski2018large,
Author = {Budzianowski, Pawe{\l} and Wen, Tsung-Hsien and Tseng, Bo-Hsiang and Casanueva, I{\~n}igo and Ultes Stefan and Ramadan Osman and Ga{\v{s}}i\'c, Milica},
title={MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling},
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2018}
}
```
Version 2.1
```
@article{eric2019multiwoz,
title={MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines},
author={Eric, Mihail and Goel, Rahul and Paul, Shachi and Sethi, Abhishek and Agarwal, Sanchit and Gao, Shuyag and Hakkani-Tur, Dilek},
journal={arXiv preprint arXiv:1907.01669},
year={2019}
}
```
Version 2.2
```
@inproceedings{zang2020multiwoz,
title={MultiWOZ 2.2: A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines},
author={Zang, Xiaoxue and Rastogi, Abhinav and Sunkara, Srinivas and Gupta, Raghav and Zhang, Jianguo and Chen, Jindong},
booktitle={Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, ACL 2020},
pages={109--117},
year={2020}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
multi_x_science_sum | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
tags:
- paper-abstract-generation
dataset_info:
features:
- name: aid
dtype: string
- name: mid
dtype: string
- name: abstract
dtype: string
- name: related_work
dtype: string
- name: ref_abstract
sequence:
- name: cite_N
dtype: string
- name: mid
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 169364465
num_examples: 30369
- name: test
num_bytes: 27965523
num_examples: 5093
- name: validation
num_bytes: 28168498
num_examples: 5066
download_size: 61329304
dataset_size: 225498486
---
# Dataset Card for Multi-XScience
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Multi-XScience repository](https://github.com/yaolu/Multi-XScience)
- **Paper:** [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
### Dataset Summary
Multi-XScience is a large-scale multi-document summarization dataset created from scientific articles. It introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
{'abstract': 'Author(s): Kuperberg, Greg; Thurston, Dylan P. | Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.',
'aid': 'math9912167',
'mid': '1631980677',
'ref_abstract': {'abstract': ['This note is a sequel to our earlier paper of the same title [4] and describes invariants of rational homology 3-spheres associated to acyclic orthogonal local systems. Our work is in the spirit of the Axelrod–Singer papers [1], generalizes some of their results, and furnishes a new setting for the purely topological implications of their work.',
'Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot.'],
'cite_N': ['@cite_16', '@cite_26'],
'mid': ['1481005306', '1641082372']},
'related_work': 'Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .'}
### Data Fields
- `abstract`: text of the paper abstract
- `aid`: arXiv id
- `mid`: Microsoft Academic Graph id
- `ref_abstract`: the referenced papers, as three parallel lists:
  - `cite_N`: special citation symbol used in `related_work`
  - `mid`: the referenced paper's Microsoft Academic Graph id
  - `abstract`: text of the referenced paper's abstract
- `related_work`: text of the paper's related-work section
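The three lists under `ref_abstract` are parallel: index *i* of `cite_N`, `mid`, and `abstract` all describe the same cited paper. A minimal sketch of pairing them up (the abstract strings below are placeholders, not real dataset values; the ids are taken from the sample instance above):

```python
# Toy instance mirroring the ref_abstract layout described above.
sample = {
    "ref_abstract": {
        "cite_N": ["@cite_16", "@cite_26"],
        "mid": ["1481005306", "1641082372"],
        "abstract": ["Abstract of the first cited paper.",
                     "Abstract of the second cited paper."],
    }
}

ref = sample["ref_abstract"]
# Zip the parallel lists into one dict keyed by citation symbol.
references = {
    cite: {"mid": mid, "abstract": abstract}
    for cite, mid, abstract in zip(ref["cite_N"], ref["mid"], ref["abstract"])
}

print(references["@cite_16"]["mid"])  # -> 1481005306
```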
### Data Splits
The data is split into training, validation, and test sets.
| train | validation | test |
|------:|-----------:|-----:|
| 30369 | 5066 | 5093 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{lu2020multi,
title={Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Lu, Yao and Dong, Yue and Charlin, Laurent},
journal={arXiv preprint arXiv:2010.14235},
year={2020}
}
```
### Contributions
Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset. |
multidoc2dial | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: MultiDoc2Dial
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|doc2dial
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: multidoc2dial
configs:
- dialogue_domain
- document_domain
- multidoc2dial
dataset_info:
- config_name: dialogue_domain
features:
- name: dial_id
dtype: string
- name: domain
dtype: string
- name: turns
list:
- name: turn_id
dtype: int32
- name: role
dtype: string
- name: da
dtype: string
- name: references
list:
- name: id_sp
dtype: string
- name: label
dtype: string
- name: doc_id
dtype: string
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 11700598
num_examples: 3474
- name: validation
num_bytes: 2210378
num_examples: 661
download_size: 6451144
dataset_size: 13910976
- config_name: document_domain
features:
- name: domain
dtype: string
- name: doc_id
dtype: string
- name: title
dtype: string
- name: doc_text
dtype: string
- name: spans
list:
- name: id_sp
dtype: string
- name: tag
dtype: string
- name: start_sp
dtype: int32
- name: end_sp
dtype: int32
- name: text_sp
dtype: string
- name: title
dtype: string
- name: parent_titles
sequence:
- name: id_sp
dtype: string
- name: text
dtype: string
- name: level
dtype: string
- name: id_sec
dtype: string
- name: start_sec
dtype: int32
- name: text_sec
dtype: string
- name: end_sec
dtype: int32
- name: doc_html_ts
dtype: string
- name: doc_html_raw
dtype: string
splits:
- name: train
num_bytes: 29378955
num_examples: 488
download_size: 6451144
dataset_size: 29378955
- config_name: multidoc2dial
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: da
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: utterance
dtype: string
- name: domain
dtype: string
splits:
- name: validation
num_bytes: 24331976
num_examples: 4201
- name: train
num_bytes: 126589982
num_examples: 21451
- name: test
num_bytes: 33032
num_examples: 5
download_size: 6451144
dataset_size: 150954990
---
# Dataset Card for MultiDoc2Dial
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc2dial.github.io/multidoc2dial/
- **Repository:** https://github.com/IBM/multidoc2dial
- **Paper:** https://arxiv.org/pdf/2109.12595.pdf
- **Leaderboard:**
- **Point of Contact:** sngfng@gmail.com
### Dataset Summary
MultiDoc2Dial is a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents.
Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a
single given document or passage. We aim to address more realistic scenarios where a goal-oriented information-seeking
conversation involves multiple topics, and hence is grounded on different documents.
### Supported Tasks and Leaderboards
> Supported tasks: open-domain question answering, document-grounded dialogue, passage retrieval
> Leaderboard:
### Languages
English
## Dataset Structure
### Data Instances
Sample data instance for `multidoc2dial` :
```
{
"id": "8df07b7a98990db27c395cb1f68a962e_1",
"title": "Top 5 DMV Mistakes and How to Avoid Them#3_0",
"context": "Many DMV customers make easily avoidable mistakes that cause them significant problems, including encounters with law enforcement and impounded vehicles. Because we see customers make these mistakes over and over again , we are issuing this list of the top five DMV mistakes and how to avoid them. \n\n1. Forgetting to Update Address \nBy statute , you must report a change of address to DMV within ten days of moving. That is the case for the address associated with your license, as well as all the addresses associated with each registered vehicle, which may differ. It is not sufficient to only: write your new address on the back of your old license; tell the United States Postal Service; or inform the police officer writing you a ticket. If you fail to keep your address current , you will miss a suspension order and may be charged with operating an unregistered vehicle and/or aggravated unlicensed operation, both misdemeanors. This really happens , but the good news is this is a problem that is easily avoidable. Learn more about how to change the address on your license and registrations [1 ] \n\n2. Leaving the State Without Notifying DMV \nStates communicate with each other , so when you move to another state, be sure to tie up any loose ends regarding your New York State license or registration. That means resolving any unanswered tickets, suspensions or revocations, and surrendering your license plates to NYS when you get to your new home state. A license suspension or revocation here could mean that your new home state will not issue you a license there. Remember , it is important to notify DMV of your new address so that any possible mail correspondence can reach you. Also , turning in your plates is important to avoid an insurance lapse. \n\n3. Letting Insurance Lapse \nBecause we all pay indirectly for crashes involving uninsured motorists , New York State requires every motorist to maintain auto insurance every single day a vehicle is registered. 
DMV works with insurance companies to electronically monitor your insurance coverage , and we know when coverage is dropped for any reason. When that happens , we mail you an insurance inquiry letter to allow you to clear up the problem. We send 500,000 inquiry letters a year. If the inquiry letter does not resolve the problem , we must suspend the vehicle registration and , if it persists, your driver license!We suspend 300,000 registrations a year for failure to maintain insurance. If you fail to maintain an updated address with us , you won t learn that you have an insurance problem , and we will suspend your registration and license. Make sure you turn in your vehicle s license plates at DMV before you cancel your insurance policy. Insurance policies must be from a company licensed in New York State. Learn more about Insurances Lapes [2] and How to Surrender your Plates [3 ] \n\n4. Understanding how Much Traffic Points Cost \nDMV maintains a point system to track dangerous drivers. Often , motorists convicted of a traffic ticket feel they have resolved all their motoring issues with the local court, but later learn that the Driver Responsibility Assessment DRA is a separate DMV charge based on the total points they accumulate. The $300 DRA fee can be paid in $100 annual installments over three years. Motorists who fail to maintain an updated address with DMV may resolve their tickets with the court, but never receive their DRA assessment because we do not have their new address on record. Failure to pay the DRA will result in a suspended license. Learn more about About the NYS Driver Point System [4] and how to Pay Driver Responsibility Assessment [5 ] \n\n5. Not Bringing Proper Documentation to DMV Office \nAbout ten percent of customers visiting a DMV office do not bring what they need to complete their transaction, and have to come back a second time to finish their business. 
This can be as simple as not bringing sufficient funds to pay for a license renewal or not having the proof of auto insurance required to register a car. Better yet , don t visit a DMV office at all, and see if your transaction can be performed online, like an address change, registration renewal, license renewal, replacing a lost title, paying a DRA or scheduling a road test. Our award - winning website is recognized as one of the best in the nation. It has all the answers you need to efficiently perform any DMV transaction. Consider signing up for our MyDMV service, which offers even more benefits. Sign up or log into MyDMV [6 ] ",
"question": "Hello, I forgot o update my address, can you help me with that?[SEP]",
"da": "query_condition",
"answers":
{
"text": ["you must report a change of address to DMV within ten days of moving. That is the case for the address associated with your license, as well as all the addresses associated with each registered vehicle, which may differ. "],
"answer_start": [346]
},
"utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles.",
"domain": "dmv"
}
```
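As in SQuAD-style extractive QA, `answer_start` is a character offset into `context`, so `context[start:start + len(text)]` should reproduce the answer text. A minimal sketch with a toy instance (the context string is a shortened placeholder, not a verbatim dataset entry):

```python
# Toy instance mirroring the `multidoc2dial` answers layout described above.
sample = {
    "context": "By statute, you must report a change of address to DMV "
               "within ten days of moving.",
    "answers": {
        "text": ["you must report a change of address to DMV"],
        "answer_start": [12],
    },
}

start = sample["answers"]["answer_start"][0]
answer = sample["answers"]["text"][0]
# The character offset recovers the answer span from the grounding context.
span = sample["context"][start:start + len(answer)]
print(span == answer)  # -> True
```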
Sample data instance for `document_domain` :
```
{
"domain": "ssa",
"doc_id": "Benefits Planner: Survivors | Planning For Your Survivors | Social Security Administration#1_0",
"title": "Benefits Planner: Survivors | Planning For Your Survivors | Social Security Administration#1",
"doc_text": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \nAs you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. You can earn up to four credits each year. In 2019 , for example , you earn one credit for each $1,360 of wages or self - employment income. When you have earned $5,440 , you have earned your four credits for the year. The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit. But , the younger a person is , the fewer credits they must have for family members to receive survivors benefits. Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death. \n\nFor Your Widow Or Widower \nThere are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record. And , for many of those survivors, particularly aged women, those benefits are keeping them out of poverty. Widows and widowers can receive : reduced benefits as early as age 60 or full benefits at full retirement age or older. benefits as early as age 50 if they're disabled AND their disability started before or within seven years of your death. benefits at any age , if they have not remarried , and if they take care of your child who is under age 16 or disabled and receives benefits on your record. If applying for disability benefits on a deceased worker s record , they can speed up the application process if they complete an Adult Disability Report and have it available at the time of their appointment. 
We use the same definition of disability for widows and widowers as we do for workers. \n\nFor Your Surviving Divorced Spouse \nIf you have a surviving divorced spouse , they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more. Benefits paid to a surviving divorced spouse won't affect the benefit amounts your other survivors will receive based on your earnings record. If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record , they will not have to meet the length - of - marriage rule. The child must be your natural or legally adopted child. \n\nFor Your Children \nYour unmarried children who are under 18 up to age 19 if attending elementary or secondary school full time can be eligible to receive Social Security benefits when you die. And your child can get benefits at any age if they were disabled before age 22 and remain disabled. Besides your natural children , your stepchildren, grandchildren, step grandchildren or adopted children may receive benefits under certain circumstances. For further information , view our publication. \n\nFor Your Parents \nYou must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record. Generally, your parent also must not have married after your death ; however, there are some exceptions. In addition to your natural parent , your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16. \n\nHow Much Would Your Survivors Receive \nHow much your family could receive in benefits depends on your average lifetime earnings. The higher your earnings were , the higher their benefits would be. We calculate a basic amount as if you had reached full retirement age at the time you die. 
These are examples of monthly benefit payments : Widow or widower, full retirement age or older 100 percent of your benefit amount ; Widow or widower , age 60 to full retirement age 71 to 99 percent of your basic amount ; Disabled widow or widower , age 50 through 59 71 percent ; Widow or widower , any age, caring for a child under age 16 75 percent ; A child under age 18 19 if still in elementary or secondary school or disabled 75 percent ; and Your dependent parent , age 62 or older : One surviving parent 82 percent. Two surviving parents 75 percent to each parent. Percentages for a surviving divorced spouse would be the same as above. There may also be a special lump - sum death payment. \n\nMaximum Family Amount \nThere's a limit to the amount that family members can receive each month. The limit varies , but it is generally equal to between 150 and 180 percent of the basic benefit rate. If the sum of the benefits payable to family members is greater than this limit , the benefits will be reduced proportionately. Any benefits paid to a surviving divorced spouse based on disability or age won't count toward this maximum amount. Get your online or check our Benefit Calculators for an estimate of the benefits your family could receive if you died right now. \n\nOther Things You Need To Know \nThere are limits on how much survivors may earn while they receive benefits. Benefits for a widow, widower, or surviving divorced spouse may be affected by several additional factors : If your widow, widower, or surviving divorced spouse remarries before they reach age 60 age 50 if disabled , they cannot receive benefits as a surviving spouse while they're married. If your widow, widower, or surviving divorced spouse remarries after they reach age 60 age 50 if disabled , they will continue to qualify for benefits on your Social Security record. However , if their current spouse is a Social Security beneficiary , they may want to apply for spouse's benefits on their record. 
If that amount is more than the widow's or widower's benefit on your record , they will receive a combination of benefits that equals the higher amount. If your widow, widower, or surviving divorced spouse receives benefits on your record , they can switch to their own retirement benefit as early as age 62. This assumes they're eligible for retirement benefits and their retirement rate is higher than their rate as a widow, widower, or surviving divorced spouse. In many cases , a widow or widower can begin receiving one benefit at a reduced rate and then, at full retirement age, switch to the other benefit at an unreduced rate. If your widow, widower, or surviving divorced spouse will also receive a pension based on work not covered by Social Security, such as government or foreign work , their Social Security benefits as a survivor may be affected. ",
"spans": [
{
"id_sp": "1",
"tag": "h2",
"start_sp": 0,
"end_sp": 61,
"text_sp": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \n",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "t_0",
"start_sec": 0,
"text_sec": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \n",
"end_sec": 61
},
{
"id_sp": "2",
"tag": "u",
"start_sp": 61,
"end_sp": 90,
"text_sp": "As you plan for the future , ",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "1",
"start_sec": 61,
"text_sec": "As you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. ",
"end_sec": 274
},
{
"id_sp": "3",
"tag": "u",
"start_sp": 90,
"end_sp": 168,
"text_sp": "you'll want to think about what your family would need if you should die now. ",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "1",
"start_sec": 61,
"text_sec": "As you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. ",
"end_sec": 274
}
],
"doc_html_ts": "<main><section><div><h2 sent_id=\"1\" text_id=\"1\">Benefits Planner: Survivors | Planning For Your Survivors</h2></div></section><section><div><article><section><div tag_id=\"1\"><u sent_id=\"2\" tag_id=\"1\"><u sent_id=\"2\" tag_id=\"1\" text_id=\"2\">As you plan for the future ,</u><u sent_id=\"2\" tag_id=\"1\" text_id=\"3\">you 'll want to think about what your family would need if you should die now .</u></u><u sent_id=\"3\" tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\" text_id=\"4\">Social Security can help your family if you have earned enough Social Security credits through your work .</u></u></div><div tag_id=\"2\"><u sent_id=\"4\" tag_id=\"2\"><u sent_id=\"4\" tag_id=\"2\" text_id=\"5\">You can earn up to four credits each year .</u></u><u sent_id=\"5\" tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\" text_id=\"6\">In 2019 ,</u><u sent_id=\"5\" tag_id=\"2\" text_id=\"7\">for example ,</u><u sent_id=\"5\" tag_id=\"2\" text_id=\"8\">you earn one credit for each $ 1,360 of wages or self - employment income .</u></u><u sent_id=\"6\" tag_id=\"2\"><u sent_id=\"6\" tag_id=\"2\" text_id=\"9\">When you have earned $ 5,440 ,</u><u sent_id=\"6\" tag_id=\"2\" text_id=\"10\">you have earned your four credits for the year .</u></u></div><div tag_id=\"3\"><u sent_id=\"7\" tag_id=\"3\"><u sent_id=\"7\" tag_id=\"3\" text_id=\"11\">The number of credits needed to provide benefits for your survivors depends on your age when you die .</u></u><u sent_id=\"8\" tag_id=\"3\"><u sent_id=\"8\" tag_id=\"3\" text_id=\"12\">No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit .</u></u><u sent_id=\"9\" tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\" text_id=\"13\">But ,</u><u sent_id=\"9\" tag_id=\"3\" text_id=\"14\">the younger a person is ,</u><u sent_id=\"9\" tag_id=\"3\" text_id=\"15\">the fewer credits they must have for family members to receive survivors benefits .</u></u></div><div tag_id=\"4\"><u sent_id=\"10\" tag_id=\"4\"><u 
sent_id=\"10\" tag_id=\"4\" text_id=\"16\">Benefits can be paid to your children and your spouse who is caring for the children even if you do n't have the required number of credits .</u></u><u sent_id=\"11\" tag_id=\"4\"><u sent_id=\"11\" tag_id=\"4\" text_id=\"17\">They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death .</u></u></div></section><section><h3 sent_id=\"12\" text_id=\"18\">For Your Widow Or Widower</h3><div tag_id=\"5\"><u sent_id=\"13\" tag_id=\"5\"><u sent_id=\"13\" tag_id=\"5\" text_id=\"19\">There are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse 's earnings record .</u></u><u sent_id=\"14\" tag_id=\"5\"><u sent_id=\"14\" tag_id=\"5\" text_id=\"20\">And ,</u><u sent_id=\"14\" tag_id=\"5\" text_id=\"21\">for many of those survivors , particularly aged women , those benefits are keeping them out of poverty .</u></u></div><div tag_id=\"6\"><u sent_id=\"15\" tag_id=\"6\"><u sent_id=\"15\" tag_id=\"6\" text_id=\"22\">Widows and widowers can receive :</u></u></div><ul class=\"browser-default\" tag_id=\"6\"><li tag_id=\"6\"><u sent_id=\"16\" tag_id=\"6\"><u sent_id=\"16\" tag_id=\"6\" text_id=\"23\">reduced benefits as early as age 60 or full benefits at full retirement age or older .</u></u></li><div>If widows or widowers qualify for retirement benefits on their own record, they can switch to their own retirement benefit as early as age 62.</div><li tag_id=\"6\"><u sent_id=\"17\" tag_id=\"6\"><u sent_id=\"17\" tag_id=\"6\" text_id=\"24\">benefits as early as age 50 if they 're disabled AND their disability started before or within seven years of your death .</u></u></li><div>If a widow or widower who is caring for your children receives Social Security benefits, they're still eligible if their disability starts before those payments end or within seven years after they end.</div><li tag_id=\"6\"><u 
sent_id=\"18\" tag_id=\"6\"><u sent_id=\"18\" tag_id=\"6\" text_id=\"25\">benefits at any age ,</u><u sent_id=\"18\" tag_id=\"6\" text_id=\"26\">if they have not remarried ,</u><u sent_id=\"18\" tag_id=\"6\" text_id=\"27\">and if they take care of your child who is under age 16 or disabled and receives benefits on your record .</u></u></li><div>If a widow or widower remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.</div></ul><div>Widows, widowers, and surviving divorced spouses cannot apply online for survivors benefits. They should <a>contact Social Security</a> at <nobr><strong>1-800-772-1213</strong></nobr> (TTY <nobr><strong>1-800-325-0778</strong>) to request an appointment.</nobr></div><div tag_id=\"7\"><u sent_id=\"19\" tag_id=\"7\"><u sent_id=\"19\" tag_id=\"7\" text_id=\"28\">If applying for disability benefits on a deceased worker s record ,</u><u sent_id=\"19\" tag_id=\"7\" text_id=\"29\">they can speed up the application process if they complete an Adult Disability Report and have it available at the time of their appointment .</u></u></div><div tag_id=\"8\"><u sent_id=\"20\" tag_id=\"8\"><u sent_id=\"20\" tag_id=\"8\" text_id=\"30\">We use the same definition of disability for widows and widowers as we do for workers .</u></u></div></section><section><h3 sent_id=\"21\" text_id=\"31\">For Your Surviving Divorced Spouse</h3><div tag_id=\"9\"><u sent_id=\"22\" tag_id=\"9\"><u sent_id=\"22\" tag_id=\"9\" text_id=\"32\">If you have a surviving divorced spouse ,</u><u sent_id=\"22\" tag_id=\"9\" text_id=\"33\">they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more .</u></u></div><div>If your surviving divorced spouse qualifies for retirement benefits on their own record they can switch to their own retirement benefit as early as age 62.</div><div tag_id=\"10\"><u sent_id=\"23\" tag_id=\"10\"><u sent_id=\"23\" 
tag_id=\"10\" text_id=\"34\">Benefits paid to a surviving divorced spouse wo n't affect the benefit amounts your other survivors will receive based on your earnings record .</u></u></div><div>If your surviving divorced spouse remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.</div><div tag_id=\"11\"><u sent_id=\"24\" tag_id=\"11\"><u sent_id=\"24\" tag_id=\"11\" text_id=\"35\">If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record ,</u><u sent_id=\"24\" tag_id=\"11\" text_id=\"36\">they will not have to meet the length - of - marriage rule .</u></u><u sent_id=\"25\" tag_id=\"11\"><u sent_id=\"25\" tag_id=\"11\" text_id=\"37\">The child must be your natural or legally adopted child .</u></u></div><div>However, if they qualify for benefits as a surviving divorced mother or father who is caring for your child, their benefits may affect the amount of benefits your other survivors will receive based on your earnings record.</div></section><section><h3 sent_id=\"26\" text_id=\"38\">For Your Children</h3><div tag_id=\"12\"><u sent_id=\"27\" tag_id=\"12\"><u sent_id=\"27\" tag_id=\"12\" text_id=\"39\">Your unmarried children who are under 18 up to age 19 if attending elementary or secondary school full time can be eligible to receive Social Security benefits when you die .</u></u></div><div tag_id=\"13\"><u sent_id=\"28\" tag_id=\"13\"><u sent_id=\"28\" tag_id=\"13\" text_id=\"40\">And your child can get benefits at any age if they were disabled before age 22 and remain disabled .</u></u></div><div tag_id=\"14\"><u sent_id=\"29\" tag_id=\"14\"><u sent_id=\"29\" tag_id=\"14\" text_id=\"41\">Besides your natural children ,</u><u sent_id=\"29\" tag_id=\"14\" text_id=\"42\">your stepchildren , grandchildren , step grandchildren or adopted children may receive benefits under certain circumstances .</u></u><u sent_id=\"30\" 
tag_id=\"14\"><u sent_id=\"30\" tag_id=\"14\" text_id=\"43\">For further information ,</u><u sent_id=\"30\" tag_id=\"14\" text_id=\"44\">view our publication .</u></u></div></section><section><h3 sent_id=\"31\" text_id=\"45\">For Your Parents</h3><div tag_id=\"15\"><u sent_id=\"32\" tag_id=\"15\"><u sent_id=\"32\" tag_id=\"15\" text_id=\"46\">You must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record .</u></u><u sent_id=\"33\" tag_id=\"15\"><u sent_id=\"33\" tag_id=\"15\" text_id=\"47\">Generally , your parent also must not have married after your death ;</u><u sent_id=\"33\" tag_id=\"15\" text_id=\"48\">however , there are some exceptions .</u></u></div><div tag_id=\"16\"><u sent_id=\"34\" tag_id=\"16\"><u sent_id=\"34\" tag_id=\"16\" text_id=\"49\">In addition to your natural parent ,</u><u sent_id=\"34\" tag_id=\"16\" text_id=\"50\">your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16 .</u></u></div></section><section><h3 sent_id=\"35\" text_id=\"51\">How Much Would Your Survivors Receive</h3><div tag_id=\"17\"><u sent_id=\"36\" tag_id=\"17\"><u sent_id=\"36\" tag_id=\"17\" text_id=\"52\">How much your family could receive in benefits</u><u sent_id=\"36\" tag_id=\"17\" text_id=\"53\">depends on your average lifetime earnings .</u></u><u sent_id=\"37\" tag_id=\"17\"><u sent_id=\"37\" tag_id=\"17\" text_id=\"54\">The higher your earnings were ,</u><u sent_id=\"37\" tag_id=\"17\" text_id=\"55\">the higher their benefits would be .</u></u><u sent_id=\"38\" tag_id=\"17\"><u sent_id=\"38\" tag_id=\"17\" text_id=\"56\">We calculate a basic amount as if you had reached full retirement age at the time you die .</u></u></div><div>If you are already receiving reduced benefits when you die, survivors benefits are based on that amount.</div><div tag_id=\"18\"><u sent_id=\"39\" 
tag_id=\"18\"><u sent_id=\"39\" tag_id=\"18\" text_id=\"57\">These are examples of monthly benefit payments :</u></u></div><ul class=\"browser-default\" tag_id=\"18\"><li tag_id=\"18\"><u sent_id=\"40\" tag_id=\"18\"><u sent_id=\"40\" tag_id=\"18\" text_id=\"58\">Widow or widower , full retirement age or older 100 percent of your benefit amount ;</u></u></li><li tag_id=\"18\"><u sent_id=\"41\" tag_id=\"18\"><u sent_id=\"41\" tag_id=\"18\" text_id=\"59\">Widow or widower ,</u><u sent_id=\"41\" tag_id=\"18\" text_id=\"60\">age 60 to full retirement age 71 to 99 percent of your basic amount ;</u></u></li><li tag_id=\"18\"><u sent_id=\"42\" tag_id=\"18\"><u sent_id=\"42\" tag_id=\"18\" text_id=\"61\">Disabled widow or widower ,</u><u sent_id=\"42\" tag_id=\"18\" text_id=\"62\">age 50 through 59 71 percent ;</u></u></li><li tag_id=\"18\"><u sent_id=\"43\" tag_id=\"18\"><u sent_id=\"43\" tag_id=\"18\" text_id=\"63\">Widow or widower ,</u><u sent_id=\"43\" tag_id=\"18\" text_id=\"64\">any age , caring for a child under age 16 75 percent ;</u></u></li><li tag_id=\"18\"><u sent_id=\"44\" tag_id=\"18\"><u sent_id=\"44\" tag_id=\"18\" text_id=\"65\">A child under age 18 19 if still in elementary or secondary school or disabled 75 percent ;</u><u sent_id=\"44\" tag_id=\"18\" text_id=\"66\">and</u></u></li><li tag_id=\"18\"><div tag_id=\"18\"><u sent_id=\"48\" tag_id=\"18\"><u sent_id=\"48\" tag_id=\"18\" text_id=\"67\">Your dependent parent ,</u><u sent_id=\"48\" tag_id=\"18\" text_id=\"68\">age 62 or older :</u></u></div><ul class=\"browser-default\" tag_id=\"18\"><li tag_id=\"18\"><u sent_id=\"49\" tag_id=\"18\"><u sent_id=\"49\" tag_id=\"18\" text_id=\"69\">One surviving parent 82 percent .</u></u></li><li tag_id=\"18\"><u sent_id=\"50\" tag_id=\"18\"><u sent_id=\"50\" tag_id=\"18\" text_id=\"70\">Two surviving parents 75 percent to each parent .</u></u></li></ul></li></ul><div tag_id=\"19\"><u sent_id=\"51\" tag_id=\"19\"><u sent_id=\"51\" tag_id=\"19\" 
text_id=\"71\">Percentages for a surviving divorced spouse would be the same as above .</u></u></div><div tag_id=\"20\"><u sent_id=\"52\" tag_id=\"20\"><u sent_id=\"52\" tag_id=\"20\" text_id=\"72\">There may also be a special lump - sum death payment .</u></u></div><h3 sent_id=\"53\" text_id=\"73\">Maximum Family Amount</h3><div tag_id=\"21\"><u sent_id=\"54\" tag_id=\"21\"><u sent_id=\"54\" tag_id=\"21\" text_id=\"74\">There 's a limit to the amount that family members can receive each month .</u></u><u sent_id=\"55\" tag_id=\"21\"><u sent_id=\"55\" tag_id=\"21\" text_id=\"75\">The limit varies ,</u><u sent_id=\"55\" tag_id=\"21\" text_id=\"76\">but it is generally equal to between 150 and 180 percent of the basic benefit rate .</u></u></div><div tag_id=\"22\"><u sent_id=\"56\" tag_id=\"22\"><u sent_id=\"56\" tag_id=\"22\" text_id=\"77\">If the sum of the benefits payable to family members is greater than this limit ,</u><u sent_id=\"56\" tag_id=\"22\" text_id=\"78\">the benefits will be reduced proportionately .</u></u><u sent_id=\"57\" tag_id=\"22\"><u sent_id=\"57\" tag_id=\"22\" text_id=\"79\">Any benefits paid to a surviving divorced spouse based on disability or age wo n't count toward this maximum amount .</u></u></div><div tag_id=\"23\"><u sent_id=\"58\" tag_id=\"23\"><u sent_id=\"58\" tag_id=\"23\" text_id=\"80\">Get your online or check our Benefit Calculators for an estimate of the benefits your family could receive if you died right now .</u></u></div><h3 sent_id=\"59\" text_id=\"81\">Other Things You Need To Know</h3><div tag_id=\"24\"><u sent_id=\"60\" tag_id=\"24\"><u sent_id=\"60\" tag_id=\"24\" text_id=\"82\">There are limits on how much survivors may earn while they receive benefits .</u></u></div><div tag_id=\"25\"><u sent_id=\"61\" tag_id=\"25\"><u sent_id=\"61\" tag_id=\"25\" text_id=\"83\">Benefits for a widow , widower , or surviving divorced spouse may be affected by several additional factors :</u></u></div><div><a>If they 
remarry</a><section><div tag_id=\"26\"><u sent_id=\"62\" tag_id=\"26\"><u sent_id=\"62\" tag_id=\"26\" text_id=\"84\">If your widow , widower , or surviving divorced spouse remarries before they reach age 60 age 50 if disabled ,</u><u sent_id=\"62\" tag_id=\"26\" text_id=\"85\">they can not receive benefits as a surviving spouse while they 're married .</u></u></div><div tag_id=\"27\"><u sent_id=\"63\" tag_id=\"27\"><u sent_id=\"63\" tag_id=\"27\" text_id=\"86\">If your widow , widower , or surviving divorced spouse remarries after they reach age 60 age 50 if disabled ,</u><u sent_id=\"63\" tag_id=\"27\" text_id=\"87\">they will continue to qualify for benefits on your Social Security record .</u></u></div><div tag_id=\"28\"><u sent_id=\"64\" tag_id=\"28\"><u sent_id=\"64\" tag_id=\"28\" text_id=\"88\">However ,</u><u sent_id=\"64\" tag_id=\"28\" text_id=\"89\">if their current spouse is a Social Security beneficiary ,</u><u sent_id=\"64\" tag_id=\"28\" text_id=\"90\">they may want to apply for spouse 's benefits on their record .</u></u><u sent_id=\"65\" tag_id=\"28\"><u sent_id=\"65\" tag_id=\"28\" text_id=\"91\">If that amount is more than the widow 's or widower 's benefit on your record ,</u><u sent_id=\"65\" tag_id=\"28\" text_id=\"92\">they will receive a combination of benefits that equals the higher amount .</u></u></div></section></div><div><a>If they're eligible for retirement benefits on their own record</a><section><div tag_id=\"29\"><u sent_id=\"66\" tag_id=\"29\"><u sent_id=\"66\" tag_id=\"29\" text_id=\"93\">If your widow , widower , or surviving divorced spouse receives benefits on your record ,</u><u sent_id=\"66\" tag_id=\"29\" text_id=\"94\">they can switch to their own retirement benefit as early as age 62 .</u></u><u sent_id=\"67\" tag_id=\"29\"><u sent_id=\"67\" tag_id=\"29\" text_id=\"95\">This assumes they 're eligible for retirement benefits and their retirement rate is higher than their rate as a widow , widower , or surviving divorced 
spouse .</u></u></div><div tag_id=\"30\"><u sent_id=\"68\" tag_id=\"30\"><u sent_id=\"68\" tag_id=\"30\" text_id=\"96\">In many cases ,</u><u sent_id=\"68\" tag_id=\"30\" text_id=\"97\">a widow or widower can begin receiving one benefit at a reduced rate and then , at full retirement age , switch to the other benefit at an unreduced rate .</u></u></div><div><a>Full retirement age for retirement benefits</a> may not match full retirement age for survivors benefits.</div></section></div><div><a>If they will also receive a pension based on work not covered by Social Security</a><section><div tag_id=\"31\"><u sent_id=\"69\" tag_id=\"31\"><u sent_id=\"69\" tag_id=\"31\" text_id=\"98\">If your widow , widower , or surviving divorced spouse will also receive a pension based on work not covered by Social Security , such as government or foreign work ,</u><u sent_id=\"69\" tag_id=\"31\" text_id=\"99\">their Social Security benefits as a survivor may be affected .</u></u></div></section></div></section></article></div></section></main>",
"doc_html_raw": "<main class=\"content\" id=\"content\" role=\"main\">\n\n<section>\n\n<div>\n<h2>Benefits Planner: Survivors | Planning For Your Survivors</h2>\n</div>\n</section>\n\n<section>\n\n<div>\n\n<div>\n\n\n</div>\n\n\n\n<article>\n<section>\n<p>As you plan for the future, you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work.</p>\n<p><a>You can earn up to four credits each year</a>. In 2019, for example, you earn one credit for each $1,360 of wages or <a>self-employment</a> income. When you have earned $5,440, you have earned your four credits for the year.</p>\n<p>The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits (10 years of work) to be eligible for any Social Security benefit. But, the younger a person is, the fewer credits they must have for family members to receive survivors benefits.</p>\n<p>Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one-half years of work (6 credits) in the three years just before your death.</p>\n</section>\n<section>\n<h3>For Your Widow Or Widower</h3>\n<p>There are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record. And, for many of those survivors, particularly aged women, those benefits are keeping them out of poverty. 
</p>\n<p>Widows and widowers can receive:</p>\n<ul class=\"browser-default\">\n<li>reduced benefits as early as age 60 or full benefits at <a>full retirement age</a> or older.</li>\n<div>\n If widows or widowers qualify for retirement benefits on their own record, they can switch to their own retirement benefit as early as age 62.\n </div>\n<li>benefits as early as age 50 if they're disabled AND their disability started before or within seven years of your death.</li>\n<div>\n If a widow or widower who is caring for your children receives Social Security benefits, they're still eligible if their disability starts before those payments end or within seven years after they end.\n </div>\n<li>benefits at any age, if they have not remarried, and if they take care of your child who is under age 16 or disabled and receives benefits on your record.</li>\n<div>\n If a widow or widower remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.\n </div>\n</ul>\n<div>\n Widows, widowers, and surviving divorced spouses cannot apply online for survivors benefits. 
They should <a>contact Social Security</a> at <nobr><strong>1-800-772-1213</strong></nobr> (TTY <nobr><strong>1-800-325-0778</strong>) to request an appointment.</nobr>\n</div>\n<p>If applying for disability benefits on a deceased worker s record, they can speed up the application process if they complete an <a>Adult Disability Report</a> and have it available at the time of their appointment.</p>\n<p>We use the same <a>definition of disability</a> for widows and widowers as we do for workers.</p>\n</section>\n<section>\n<h3>For Your Surviving Divorced Spouse</h3>\n<p>If you have a surviving divorced spouse, they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more.</p>\n<div>\n If your surviving divorced spouse qualifies for retirement benefits on their own record they can switch to their own retirement benefit as early as age 62.\n </div>\n<p>Benefits paid to a surviving divorced spouse won't affect the benefit amounts your other survivors will receive based on your earnings record.</p>\n<div>\n If your surviving divorced spouse remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.\n </div>\n<p>If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record, they will not have to meet the length-of-marriage rule. 
The child must be your natural or legally adopted child.</p>\n<div>\n However, if they qualify for benefits as a surviving divorced mother or father who is caring for your child, their benefits may affect the amount of benefits your other survivors will receive based on your earnings record.\n </div>\n</section>\n<section>\n<h3>For Your Children</h3>\n<p>Your unmarried children who are under 18 (up to age 19 if attending elementary or secondary school full time) can be eligible to receive Social Security benefits when you die.</p>\n<p>And your child can get benefits at any age if they were disabled before age 22 and remain disabled.</p>\n<p>Besides your natural children, your stepchildren, grandchildren, step grandchildren or adopted children may receive benefits under certain circumstances. For further information, view our <a>publication</a>.</p>\n</section>\n<section>\n<h3>For Your Parents</h3>\n<p>You must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record. Generally, your parent also must not have married after your death; however, there are some exceptions.</p>\n<p>In addition to your natural parent, your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16.</p>\n</section>\n<section>\n<h3>How Much Would Your Survivors Receive</h3>\n<p>How much your family could receive in benefits depends on your average lifetime earnings. The higher your earnings were, the higher their benefits would be. 
We calculate a basic amount as if you had reached full retirement age at the time you die.</p>\n<div>\n If you are already receiving reduced benefits when you die, survivors benefits are based on that amount.\n </div>\n<p>These are examples of monthly benefit payments:</p>\n<ul class=\"browser-default\">\n<li>Widow or widower, <a>full retirement age</a> or older 100 percent of your benefit amount;</li>\n<li>Widow or widower, age 60 to <a>full retirement age</a> 71 to 99 percent of your basic amount;</li>\n<li>Disabled widow or widower, age 50 through 59 71 percent;</li>\n<li>Widow or widower, any age, caring for a child under age 16 75 percent;</li>\n<li>A child under age 18 (19 if still in elementary or secondary school) or disabled 75 percent; and</li>\n<li>Your dependent parent(s), age 62 or older:\n <ul class=\"browser-default\">\n<li>One surviving parent 82 percent.</li>\n<li>Two surviving parents 75 percent to each parent.</li>\n</ul>\n</li>\n</ul>\n<p>Percentages for a surviving divorced spouse would be the same as above.</p>\n<p>There may also be a <a>special lump-sum death payment</a>.</p>\n<h3>Maximum Family Amount</h3>\n<p>There's a limit to the amount that family members can receive each month. <a>The limit varies</a>, but it is generally equal to between 150 and 180 percent of the basic benefit rate.</p>\n<p>If the sum of the benefits payable to family members is greater than this limit, the benefits will be reduced proportionately. 
(Any benefits paid to a surviving divorced spouse based on disability or age won't count toward this maximum amount.)</p>\n<p>Get your <a></a> online or check our <a>Benefit Calculators</a> for an estimate of the benefits your family could receive if you died right now.</p>\n<h3>Other Things You Need To Know</h3>\n<p>There are <a>limits on how much survivors may earn</a> while they receive benefits.</p>\n<p>Benefits for a widow, widower, or surviving divorced spouse may be affected by several additional factors:</p>\n<div>\n<a>If they remarry</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse remarries before they reach age 60 (age 50 if disabled), they cannot receive benefits as a surviving spouse while they're married.</p>\n<p>If your widow, widower, or surviving divorced spouse remarries after they reach age 60 (age 50 if disabled), they will continue to qualify for benefits on your Social Security record.</p>\n<p>However, if their current spouse is a Social Security beneficiary, they may want to apply for spouse's benefits on their record. If that amount is more than the widow's or widower's benefit on your record, they will receive a combination of benefits that equals the higher amount.</p>\n</section>\n</div>\n<div>\n<a>If they're eligible for retirement benefits on their own record</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse receives benefits on your record, they can switch to their own retirement benefit as early as age 62. 
This assumes they're eligible for retirement benefits and their retirement rate is higher than their rate as a widow, widower, or surviving divorced spouse.</p>\n<p>In many cases, a widow or widower can begin receiving one benefit at a reduced rate and then, at full retirement age, switch to the other benefit at an unreduced rate.</p>\n<div>\n<a>Full retirement age for retirement benefits</a> may not match full retirement age for survivors benefits.\n </div>\n</section>\n</div>\n<div>\n<a>If they will also receive a pension based on work not covered by Social Security</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse will also receive a pension based on work not covered by Social Security, such as government or foreign work, <a>their Social Security benefits as a survivor may be affected</a>.</p>\n</section>\n</div>\n</section>\n</article>\n</div>\n</section>\n</main>"
}
```
Sample data instance for `dialogue_domain`:
```
{
"dial_id": "8df07b7a98990db27c395cb1f68a962e",
"domain": "dmv",
"turns": [
{
"turn_id": 1,
"role": "user",
"da": "query_condition",
"references": [
{
"id_sp": "4",
"label": "precondition",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "Hello, I forgot o update my address, can you help me with that?"
},
{
"turn_id": 2,
"role": "agent",
"da": "respond_solution",
"references": [
{
"id_sp": "6",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
},
{
"id_sp": "7",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles."
},
{
"turn_id": 3,
"role": "user",
"da": "query_solution",
"references": [
{
"id_sp": "56",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "Can I do my DMV transactions online?"
}
]
}
```
### Data Fields
- `document_domain` contains the documents that are indexed by key `domain` and `doc_id` . Each document instance includes the following,
- `domain`: the domain of the document;
- `doc_id`: the ID of a document;
- `title`: the title of the document;
- `doc_text`: the text content of the document (without HTML markups);
- `spans`: key-value pairs of all spans in the document, with `id_sp` as key. Each span includes the following,
- `id_sp`: the id of a span as noted by `text_id` in `doc_html_ts`;
- `start_sp`/ `end_sp`: the start/end position of the text span in `doc_text`;
- `text_sp`: the text content of the span.
- `id_sec`: the id of the (sub)section (e.g. `<p>`) or title (`<h2>`) that contains the span.
- `start_sec` / `end_sec`: the start/end position of the (sub)section in `doc_text`.
- `text_sec`: the text of the (sub)section.
- `title`: the title of the (sub)section.
- `parent_titles`: the parent titles of the `title`.
- `doc_html_ts`: the document content with HTML markups and the annotated spans that are indicated by `text_id` attribute, which corresponds to `id_sp`.
- `doc_html_raw`: the document content with HTML markups and without span annotations.
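The span offsets are relative to `doc_text`, so `text_sp` can always be recovered by slicing. A toy sketch of this relationship (the document and offsets below are invented for illustration; real instances come from the `document_domain` configuration):

```python
# Toy `document_domain`-shaped instance illustrating how `start_sp`/`end_sp`
# index into `doc_text`. The content below is made up for illustration.
doc = {
    "doc_text": "Report any change of address to DMV within 10 days after moving.",
    "spans": {
        "1": {"id_sp": "1", "start_sp": 0, "end_sp": 31,
              "text_sp": "Report any change of address to"},
        "2": {"id_sp": "2", "start_sp": 32, "end_sp": 64,
              "text_sp": "DMV within 10 days after moving."},
    },
}

def span_text(doc, id_sp):
    """Recover a span's text from `doc_text` using its offsets."""
    sp = doc["spans"][id_sp]
    return doc["doc_text"][sp["start_sp"]:sp["end_sp"]]
```

The same slicing applies to `start_sec`/`end_sec` for (sub)sections.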
- `dialogue_domain`
Each dialogue instance includes the following,
- `dial_id`: the ID of a dialogue;
- `domain`: the domain of the document;
- `turns`: a list of dialogue turns. Each turn includes,
- `turn_id`: the time order of the turn;
- `role`: either "agent" or "user";
- `da`: dialogue act;
- `references`: a list of spans with `id_sp`, `label` and `doc_id`. `references` is empty if a turn indicates that the previous user query is not answerable or is irrelevant to the document. **Note** that the labels "*precondition*"/"*solution*" are fuzzy annotations that indicate whether a span describes a conditional context or a solution.
- `utterance`: the human-generated utterance based on the dialogue scene.
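A minimal sketch of how these fields fit together, collecting the grounding spans per turn (the dialogue below is a truncated copy of the `dialogue_domain` sample instance shown above):

```python
# Walk over a `dialogue_domain` instance and map each turn to the
# document spans it is grounded in. Truncated copy of the sample above.
dial = {
    "dial_id": "8df07b7a98990db27c395cb1f68a962e",
    "domain": "dmv",
    "turns": [
        {"turn_id": 1, "role": "user", "da": "query_condition",
         "references": [{"id_sp": "4", "label": "precondition",
                         "doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"}],
         "utterance": "Hello, I forgot o update my address, can you help me with that?"},
        {"turn_id": 2, "role": "agent", "da": "respond_solution",
         "references": [{"id_sp": "6", "label": "solution",
                         "doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"},
                        {"id_sp": "7", "label": "solution",
                         "doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"}],
         "utterance": "hi, you have to report any change of address to DMV ..."},
    ],
}

def grounding_spans(dial):
    """Map turn_id -> list of (doc_id, id_sp) pairs grounding that turn."""
    return {
        turn["turn_id"]: [(ref["doc_id"], ref["id_sp"]) for ref in turn["references"]]
        for turn in dial["turns"]
    }
```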
- `multidoc2dial`
Each dialogue instance includes the following,
- `id`: the ID of a QA instance
- `title`: the title of the relevant document;
- `context`: the text content of the relevant document (without HTML markups).
- `question`: user query;
- `da`: dialogue act;
- `answers`: the answers that are grounded in the associated document;
- `text`: the text content of the grounding span;
- `answer_start`: the start position of the grounding span in the associated document (context);
- `utterance`: the human-generated utterance based on the dialogue scene.
- `domain`: domain of the relevant document;
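Because `answer_start` indexes into `context`, answer alignment can be checked the same way as for SQuAD-style data. A sketch with an invented instance (not taken from the dataset):

```python
# Sanity-check that each answer's `answer_start` offset actually points
# at `text` inside `context`. The instance below is invented for
# illustration; real ones come from the `multidoc2dial` configuration.
example = {
    "context": "You must report a change of address within 10 days of moving.",
    "question": "How soon do I have to report a move?",
    "answers": {"text": ["within 10 days of moving"], "answer_start": [36]},
}

def answers_aligned(example):
    """True iff every answer span matches the context at its offset."""
    ctx = example["context"]
    return all(
        ctx[start:start + len(text)] == text
        for text, start in zip(example["answers"]["text"],
                               example["answers"]["answer_start"])
    )
```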
### Data Splits
- Training, dev and test splits for the default configuration `multidoc2dial`, with 21451, 4201 and 5 examples respectively;
- Training and dev splits for the `dialogue_domain` configuration, with 3474 and 661 examples respectively;
- Training split only for the `document_domain` configuration, with 488 examples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Song Feng, Siva Sankalp Patel, Hui Wan, Sachindra Joshi
### Licensing Information
Creative Commons Attribution 3.0 Unported
### Citation Information
```bibtex
@inproceedings{feng2021multidoc2dial,
title={MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents},
author={Feng, Song and Patel, Siva Sankalp and Wan, Hui and Joshi, Sachindra},
booktitle={EMNLP},
year={2021}
}
```
### Contributions
Thanks to [@songfeng](https://github.com/songfeng) and [@sivasankalpp](https://github.com/sivasankalpp) for adding this dataset. |
multilingual_librispeech | ---
pretty_name: MultiLingual LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- de
- es
- fr
- it
- nl
- pl
- pt
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: polish
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 16136430
num_examples: 25043
- name: train.9h
num_bytes: 1383232
num_examples: 2173
- name: train.1h
num_bytes: 145411
num_examples: 238
- name: validation
num_bytes: 318964
num_examples: 512
- name: test
num_bytes: 332317
num_examples: 520
download_size: 6609569551
dataset_size: 18316354
- config_name: german
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 277089334
num_examples: 469942
- name: train.9h
num_bytes: 1325460
num_examples: 2194
- name: train.1h
num_bytes: 145998
num_examples: 241
- name: validation
num_bytes: 2160779
num_examples: 3469
- name: test
num_bytes: 2131177
num_examples: 3394
download_size: 122944886305
dataset_size: 282852748
- config_name: dutch
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 218648573
num_examples: 374287
- name: train.9h
num_bytes: 1281951
num_examples: 2153
- name: train.1h
num_bytes: 141672
num_examples: 234
- name: validation
num_bytes: 1984165
num_examples: 3095
- name: test
num_bytes: 1945428
num_examples: 3075
download_size: 92158429530
dataset_size: 224001789
- config_name: french
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 162009691
num_examples: 258213
- name: train.9h
num_bytes: 1347707
num_examples: 2167
- name: train.1h
num_bytes: 146699
num_examples: 241
- name: validation
num_bytes: 1482961
num_examples: 2416
- name: test
num_bytes: 1539152
num_examples: 2426
download_size: 64474642518
dataset_size: 166526210
- config_name: spanish
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 136743162
num_examples: 220701
- name: train.9h
num_bytes: 1288180
num_examples: 2110
- name: train.1h
num_bytes: 138734
num_examples: 233
- name: validation
num_bytes: 1463115
num_examples: 2408
- name: test
num_bytes: 1464565
num_examples: 2385
download_size: 53296894035
dataset_size: 141097756
- config_name: italian
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 36008104
num_examples: 59623
- name: train.9h
num_bytes: 1325927
num_examples: 2173
- name: train.1h
num_bytes: 145006
num_examples: 240
- name: validation
num_bytes: 732210
num_examples: 1248
- name: test
num_bytes: 746977
num_examples: 1262
download_size: 15395281399
dataset_size: 38958224
- config_name: portuguese
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 23036487
num_examples: 37533
- name: train.9h
num_bytes: 1305698
num_examples: 2116
- name: train.1h
num_bytes: 143781
num_examples: 236
- name: validation
num_bytes: 512463
num_examples: 826
- name: test
num_bytes: 549893
num_examples: 871
download_size: 9982803818
dataset_size: 25548322
---
# Dataset Card for MultiLingual LibriSpeech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/multilingual-librispeech)
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> This legacy dataset doesn't support streaming and is not updated. Use "facebook/multilingual_librispeech" instead.</p>
</div>
Multilingual LibriSpeech (MLS) dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
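For reference, WER is the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and a hypothesis, divided by the number of reference words. In practice a library such as `jiwer` or `evaluate` would be used, but a minimal self-contained implementation looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length.

    Assumes a non-empty reference.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,         # deletion
                           cur[j - 1] + 1,      # insertion
                           prev[j - 1] + cost)) # substitution / match
        prev = cur
    return prev[-1] / len(ref)
```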
### Languages
The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file`, and its transcription, called `text`. Some additional information about the speaker and the passage containing the transcription is also provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
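The access-order advice above can be illustrated with a toy lazily-decoding dataset. This mock only stands in for the real `datasets` behavior (it is not the library's code): indexing a row first decodes just that row's audio, while materializing the whole `"audio"` column decodes every file.

```python
class LazyAudioDataset:
    """Toy stand-in for a dataset whose audio is decoded on access."""

    def __init__(self, paths):
        self._paths = paths
        self.decodes = 0  # counts how many files were "decoded"

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):   # dataset[0] -> one row, one decode
            return {"file": self._paths[key],
                    "audio": self._decode(self._paths[key])}
        if key == "audio":         # dataset["audio"] -> decodes everything
            return [self._decode(p) for p in self._paths]
        return list(self._paths)

ds = LazyAudioDataset([f"clip_{i}.flac" for i in range(100)])
row = ds[0]["audio"]       # decodes a single file
cheap = ds.decodes         # 1 decode so far
_ = ds["audio"][0]         # decodes all 100 files before indexing
expensive = ds.decodes     # 101 decodes in total
```

This is why `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.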
### Data Splits
| | Train | Train.9h | Train.1h | Dev | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| german | 469942 | 2194 | 241 | 3469 | 3394 |
| dutch | 374287 | 2153 | 234 | 3095 | 3075 |
| french | 258213 | 2167 | 241 | 2416 | 2426 |
| spanish | 220701 | 2110 | 233 | 2408 | 2385 |
| italian | 59623 | 2173 | 240 | 1248 | 1262 |
| portuguese | 37533 | 2116 | 236 | 826 | 871 |
| polish | 25043 | 2173 | 238 | 512 | 520 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voices online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
mutual_friends | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: mutualfriends
pretty_name: MutualFriends
dataset_info:
features:
- name: uuid
dtype: string
- name: scenario_uuid
dtype: string
- name: scenario_alphas
sequence: float32
- name: scenario_attributes
sequence:
- name: unique
dtype: bool_
- name: value_type
dtype: string
- name: name
dtype: string
- name: scenario_kbs
sequence:
sequence:
sequence:
sequence: string
- name: agents
struct:
- name: '1'
dtype: string
- name: '0'
dtype: string
- name: outcome_reward
dtype: int32
- name: events
struct:
- name: actions
sequence: string
- name: start_times
sequence: float32
- name: data_messages
sequence: string
- name: data_selects
sequence:
- name: attributes
sequence: string
- name: values
sequence: string
- name: agents
sequence: int32
- name: times
sequence: float32
config_name: plain_text
splits:
- name: train
num_bytes: 26979472
num_examples: 8967
- name: test
num_bytes: 3327158
num_examples: 1107
- name: validation
num_bytes: 3267881
num_examples: 1083
download_size: 41274578
dataset_size: 33574511
---
# Dataset Card for MutualFriends
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COCOA](https://stanfordnlp.github.io/cocoa/)
- **Repository:** [Github repository](https://github.com/stanfordnlp/cocoa)
- **Paper:** [Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings (ACL 2017)](https://arxiv.org/abs/1704.07130)
- **Codalab**: [Codalab](https://worksheets.codalab.org/worksheets/0xc757f29f5c794e5eb7bfa8ca9c945573/)
### Dataset Summary
Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over a structured knowledge base. In the MutualFriends task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend.
### Supported Tasks and Leaderboards
We consider two agents, each with a private knowledge base of items, who must communicate their knowledge to achieve a common goal. Specifically, we designed the MutualFriends task: each agent has a list of friends with attributes such as school, major, etc. They must chat with each other to find the unique mutual friend.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this.
```
{
'uuid': 'C_423324a5fff045d78bef75a6f295a3f4'
'scenario_uuid': 'S_hvmRM4YNJd55ecT5',
'scenario_alphas': [0.30000001192092896, 1.0, 1.0],
'scenario_attributes': {
'name': ['School', 'Company', 'Location Preference'],
'unique': [False, False, False],
'value_type': ['school', 'company', 'loc_pref']
},
'scenario_kbs': [
[
[['School', 'Company', 'Location Preference'], ['Longwood College', 'Alton Steel', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Leonard Green & Partners', 'indoor']],
[['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'Crazy Eddie', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Rhodes College', "Tully's Coffee", 'indoor']],
[['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'AMR Corporation', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']],
[['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'The Hartford Financial Services Group', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'Molycorp', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Babson College', 'The Hartford Financial Services Group', 'indoor']]
],
[
[['School', 'Company', 'Location Preference'], ['National Technological University', 'Molycorp', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Leonard Green & Partners', 'outdoor']],
[['School', 'Company', 'Location Preference'], ['Johnson C. Smith University', 'Data Resources Inc.', 'outdoor']],
[['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']],
[['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Molycorp', 'outdoor']],
[['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'Molycorp', 'indoor']],
[['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'STX', 'outdoor']],
[['School', 'Company', 'Location Preference'], ['National Technological University', 'STX', 'outdoor']],
[['School', 'Company', 'Location Preference'], ['Johnson C. Smith University', 'Rockstar Games', 'indoor']]
]
],
'agents': {
'0': 'human',
'1': 'human'
},
'outcome_reward': 1,
'events': {
'actions': ['message', 'message', 'message', 'message', 'select', 'select'],
'agents': [1, 1, 0, 0, 1, 0],
'data_messages': ['Hello', 'Do you know anyone who works at Molycorp?', 'Hi. All of my friends like the indoors.', 'Ihave two friends that work at Molycorp. They went to Salisbury and Sacred Heart.', '', ''],
'data_selects': {
'attributes': [
[], [], [], [], ['School', 'Company', 'Location Preference'], ['School', 'Company', 'Location Preference']
],
'values': [
[], [], [], [], ['Salisbury State University', 'Molycorp', 'indoor'], ['Salisbury State University', 'Molycorp', 'indoor']
]
},
'start_times': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
'times': [1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0]
},
}
```
### Data Fields
- `uuid`: example id.
- `scenario_uuid`: scenario id.
- `scenario_alphas`: scenario alphas.
- `scenario_attributes`: all the attributes considered in the scenario. The dictionaries are linearized: to reconstruct the dictionary for the i-th attribute, extract the i-th elements of `unique`, `value_type` and `name`.
- `unique`: bool.
- `value_type`: code/type of the attribute.
- `name`: name of the attribute.
- `scenario_kbs`: descriptions of the persons present in the two users' databases. List of two (one for each user in the dialogue). `scenario_kbs[i]` is a list of persons. Each person is represented as two lists (one for attribute names and the other for attribute values). The j-th element of attribute names corresponds to the j-th element of attribute values (linearized dictionary).
- `agents`: the two users engaged in the dialogue.
- `outcome_reward`: reward of the present dialogue.
- `events`: dictionary describing the dialogue. The j-th element of each sub-element of the dictionary describes the turn along the axis of the sub-element.
- `actions`: type of turn (either `message` or `select`).
- `agents`: which agent (0 or 1) is speaking in the turn.
- `data_messages`: the string exchanged if `action==message`. Otherwise, empty string.
- `data_selects`: selection of the user if `action==select`. Otherwise, empty selection/dictionary.
- `start_times`: always -1 in these data.
- `times`: sending time.
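Since `events` stores the dialogue column-wise, reconstructing the turn sequence means zipping its parallel lists back together. A minimal sketch, with field names taken from the card and the sample values abridged from the instance shown earlier:

```python
# Rebuild per-turn dialogue events from the column-wise `events` dict.
events = {
    "actions": ["message", "message", "select"],
    "agents": [1, 0, 1],
    "data_messages": ["Hello", "Hi. All of my friends like the indoors.", ""],
}

turns = [
    {"action": action, "agent": agent, "message": message}
    for action, agent, message in zip(
        events["actions"], events["agents"], events["data_messages"]
    )
]

assert turns[0] == {"action": "message", "agent": 1, "message": "Hello"}
assert turns[2]["action"] == "select"  # `select` turns carry an empty message
```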
### Data Splits
There are 8967 dialogues for training, 1083 for validation and 1107 for testing.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{he-etal-2017-learning,
title = "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings",
author = "He, He and
Balakrishnan, Anusha and
Eric, Mihail and
Liang, Percy",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-1162",
doi = "10.18653/v1/P17-1162",
pages = "1766--1776",
abstract = "We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.",
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
mwsc | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Modified Winograd Schema Challenge (MWSC)
size_categories:
- n<1K
source_datasets:
- extended|winograd_wsc
task_categories:
- multiple-choice
task_ids:
- multiple-choice-coreference-resolution
paperswithcode_id: null
dataset_info:
features:
- name: sentence
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 11022
num_examples: 80
- name: test
num_bytes: 15220
num_examples: 100
- name: validation
num_bytes: 13109
num_examples: 82
download_size: 19197
dataset_size: 39351
---
# Dataset Card for The modified Winograd Schema Challenge (MWSC)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://decanlp.com](http://decanlp.com)
- **Repository:** https://github.com/salesforce/decaNLP
- **Paper:** [The Natural Language Decathlon: Multitask Learning as Question Answering](https://arxiv.org/abs/1806.08730)
- **Point of Contact:** [Bryan McCann](mailto:bmccann@salesforce.com), [Nitish Shirish Keskar](mailto:nkeskar@salesforce.com)
- **Size of downloaded dataset files:** 19.20 kB
- **Size of the generated dataset:** 39.35 kB
- **Total amount of disk used:** 58.55 kB
### Dataset Summary
Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context.
This Modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.06 MB
An example looks as follows:
```
{
"sentence": "The city councilmen refused the demonstrators a permit because they feared violence.",
"question": "Who feared violence?",
"options": [ "councilmen", "demonstrators" ],
"answer": "councilmen"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `sentence`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
- `answer`: a `string` feature.
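Because `answer` is always one of `options`, evaluation on MWSC reduces to exact match between a predicted option and the gold answer. A minimal scoring sketch (the gold values mirror the instance above; the predictions are hypothetical):

```python
# Exact-match accuracy for MWSC-style multiple-choice predictions.
def exact_match_accuracy(predictions, references):
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)


gold = ["councilmen", "demonstrators"]
pred = ["councilmen", "councilmen"]  # hypothetical model output
assert exact_match_accuracy(pred, gold) == 0.5
```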
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 80| 82| 100|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Our code for running decaNLP has been open-sourced under the BSD-3-Clause license.
We chose to restrict decaNLP to datasets that were free and publicly accessible for research, but you should check their individual terms if you deviate from this use case.
From the [Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html):
> Both versions of the collections are licenced under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
If you use this in your work, please cite:
```
@article{McCann2018decaNLP,
title={The Natural Language Decathlon: Multitask Learning as Question Answering},
author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:1806.08730},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
myanmar_news | ---
annotations_creators:
- found
language_creators:
- found
language:
- my
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: MyanmarNews
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype:
class_label:
names:
'0': Sport
'1': Politic
'2': Business
'3': Entertainment
splits:
- name: train
num_bytes: 3797368
num_examples: 8116
download_size: 610592
dataset_size: 3797368
---
# Dataset Card for Myanmar_News
## Dataset Description
- **Repository:** https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
### Dataset Summary
The Myanmar news dataset contains article snippets in four categories:
Business, Entertainment, Politics, and Sport.
These were collected in October 2017 by Aye Hninn Khine.
### Languages
Myanmar/Burmese language
## Dataset Structure
### Data Fields
- text - text from article
- category - a topic: Business, Entertainment, **Politic**, or **Sport** (note spellings)
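The `category` field is stored as an integer class label; the integer-to-name mapping below is taken from the dataset features declared in the card's metadata (including the `Politic` spelling), and the helper name is our own:

```python
# Integer class labels -> topic names, as declared in the dataset features.
LABEL_NAMES = ["Sport", "Politic", "Business", "Entertainment"]


def label_to_name(label: int) -> str:
    return LABEL_NAMES[label]


assert label_to_name(0) == "Sport"
assert label_to_name(3) == "Entertainment"
```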
### Data Splits
One training set (8,116 total rows)
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Data was collected by Aye Hninn Khine
and shared on GitHub with a GPL-3.0 license.
Multiple text files were consolidated into one labeled CSV file by Nick Doiron.
## Additional Information
### Dataset Curators
Contributors to original GitHub repo:
- https://github.com/ayehninnkhine
### Licensing Information
GPL-3.0
### Citation Information
See https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
### Contributions
Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset. |
narrativeqa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
paperswithcode_id: narrativeqa
pretty_name: NarrativeQA
dataset_info:
features:
- name: document
struct:
- name: id
dtype: string
- name: kind
dtype: string
- name: url
dtype: string
- name: file_size
dtype: int32
- name: word_count
dtype: int32
- name: start
dtype: string
- name: end
dtype: string
- name: summary
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: answers
list:
- name: text
dtype: string
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 11565035136
num_examples: 32747
- name: test
num_bytes: 3549964281
num_examples: 10557
- name: validation
num_bytes: 1211859490
num_examples: 3461
download_size: 192528922
dataset_size: 16326858907
---
# Dataset Card for Narrative QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa)
- **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa)
- **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf)
- **Leaderboard:**
- **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](mailto:pblunsom@google.com) [Chris Dyer](mailto:cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com)
### Dataset Summary
NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.
### Supported Tasks and Leaderboards
The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided.
A typical example looks like this:
```
{
"document": {
"id": "23jncj2n3534563110",
"kind": "movie",
"url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html",
"file_size": 80473,
"word_count": 41000,
"start": "MOVIE screenplay by",
"end": ". THE END",
"summary": {
"text": "Joe Bloggs begins his journey exploring...",
"tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...],
"url": "http://en.wikipedia.org/wiki/Name_of_Movie",
"title": "Name of Movie (film)"
},
"text": "MOVIE screenplay by John Doe\nSCENE 1..."
},
"question": {
"text": "Where does Joe Bloggs live?",
"tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"],
},
"answers": [
{"text": "At home", "tokens": ["At", "home"]},
{"text": "His house", "tokens": ["His", "house"]}
]
}
```
### Data Fields
- `document.id` - Unique ID for the story.
- `document.kind` - "movie" or "gutenberg" depending on the source of the story.
- `document.url` - The URL where the story was downloaded from.
- `document.file_size` - File size (in bytes) of the story.
- `document.word_count` - Number of tokens in the story.
- `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.summary.text` - Text of the wikipedia summary of the story.
- `document.summary.tokens` - Tokenized version of `document.summary.text`.
- `document.summary.url` - Wikipedia URL of the summary.
- `document.summary.title` - Wikipedia Title of the summary.
- `question` - `{"text":"...", "tokens":[...]}` for the question about the story.
- `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question.
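The `start` and `end` fields make it possible to sanity-check a downloaded story before use, as described above. A minimal sketch (the document dict is abridged from the example instance; the helper name is our own):

```python
# Verify a story against its recorded first/last tokens.
def story_matches(document: dict, story_text: str) -> bool:
    text = story_text.strip()
    return text.startswith(document["start"]) and text.endswith(document["end"])


doc = {"start": "MOVIE screenplay by", "end": ". THE END"}
assert story_matches(doc, "MOVIE screenplay by John Doe\nSCENE 1... . THE END")
assert not story_matches(doc, "A different text entirely.")
```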
### Data Splits
The data is split into training, validation, and test sets by story (i.e. the same story cannot appear in more than one split):
| Train | Valid | Test |
| ------ | ----- | ----- |
| 32747 | 3461 | 10557 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)).
#### Who are the source language producers?
The language producers are authors of the stories and scripts as well as Amazon Turk workers for the questions.
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to steer annotators toward non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matches were verified with help from human annotators. The annotators were asked to determine whether both the story and the summary refer to a movie or a book (as some books are made into movies), or whether they are the same part in a series produced in the same year. Annotators were instructed to write 10 question–answer pairs each, based solely on a given summary, and to imagine that they were writing questions to test students who had read the full stories but not the summaries. We required questions that are specific enough, given the length and complexity of the narratives, and asked for a diverse set of questions about characters, events, why this happened, and so on. Annotators were encouraged to use their own words and were prevented from copying. We asked for answers that are grammatical, complete sentences, and explicitly allowed short answers (one word, a few-word phrase, or a short sentence), as we think that answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors.
#### Who are the annotators?
Amazon Mechanical Turk workers.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under a [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE).
### Citation Information
```
@article{narrativeqa,
author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
Edward Grefenstette},
title = {The {NarrativeQA} Reading Comprehension Challenge},
journal = {Transactions of the Association for Computational Linguistics},
url = {https://TBD},
volume = {TBD},
year = {2018},
pages = {TBD},
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
narrativeqa_manual | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
paperswithcode_id: narrativeqa
pretty_name: NarrativeQA
dataset_info:
features:
- name: document
struct:
- name: id
dtype: string
- name: kind
dtype: string
- name: url
dtype: string
- name: file_size
dtype: int32
- name: word_count
dtype: int32
- name: start
dtype: string
- name: end
dtype: string
- name: summary
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: answers
list:
- name: text
dtype: string
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 9115940054
num_examples: 32747
- name: test
num_bytes: 2911702563
num_examples: 10557
- name: validation
num_bytes: 968994186
num_examples: 3461
download_size: 22638273
dataset_size: 12996636803
---
# Dataset Card for Narrative QA Manual
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa)
- **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa)
- **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf)
- **Leaderboard:**
- **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](mailto:pblunsom@google.com) [Chris Dyer](mailto:cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com)
### Dataset Summary
NarrativeQA Manual is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because the script in the original repository downloads the stories from their original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can then be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`.
### Supported Tasks and Leaderboards
The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided.
A typical example looks like this:
```
{
"document": {
"id": "23jncj2n3534563110",
"kind": "movie",
"url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html",
"file_size": 80473,
"word_count": 41000,
"start": "MOVIE screenplay by",
"end": ". THE END",
"summary": {
"text": "Joe Bloggs begins his journey exploring...",
"tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...],
"url": "http://en.wikipedia.org/wiki/Name_of_Movie",
"title": "Name of Movie (film)"
},
"text": "MOVIE screenplay by John Doe\nSCENE 1..."
},
"question": {
"text": "Where does Joe Bloggs live?",
"tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"],
},
"answers": [
{"text": "At home", "tokens": ["At", "home"]},
{"text": "His house", "tokens": ["His", "house"]}
]
}
```
### Data Fields
- `document.id` - Unique ID for the story.
- `document.kind` - "movie" or "gutenberg" depending on the source of the story.
- `document.url` - The URL where the story was downloaded from.
- `document.file_size` - File size (in bytes) of the story.
- `document.word_count` - Number of tokens in the story.
- `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.summary.text` - Text of the Wikipedia summary of the story.
- `document.summary.tokens` - Tokenized version of `document.summary.text`.
- `document.summary.url` - Wikipedia URL of the summary.
- `document.summary.title` - Wikipedia title of the summary.
- `question` - `{"text":"...", "tokens":[...]}` for the question about the story.
- `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question.
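The `document.start` and `document.end` fields can be used to sanity-check a manually downloaded story before loading. A minimal sketch (the story text and field values here are hypothetical, mirroring the toy example above):

```python
def story_matches(text: str, start: str, end: str) -> bool:
    """Verify a story file still begins and ends with the recorded tokens."""
    stripped = text.strip()
    return stripped.startswith(start) and stripped.endswith(end)

# Hypothetical story text and start/end field values:
story = "MOVIE screenplay by John Doe\nSCENE 1 ... . THE END"
print(story_matches(story, "MOVIE screenplay by", ". THE END"))  # True
```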
### Data Splits
The data is split into training, validation, and test sets by story (i.e., the same story cannot appear in more than one split):
| Train | Valid | Test |
| ------ | ----- | ----- |
| 32747 | 3461 | 10557 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)).
#### Who are the source language producers?
The language producers are the authors of the stories and scripts, as well as the Amazon Mechanical Turk workers who wrote the questions.
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to steer annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matching was verified with help from human annotators. The annotators were asked to determine whether both the story and the summary refer to a movie or to a book (as some books are made into movies), and whether they are the same part in a series produced in the same year. Annotators were then instructed to write 10 question–answer pairs each based solely on a given summary, imagining that they were writing questions to test students who had read the full stories but not the summaries. Questions had to be specific enough, given the length and complexity of the narratives, and to cover a diverse set of aspects: characters, events, why something happened, and so on. Annotators were encouraged to use their own words and were prevented from copying. Answers had to be grammatical and complete, though short answers (one word, a few-word phrase, or a short sentence) were explicitly allowed, since answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors.
#### Who are the annotators?
Amazon Mechanical Turk workers.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under the [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE).
### Citation Information
```
@article{narrativeqa,
author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
Edward Grefenstette},
title = {The {NarrativeQA} Reading Comprehension Challenge},
journal = {Transactions of the Association for Computational Linguistics},
url = {https://TBD},
volume = {TBD},
year = {2018},
pages = {TBD},
}
```
### Contributions
Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset. |
natural_questions | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: natural-questions
pretty_name: Natural Questions
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: title
dtype: string
- name: url
dtype: string
- name: html
dtype: string
- name: tokens
sequence:
- name: token
dtype: string
- name: is_html
dtype: bool
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: short_answers
sequence:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
- name: long_answer_candidates
sequence:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: top_label
dtype: bool
splits:
- name: train
num_bytes: 97445142568
num_examples: 307373
- name: validation
num_bytes: 2353975312
num_examples: 7830
download_size: 45069199013
dataset_size: 99799117880
---
# Dataset Card for Natural Questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset)
- **Repository:** [https://github.com/google-research-datasets/natural-questions](https://github.com/google-research-datasets/natural-questions)
- **Paper:** [https://research.google/pubs/pub47761/](https://research.google/pubs/pub47761/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 45.07 GB
- **Size of the generated dataset:** 99.80 GB
- **Total amount of disk used:** 144.87 GB
### Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets.
### Supported Tasks and Leaderboards
[https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
### Languages
en
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 45.07 GB
- **Size of the generated dataset:** 99.80 GB
- **Total amount of disk used:** 144.87 GB
A toy example from the 'train' split looks as follows.
```
{
"id": "797803103760793766",
"document": {
"title": "Google",
"url": "http://www.wikipedia.org/Google",
"html": "<html><body><h1>Google Inc.</h1><p>Google was founded in 1998 By:<ul><li>Larry</li><li>Sergey</li></ul></p></body></html>",
"tokens":[
{"token": "<h1>", "start_byte": 12, "end_byte": 16, "is_html": True},
{"token": "Google", "start_byte": 16, "end_byte": 22, "is_html": False},
{"token": "inc", "start_byte": 23, "end_byte": 26, "is_html": False},
{"token": ".", "start_byte": 26, "end_byte": 27, "is_html": False},
{"token": "</h1>", "start_byte": 27, "end_byte": 32, "is_html": True},
{"token": "<p>", "start_byte": 32, "end_byte": 35, "is_html": True},
{"token": "Google", "start_byte": 35, "end_byte": 41, "is_html": False},
{"token": "was", "start_byte": 42, "end_byte": 45, "is_html": False},
{"token": "founded", "start_byte": 46, "end_byte": 53, "is_html": False},
{"token": "in", "start_byte": 54, "end_byte": 56, "is_html": False},
{"token": "1998", "start_byte": 57, "end_byte": 61, "is_html": False},
{"token": "by", "start_byte": 62, "end_byte": 64, "is_html": False},
{"token": ":", "start_byte": 64, "end_byte": 65, "is_html": False},
{"token": "<ul>", "start_byte": 65, "end_byte": 69, "is_html": True},
{"token": "<li>", "start_byte": 69, "end_byte": 73, "is_html": True},
{"token": "Larry", "start_byte": 73, "end_byte": 78, "is_html": False},
{"token": "</li>", "start_byte": 78, "end_byte": 83, "is_html": True},
{"token": "<li>", "start_byte": 83, "end_byte": 87, "is_html": True},
{"token": "Sergey", "start_byte": 87, "end_byte": 92, "is_html": False},
{"token": "</li>", "start_byte": 92, "end_byte": 97, "is_html": True},
{"token": "</ul>", "start_byte": 97, "end_byte": 102, "is_html": True},
{"token": "</p>", "start_byte": 102, "end_byte": 106, "is_html": True}
],
},
"question" :{
"text": "who founded google",
"tokens": ["who", "founded", "google"]
},
"long_answer_candidates": [
{"start_byte": 32, "end_byte": 106, "start_token": 5, "end_token": 22, "top_level": True},
{"start_byte": 65, "end_byte": 102, "start_token": 13, "end_token": 21, "top_level": False},
{"start_byte": 69, "end_byte": 83, "start_token": 14, "end_token": 17, "top_level": False},
{"start_byte": 83, "end_byte": 92, "start_token": 17, "end_token": 20 , "top_level": False}
],
"annotations": [{
"id": "6782080525527814293",
"long_answer": {"start_byte": 32, "end_byte": 106, "start_token": 5, "end_token": 22, "candidate_index": 0},
"short_answers": [
{"start_byte": 73, "end_byte": 78, "start_token": 15, "end_token": 16, "text": "Larry"},
{"start_byte": 87, "end_byte": 92, "start_token": 18, "end_token": 19, "text": "Sergey"}
],
"yes_no_answer": -1
}]
}
```
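Answer spans are located by byte (and token) offsets into the document HTML rather than by answer strings. A minimal sketch of recovering a span's text from byte offsets, using values from the toy instance above:

```python
def span_text(html: str, start_byte: int, end_byte: int) -> str:
    """Recover the text of a span from byte offsets into the document HTML."""
    return html.encode("utf-8")[start_byte:end_byte].decode("utf-8")

# Document HTML from the toy instance above:
html = ("<html><body><h1>Google Inc.</h1><p>Google was founded in 1998 By:"
        "<ul><li>Larry</li><li>Sergey</li></ul></p></body></html>")

print(span_text(html, 73, 78))  # 'Larry' (the first short answer)
print(span_text(html, 46, 53))  # 'founded'
```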
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `document` a dictionary feature containing:
- `title`: a `string` feature.
- `url`: a `string` feature.
- `html`: a `string` feature.
- `tokens`: a dictionary feature containing:
- `token`: a `string` feature.
- `is_html`: a `bool` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `question`: a dictionary feature containing:
- `text`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `long_answer_candidates`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `top_level`: a `bool` feature.
- `annotations`: a dictionary feature containing:
- `id`: a `string` feature.
- `long_answers`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `candidate_index`: a `int64` feature.
- `short_answers`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `text`: a `string` feature.
- `yes_no_answer`: a classification label, with possible values including `NO` (0), `YES` (1).
### Data Splits
| name | train | validation |
|---------|-------:|-----------:|
| default | 307373 | 7830 |
| dev | N/A | 7830 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association of Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ncbi_disease | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: ncbi-disease-1
pretty_name: NCBI Disease
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-Disease
'2': I-Disease
config_name: ncbi_disease
splits:
- name: train
num_bytes: 2355516
num_examples: 5433
- name: validation
num_bytes: 413900
num_examples: 924
- name: test
num_bytes: 422842
num_examples: 941
download_size: 1546492
dataset_size: 3192258
train-eval-index:
- config: ncbi_disease
task: token-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
tokens: text
ner_tags: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for NCBI Disease
## Table of Contents
- [Dataset Card for NCBI Disease](#dataset-card-for-ncbi-disease)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NCBI](https://www.ncbi.nlm.nih.gov/research/bionlp/Data/disease)
- **Repository:** [Github](https://github.com/spyysalo/ncbi-disease)
- **Paper:** [NCBI disease corpus: A resource for disease name recognition and concept normalization](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655)
- **Leaderboard:** [Named Entity Recognition on NCBI-disease](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ncbi-disease)
- **Point of Contact:** [email](zhiyong.lu@nih.gov)
### Dataset Summary
This dataset contains the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community.
### Supported Tasks and Leaderboards
Named Entity Recognition: [Leaderboard](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ncbi-disease)
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Instances of the dataset contain an array of `tokens`, `ner_tags` and an `id`. An example of an instance of the dataset:
```
{
'tokens': ['Identification', 'of', 'APC2', ',', 'a', 'homologue', 'of', 'the', 'adenomatous', 'polyposis', 'coli', 'tumour', 'suppressor', '.'],
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0],
'id': '0'
}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` (`O`) marks tokens outside any disease mention, `1` (`B-Disease`) marks the first token of a disease mention, and `2` (`I-Disease`) marks the subsequent tokens of that mention.
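The IOB-style tags can be decoded back into disease mention strings. A minimal sketch, applied to the instance shown above:

```python
def extract_diseases(tokens, ner_tags):
    """Group B-Disease (1) / I-Disease (2) tags into disease mention strings."""
    mentions, current = [], []
    for token, tag in zip(tokens, ner_tags):
        if tag == 1:                # B-Disease: start a new mention
            if current:
                mentions.append(" ".join(current))
            current = [token]
        elif tag == 2 and current:  # I-Disease: continue the open mention
            current.append(token)
        else:                       # O: close any open mention
            if current:
                mentions.append(" ".join(current))
            current = []
    if current:
        mentions.append(" ".join(current))
    return mentions

tokens = ['Identification', 'of', 'APC2', ',', 'a', 'homologue', 'of', 'the',
          'adenomatous', 'polyposis', 'coli', 'tumour', 'suppressor', '.']
ner_tags = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0]
print(extract_diseases(tokens, ner_tags))  # ['adenomatous polyposis coli tumour']
```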
### Data Splits
The data is split into a train (5433 instances), validation (924 instances) and test set (941 instances).
## Dataset Creation
### Curation Rationale
The goal of the dataset is to improve the state of the art in disease name recognition and normalization research by providing a high-quality gold standard, thus enabling the development of machine-learning-based approaches for such tasks.
### Source Data
#### Initial Data Collection and Normalization
The dataset consists of abstracts extracted from PubMed.
#### Who are the source language producers?
The source language producers are the authors of publication abstracts hosted in PubMed.
### Annotations
#### Annotation process
Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.
#### Who are the annotators?
The annotator group consisted of 14 people with backgrounds in biomedical informatics research and experience in biomedical text corpus annotation.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information. This dataset provides an annotated corpora that can be used to develop highly effective tools to automatically detect central biomedical concepts such as diseases.
### Discussion of Biases
To avoid annotator bias, pairs of annotators were chosen randomly for each set, so that each pair of annotators overlapped for at most two sets.
### Other Known Limitations
A handful of disease concepts were discovered that were not included in MEDIC. For those, we decided to include the appropriate OMIM identifiers.
In addition, certain disease mentions were found to not be easily represented using the standard categorizations.
Also, each PMID document was pre-annotated using the Inference Method developed for disease name normalization, which properly handles abbreviation recognition, robust string matching, etc. As such, human annotators were given the pre-annotated documents as a starting point and allowed to see each pre-annotation with a computed confidence.
## Additional Information
### Dataset Curators
Rezarta Islamaj Doğan, Robert Leaman, Zhiyong Lu
### Licensing Information
```
PUBLIC DOMAIN NOTICE
This work is a "United States Government Work" under the terms of the
United States Copyright Act. It was written as part of the authors'
official duties as a United States Government employee and thus cannot
be copyrighted within the United States. The data is freely available
to the public for use. The National Library of Medicine and the
U.S. Government have not placed any restriction on its use or
reproduction.
Although all reasonable efforts have been taken to ensure the accuracy
and reliability of the data and its source code, the NLM and the
U.S. Government do not and cannot warrant the performance or results
that may be obtained by using it. The NLM and the U.S. Government
disclaim all warranties, express or implied, including warranties of
performance, merchantability or fitness for any particular purpose.
Please cite the authors in any work or product based on this material:
An improved corpus of disease mentions in PubMed citations
http://aclweb.org/anthology-new/W/W12/W12-2411.pdf
NCBI Disease Corpus: A Resource for Disease Name Recognition and
Normalization http://www.ncbi.nlm.nih.gov/pubmed/24393765
Disease Name Normalization with Pairwise Learning to Rank
http://www.ncbi.nlm.nih.gov/pubmed/23969135
```
### Citation Information
```
@article{dougan2014ncbi,
title={NCBI disease corpus: a resource for disease name recognition and concept normalization},
author={Do{\u{g}}an, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong},
journal={Journal of biomedical informatics},
volume={47},
pages={1--10},
year={2014},
publisher={Elsevier}
}
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. |
nchlt | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- af
- nr
- nso
- ss
- tn
- ts
- ve
- xh
- zu
license:
- cc-by-2.5
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: NCHLT
dataset_info:
- config_name: af
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3955069
num_examples: 8961
download_size: 25748344
dataset_size: 3955069
- config_name: nr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3188781
num_examples: 9334
download_size: 20040327
dataset_size: 3188781
- config_name: xh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 2365821
num_examples: 6283
download_size: 14513302
dataset_size: 2365821
- config_name: zu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3951366
num_examples: 10955
download_size: 25097584
dataset_size: 3951366
- config_name: nso-sepedi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3322296
num_examples: 7116
download_size: 22077376
dataset_size: 3322296
- config_name: nso-sesotho
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 4427898
num_examples: 9471
download_size: 30421109
dataset_size: 4427898
- config_name: tn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3812339
num_examples: 7943
download_size: 25905236
dataset_size: 3812339
- config_name: ss
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3431063
num_examples: 10797
download_size: 21882224
dataset_size: 3431063
- config_name: ve
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3941041
num_examples: 8477
download_size: 26382457
dataset_size: 3941041
- config_name: ts
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3941041
num_examples: 8477
download_size: 26382457
dataset_size: 3941041
---
# Dataset Card for NCHLT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [link](https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II)
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
The development of linguistic resources for use in natural language processing is of utmost importance for the continued growth of research and development in the field, especially for resource-scarce languages. In this paper we describe the process and challenges of simultaneously developing multiple linguistic resources for ten of the official languages of South Africa. The project focussed on establishing a set of foundational resources that can foster further development of both resources and technologies for the NLP industry in South Africa. The development efforts during the project included creating monolingual unannotated corpora, of which a subset of the corpora for each language was annotated on token, orthographic, morphological and morphosyntactic layers. The annotated subsets include both development and test sets and were used in the creation of five core technologies, viz. a tokeniser, sentenciser, lemmatiser, part-of-speech tagger and morphological decomposer for each language. We report on the quality of these tools for each language and provide some more context of the importance of the resources within the South African context.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
[More Information Needed]
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Martin.Puttkammer@nwu.ac.za
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{eiselen2014developing,
title={Developing Text Resources for Ten South African Languages.},
author={Eiselen, Roald and Puttkammer, Martin J},
booktitle={LREC},
pages={3698--3703},
year={2014}
}
```
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.

ncslgr
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ase
- en
license:
- mit
multilinguality:
- translation
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: NCSLGR
dataset_info:
- config_name: entire_dataset
features:
- name: eaf
dtype: string
- name: sentences
sequence:
- name: gloss
dtype: string
- name: text
dtype: string
- name: videos
sequence: string
splits:
- name: train
num_bytes: 783504
num_examples: 870
download_size: 4113829143
dataset_size: 783504
- config_name: annotations
features:
- name: eaf
dtype: string
- name: sentences
sequence:
- name: gloss
dtype: string
- name: text
dtype: string
- name: videos
sequence: string
splits:
- name: train
num_bytes: 371725
num_examples: 870
download_size: 5335358
dataset_size: 371725
---
# Dataset Card for NCSLGR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.bu.edu/asllrp/ncslgr.html
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- American Sign Language
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `eaf`: path to an ELAN annotation file
- `videos`: sequence of paths to video files
- `sentences`: sequence of parallel sentences, each consisting of:
  - `gloss`: American Sign Language gloss annotation
  - `text`: English translation
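Based on the features declared in the metadata above, each example can be pictured as a nested structure like the following. This is a minimal illustrative sketch: the paths, glosses, and sentences below are invented placeholders, not actual records from the corpus.

```python
# Illustrative shape of one NCSLGR example, mirroring the declared features.
# All values below are made-up placeholders for demonstration only.
example = {
    "eaf": "annotations/story.eaf",        # ELAN annotation file (hypothetical path)
    "videos": [                            # one or more camera views per utterance
        "videos/story-front.mov",
        "videos/story-face.mov",
    ],
    "sentences": {                         # parallel gloss/text sequences
        "gloss": ["IX-1p DRIVE ACCIDENT", "IX-1p FINE"],
        "text": ["I had a car accident.", "I am fine."],
    },
}

# Pair each ASL gloss with its English translation.
pairs = list(zip(example["sentences"]["gloss"], example["sentences"]["text"]))
for gloss, text in pairs:
    print(f"{gloss} -> {text}")
```

Note that `sentences` is a sequence feature: the gloss at index `i` is aligned with the English text at the same index, which is what makes the corpus usable for gloss-to-text translation.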
### Data Splits
The dataset contains a single `train` split; no validation or test splits are provided.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{dataset:databases2007volumes,
title={Volumes 2--7},
author={Databases, NCSLGR},
year={2007},
publisher={American Sign Language Linguistic Research Project (Distributed on CD-ROM~…}
}
```
### Contributions
Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.