---
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
configs:
- config_name: default
  data_files:
  - path: test/*.parquet
    split: test
  - path: train/*.parquet
    split: train
  - path: validation/*.parquet
    split: validation
- config_name: ar
  data_files:
  - path: test/ar.parquet
    split: test
  - path: train/ar.parquet
    split: train
  - path: validation/ar.parquet
    split: validation
- config_name: ru
  data_files:
  - path: test/ru.parquet
    split: test
  - path: train/ru.parquet
    split: train
  - path: validation/ru.parquet
    split: validation
- config_name: el
  data_files:
  - path: test/el.parquet
    split: test
  - path: train/el.parquet
    split: train
  - path: validation/el.parquet
    split: validation
- config_name: th
  data_files:
  - path: test/th.parquet
    split: test
  - path: train/th.parquet
    split: train
  - path: validation/th.parquet
    split: validation
- config_name: fr
  data_files:
  - path: test/fr.parquet
    split: test
  - path: train/fr.parquet
    split: train
  - path: validation/fr.parquet
    split: validation
- config_name: de
  data_files:
  - path: test/de.parquet
    split: test
  - path: train/de.parquet
    split: train
  - path: validation/de.parquet
    split: validation
- config_name: zh
  data_files:
  - path: test/zh.parquet
    split: test
  - path: train/zh.parquet
    split: train
  - path: validation/zh.parquet
    split: validation
- config_name: ur
  data_files:
  - path: test/ur.parquet
    split: test
  - path: train/ur.parquet
    split: train
  - path: validation/ur.parquet
    split: validation
- config_name: sw
  data_files:
  - path: test/sw.parquet
    split: test
  - path: train/sw.parquet
    split: train
  - path: validation/sw.parquet
    split: validation
- config_name: bg
  data_files:
  - path: test/bg.parquet
    split: test
  - path: train/bg.parquet
    split: train
  - path: validation/bg.parquet
    split: validation
- config_name: es
  data_files:
  - path: test/es.parquet
    split: test
  - path: train/es.parquet
    split: train
  - path: validation/es.parquet
    split: validation
- config_name: en
  data_files:
  - path: test/en.parquet
    split: test
  - path: train/en.parquet
    split: train
  - path: validation/en.parquet
    split: validation
- config_name: vi
  data_files:
  - path: test/vi.parquet
    split: test
  - path: train/vi.parquet
    split: train
  - path: validation/vi.parquet
    split: validation
- config_name: tr
  data_files:
  - path: test/tr.parquet
    split: test
  - path: train/tr.parquet
    split: train
  - path: validation/tr.parquet
    split: validation
- config_name: hi
  data_files:
  - path: test/hi.parquet
    split: test
  - path: train/hi.parquet
    split: train
  - path: validation/hi.parquet
    split: validation
---
# Dataset Card for "xnli"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://www.nyu.edu/projects/bowman/xnli/
- Repository: More Information Needed
- Paper: [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053)
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 7.74 GB
- Size of the generated dataset: 3.23 GB
- Total amount of disk used: 10.97 GB
### Dataset Summary

XNLI is a subset of a few thousand examples from MNLI that has been translated into 14 different languages (some of them relatively low-resource). As with MNLI, the goal is to predict textual entailment: given two sentences, does sentence A imply, contradict, or neither imply nor contradict sentence B? The task is framed as three-way classification (predict one of three labels for each sentence pair).
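As a minimal loading sketch (assuming the `datasets` library is installed and the dataset is published on the Hugging Face Hub under the `xnli` identifier):

```python
from datasets import load_dataset

# Load the English configuration; any other language code from the
# frontmatter above ("ar", "bg", "de", ...) selects that translation.
xnli_en = load_dataset("xnli", "en")

# Each split holds premise/hypothesis/label triples.
example = xnli_en["validation"][0]
print(example["premise"])
print(example["hypothesis"])
print(example["label"])  # 0 = entailment, 1 = neutral, 2 = contradiction
```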
### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances
#### all_languages

- Size of downloaded dataset files: 483.96 MB
- Size of the generated dataset: 1.61 GB
- Total amount of disk used: 2.09 GB

An example of 'train' looks as follows. This example was too long and was cropped:

```json
{
    "hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
    "label": 0,
    "premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
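Given the structure shown above (`premise` as a language-to-text mapping, `hypothesis` as parallel `language`/`translation` lists), a row can be unpacked into per-language sentence pairs; a short sketch:

```python
from datasets import load_dataset

row = load_dataset("xnli", "all_languages", split="validation")[0]

# `hypothesis` stores two parallel lists; zip them back into a mapping.
hypotheses = dict(zip(row["hypothesis"]["language"],
                      row["hypothesis"]["translation"]))

# `premise` is already a language -> text mapping.
for lang, premise in row["premise"].items():
    print(lang, premise, "=>", hypotheses[lang])
```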
#### ar

- Size of downloaded dataset files: 483.96 MB
- Size of the generated dataset: 109.32 MB
- Total amount of disk used: 593.29 MB

An example of 'validation' looks as follows.

```json
{
    "hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
    "label": 1,
    "premise": "وقال، ماما، لقد عدت للمنزل."
}
```

#### bg

- Size of downloaded dataset files: 483.96 MB
- Size of the generated dataset: 128.32 MB
- Total amount of disk used: 612.28 MB

An example of 'train' looks as follows. This example was too long and was cropped:

```json
{
    "hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
    "label": 0,
    "premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```

#### de

- Size of downloaded dataset files: 483.96 MB
- Size of the generated dataset: 86.17 MB
- Total amount of disk used: 570.14 MB

An example of 'train' looks as follows. This example was too long and was cropped:

```json
{
    "hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
    "label": 0,
    "premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```

#### el

- Size of downloaded dataset files: 483.96 MB
- Size of the generated dataset: 142.30 MB
- Total amount of disk used: 626.26 MB

An example of 'validation' looks as follows. This example was too long and was cropped:

```json
{
    "hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
    "label": 1,
    "premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```
### Data Fields

The data fields are the same among all splits.
#### all_languages

- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### ar

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### bg

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### de

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### el

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
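Because `label` is a class label, the `datasets` feature metadata can convert between the integer values and the names above; a minimal sketch (assuming the column is exposed as a `ClassLabel`):

```python
from datasets import load_dataset

ds = load_dataset("xnli", "en", split="test")

# The label column carries its class names as feature metadata.
label_feature = ds.features["label"]
print(label_feature.names)                     # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(0))                # 'entailment'
print(label_feature.str2int("contradiction"))  # 2
```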
### Data Splits

| name          | train  | validation | test |
|---------------|--------|------------|------|
| all_languages | 392702 | 2490       | 5010 |
| ar            | 392702 | 2490       | 5010 |
| bg            | 392702 | 2490       | 5010 |
| de            | 392702 | 2490       | 5010 |
| el            | 392702 | 2490       | 5010 |
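These counts can be checked programmatically; a small sketch (assuming any single-language configuration follows the table above):

```python
from datasets import load_dataset

# Every configuration exposes the same three splits.
splits = load_dataset("xnli", "en")
for name, split in splits.items():
    print(name, split.num_rows)
# Expected per the table above: train 392702, validation 2490, test 5010
```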
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information
```bibtex
@InProceedings{conneau2018xnli,
  author    = {Conneau, Alexis
               and Rinott, Ruty
               and Lample, Guillaume
               and Williams, Adina
               and Bowman, Samuel R.
               and Schwenk, Holger
               and Stoyanov, Veselin},
  title     = {XNLI: Evaluating Cross-lingual Sentence Representations},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods
               in Natural Language Processing},
  year      = {2018},
  publisher = {Association for Computational Linguistics},
  location  = {Brussels, Belgium},
}
```
### Contributions
Thanks to @lewtun, @mariamabarham, @thomwolf, @lhoestq, @patrickvonplaten for adding this dataset.