|
--- |
|
annotations_creators: |
|
- no-annotation |
|
language_creators: |
|
- found |
|
language: |
|
- cs |
|
- de |
|
- en |
|
- fi |
|
- fr |
|
- ru |
|
license: |
|
- unknown |
|
multilinguality: |
|
- translation |
|
size_categories: |
|
- 10M<n<100M |
|
source_datasets: |
|
- extended|europarl_bilingual |
|
- extended|giga_fren |
|
- extended|news_commentary |
|
- extended|un_multi |
|
task_categories: |
|
- translation |
|
task_ids: [] |
|
pretty_name: WMT15 |
|
paperswithcode_id: wmt-2015 |
|
dataset_info: |
|
- config_name: cs-en |
|
features: |
|
- name: translation |
|
dtype: |
|
translation: |
|
languages: |
|
- cs |
|
- en |
|
splits: |
|
- name: train |
|
num_bytes: 282996942 |
|
num_examples: 959768 |
|
- name: validation |
|
num_bytes: 757817 |
|
num_examples: 3003 |
|
- name: test |
|
num_bytes: 572203 |
|
num_examples: 2656 |
|
download_size: 1740666258 |
|
dataset_size: 284326962 |
|
- config_name: de-en |
|
features: |
|
- name: translation |
|
dtype: |
|
translation: |
|
languages: |
|
- de |
|
- en |
|
splits: |
|
- name: train |
|
num_bytes: 1364002869 |
|
num_examples: 4522998 |
|
- name: validation |
|
num_bytes: 777334 |
|
num_examples: 3003 |
|
- name: test |
|
num_bytes: 522989 |
|
num_examples: 2169 |
|
download_size: 1740666258 |
|
dataset_size: 1365303192 |
|
- config_name: fi-en |
|
features: |
|
- name: translation |
|
dtype: |
|
translation: |
|
languages: |
|
- fi |
|
- en |
|
splits: |
|
- name: train |
|
num_bytes: 605146817 |
|
num_examples: 2073394 |
|
- name: validation |
|
num_bytes: 363941 |
|
num_examples: 1500 |
|
- name: test |
|
num_bytes: 306335 |
|
num_examples: 1370 |
|
download_size: 273390220 |
|
dataset_size: 605817093 |
|
- config_name: fr-en |
|
features: |
|
- name: translation |
|
dtype: |
|
translation: |
|
languages: |
|
- fr |
|
- en |
|
splits: |
|
- name: train |
|
num_bytes: 14758986622 |
|
num_examples: 40853137 |
|
- name: validation |
|
num_bytes: 1138737 |
|
num_examples: 4503 |
|
- name: test |
|
num_bytes: 298771 |
|
num_examples: 1500 |
|
download_size: 6702781608 |
|
dataset_size: 14760424130 |
|
- config_name: ru-en |
|
features: |
|
- name: translation |
|
dtype: |
|
translation: |
|
languages: |
|
- ru |
|
- en |
|
splits: |
|
- name: train |
|
num_bytes: 437752256 |
|
num_examples: 1495081 |
|
- name: validation |
|
num_bytes: 1087746 |
|
num_examples: 3003 |
|
- name: test |
|
num_bytes: 955972 |
|
num_examples: 2818 |
|
download_size: 1092059435 |
|
dataset_size: 439795974 |
|
--- |
|
|
|
# Dataset Card for "wmt15" |
|
|
|
## Table of Contents |
|
- [Dataset Description](#dataset-description) |
|
- [Dataset Summary](#dataset-summary) |
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
- [Languages](#languages) |
|
- [Dataset Structure](#dataset-structure) |
|
- [Data Instances](#data-instances) |
|
- [Data Fields](#data-fields) |
|
- [Data Splits](#data-splits) |
|
- [Dataset Creation](#dataset-creation) |
|
- [Curation Rationale](#curation-rationale) |
|
- [Source Data](#source-data) |
|
- [Annotations](#annotations) |
|
- [Personal and Sensitive Information](#personal-and-sensitive-information) |
|
- [Considerations for Using the Data](#considerations-for-using-the-data) |
|
- [Social Impact of Dataset](#social-impact-of-dataset) |
|
- [Discussion of Biases](#discussion-of-biases) |
|
- [Other Known Limitations](#other-known-limitations) |
|
- [Additional Information](#additional-information) |
|
- [Dataset Curators](#dataset-curators) |
|
- [Licensing Information](#licensing-information) |
|
- [Citation Information](#citation-information) |
|
- [Contributions](#contributions) |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [http://www.statmt.org/wmt15/translation-task.html](http://www.statmt.org/wmt15/translation-task.html) |
|
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
- **Paper:** [Findings of the 2015 Workshop on Statistical Machine Translation](http://aclweb.org/anthology/W15-3001)
|
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
- **Size of downloaded dataset files:** 1.74 GB (`cs-en` config)

- **Size of the generated dataset:** 284.34 MB (`cs-en` config)

- **Total amount of disk used:** 2.02 GB (`cs-en` config)
|
|
|
### Dataset Summary |
|
|
|
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> |
|
<p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p> |
|
<ul> |
|
<li>Non-English files contain many English sentences.</li> |
|
<li>Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.</li> |
|
</ul> |
|
<p>We have contacted the WMT organizers, who indicated that they do not plan to update the Common Crawl corpus data, as they consider it superseded, primarily by the CCMatrix and, to some extent, the ParaCrawl datasets.</p>
|
</div> |
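
If the Common Crawl portion matters for your use case, a language-identification filter over both sides of each pair can screen out many of the misaligned sentences described above. A minimal sketch, assuming the third-party `langdetect` package (an illustrative choice, not part of this dataset's tooling):

```python
# pip install langdetect  -- hypothetical helper dependency, not shipped with this dataset
from langdetect import detect

def looks_aligned(example, src="cs", tgt="en"):
    """Heuristic: keep a pair only if each side is detected as its claimed language."""
    pair = example["translation"]
    try:
        return detect(pair[src]) == src and detect(pair[tgt]) == tgt
    except Exception:  # detection can fail on very short or empty strings
        return False

# Usage: drop suspect pairs from a loaded split, e.g.
# clean_train = load_dataset("wmt15", "cs-en")["train"].filter(looks_aligned)
```

Note that this only catches language mismatches; pairs that are in the right languages but semantically unrelated would need a separate alignment check.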
|
|
|
Translation dataset based on the data from statmt.org. |
|
|
|
Versions of this dataset exist for different years, each built from a combination of data sources. The base `wmt` builder also lets you create a custom dataset by choosing your own data sources and language pair. This can be done as follows:
|
|
|
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder

# Copy the wmt15 loading scripts to a local directory so they can be reused.
inspect_dataset("wmt15", "path/to/scripts")

# Build a custom config: pick the language pair and the data sources for each split.
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)

# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()

# Streamable version
ds = builder.as_streaming_dataset()
```
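
For one of the predefined WMT15 language pairs (`cs-en`, `de-en`, `fi-en`, `fr-en`, `ru-en`), no custom builder is needed; a plain `load_dataset` call is enough:

```python
from datasets import load_dataset

# Any config name listed in the metadata above works here.
ds = load_dataset("wmt15", "cs-en")
print(ds)  # DatasetDict with train/validation/test splits
```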
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Languages |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
#### cs-en |
|
|
|
- **Size of downloaded dataset files:** 1.74 GB |
|
- **Size of the generated dataset:** 284.34 MB |
|
- **Total amount of disk used:** 2.02 GB |
|
|
|
An example of 'validation' looks as follows (sentence text elided; only the structure is shown).
```
{'translation': {'cs': '...', 'en': '...'}}
```
|
|
|
### Data Fields |
|
|
|
The data fields are the same among all splits. |
|
|
|
#### cs-en |
|
- `translation`: a dictionary feature holding one `string` value per language, here `cs` and `en`.
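
Continuing from the `cs-en` loading example above, a minimal access sketch (split and index chosen for illustration):

```python
# `translation` maps each language code to one sentence string.
pair = ds["train"][0]["translation"]
czech, english = pair["cs"], pair["en"]
```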
|
|
|
### Data Splits |
|
|
|
|name |   train|validation|test|
|-----|-------:|---------:|---:|
|cs-en|  959768|      3003|2656|
|de-en| 4522998|      3003|2169|
|fi-en| 2073394|      1500|1370|
|fr-en|40853137|      4503|1500|
|ru-en| 1495081|      3003|2818|
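
These counts can also be read off a loaded `DatasetDict` (continuing with the `cs-en` example above; the printed numbers should match the table):

```python
print(ds.num_rows)  # {'train': 959768, 'validation': 3003, 'test': 2656}
```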
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the source language producers? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Licensing Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Citation Information |
|
|
|
```
@InProceedings{bojar-EtAl:2015:WMT,
  author    = {Bojar, Ond{\v{r}}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
  title     = {Findings of the 2015 Workshop on Statistical Machine Translation},
  booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
  month     = {September},
  year      = {2015},
  address   = {Lisbon, Portugal},
  publisher = {Association for Computational Linguistics},
  pages     = {1--46},
  url       = {http://aclweb.org/anthology/W15-3001}
}
```
|
|
|
|
|
### Contributions |
|
|
|
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |