---
license: apache-2.0
language:
- en
- de
- ru
- zh
tags:
- mt-evaluation
- WMT
size_categories:
- 100K<n<1M
---

# Dataset Summary

This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/).

The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: MQM score
- system: MT engine that produced the translation
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
  
You can also find the original data [here](https://github.com/google/wmt-mqm-human-evaluation). We recommend using the original repo if you are interested in the annotation spans themselves and not just the final scores.


## Python Usage

```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-mqm-human-evaluation", split="train")
```
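Each example is a plain dictionary, so a quick way to verify the columns described above is to inspect a single row. A minimal sketch (the exact key order is an assumption):

```python
# inspect one row to see the 9 columns described above
sample = dataset[0]
print(sample.keys())              # expected: lp, src, mt, ref, score, system, annotators, domain, year
print(sample["lp"], sample["system"], sample["score"])
```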

There is no standard train/test split for this dataset, but you can easily split it by year, language pair, or domain. For example:

```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)

# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")

# split by domain
data = dataset.filter(lambda example: example["domain"] == "ted")
```
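Building on the filters above, you can carve out a held-out evaluation set by year. The sketch below holds out 2022 purely as an illustration, assuming the `year` column is stored as an integer (as the filter examples above suggest):

```python
# hypothetical split: train on earlier years, hold out 2022 for evaluation
train_data = dataset.filter(lambda example: example["year"] < 2022)
test_data = dataset.filter(lambda example: example["year"] == 2022)
print(len(train_data), len(test_data))
```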

## Citation Information

If you use this data, please cite the following works:
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/)