---
license: apache-2.0
language:
- en
- de
- ru
- zh
tags:
- mt-evaluation
- WMT
- MQM
size_categories:
- 100K<n<1M
---

# Dataset Summary

This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/) in the form of error spans.

The data is organised into 8 columns, including:

- src: input text
- mt: translation
- ref: reference translation
- annotations: list of error spans (dictionaries with 'start', 'end', 'severity', 'text'); see the sketch below
- lp: language pair
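
As a quick illustration, here is a minimal sketch of reading the error spans from a single example, assuming `annotations` comes back as a list of dictionaries with the keys listed above:

```python
from datasets import load_dataset

dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train")

# Each row carries its error spans as dictionaries with
# 'start', 'end', 'severity' and 'text' keys.
example = dataset[0]
print(example["lp"])
print(example["src"])
print(example["mt"])
for span in example["annotations"]:
    print(span["severity"], repr(span["text"]), span["start"], span["end"])
```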

**Note that this is not an official release of the data**; the original data can be found [here](https://github.com/google/wmt-mqm-human-evaluation).

Also, while `en-ru` was annotated by Unbabel, `en-de` and `zh-en` were annotated by Google. This means that for en-de and zh-en you will only find minor and major errors, while for en-ru you can also find a few critical errors.
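
To check this yourself, a small sketch that tallies severity labels per language pair; it assumes the `severity` field holds plain strings such as 'minor' or 'major', and it iterates over the whole dataset, so it may take a while:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train")

# Count severity labels per language pair; en-de and zh-en should only
# show minor/major labels, while en-ru should also contain critical ones.
counts = {}
for example in dataset:
    lp_counts = counts.setdefault(example["lp"], Counter())
    for span in example["annotations"]:
        lp_counts[span["severity"]] += 1

for lp, lp_counts in sorted(counts.items()):
    print(lp, dict(lp_counts))
```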

## Python usage

```python
from datasets import load_dataset

dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train")
```
39
+ There is no standard train/test split for this dataset but you can easily split it according to year, language pair or domain. E.g. :
40
+
41
+ ```python
42
+ # split by LP
43
+ data = dataset.filter(lambda example: example["lp"] == "en-de")
44
+ ```
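
Along the same lines, a sketch of a year-based hold-out split, continuing from the `dataset` loaded above; the `year` column name and its integer type are assumptions based on the description above, and the cutoff is arbitrary:

```python
# Hypothetical year-based split: hold out the most recent annotations
# as a test set. The "year" column and its integer type are assumed;
# adjust if the actual schema differs.
train = dataset.filter(lambda example: example["year"] < 2022)
test = dataset.filter(lambda example: example["year"] >= 2022)
```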

## Citation Information

If you use this data, please cite the following works:
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/)