RicardoRei committed 6a868f4 (parent: 9c1c871): Update README.md

README.md (changed):
# Dataset Summary
This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/) in the form of error spans. Moreover, it contains some of the hallucinations used in the training of [XCOMET models](https://huggingface.co/Unbabel/XCOMET-XXL).

**Please note that this is not an official release of the data**; the original data can be found [here](https://github.com/google/wmt-mqm-human-evaluation).
The data is organised into 8 columns:
- annotations: list of error spans (dictionaries with 'start', 'end', 'severity', 'text'); an illustrative example is shown after this list
- lp: language pair
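To make the `annotations` format concrete, here is a minimal sketch of a single row; the language pair, offsets, severities, and span texts below are invented for illustration and are not taken from the dataset.

```python
# Hypothetical example row, invented for illustration only. It shows the
# expected shape of the 'annotations' column (a list of error-span dicts),
# not actual dataset content.
example_row = {
    "lp": "en-de",
    "annotations": [
        {"start": 10, "end": 24, "severity": "minor", "text": "ein Beispiel"},
        {"start": 42, "end": 61, "severity": "major", "text": "falsche Übersetzung"},
    ],
}

# Assuming 'start' and 'end' are character offsets into the translated text,
# each annotated span can be inspected like this:
for span in example_row["annotations"]:
    print(f"{span['severity']}: {span['text']} ({span['start']}-{span['end']})")
```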
While `en-ru` was annotated by Unbabel, `en-de` and `zh-en` were annotated by Google. This means that for en-de and zh-en you will only find minor and major errors, while for en-ru you can also find a few critical errors.
## Python usage:
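As a rough sketch, loading a dataset like this from the Hugging Face Hub with the `datasets` library typically looks as follows; the repository ID and the `train` split name are placeholders and assumptions, not confirmed by this README.

```python
# Sketch only: replace the placeholder ID with this dataset's actual
# Hugging Face repository ID; a single "train" split is assumed here.
from datasets import load_dataset

dataset = load_dataset("<namespace>/<dataset-name>", split="train")

# Each row exposes the 8 columns described above, e.g.:
row = dataset[0]
print(row["lp"])
print(row["annotations"])
```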
If you use this data please cite the following works:
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/)
- [xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection](https://arxiv.org/pdf/2310.10482.pdf)