---
license: apache-2.0
language:
  - en
  - de
  - ru
  - zh
tags:
  - mt-evaluation
  - WMT
  - MQM
size_categories:
  - 100K<n<1M
---

# Dataset Summary

This dataset contains all MQM human annotations from previous WMT Metrics shared tasks, along with the MQM annotations from Experts, Errors, and Context, in the form of error spans. It also contains some hallucinations used in the training of the XCOMET models.

Please note that this is not an official release of the data and the original data can be found here.

The data is organised into 8 columns; the main ones are:

- `src`: input text
- `mt`: translation
- `ref`: reference translation
- `annotations`: list of error spans (dictionaries with `start`, `end`, `severity`, `text`)
- `lp`: language pair

While en-ru was annotated by Unbabel, en-de and zh-en were annotated by Google. This means that for en-de and zh-en you will only find minor and major errors, while for en-ru you can also find a few critical errors.
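Each entry in `annotations` marks a span of the translation by character offsets, so the flagged text can be recovered by slicing `mt`. A minimal sketch of this (the record below is a hand-made example mimicking the column layout, not real data from the dataset):

```python
# Hypothetical record mimicking the dataset's columns (not real data).
record = {
    "mt": "The cat sat on the mat.",
    "annotations": [
        {"start": 4, "end": 7, "severity": "minor", "text": "cat"},
    ],
}

# Recover each flagged span by slicing the translation with its offsets.
for span in record["annotations"]:
    flagged = record["mt"][span["start"]:span["end"]]
    assert flagged == span["text"]  # offsets index directly into `mt`
    print(span["severity"], "->", flagged)  # minor -> cat
```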

Python usage:

```python
from datasets import load_dataset

dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train")
```

There is no standard train/test split for this dataset, but you can easily split it by year, language pair, or domain, e.g.:

```python
# split by language pair
data = dataset.filter(lambda example: example["lp"] == "en-de")
```
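In the same spirit, a year-based split can hold out the most recent annotations for evaluation. A sketch of the idea (assuming a `year` column as suggested above; shown on a plain list of dicts with made-up values so it runs without downloading anything):

```python
# Toy rows standing in for dataset examples (hypothetical values).
rows = [
    {"lp": "en-de", "year": 2020},
    {"lp": "en-ru", "year": 2021},
    {"lp": "zh-en", "year": 2021},
]

# Hold out the newest year as a test set; train on the rest.
test_year = max(r["year"] for r in rows)
train = [r for r in rows if r["year"] != test_year]
test = [r for r in rows if r["year"] == test_year]
print(len(train), len(test))  # -> 1 2
```

With the real dataset, the same logic maps onto `dataset.filter(lambda ex: ex["year"] != test_year)`.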

# Citation Information

If you use this data, please cite the following works: