---
dataset_info:
  features:
  - name: laser_score
    dtype: float64
  - name: lang1
    dtype: string
  - name: text1
    dtype: string
  - name: lang2
    dtype: string
  - name: text2
    dtype: string
  - name: blaser_sim
    dtype: float64
  splits:
  - name: train
    num_bytes: 2279333006
    num_examples: 9983398
  download_size: 1825697094
  dataset_size: 2279333006
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- translation
pretty_name: nllb-200-10M-sample
size_categories:
- 1M<n<10M
language:
- ak
- am
- ar
- awa
- azj
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- da
- de
- dik
- dyu
- el
- en
- eo
- et
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- ff
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ks
- ka
- kk
- kbp
- kea
- mn
- km
- ki
- rw
- ky
- kmb
- kmr
- kr
- kr
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lv
- mag
- mai
- ml
- mr
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nb
- ne
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- fa
- plt
- pl
- pt
- prs
- qu
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- sw
- szl
- ta
- taq
- tt
- te
- tg
- tl
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uz
- vec
- vi
- war
- wo
- xh
- yi
- yo
- zh
- zh
- ms
- zu
---

# Dataset Card for "nllb-200-10M-sample"
This is a sample of nearly 10M sentence pairs from the NLLB-200 mined dataset [allenai/nllb](https://huggingface.co/datasets/allenai/nllb), scored with the model [facebook/blaser-2.0-qe](https://huggingface.co/facebook/blaser-2.0-qe) described in the SeamlessM4T paper.
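As a rough sketch of how such scores are computed, a single pair can be scored with the SONAR library, following the usage shown on the facebook/blaser-2.0-qe model card (this assumes the `sonar-space` package and its dependencies are installed; the sentences below are only an illustration):

```python
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
from sonar.models.blaser.loader import load_blaser_model

# Load the quality-estimation variant of BLASER 2.0 (no reference translation needed).
blaser_qe = load_blaser_model("blaser_2_0_qe").eval()

# SONAR text encoder used to embed both sides of the sentence pair.
text_embedder = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder",
    tokenizer="text_sonar_basic_encoder",
)

src_embs = text_embedder.predict(["Le chat s'assit sur le tapis."], source_lang="fra_Latn")
mt_embs = text_embedder.predict(["The cat sat down on the carpet."], source_lang="eng_Latn")

# The model outputs a quality score on a roughly 1-to-5 scale.
print(blaser_qe(src=src_embs, mt=mt_embs).item())
```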
The sample is not random; instead, we took the top `n` sentence pairs from each translation direction. The number `n` was computed with the goal of upsampling the directions that contain underrepresented languages.
Nevertheless, the 187 languoids (language and script combinations) are not represented equally,
with most languoids totaling 36K to 200K sentences.
Over 60% of the sentence pairs have a BLASER-QE score above 3.5.
This dataset can be used for fine-tuning massively multilingual translation models. We suggest the following scenario:
- Filter the dataset by the value of `blaser_sim` (the recommended threshold is 3.0 or 3.5);
- Randomly swap the source/target roles in the sentence pairs during data loading;
- Use that data to augment the training data while fine-tuning an NLLB-like model for a new translation direction, in order to mitigate forgetting of all the other translation directions (a sketch of the first two steps follows this list).
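Below is a minimal sketch of the filtering and swapping steps using the `datasets` library; the repository id is a placeholder for this dataset's actual path on the Hub, and 3.5 is one of the recommended thresholds:

```python
import random
from datasets import load_dataset

# Placeholder repo id; replace with the actual path of this dataset on the Hub.
ds = load_dataset("user/nllb-200-10M-sample", split="train")

# Step 1: keep only pairs above the recommended BLASER-QE threshold.
ds = ds.filter(lambda ex: ex["blaser_sim"] >= 3.5)

# Step 2: randomly swap source/target roles so the model sees both directions.
def maybe_swap(example):
    if random.random() < 0.5:
        example["lang1"], example["lang2"] = example["lang2"], example["lang1"]
        example["text1"], example["text2"] = example["text2"], example["text1"]
    return example

ds = ds.map(maybe_swap)
```

Note that `.map` fixes each swap once; to re-randomize the direction on every epoch, apply the same swap inside the collate function of the training data loader instead.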
The dataset is released under the terms of ODC-BY. By using this dataset, you are also bound by the respective Terms of Use and License of the original source.
Citation:
- NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv: https://arxiv.org/abs/2207.04672, 2022.
- Seamless Communication et al., SeamlessM4T — Massively Multilingual & Multimodal Machine Translation, arXiv: https://arxiv.org/abs/2308.11596, 2023.
The following language codes are covered. The mapping between languages and codes can be found in the NLLB-200 paper or in the FLORES-200 repository. A filtering example follows the list.
aka_Latn amh_Ethi arb_Arab awa_Deva azj_Latn bam_Latn ban_Latn bel_Cyrl bem_Latn ben_Beng bho_Deva bjn_Latn
bug_Latn bul_Cyrl cat_Latn ceb_Latn ces_Latn cjk_Latn ckb_Arab crh_Latn dan_Latn deu_Latn dik_Latn dyu_Latn
ell_Grek eng_Latn epo_Latn est_Latn ewe_Latn fao_Latn fij_Latn fin_Latn fon_Latn fra_Latn fur_Latn fuv_Latn
gaz_Latn gla_Latn gle_Latn glg_Latn grn_Latn guj_Gujr hat_Latn hau_Latn heb_Hebr hin_Deva hne_Deva hrv_Latn
hun_Latn hye_Armn ibo_Latn ilo_Latn ind_Latn isl_Latn ita_Latn jav_Latn jpn_Jpan kab_Latn kac_Latn kam_Latn
kan_Knda kas_Arab kas_Deva kat_Geor kaz_Cyrl kbp_Latn kea_Latn khk_Cyrl khm_Khmr kik_Latn kin_Latn kir_Cyrl
kmb_Latn kmr_Latn knc_Arab knc_Latn kon_Latn kor_Hang lao_Laoo lij_Latn lim_Latn lin_Latn lit_Latn lmo_Latn
ltg_Latn ltz_Latn lua_Latn lug_Latn luo_Latn lus_Latn lvs_Latn mag_Deva mai_Deva mal_Mlym mar_Deva min_Latn
mkd_Cyrl mlt_Latn mni_Beng mos_Latn mri_Latn mya_Mymr nld_Latn nob_Latn npi_Deva nso_Latn nus_Latn nya_Latn
oci_Latn ory_Orya pag_Latn pan_Guru pap_Latn pbt_Arab pes_Arab plt_Latn pol_Latn por_Latn prs_Arab quy_Latn
ron_Latn run_Latn rus_Cyrl sag_Latn san_Deva sat_Beng scn_Latn shn_Mymr sin_Sinh slk_Latn slv_Latn smo_Latn
sna_Latn snd_Arab som_Latn sot_Latn spa_Latn srd_Latn srp_Cyrl ssw_Latn sun_Latn swe_Latn swh_Latn szl_Latn
tam_Taml taq_Latn tat_Cyrl tel_Telu tgk_Cyrl tgl_Latn tir_Ethi tpi_Latn tsn_Latn tso_Latn tuk_Latn tum_Latn
tur_Latn twi_Latn tzm_Tfng uig_Arab ukr_Cyrl umb_Latn urd_Arab uzn_Latn vec_Latn vie_Latn war_Latn wol_Latn
xho_Latn ydd_Hebr yor_Latn zho_Hans zho_Hant zsm_Latn zul_Latn
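For example, a single language pair can be pulled out by filtering on the `lang1`/`lang2` columns with these codes. This is a sketch reusing the `ds` object from the snippet above; since the card does not specify how the two languages of a pair are ordered across the columns, the pair is matched in either order:

```python
# All sentence pairs between English and Fijian, regardless of column order.
pair = {"eng_Latn", "fij_Latn"}
eng_fij = ds.filter(lambda ex: {ex["lang1"], ex["lang2"]} == pair)
print(f"{len(eng_fij)} eng_Latn/fij_Latn pairs")
```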