Datasets: cointegrated
Commit 112cd77 (1 parent: 5e91e6c) • Update README.md

README.md CHANGED
@@ -214,7 +214,7 @@ mined dataset [allenai/nllb](https://huggingface.co/datasets/allenai/nllb),
 scored with the model [facebook/blaser-2.0-qe](https://huggingface.co/facebook/blaser-2.0-qe)
 described in the [SeamlessM4T](https://arxiv.org/abs/2308.11596) paper.
 
-The sample is not random; instead, we just took the top `n`
+The sample is not random; instead, we just took the top `n` sentence pairs from each translation direction.
 The number `n` was computed with the goal of upsampling the directions that contain underrepresented languages.
 Nevertheless, the 187 languoids (language and script combinations) are not represented equally,
 with most languoids totaling 36K to 200K sentences.
@@ -223,7 +223,7 @@ Over 60% of the sentence pairs have BLASER-QE score above 3.5.
 This dataset can be used for fine-tuning massively multilingual translation models.
 We suggest the following scenario:
 - Filter the dataset by the value of `blaser_sim` (the recommended threshold is 3.0 or 3.5);
-- Randomly
+- Randomly swap the source/target roles in the sentence pairs during data loading;
 - Use that data to augment the dataset while fine-tuning an NLLB-like model for a new translation direction,
 in order to mitigate forgetting of all the other translation directions.
 
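A minimal Python sketch of the scenario in this hunk, assuming the Hugging Face `datasets` library. The dataset id and the `src_text`/`tgt_text` column names are placeholders; only the `blaser_sim` column is named in the README, so check the dataset card for the real schema.

```python
import random
from datasets import load_dataset

# Load this dataset; "your-org/this-dataset" is a hypothetical placeholder id.
ds = load_dataset("your-org/this-dataset", split="train")

# Step 1: keep only pairs scored above the recommended BLASER-QE threshold.
ds = ds.filter(lambda row: row["blaser_sim"] >= 3.5)

# Step 2: randomly swap source/target roles so both directions of each pair
# are seen during training ("src_text"/"tgt_text" are assumed column names).
def maybe_swap(row):
    if random.random() < 0.5:
        row["src_text"], row["tgt_text"] = row["tgt_text"], row["src_text"]
    return row

ds = ds.map(maybe_swap)  # in practice, swap on the fly in your collator per epoch
```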
@@ -231,8 +231,8 @@ The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/
 By using this, you are also bound to the respective Terms of Use and License of the original source.
 
 Citation:
-- NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation
-- Seamless Communication et al, SeamlessM4T — Massively Multilingual & Multimodal Machine Translation
+- NLLB Team et al, *No Language Left Behind: Scaling Human-Centered Machine Translation*, arXiv https://arxiv.org/abs/2207.04672, 2022.
+- Seamless Communication et al, *SeamlessM4T — Massively Multilingual & Multimodal Machine Translation*, arXiv https://arxiv.org/abs/2308.11596, 2023.
 
 The following language codes are supported. The mapping between languages and codes can be found in the [NLLB-200 paper](https://arxiv.org/abs/2207.04672)
 or in the [FLORES-200 repository](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
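To make the augmentation step of the suggested scenario concrete, here is a hedged sketch that continues from the previous snippet, mixing the filtered sample into new-direction fine-tuning data with `datasets.interleave_datasets`. The 80/20 ratio and the toy `new_direction_ds` are illustrative assumptions, not recommendations from the README.

```python
from datasets import Dataset, interleave_datasets

# Toy stand-in for your own parallel data in the new translation direction.
new_direction_ds = Dataset.from_dict(
    {"src_text": ["hello"], "tgt_text": ["bonjour"]}
)

# Mix the new-direction data with the filtered NLLB sample (`ds` from the
# sketch above) so fine-tuning keeps revisiting the old directions and
# forgetting is mitigated. The 80/20 split is an arbitrary illustrative choice.
mixed = interleave_datasets(
    [new_direction_ds, ds],
    probabilities=[0.8, 0.2],
    seed=42,
)
```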