Update README.md with CO2 emissions
#6
by m-ric (HF staff) · opened
README.md CHANGED
```diff
@@ -210,6 +210,11 @@ metrics:
 - spbleu
 - chrf++
 inference: false
+
+co2_eq_emissions:
+  emissions: 104_310_000
+  source: "Paper: No Language Left Behind: Scaling Human-Centered Machine Translation. This is the number for the whole NLLB-200 project, that includes other models."
+  hardware_used: "NVIDIA A100"
 ---
 
 # NLLB-200
@@ -219,12 +224,11 @@ This is the model card of NLLB-200's distilled 1.3B variant.
 Here are the [metrics](https://tinyurl.com/nllb200densedst1bmetrics) for that particular checkpoint.
 
 - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
-- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation
+- Paper or other resource for more information: [NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation](https://huggingface.co/papers/2207.04672)
 - License: CC-BY-NC
 - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
 
 
-
 ## Intended Use
 - Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, - especially for low-resource languages. It allows for single sentence translation among 200 languages. Information on how to - use the model can be found in Fairseq code repository along with the training code and references to evaluation and training data.
 - Primary intended users: Primary users are researchers and machine translation research community.
```
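
The Hub's model-card metadata treats `co2_eq_emissions.emissions` as grams of CO2-eq, so `104_310_000` corresponds to roughly 104.3 tonnes, and the quoted `source` string makes explicit that the figure covers the whole NLLB-200 project rather than this 1.3B checkpoint alone. Below is a minimal sketch of reading the new field back with `huggingface_hub`; the repo id is an assumption and should point at whichever repository this README belongs to.

```python
# Minimal sketch: read the co2_eq_emissions metadata back from the model card
# using huggingface_hub. The repo id below is an assumption.
from huggingface_hub import ModelCard

card = ModelCard.load("facebook/nllb-200-distilled-1.3B")

# Extra YAML keys are easiest to reach via the dict form of the card data,
# which works whether or not the field has a dedicated typed attribute.
co2 = card.data.to_dict().get("co2_eq_emissions", {})

print(co2.get("emissions"))      # expected: 104310000 (grams of CO2-eq)
print(co2.get("hardware_used"))  # expected: "NVIDIA A100"
print(co2.get("source"))         # the paper reference quoted in the diff
```

Keeping the project-wide scope spelled out in `source` seems worthwhile, since tools that aggregate `co2_eq_emissions` across the Hub would otherwise attribute the full ~104.3 t to this single checkpoint.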