|
--- |
|
license: cc-by-nc-4.0 |
|
library_name: peft |
|
tags: |
|
- generated_from_trainer |
|
metrics: |
|
- bleu |
|
- rouge |
|
base_model: facebook/nllb-200-3.3B |
|
model-index: |
|
- name: nllb-200-3.3B-Malayalam_English_Translationt_nllb6 |
|
results: [] |
|
--- |
|
|
|
|
|
|
# nllb-200-3.3B-Malayalam_English_Translationt_nllb6 |
|
|
|
This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on a Malayalam–English translation dataset that is not identified in this card.
|
It achieves the following results on the evaluation set: |
|
- Loss: 1.0031
- BLEU: 37.4644
- ROUGE: rouge1 0.6948, rouge2 0.4753, rougeL 0.6436, rougeLsum 0.6438
- chrF: 63.5623 (char_order = 6, word_order = 0, beta = 2)
|
|
|
## Model description |
|
|
|
This is a PEFT adapter (parameter-efficient fine-tuning; most likely LoRA, given the `peft` library tag) trained on top of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) for Malayalam-to-English translation, as the model name indicates. The adapter does not modify the base weights and must be loaded on top of the base checkpoint.
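A minimal inference sketch, assuming the adapter is loaded from this repository (the `adapter_id` below is a placeholder) and using NLLB's standard FLORES-200 language codes `mal_Mlym` and `eng_Latn`:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/nllb-200-3.3B"
adapter_id = "path/to/this/adapter"  # placeholder: replace with this repo's id

# src_lang tags the input as Malayalam; NLLB uses FLORES-200 language codes.
tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="mal_Mlym")
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)

text = "മലയാളം ഒരു ദ്രാവിഡ ഭാഷയാണ്."  # "Malayalam is a Dravidian language."
inputs = tokenizer(text, return_tensors="pt")

# Force English as the target language by pinning its code as the BOS token.
output_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_new_tokens=128,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```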
|
|
|
## Intended uses & limitations |
|
|
|
Intended for Malayalam→English machine translation. The CC-BY-NC-4.0 license rules out commercial use. Domain coverage and failure modes are not documented here, so output quality should be checked before relying on the model for out-of-domain text.
|
|
|
## Training and evaluation data |
|
|
|
Not documented in this card. The training log shows 9,400 optimizer steps per epoch at a batch size of 8, which implies roughly 75,200 training sentence pairs (assuming no gradient accumulation).
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 0.0002 |
|
- train_batch_size: 8 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- num_epochs: 5 |
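A sketch of how these values map onto `transformers.Seq2SeqTrainingArguments`; the original training script is not included in this card, so this is a reconstruction, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-200-3.3B-Malayalam_English_Translationt_nllb6",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the table below logs metrics once per epoch
    predict_with_generate=True,   # needed to score BLEU/ROUGE/chrF on generated text
)
```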
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step  | Validation Loss | BLEU    | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | chrF    |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| 1.1299        | 1.0   | 9400  | 1.0473          | 35.4794 | 0.6827  | 0.4568  | 0.6303  | 0.6304     | 62.0777 |
| 1.0391        | 2.0   | 18800 | 1.0172          | 36.5551 | 0.6899  | 0.4679  | 0.6376  | 0.6378     | 62.7949 |
| 0.9772        | 3.0   | 28200 | 1.0047          | 37.1999 | 0.6941  | 0.4729  | 0.6422  | 0.6424     | 63.3837 |
| 0.9322        | 4.0   | 37600 | 1.0021          | 37.3505 | 0.6946  | 0.4746  | 0.6434  | 0.6435     | 63.4442 |
| 0.9109        | 5.0   | 47000 | 1.0031          | 37.4644 | 0.6948  | 0.4753  | 0.6436  | 0.6438     | 63.5623 |

chrF was computed with char_order = 6, word_order = 0, and beta = 2 at every checkpoint.
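The reported dictionaries match the output format of Hugging Face `evaluate` metrics, so the scores were likely computed along these lines (a sketch with hypothetical predictions and references, not the author's evaluation code):

```python
import evaluate

bleu = evaluate.load("sacrebleu")  # returns {"score": ...} on the 0-100 scale
rouge = evaluate.load("rouge")     # returns rouge1 / rouge2 / rougeL / rougeLsum
chrf = evaluate.load("chrf")       # returns {"score", "char_order", "word_order", "beta"}

preds = ["Malayalam is a Dravidian language."]   # hypothetical model outputs
refs = [["Malayalam is a Dravidian language."]]  # hypothetical references

print(bleu.compute(predictions=preds, references=refs)["score"])
print(rouge.compute(predictions=preds, references=[r[0] for r in refs]))
print(chrf.compute(predictions=preds, references=refs))
```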
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.7.2.dev0 |
|
- Transformers 4.36.1 |
|
- Pytorch 2.0.1+cu117 |
|
- Datasets 2.13.1 |
|
- Tokenizers 0.15.0 |
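A quick way to confirm a local environment matches these pins (a convenience sketch, not part of the original training setup):

```python
# Print the installed versions of the libraries listed above.
import datasets, peft, tokenizers, torch, transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}: {mod.__version__}")
```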