# llama-7b-finnish-instruct-v0.2_Fi__CMP_TR_size_304_epochs_10_2024-06-22_21-11-23_3558622
This model is a fine-tuned version of Finnish-NLP/llama-7b-finnish-instruct-v0.2 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5913
- Accuracy: 0.77
- Chrf: 0.432
- Bleu: 0.34
- Sacrebleu: 0.3
- Rouge1: 0.532
- Rouge2: 0.364
- Rougel: 0.516
- Rougelsum: 0.515
- Meteor: 0.538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 304
- training_steps: 3040
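The linear scheduler with 304 warmup steps over 3040 training steps ramps the learning rate from 0 up to 0.001, then decays it linearly back to 0. A minimal sketch of that schedule, mirroring the behavior of Transformers' `get_linear_schedule_with_warmup` with this run's values:

```python
def linear_lr(step, base_lr=1e-3, warmup_steps=304, total_steps=3040):
    """Linear warmup followed by linear decay to zero.

    Values are taken from this run's hyperparameters: lr 0.001,
    304 warmup steps, 3040 total training steps.
    """
    if step < warmup_steps:
        # Warmup: scale linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay: scale linearly from base_lr at the end of warmup to 0.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

With these settings the peak learning rate of 0.001 is reached exactly at the end of epoch 1 (step 304), which lines up with the warmup covering the first of the ten 304-step epochs.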
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.8914 | 1.0 | 304 | 1.6479 | 0.774 | 0.154 | 0.09 | 0.1 | 0.332 | 0.255 | 0.326 | 0.316 | 0.359 |
| 0.0574 | 2.0 | 608 | 1.7437 | 0.77 | 0.005 | 0.0 | 0.0 | 0.042 | 0.014 | 0.042 | 0.042 | 0.025 |
| 0.0458 | 3.0 | 912 | 1.4545 | 0.776 | 0.075 | 0.03 | 0.0 | 0.28 | 0.249 | 0.28 | 0.28 | 0.285 |
| 0.0053 | 4.0 | 1216 | 1.1830 | 0.775 | 0.233 | 0.155 | 0.2 | 0.374 | 0.255 | 0.371 | 0.371 | 0.443 |
| 0.8303 | 5.0 | 1520 | 1.1190 | 0.778 | 0.223 | 0.143 | 0.1 | 0.448 | 0.309 | 0.433 | 0.44 | 0.449 |
| 0.0337 | 6.0 | 1824 | 1.1156 | 0.771 | 0.308 | 0.173 | 0.2 | 0.447 | 0.291 | 0.439 | 0.439 | 0.448 |
| 0.0199 | 7.0 | 2128 | 1.0473 | 0.771 | 0.302 | 0.213 | 0.2 | 0.475 | 0.304 | 0.466 | 0.467 | 0.509 |
| 0.027 | 8.0 | 2432 | 0.7524 | 0.77 | 0.361 | 0.267 | 0.3 | 0.485 | 0.302 | 0.467 | 0.475 | 0.535 |
| 0.0059 | 9.0 | 2736 | 0.6130 | 0.77 | 0.419 | 0.313 | 0.3 | 0.515 | 0.34 | 0.501 | 0.506 | 0.539 |
| 0.025 | 10.0 | 3040 | 0.5913 | 0.77 | 0.432 | 0.34 | 0.3 | 0.532 | 0.364 | 0.516 | 0.515 | 0.538 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2