CodeLlama-13b-Instruct-hf_En__components_size_252_epochs_10_2024-06-21_16-38-40_3556558
This model is a fine-tuned version of codellama/CodeLlama-13b-Instruct-hf; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):
- Loss: 1.9627
- Accuracy: 0.475
- Chrf: 0.304
- Bleu: 0.214
- Sacrebleu: 0.2
- Rouge1: 0.462
- Rouge2: 0.235
- Rougel: 0.433
- Rougelsum: 0.455
- Meteor: 0.515
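The card does not include usage code. Below is a minimal inference sketch; it assumes the checkpoint loads with the standard transformers AutoTokenizer/AutoModelForCausalLM classes and that the fine-tune still expects the CodeLlama-Instruct `[INST] ... [/INST]` prompt template. Both are assumptions, not confirmed by this card.

```python
# Minimal inference sketch (assumptions noted above); adjust dtype/device_map to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vdavidr/CodeLlama-13b-Instruct-hf_En__components_size_252_epochs_10_2024-06-21_16-38-40_3556558"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# CodeLlama-Instruct style prompt; whether this fine-tune uses the same template is an assumption.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```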
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 252
- training_steps: 2520
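The training script itself is not part of this card. The sketch below shows one plausible way the listed values map onto transformers.TrainingArguments; the output directory is a placeholder, and note the card lists "Adam" while TrainingArguments defaults to AdamW.

```python
# Sketch only: how the hyperparameters above could be expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",               # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=1,  # 4 GPUs -> total train batch size 4
    per_device_eval_batch_size=1,   # 4 GPUs -> total eval batch size 4
    seed=3407,
    lr_scheduler_type="linear",
    warmup_steps=252,
    max_steps=2520,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,              # card says "Adam"; the HF default optimizer is AdamW
)
```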
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0529 | 4.0 | 252 | 3.7602 | 0.473 | 0.065 | 0.0 | 0.0 | 0.043 | 0.0 | 0.037 | 0.041 | 0.197 |
| 0.0603 | 8.0 | 504 | 2.7810 | 0.481 | 0.17 | 0.117 | 0.1 | 0.299 | 0.204 | 0.292 | 0.295 | 0.44 |
| 0.076 | 12.0 | 756 | 2.6938 | 0.474 | 0.187 | 0.115 | 0.1 | 0.298 | 0.202 | 0.292 | 0.296 | 0.41 |
| 1.4593 | 16.0 | 1008 | 2.7991 | 0.48 | 0.187 | 0.084 | 0.1 | 0.322 | 0.185 | 0.316 | 0.321 | 0.345 |
| 0.1332 | 20.0 | 1260 | 2.4287 | 0.475 | 0.237 | 0.146 | 0.1 | 0.344 | 0.19 | 0.335 | 0.344 | 0.425 |
| 0.3344 | 24.0 | 1512 | 2.2035 | 0.477 | 0.26 | 0.158 | 0.2 | 0.401 | 0.196 | 0.388 | 0.401 | 0.43 |
| 0.019 | 28.0 | 1764 | 2.1072 | 0.474 | 0.272 | 0.145 | 0.1 | 0.372 | 0.198 | 0.352 | 0.367 | 0.46 |
| 0.1253 | 32.0 | 2016 | 2.0500 | 0.475 | 0.293 | 0.209 | 0.2 | 0.448 | 0.241 | 0.426 | 0.448 | 0.516 |
| 0.0306 | 36.0 | 2268 | 1.9983 | 0.475 | 0.298 | 0.196 | 0.2 | 0.426 | 0.22 | 0.405 | 0.419 | 0.511 |
| 0.0274 | 40.0 | 2520 | 1.9627 | 0.475 | 0.304 | 0.214 | 0.2 | 0.462 | 0.235 | 0.433 | 0.455 | 0.515 |
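The exact evaluation code behind the numbers above is not included in this card. For reference, the sketch below shows one way such text metrics could be computed with the Hugging Face `evaluate` library on predicted versus reference strings; the example strings are placeholders.

```python
# Illustrative metric computation with the `evaluate` library (not the card's actual eval code).
import evaluate

predictions = ["def add(a, b):\n    return a + b"]   # model outputs (placeholder)
references = ["def add(x, y):\n    return x + y"]    # gold targets (placeholder)

chrf = evaluate.load("chrf").compute(
    predictions=predictions, references=[[r] for r in references]
)
sacrebleu = evaluate.load("sacrebleu").compute(
    predictions=predictions, references=[[r] for r in references]
)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)

print(chrf["score"], sacrebleu["score"], rouge["rougeL"], meteor["meteor"])
```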
Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2