---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
metrics:
- bleu
- rouge
model-index:
- name: Meta-Llama-3-8B-Instruct-advisegpt-v0.2
results: []
---

# Meta-Llama-3-8B-Instruct-advisegpt-v0.2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 0.6891
- Bleu: 0.7795 (precisions for 1- to 4-grams: 0.8827 / 0.7922 / 0.7521 / 0.7303; brevity penalty: 0.9901; length ratio: 0.9902; translation length: 663,363; reference length: 669,935)
- Rouge: rouge1 0.8798, rouge2 0.7838, rougeL 0.8518, rougeLsum 0.8732
- Exact match: 0.0

An exact-match score of 0.0 means no generated output matched its reference verbatim; for long free-form generations this is expected even alongside high BLEU and ROUGE scores.
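## How to use

Since this is a PEFT adapter rather than a full checkpoint (see the framework versions below), it is loaded on top of the base model. The snippet below is a minimal sketch: the adapter repo id is a placeholder, and the prompt and generation settings are illustrative.

```python
# Minimal sketch: load the base model, then apply this PEFT adapter on top.
# "<repo-id>/Meta-Llama-3-8B-Instruct-advisegpt-v0.2" is a placeholder for
# wherever the adapter weights are actually hosted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "<repo-id>/Meta-Llama-3-8B-Instruct-advisegpt-v0.2"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Llama 3 Instruct expects its chat template.
messages = [{"role": "user", "content": "Summarize the key risks of my plan."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```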
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 60
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
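For reference, here is how these values map onto `transformers.TrainingArguments`, which the trl `SFTTrainer` consumes via its `args` parameter. This is a sketch only: the output directory is a placeholder, and every setting not listed above is left at its library default and is an assumption.

```python
# Sketch: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-Instruct-advisegpt-v0.2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=12,  # 5 x 12 = 60 total train batch size
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```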
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Bleu   | Bleu precisions (1- to 4-gram)    | Brevity Penalty | Rouge1 | Rouge2 | RougeL | RougeLsum | Exact Match |
|:-------------:|:------:|:----:|:---------------:|:------:|:---------------------------------:|:---------------:|:------:|:------:|:------:|:---------:|:-----------:|
| 0.1221        | 0.9967 | 175  | 0.6891          | 0.7795 | 0.8827 / 0.7922 / 0.7521 / 0.7303 | 0.9901          | 0.8798 | 0.7838 | 0.8518 | 0.8732    | 0.0         |
| 0.1091        | 1.9991 | 351  | 0.6977          | 0.7805 | 0.8833 / 0.7931 / 0.7535 / 0.7318 | 0.9900          | 0.8803 | 0.7850 | 0.8520 | 0.8737    | 0.0         |
| 0.1067        | 2.9900 | 525  | 0.7051          | 0.7809 | 0.8838 / 0.7939 / 0.7543 / 0.7326 | 0.9896          | 0.8806 | 0.7857 | 0.8521 | 0.8739    | 0.0         |

Metric values are rounded to four decimal places; the full metric breakdown for the final evaluation (including length ratio and translation/reference lengths) is given at the top of this card.
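The BLEU, ROUGE, and exact-match dictionaries reported on this card follow the output format of the Hugging Face `evaluate` library. Below is a minimal sketch of how such numbers are computed; the predictions and references are dummy values, not the actual evaluation set.

```python
# Sketch: computing the same metric dictionaries with the `evaluate` library.
import evaluate

predictions = ["paris is the capital of france"]
references = ["paris is the capital of france"]

# BLEU expects one list of reference strings per prediction.
bleu = evaluate.load("bleu").compute(
    predictions=predictions, references=[[r] for r in references]
)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
exact = evaluate.load("exact_match").compute(predictions=predictions, references=references)

print(bleu)   # {'bleu': ..., 'precisions': [...], 'brevity_penalty': ..., ...}
print(rouge)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
print(exact)  # {'exact_match': ...}
```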
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1