
Llama-31-8B_task-3_180-samples_config-2_full

This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3, and GaetanMichelet/chat-180_ft_task-3 datasets. It achieves the following results on the evaluation set:

  • Loss: 1.1381
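
Since PEFT is listed under the framework versions below, this repository presumably hosts a PEFT (LoRA-style) adapter rather than full model weights. A minimal inference sketch under that assumption (repository and base-model ids are taken from this card; generation settings are placeholders):

```python
# Sketch only: assumes this repo hosts a PEFT adapter for the base Llama 3.1 8B Instruct model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "GaetanMichelet/Llama-31-8B_task-3_180-samples_config-2_full"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # applies the fine-tuned adapter

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)  # max_new_tokens is a placeholder
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```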

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
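
For reference, a sketch of how these settings map onto transformers TrainingArguments (output_dir is a placeholder; the actual training script and any PEFT/LoRA configuration are not documented on this card):

```python
# Sketch only: mirrors the hyperparameters listed above; anything else is a default or placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-31-8B_task-3_180-samples_config-2_full",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # 1 x 16 = effective (total) train batch size of 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) match the library defaults.
)
```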

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.695         | 0.9412  | 8    | 1.6759          |
| 1.5868        | 2.0     | 17   | 1.5454          |
| 1.473         | 2.9412  | 25   | 1.4198          |
| 1.2035        | 4.0     | 34   | 1.2535          |
| 1.1293        | 4.9412  | 42   | 1.1991          |
| 1.1361        | 6.0     | 51   | 1.1733          |
| 1.1333        | 6.9412  | 59   | 1.1571          |
| 1.0612        | 8.0     | 68   | 1.1462          |
| 0.9895        | 8.9412  | 76   | 1.1392          |
| 0.9858        | 10.0    | 85   | 1.1381          |
| 0.939         | 10.9412 | 93   | 1.1420          |
| 0.8747        | 12.0    | 102  | 1.1664          |
| 0.8694        | 12.9412 | 110  | 1.1780          |
| 0.8188        | 14.0    | 119  | 1.2246          |
| 0.697         | 14.9412 | 127  | 1.2348          |
| 0.6048        | 16.0    | 136  | 1.3102          |
| 0.5898        | 16.9412 | 144  | 1.3190          |
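
The evaluation loss reported above (1.1381) matches the lowest validation loss in this table, reached at epoch 10.0; validation loss rises in later epochs.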

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • PyTorch 2.1.2+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
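
To approximate this environment, the listed library versions can be pinned when installing; a minimal sketch (PyTorch 2.1.2+cu121 would be installed separately following the official PyTorch instructions for the matching CUDA setup):

```python
# Sketch only: pins the versions listed above via pip's programmatic entry point.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "peft==0.12.0",
    "transformers==4.44.0",
    "datasets==2.20.0",
    "tokenizers==0.19.1",
])
```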