---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
  - generator
library_name: peft
license: llama3
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: Llama3_no_extraction_V2
    results: []
---

# Llama3_no_extraction_V2

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 1.3288
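
This repository holds a PEFT adapter on top of meta-llama/Meta-Llama-3-8B, so inference means loading the base model with the adapter applied. Below is a minimal sketch using peft's `AutoPeftModelForCausalLM`; the repo id `MikaSie/Llama3_no_extraction_V2` is an assumption inferred from the model name, and access to the gated Llama 3 base weights is required.

```python
# Minimal inference sketch. The repo id below is assumed from the model
# name; adjust it to the actual adapter location.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "MikaSie/Llama3_no_extraction_V2"  # hypothetical repo id

# AutoPeftModelForCausalLM resolves the base model recorded in the adapter
# config (meta-llama/Meta-Llama-3-8B) and applies the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Summarize the following document:\n...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```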

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
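
Since the card records only the trainer settings, the following is a hedged reproduction sketch using trl's `SFTTrainer` (suggested by the `trl` and `sft` tags above). The hyperparameter values are taken from the list; the dataset files, text column, LoRA configuration, and precision are assumptions, not recorded in this card.

```python
# Reproduction sketch of the hyperparameters above with trl's SFTTrainer.
# Only the TrainingArguments values come from this card; everything else
# is an assumption. Run on 4 GPUs, e.g.:
#   accelerate launch --num_processes 4 train.py
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder data: the card only names a "generator" dataset, so the
# actual training data is unknown.
dataset = load_dataset("json", data_files={"train": "train.jsonl", "eval": "eval.jsonl"})

args = TrainingArguments(
    output_dir="Llama3_no_extraction_V2",
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # x 4 GPUs x 2 grad-accum steps = 16 total
    per_device_eval_batch_size=1,    # x 4 GPUs = 4 total
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="epoch",           # matches the per-epoch results below
    bf16=True,                       # assumption: precision is not recorded
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",  # SFTTrainer loads the model from this hub id
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    dataset_text_field="text",           # assumed column name
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed LoRA defaults
)
trainer.train()
```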

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.9903 | 51   | 1.3462          |
| No log        | 2.0    | 103  | 1.3212          |
| No log        | 2.9903 | 154  | 1.3157          |
| No log        | 4.0    | 206  | 1.3153          |
| No log        | 4.9903 | 257  | 1.3163          |
| No log        | 6.0    | 309  | 1.3196          |
| No log        | 6.9903 | 360  | 1.3231          |
| No log        | 8.0    | 412  | 1.3249          |
| No log        | 8.9903 | 463  | 1.3273          |
| 1.1964        | 9.9029 | 510  | 1.3288          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1