---
license: other
base_model: lewtun/gemma-7b-sft-full-deita-10k-v0
tags:
  - alignment-handbook
  - trl
  - dpo
  - generated_from_trainer
datasets:
  - argilla/dpo-mix-7k
model-index:
  - name: gemma-7b-dpo-full-mix1-beta-0.05-epoch-2
    results: []
---

gemma-7b-dpo-full-mix1-beta-0.05-epoch-2

This model is a fine-tuned version of lewtun/gemma-7b-sft-full-deita-10k-v0 on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4897
  • Rewards/chosen: -2.0476
  • Rewards/rejected: -3.4508
  • Rewards/accuracies: 0.7083
  • Rewards/margins: 1.4032
  • Logps/rejected: -520.5655
  • Logps/chosen: -494.3556
  • Logits/rejected: 93.1721
  • Logits/chosen: 99.2168
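
As a usage example, the following is a minimal inference sketch with transformers. It is not from the original card: the repo id is inferred from the model name above, and the prompt and generation settings are illustrative.

```python
# Minimal inference sketch (assumptions: repo id inferred from the model
# name; prompt and generation settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lewtun/gemma-7b-dpo-full-mix1-beta-0.05-epoch-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```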

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch wiring them into a trl training script follows the list):

  • learning_rate: 5e-07
  • train_batch_size: 2
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2.0
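
As referenced above, here is a minimal sketch of wiring these hyperparameters into trl's DPOTrainer, as if launched across 8 GPUs with accelerate. It is illustrative rather than the exact training script: DPOTrainer's signature varies across trl versions, beta = 0.05 is inferred from the model name, the bf16 flag and dataset split names are assumptions, and the dataset is assumed to be preprocessed into prompt/chosen/rejected text columns as in the alignment-handbook recipes.

```python
# Illustrative sketch only; see the caveats above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "lewtun/gemma-7b-sft-full-deita-10k-v0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed to be preprocessed into prompt/chosen/rejected text columns.
dataset = load_dataset("argilla/dpo-mix-7k")

# Per-device batch size 2 x 8 GPUs x 8 accumulation steps gives the
# effective train batch size of 128 reported above.
args = TrainingArguments(
    output_dir="gemma-7b-dpo-full-mix1-beta-0.05-epoch-2",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption; precision is not stated in the card
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones the policy as the frozen reference model
    args=args,
    beta=0.05,  # inferred from the model name, not listed in the card
    train_dataset=dataset["train"],  # split names are assumptions
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```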

Training results

The single logged evaluation, at epoch 1.90 (step 100):

  • Training Loss: 0.193
  • Validation Loss: 0.4773
  • Rewards/chosen: -2.0019
  • Rewards/rejected: -3.4640
  • Rewards/accuracies: 0.7292
  • Rewards/margins: 1.4620
  • Logps/rejected: -520.8292
  • Logps/chosen: -493.4434
  • Logits/rejected: 93.2355
  • Logits/chosen: 99.2919
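
For context on the reward columns (standard for trl's DPO metrics, not defined in the card itself): each reward is the beta-scaled log-probability ratio of the policy against the frozen reference model, with beta = 0.05 inferred from the model name, and the training loss is the DPO objective:

$$
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)
$$

Rewards/margins is the chosen reward minus the rejected reward, and rewards/accuracies is the fraction of pairs for which the chosen reward exceeds the rejected one.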

Framework versions

  • Transformers 4.39.0.dev0
  • Pytorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.15.1