---
library_name: transformers
license: apache-2.0
base_model: tsavage68/Na_M2_1000steps_1e7_SFT
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: Na_M2_200steps_1e7rate_01beta_cSFTDPO
    results: []
---

# Na_M2_200steps_1e7rate_01beta_cSFTDPO

This model is a DPO fine-tuned version of [tsavage68/Na_M2_1000steps_1e7_SFT](https://huggingface.co/tsavage68/Na_M2_1000steps_1e7_SFT) on an unknown dataset. It achieves the following results on the evaluation set (a note on how the reward metrics are computed follows the list):

- Loss: 0.0000
- Rewards/chosen: 2.3496
- Rewards/rejected: -12.1508
- Rewards/accuracies: 1.0
- Rewards/margins: 14.5003
- Logps/rejected: -201.4312
- Logps/chosen: -24.6368
- Logits/rejected: -2.3632
- Logits/chosen: -2.3967
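
The reward columns are the implicit DPO rewards that TRL logs: β-scaled log-probability ratios between the policy and the frozen reference (SFT) model, with the margin being chosen minus rejected. As a quick arithmetic check against the values above (β = 0.1 is an assumption, inferred from the "01beta" suffix in the model name, not documented on this card):

$$r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$$

$$\text{margin} = r(x, y_{\text{chosen}}) - r(x, y_{\text{rejected}}) = 2.3496 - (-12.1508) \approx 14.5003$$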

## Model description

More information needed
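
Pending fuller documentation, the checkpoint should load with the standard Transformers API. A minimal inference sketch, assuming the repo id matches the model-index name and a causal-LM architecture (typical for TRL DPO checkpoints):

```python
# Minimal inference sketch. The repo id is assumed from the model-index name,
# and a causal-LM head is assumed (typical for TRL DPO checkpoints).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Na_M2_200steps_1e7rate_01beta_cSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```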

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal TRL sketch follows the list):

- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
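
These settings map directly onto TRL's `DPOTrainer`. Below is a minimal sketch under stated assumptions: the preference dataset is a placeholder (this card does not name one), and β = 0.1 is inferred from the "01beta" model-name suffix rather than documented here.

```python
# Sketch of a TRL DPO run using the hyperparameters listed above.
# Assumptions: placeholder dataset; beta inferred from the model name.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Na_M2_1000steps_1e7_SFT"  # SFT base model from the metadata
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen DPO reference
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data (chosen/rejected pairs); the actual dataset is unknown.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="Na_M2_200steps_1e7rate_01beta_cSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size 4 on one device
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=200,
    seed=42,
    beta=0.1,  # assumed from the "01beta" suffix in the model name
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
)
trainer.train()
```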

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0           | 0.2667 | 50   | 0.0000          | 2.1916         | -10.0354         | 1.0                | 12.2270         | -180.2773      | -26.2167     | -2.3982         | -2.4261       |
| 0.0           | 0.5333 | 100  | 0.0000          | 2.3125         | -11.6667         | 1.0                | 13.9792         | -196.5901      | -25.0075     | -2.3692         | -2.4015       |
| 0.0           | 0.8    | 150  | 0.0000          | 2.3477         | -12.1123         | 1.0                | 14.4600         | -201.0466      | -24.6557     | -2.3646         | -2.3980       |
| 0.0           | 1.0667 | 200  | 0.0000          | 2.3496         | -12.1508         | 1.0                | 14.5003         | -201.4312      | -24.6368     | -2.3632         | -2.3967       |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1