---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_1000steps_1e7rate_05beta_CSFTDPO
  results: []
---
# Summary_L3_1000steps_1e7rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5962
- Rewards/chosen: 0.0959
- Rewards/rejected: -1.3470
- Rewards/accuracies: 0.1400
- Rewards/margins: 1.4430
- Logps/rejected: -17.9578
- Logps/chosen: -9.1909
- Logits/rejected: -1.1008
- Logits/chosen: -1.1023
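For context on these numbers: under DPO, the reported rewards are the β-scaled log-probability ratios between the policy and the frozen reference model (here, the SFT2 base), so Rewards/margins is simply Rewards/chosen minus Rewards/rejected (0.0959 − (−1.3470) ≈ 1.4430). A sketch of the standard DPO objective (Rafailov et al., 2023), with β = 0.5 as the "05beta" in the model name suggests:

```latex
% Standard DPO loss; y_w = chosen summary, y_l = rejected summary.
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\log \sigma\Big(
    \underbrace{\beta \log \tfrac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}}_{\text{Rewards/chosen}}
  \; - \;
    \underbrace{\beta \log \tfrac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}}_{\text{Rewards/rejected}}
\Big), \qquad \beta = 0.5
```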
## Model description
More information needed
## Intended uses & limitations
More information needed
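Pending fuller documentation, the checkpoint loads like any other causal LM from the Hub. A minimal inference sketch, assuming the repository id matches the model name above; the summarization prompt format below is a hypothetical placeholder, not documented in this card:

```python
# Minimal inference sketch (assumed repo id; prompt format not documented in this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Summary_L3_1000steps_1e7rate_05beta_CSFTDPO"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Summarize the following text:\n\n<your text here>"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```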
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
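As a hedged reconstruction (not the author's actual training script), these hyperparameters map onto TRL's `DPOTrainer` roughly as follows. The dataset id is a placeholder, since the card only says "an unknown dataset", and `DPOConfig` assumes trl >= 0.9:

```python
# Sketch only, not the author's script. Assumes trl>=0.9 (DPOConfig) and a
# preference dataset with "prompt"/"chosen"/"rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"  # SFT base named in this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder dataset id: the training data is not documented in this card.
dataset = load_dataset("your/summary-preference-dataset")

config = DPOConfig(
    output_dir="Summary_L3_1000steps_1e7rate_05beta_CSFTDPO",
    beta=0.5,                       # the "05beta" in the model name
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # total train batch size 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    # AdamW defaults already match the card: betas=(0.9, 0.999), eps=1e-8.
)

trainer = DPOTrainer(
    model=model,                    # ref_model omitted: TRL clones the base as reference
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,            # renamed to processing_class in newer trl releases
)
trainer.train()
```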
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6835        | 0.2004 | 50   | 0.6724          | 0.0066         | -0.0411          | 0.1350             | 0.0477          | -15.3460       | -9.3696      | -1.0959         | -1.0974       |
| 0.6728        | 0.4008 | 100  | 0.6273          | 0.0168         | -0.1873          | 0.1400             | 0.2041          | -15.6383       | -9.3492      | -1.0942         | -1.0958       |
| 0.6258        | 0.6012 | 150  | 0.5991          | 0.0579         | -0.5769          | 0.1400             | 0.6348          | -16.4175       | -9.2670      | -1.0922         | -1.0939       |
| 0.6069        | 0.8016 | 200  | 0.5969          | 0.0750         | -0.8979          | 0.1400             | 0.9729          | -17.0596       | -9.2328      | -1.0945         | -1.0962       |
| 0.6239        | 1.0020 | 250  | 0.5966          | 0.0810         | -1.0669          | 0.1400             | 1.1479          | -17.3976       | -9.2207      | -1.0969         | -1.0985       |
| 0.6238        | 1.2024 | 300  | 0.5965          | 0.0913         | -1.1354          | 0.1400             | 1.2267          | -17.5345       | -9.2001      | -1.0979         | -1.0995       |
| 0.6239        | 1.4028 | 350  | 0.5963          | 0.0832         | -1.2037          | 0.1400             | 1.2869          | -17.6712       | -9.2164      | -1.0994         | -1.1009       |
| 0.5723        | 1.6032 | 400  | 0.5963          | 0.0939         | -1.2663          | 0.1400             | 1.3602          | -17.7963       | -9.1950      | -1.0995         | -1.1010       |
| 0.5892        | 1.8036 | 450  | 0.5962          | 0.0906         | -1.3049          | 0.1400             | 1.3956          | -17.8736       | -9.2015      | -1.1002         | -1.1017       |
| 0.5719        | 2.0040 | 500  | 0.5962          | 0.0919         | -1.3133          | 0.1400             | 1.4052          | -17.8904       | -9.1991      | -1.1004         | -1.1018       |
| 0.5719        | 2.2044 | 550  | 0.5963          | 0.0928         | -1.3222          | 0.1400             | 1.4150          | -17.9082       | -9.1971      | -1.1003         | -1.1018       |
| 0.5545        | 2.4048 | 600  | 0.5962          | 0.0967         | -1.3312          | 0.1400             | 1.4279          | -17.9262       | -9.1895      | -1.1006         | -1.1020       |
| 0.5199        | 2.6052 | 650  | 0.5962          | 0.0910         | -1.3466          | 0.1400             | 1.4376          | -17.9569       | -9.2007      | -1.1008         | -1.1023       |
| 0.624         | 2.8056 | 700  | 0.5962          | 0.0912         | -1.3547          | 0.1400             | 1.4459          | -17.9732       | -9.2004      | -1.1006         | -1.1021       |
| 0.6065        | 3.0060 | 750  | 0.5962          | 0.0952         | -1.3445          | 0.1400             | 1.4397          | -17.9527       | -9.1924      | -1.1007         | -1.1022       |
| 0.6412        | 3.2064 | 800  | 0.5962          | 0.0965         | -1.3521          | 0.1400             | 1.4486          | -17.9680       | -9.1898      | -1.1008         | -1.1023       |
| 0.6585        | 3.4068 | 850  | 0.5962          | 0.0984         | -1.3572          | 0.1400             | 1.4556          | -17.9781       | -9.1860      | -1.1005         | -1.1020       |
| 0.6238        | 3.6072 | 900  | 0.5962          | 0.0967         | -1.3456          | 0.1400             | 1.4423          | -17.9550       | -9.1894      | -1.1010         | -1.1024       |
| 0.5372        | 3.8076 | 950  | 0.5962          | 0.0959         | -1.3470          | 0.1400             | 1.4430          | -17.9578       | -9.1909      | -1.1008         | -1.1023       |
| 0.6238        | 4.0080 | 1000 | 0.5962          | 0.0959         | -1.3470          | 0.1400             | 1.4430          | -17.9578       | -9.1909      | -1.1008         | -1.1023       |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1