# IE_L3_350steps_1e8rate_03beta_cSFTDPO
This model is a DPO fine-tuned version of [tsavage68/IE_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6896
- Rewards/chosen: -0.0071
- Rewards/rejected: -0.0198
- Rewards/accuracies: 0.4400
- Rewards/margins: 0.0127
- Logps/rejected: -75.6932
- Logps/chosen: -82.8214
- Logits/rejected: -0.7977
- Logits/chosen: -0.7408
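
The `Rewards/*` metrics above are DPO's implicit rewards. In the standard DPO formulation (assumed here; the β value of 0.3 is inferred from the "03beta" suffix in the model name, not confirmed by the card), the reward of a completion is the β-scaled log-probability ratio between the policy and the frozen SFT reference, and the loss pushes the chosen reward above the rejected one, where σ is the logistic sigmoid:

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}\Big[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\Big].
$$

`Rewards/margins` is simply chosen minus rejected: for the final checkpoint, −0.0071 − (−0.0198) = 0.0127, matching the value reported above.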
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 350
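
As a rough guide, the sketch below shows how these hyperparameters would map onto a TRL `DPOTrainer` run. It is not a confirmed reproduction script: the preference dataset name is a placeholder, and β = 0.3 is an assumption inferred from the "03beta" suffix in the model name.

```python
# Minimal sketch of a TRL DPO run with the hyperparameters listed above.
# Assumes a TRL version that provides DPOConfig (e.g. trl >= 0.9).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "tsavage68/IE_L3_1000steps_1e6rate_SFT"  # SFT checkpoint this model starts from
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("your_preference_dataset", split="train")

config = DPOConfig(
    output_dir="IE_L3_350steps_1e8rate_03beta_cSFTDPO",
    beta=0.3,                        # assumed from the "03beta" model-name suffix
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size: 4
    max_steps=350,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL clones the policy as the frozen reference model
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,     # renamed to `processing_class=` in newer TRL releases
)
trainer.train()
```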
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0.6912 | 0.4 | 50 | 0.6940 | -0.0075 | -0.0104 | 0.4000 | 0.0029 | -75.6618 | -82.8226 | -0.7964 | -0.7393 |
| 0.6947 | 0.8 | 100 | 0.6925 | 0.0014 | -0.0057 | 0.3850 | 0.0070 | -75.6461 | -82.7931 | -0.7963 | -0.7394 |
| 0.6881 | 1.2 | 150 | 0.7003 | -0.0102 | -0.0020 | 0.3750 | -0.0082 | -75.6340 | -82.8318 | -0.7969 | -0.7398 |
| 0.6776 | 1.6 | 200 | 0.6938 | -0.0057 | -0.0098 | 0.3750 | 0.0041 | -75.6601 | -82.8168 | -0.7970 | -0.7399 |
| 0.6859 | 2.0 | 250 | 0.6850 | -0.0033 | -0.0250 | 0.4350 | 0.0217 | -75.7105 | -82.8087 | -0.7975 | -0.7405 |
| 0.7024 | 2.4 | 300 | 0.6893 | -0.0075 | -0.0207 | 0.4400 | 0.0132 | -75.6964 | -82.8228 | -0.7977 | -0.7408 |
| 0.6802 | 2.8 | 350 | 0.6896 | -0.0071 | -0.0198 | 0.4400 | 0.0127 | -75.6932 | -82.8214 | -0.7977 | -0.7408 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.0+cu117
- Datasets 3.0.0
- Tokenizers 0.19.1
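
A minimal inference sketch under the versions above, assuming the tokenizer ships the Llama-3 chat template; the information-extraction prompt is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/IE_L3_350steps_1e8rate_03beta_cSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

# Illustrative prompt; adapt to the model's actual extraction task.
messages = [{"role": "user", "content": "Extract the entities from: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```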
## Model tree

- Base model: meta-llama/Meta-Llama-3-8B-Instruct
- Fine-tuned from: tsavage68/IE_L3_1000steps_1e6rate_SFT