# zephyr-7b-dpo-full
This model was trained with Direct Preference Optimization (DPO); the auto-generated card does not record the base model or the training dataset. It achieves the following results on the evaluation set (the reward metrics are defined just after this list):
- Loss: 0.5107
- Rewards/chosen: -1.4645
- Rewards/rejected: -2.3555
- Rewards/accuracies: 0.7718
- Rewards/margins: 0.8911
- Logps/rejected: -491.4778
- Logps/chosen: -426.3907
- Logits/rejected: 1.4587
- Logits/chosen: 0.9514
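For reference, the reward metrics above follow the standard DPO formulation (Rafailov et al., 2023), as logged by preference-tuning trainers such as trl's `DPOTrainer`: the implicit reward of a completion is the beta-scaled log-probability ratio between the policy and the reference model, and the loss is the negative log-sigmoid of the chosen-vs-rejected margin. The beta used for this run is not recorded in the card.

$$
r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\Big[\log \sigma\big(r(x, y_w) - r(x, y_l)\big)\Big]
$$

Under this convention, Rewards/chosen and Rewards/rejected are the rewards of the preferred and dispreferred completions averaged over the evaluation set, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs where the chosen reward exceeds the rejected reward.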
## Model description
More information needed
## Intended uses & limitations
More information needed
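In the absence of documented usage, here is a minimal inference sketch. The repository id is a placeholder (this card does not state where the checkpoint is published), and it assumes the tokenizer ships a chat template:

```python
# Minimal inference sketch; the repo id below is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/zephyr-7b-dpo-full"  # placeholder, not a published repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
# Assumes the tokenizer defines a chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```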
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
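As a minimal sketch, the list above maps onto `transformers.TrainingArguments` as shown below. The base model, dataset, and DPO beta are not recorded in this card, so only the listed values appear; with trl, these arguments would typically be passed to a `DPOTrainer` together with the policy and reference models.

```python
# Sketch reconstructing the listed hyperparameters; not the authors' actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",  # placeholder output path
    learning_rate=5e-7,
    per_device_train_batch_size=4,    # train_batch_size
    per_device_eval_batch_size=8,     # eval_batch_size
    gradient_accumulation_steps=4,    # 4 devices x 4 steps x 4 per device = 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
)
```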
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6339 | 0.1 | 100 | 0.6366 | -0.4251 | -0.6280 | 0.6766 | 0.2029 | -318.7289 | -322.4543 | -1.7266 | -1.8550 |
| 0.5801 | 0.21 | 200 | 0.5761 | -0.9339 | -1.4916 | 0.7242 | 0.5577 | -405.0862 | -373.3335 | -1.7791 | -1.8866 |
| 0.5298 | 0.31 | 300 | 0.5505 | -0.9519 | -1.6203 | 0.7401 | 0.6684 | -417.9537 | -375.1365 | -0.9729 | -1.1938 |
| 0.5055 | 0.42 | 400 | 0.5331 | -1.3809 | -2.1858 | 0.7540 | 0.8048 | -474.5050 | -418.0395 | 0.2901 | -0.0376 |
| 0.5243 | 0.52 | 500 | 0.5240 | -1.5398 | -2.3578 | 0.7718 | 0.8180 | -491.7054 | -433.9210 | 1.1167 | 0.7245 |
| 0.5024 | 0.63 | 600 | 0.5212 | -1.6677 | -2.5319 | 0.7500 | 0.8643 | -509.1215 | -446.7127 | 1.3224 | 0.8469 |
| 0.4855 | 0.73 | 700 | 0.5156 | -1.5293 | -2.4112 | 0.7579 | 0.8819 | -497.0490 | -432.8780 | 1.5165 | 1.0177 |
| 0.5048 | 0.84 | 800 | 0.5121 | -1.4754 | -2.3714 | 0.7698 | 0.8960 | -493.0640 | -427.4831 | 1.3869 | 0.8797 |
| 0.5193 | 0.94 | 900 | 0.5109 | -1.4545 | -2.3434 | 0.7738 | 0.8889 | -490.2650 | -425.3930 | 1.4499 | 0.9411 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.0