# OrpoLlama-3-8B-Instruct
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0517
- Rewards/chosen: -0.0621
- Rewards/rejected: -0.0634
- Rewards/accuracies: 0.6000
- Rewards/margins: 0.0013
- Logps/rejected: -0.6340
- Logps/chosen: -0.6207
- Logits/rejected: -0.2825
- Logits/chosen: -0.2736
- Nll Loss: 0.9848
- Log Odds Ratio: -0.6691
- Log Odds Chosen: 0.1127
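
As a consistency check (not stated in the card): ORPO's training loss is the NLL term minus a scaled log-odds-ratio term, Loss = Nll Loss - beta × Log Odds Ratio. Assuming TRL's default beta = 0.1 (the card does not list a beta value), the numbers above fit: 0.9848 - 0.1 × (-0.6691) ≈ 1.0517, the reported loss.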
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
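
The card does not say which library produced these settings, but they line up with TRL's `ORPOConfig`. The sketch below is a hypothetical reconstruction, assuming TRL's `ORPOTrainer` with a PEFT adapter (suggested by the PEFT entry under framework versions); the dataset name and LoRA settings are placeholders, since neither is documented here.

```python
# Hypothetical training sketch: assumes TRL's ORPOTrainer (TRL is not listed
# under framework versions) and a PEFT/LoRA adapter. The dataset name and the
# LoRA settings are placeholders; the card does not document them.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data must expose "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder

args = ORPOConfig(
    output_dir="OrpoLlama-3-8B-Instruct",
    learning_rate=8e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # total train batch size: 2 * 4 = 8
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    seed=42,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # placeholder adapter config
)
trainer.train()
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the `TrainingArguments` defaults, so the optimizer needs no explicit configuration in this sketch.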
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.2564 | 0.5980 | 74 | 1.8521 | -0.1306 | -0.1270 | 0.6000 | -0.0036 | -1.2696 | -1.3055 | -0.4912 | -0.5850 | 1.7776 | -0.7454 | -0.0391 |
| 1.8749 | 1.1960 | 148 | 1.3145 | -0.0855 | -0.0879 | 0.6000 | 0.0024 | -0.8795 | -0.8553 | -0.2945 | -0.2832 | 1.2482 | -0.6628 | 0.1091 |
| 1.1933 | 1.7939 | 222 | 1.1033 | -0.0662 | -0.0667 | 0.6000 | 0.0005 | -0.6671 | -0.6624 | -0.2268 | -0.2101 | 1.0354 | -0.6787 | 0.0828 |
| 0.8761 | 2.3919 | 296 | 1.0517 | -0.0621 | -0.0634 | 0.6000 | 0.0013 | -0.6340 | -0.6207 | -0.2825 | -0.2736 | 0.9848 | -0.6691 | 0.1127 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
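
The PEFT entry above suggests the fine-tune is published as an adapter on top of the base model, though the card does not say so explicitly. A minimal inference sketch under that assumption (model id, dtype, and prompt are illustrative):

```python
# Minimal inference sketch, assuming this repo hosts a PEFT adapter for
# meta-llama/Meta-Llama-3-8B-Instruct. Dtype and generation settings are
# illustrative choices, not documented in the card.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Loads the base model and applies the adapter in one step.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Samhita/OrpoLlama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [{"role": "user", "content": "What is ORPO?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```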