
pairwise-reward-zephyr-7b-sft-qlora-ultrafeedback-binarized-20241012-122158

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1, trained (per the model name) as a pairwise reward model with QLoRA on the UltraFeedback binarized dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4755
  • Accuracy: 0.7602
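
For a quick smoke test, the sketch below loads the adapter with PEFT and scores a prompt/response pair. This is a minimal sketch, assuming the repo id sahandrez/pairwise-reward-zephyr-7b-sft-qlora-ultrafeedback and a scalar sequence-classification (reward) head; neither is confirmed by this card.

```python
# Hypothetical usage sketch: load the QLoRA reward adapter on top of the base
# model and score prompt/response pairs. The repo id and head type are
# assumptions inferred from the model name, not stated in this card.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

adapter_id = "sahandrez/pairwise-reward-zephyr-7b-sft-qlora-ultrafeedback"  # assumed
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# num_labels=1: a scalar reward head, as is conventional for pairwise reward models.
model = AutoPeftModelForSequenceClassification.from_pretrained(
    adapter_id, num_labels=1, torch_dtype=torch.bfloat16
)
model.eval()

def reward(prompt: str, response: str) -> float:
    """Return the scalar reward for a single prompt/response pair."""
    inputs = tokenizer(prompt + response, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

# A higher score should indicate the preferred response.
print(reward("What is 2+2?", " 4."), reward("What is 2+2?", " 5."))
```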

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
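
The model name suggests the UltraFeedback binarized preference dataset. Below is a minimal loading sketch, assuming the hub id HuggingFaceH4/ultrafeedback_binarized and its train_prefs split (both inferred from the name, not stated in this card):

```python
# Assumed dataset, inferred from the model name; not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
example = ds[0]
# Each record pairs a preferred ("chosen") and a dispreferred ("rejected") reply.
print(example["chosen"][-1]["content"][:80])
print(example["rejected"][-1]["content"][:80])
```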

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 1.5e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1.0
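
As a rough illustration, the list above maps onto a transformers TrainingArguments configuration like the sketch below; output_dir is a placeholder, the 100-step evaluation cadence is inferred from the results table, and the trainer wiring is omitted.

```python
# Sketch only: reconstructs the hyperparameters reported above. output_dir is
# a placeholder and the eval cadence is inferred from the results table.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pairwise-reward-zephyr-7b-sft-qlora",  # placeholder
    learning_rate=1.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)
```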

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6032        | 0.0526 | 100  | 0.6264          | 0.6919   |
| 0.6212        | 0.1052 | 200  | 0.5681          | 0.7245   |
| 0.5956        | 0.1578 | 300  | 0.5203          | 0.7336   |
| 0.5409        | 0.2104 | 400  | 0.5159          | 0.7521   |
| 0.4935        | 0.2630 | 500  | 0.5160          | 0.7336   |
| 0.4991        | 0.3155 | 600  | 0.5060          | 0.7541   |
| 0.5413        | 0.3681 | 700  | 0.5036          | 0.7491   |
| 0.5118        | 0.4207 | 800  | 0.4934          | 0.7501   |
| 0.5378        | 0.4733 | 900  | 0.4951          | 0.7652   |
| 0.5199        | 0.5259 | 1000 | 0.4857          | 0.7657   |
| 0.4511        | 0.5785 | 1100 | 0.4964          | 0.7587   |
| 0.4821        | 0.6311 | 1200 | 0.4814          | 0.7632   |
| 0.542         | 0.6837 | 1300 | 0.4847          | 0.7572   |
| 0.4378        | 0.7363 | 1400 | 0.4807          | 0.7607   |
| 0.4358        | 0.7889 | 1500 | 0.4802          | 0.7607   |
| 0.505         | 0.8414 | 1600 | 0.4794          | 0.7627   |
| 0.4415        | 0.8940 | 1700 | 0.4779          | 0.7622   |
| 0.452         | 0.9466 | 1800 | 0.4774          | 0.7582   |
| 0.4533        | 0.9992 | 1900 | 0.4755          | 0.7602   |
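
The reported loss and accuracy are consistent with conventional pairwise (Bradley-Terry) reward modeling, where the loss is -log sigmoid(r_chosen - r_rejected) and accuracy is the fraction of pairs in which the chosen response outscores the rejected one. A minimal sketch of these two metrics, assuming scalar rewards per response (the card itself does not state the objective):

```python
# Sketch of the conventional pairwise reward-modeling metrics; the card does
# not state its exact objective, so treat this formulation as an assumption.
import torch
import torch.nn.functional as F

def pairwise_metrics(r_chosen: torch.Tensor, r_rejected: torch.Tensor):
    """r_chosen / r_rejected: scalar rewards per pair, shape (batch,)."""
    # -log sigmoid(margin): small when the chosen reward clearly exceeds
    # the rejected one, large when the ordering is wrong.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    # Accuracy: fraction of pairs ranked correctly.
    accuracy = (r_chosen > r_rejected).float().mean()
    return loss, accuracy

loss, acc = pairwise_metrics(torch.tensor([1.2, -0.3]), torch.tensor([0.4, 0.1]))
print(f"loss={loss:.4f} accuracy={acc:.2f}")
```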

Framework versions

  • PEFT 0.12.0
  • Transformers 4.45.2
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.20.0
