OPO Mistral-7B
This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full, trained on the yangzhao02/ListUltraFeedback dataset. Evaluation-set results:

More information needed
Training results logged during fine-tuning:

| Training Loss | Epoch | Step | Validation Loss | Logps | Logits |
|---|---|---|---|---|---|
| -0.9908 | 0.4275 | 200 | -0.9910 | -365.4714 | -0.6106 |
| -0.9924 | 0.8549 | 400 | -0.9923 | -393.3539 | -0.6678 |
Base model: mistralai/Mistral-7B-v0.1
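Since the card does not include a usage snippet, here is a minimal inference sketch with the `transformers` library. The repo id below is a placeholder (the card does not state the published checkpoint name), and the prompt format assumes the Zephyr-style chat template used by the SFT base model.

```python
def build_zephyr_prompt(messages):
    """Render chat messages in the Zephyr-style template assumed from the
    SFT base model: <|role|> headers with </s> turn separators."""
    parts = [f"<|{m['role']}|>\n{m['content']}</s>" for m in messages]
    parts.append("<|assistant|>\n")  # cue the model to respond
    return "\n".join(parts)


def generate(prompt, max_new_tokens=128):
    """Load the checkpoint and generate a completion.
    NOTE: the repo id is hypothetical; substitute the actual model name."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "your-org/opo-mistral-7b"  # placeholder, not from the card
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Usage: `generate(build_zephyr_prompt([{"role": "user", "content": "Hello"}]))`. If the checkpoint ships with a chat template in its tokenizer config, prefer `tokenizer.apply_chat_template` over the manual formatting above.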