# mistral7B-fyp-project-finetune-final-v1

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
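The card does not document usage, but the PEFT version listed under framework versions suggests this repo hosts an adapter (e.g. LoRA) on top of the base model. Below is a minimal loading sketch under that assumption; the dtype, device placement, and generation settings are illustrative only, not documented choices:

```python
# Sketch: load the base model and attach this repo's PEFT adapter.
# Assumes the repo contains adapter weights (implied by the PEFT framework
# version in this card); requires `accelerate` for device_map="auto".
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Shobish07/mistral7B-fyp-project-finetune-final-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach fine-tuned adapter

prompt = "Explain what a fine-tuned language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```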
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 500
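For reference, here is a sketch of the `TrainingArguments` these values correspond to. Anything beyond the listed hyperparameters (the output directory, the logging/eval cadence, the exact optimizer flag) is an assumption, since the training script itself is not part of this card:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Assumptions (not documented here): output_dir and logging cadence; the
# 25-step eval interval is inferred from the results table below.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral7B-fyp-project-finetune-final-v1",  # assumed name
    learning_rate=2.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=500,
    eval_strategy="steps",  # results table shows evaluation every 25 steps
    eval_steps=25,
    logging_steps=25,       # assumed to match the eval cadence
)
```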
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0008        | 0.0710 | 25   | 2.0254          |
| 2.0393        | 0.1420 | 50   | 1.9910          |
| 1.9800        | 0.2131 | 75   | 1.9675          |
| 1.9571        | 0.2841 | 100  | 1.9420          |
| 1.9311        | 0.3551 | 125  | 1.9205          |
| 1.8959        | 0.4261 | 150  | 1.9025          |
| 1.9593        | 0.4972 | 175  | 1.8818          |
| 1.8716        | 0.5682 | 200  | 1.8582          |
| 1.8744        | 0.6392 | 225  | 1.8391          |
| 1.8210        | 0.7102 | 250  | 1.8200          |
| 1.8151        | 0.7812 | 275  | 1.8022          |
| 1.7938        | 0.8523 | 300  | 1.7878          |
| 1.7817        | 0.9233 | 325  | 1.7720          |
| 1.8036        | 0.9943 | 350  | 1.7580          |
| 1.6723        | 1.0653 | 375  | 1.7478          |
| 1.5960        | 1.1364 | 400  | 1.7394          |
| 1.6096        | 1.2074 | 425  | 1.7315          |
| 1.5972        | 1.2784 | 450  | 1.7270          |
| 1.5861        | 1.3494 | 475  | 1.7239          |
| 1.6314        | 1.4205 | 500  | 1.7206          |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.43.0.dev0
- Pytorch 2.2.1
- Datasets 2.20.0
- Tokenizers 0.19.1