# Mistral_Sparse_refined_web_70p_2024-02-16
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unspecified dataset (the model name suggests a RefinedWeb-derived corpus). It achieves the following results on the evaluation set:
- Loss: 2.1914
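
Assuming the reported loss is the mean per-token cross-entropy in nats (the `transformers` Trainer default), this corresponds to an evaluation perplexity of roughly exp(2.1914) ≈ 8.95.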
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
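
As a reference point, here is a minimal sketch of how the settings above could be expressed as Hugging Face `TrainingArguments`. The `output_dir` value is a hypothetical placeholder, and the model/data wiring is omitted; the original training script is not part of this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral_Sparse_refined_web_70p",  # hypothetical placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # x 2 GPUs x 8 accumulation steps = 16 effective
    per_device_eval_batch_size=1,   # x 2 GPUs = 2 effective
    gradient_accumulation_steps=8,
    max_steps=1500,
    lr_scheduler_type="linear",
    seed=0,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```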
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0372 | 0.0 | 25 | 3.1256 |
| 2.6176 | 0.01 | 50 | 2.8951 |
| 2.5321 | 0.01 | 75 | 2.7409 |
| 2.4603 | 0.02 | 100 | 2.6753 |
| 2.4033 | 0.02 | 125 | 2.6424 |
| 2.4821 | 0.02 | 150 | 2.6147 |
| 2.4008 | 0.03 | 175 | 2.5858 |
| 2.3651 | 0.03 | 200 | 2.5688 |
| 2.3873 | 0.04 | 225 | 2.5565 |
| 2.4145 | 0.04 | 250 | 2.5470 |
| 2.3295 | 0.04 | 275 | 2.5321 |
| 2.3458 | 0.05 | 300 | 2.5185 |
| 2.3587 | 0.05 | 325 | 2.5146 |
| 2.1873 | 0.06 | 350 | 2.5093 |
| 2.3502 | 0.06 | 375 | 2.5093 |
| 2.3837 | 0.06 | 400 | 2.5021 |
| 2.3747 | 0.07 | 425 | 2.4994 |
| 2.3292 | 0.07 | 450 | 2.4957 |
| 2.2438 | 0.08 | 475 | 2.4940 |
| 2.3102 | 0.08 | 500 | 2.4889 |
| 2.3791 | 0.08 | 525 | 2.4858 |
| 2.2743 | 0.09 | 550 | 2.4827 |
| 2.4148 | 0.09 | 575 | 2.4813 |
| 2.2115 | 0.1 | 600 | 2.4830 |
| 2.2963 | 0.1 | 625 | 2.4834 |
| 2.3762 | 0.1 | 650 | 2.4805 |
| 2.3657 | 0.11 | 675 | 2.4764 |
| 2.3219 | 0.11 | 700 | 2.4746 |
| 2.3166 | 0.12 | 725 | 2.4712 |
| 2.2193 | 0.12 | 750 | 2.4747 |
| 2.2629 | 0.12 | 775 | 2.4703 |
| 2.3504 | 0.13 | 800 | 2.4732 |
| 2.3523 | 0.13 | 825 | 2.4662 |
| 2.3362 | 0.14 | 850 | 2.4645 |
| 2.202 | 0.14 | 875 | 2.4659 |
| 2.2795 | 0.14 | 900 | 2.4682 |
| 2.2254 | 0.15 | 925 | 2.4621 |
| 2.3507 | 0.15 | 950 | 2.4642 |
| 2.2825 | 0.16 | 975 | 2.4624 |
| 2.3301 | 0.16 | 1000 | 2.4603 |
| 2.3299 | 0.16 | 1025 | 2.4642 |
| 2.3583 | 0.17 | 1050 | 2.4617 |
| 2.3819 | 0.17 | 1075 | 2.4616 |
| 2.2945 | 0.18 | 1100 | 2.4572 |
| 2.3334 | 0.18 | 1125 | 2.4584 |
| 2.2964 | 0.18 | 1150 | 2.4624 |
| 2.346 | 0.19 | 1175 | 2.4567 |
| 2.2106 | 0.19 | 1200 | 2.4539 |
| 2.2917 | 0.2 | 1225 | 2.4603 |
| 2.2817 | 0.2 | 1250 | 2.4583 |
| 2.3261 | 0.2 | 1275 | 2.4557 |
| 2.3473 | 0.21 | 1300 | 2.4571 |
| 2.3228 | 0.21 | 1325 | 2.4563 |
| 2.2124 | 0.22 | 1350 | 2.4556 |
| 2.2967 | 0.22 | 1375 | 2.4560 |
| 2.3051 | 0.22 | 1400 | 2.4586 |
| 2.2448 | 0.23 | 1425 | 2.4607 |
| 2.23 | 0.23 | 1450 | 2.4550 |
| 2.1959 | 0.24 | 1475 | 2.4576 |
| 2.2542 | 0.24 | 1500 | 2.4619 |
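
The validation loss falls steeply over the first few hundred steps and then plateaus around 2.45–2.46 for the remainder of training. Note that the headline evaluation loss above (2.1914) is lower than the final in-training validation loss (2.4619); the two figures presumably come from different evaluation runs or sets.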
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
## Inference Providers

This model is not currently available through any of the supported Inference Providers, and it cannot be deployed to the HF Inference API: the API does not support models that require custom code execution.
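
Since the checkpoint relies on custom modeling code (per the note above), loading it locally would presumably look like the following sketch; the repo id is taken from this card, and the prompt is arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thrunlab/Mistral_Sparse_refined_web_70p_2024-02-16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom (sparse) model code
    torch_dtype="auto",
    device_map="auto",       # requires the `accelerate` package
)

# Arbitrary example prompt
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```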