---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_metamath_ortho
  results: []
---

# Mistral-7B-v0.3_metamath_ortho

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6676

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7827        | 0.0211 | 13   | 1.0186          |
| 7.6813        | 0.0421 | 26   | 7.3723          |
| 7.0799        | 0.0632 | 39   | 6.6456          |
| 6.356         | 0.0842 | 52   | 6.2633          |
| 6.2517        | 0.1053 | 65   | 6.2857          |
| 6.2899        | 0.1264 | 78   | 6.2679          |
| 6.285         | 0.1474 | 91   | 6.2945          |
| 6.3073        | 0.1685 | 104  | 6.3867          |
| 6.2797        | 0.1896 | 117  | 6.2024          |
| 6.083         | 0.2106 | 130  | 5.9188          |
| 5.8629        | 0.2317 | 143  | 5.7044          |
| 5.6092        | 0.2527 | 156  | 5.3934          |
| 5.3102        | 0.2738 | 169  | 5.2099          |
| 5.2155        | 0.2949 | 182  | 5.1111          |
| 5.0531        | 0.3159 | 195  | 4.9263          |
| 4.8718        | 0.3370 | 208  | 4.8186          |
| 4.7175        | 0.3580 | 221  | 4.6831          |
| 4.641         | 0.3791 | 234  | 4.6348          |
| 4.5275        | 0.4002 | 247  | 4.5482          |
| 4.4863        | 0.4212 | 260  | 4.4328          |
| 4.4633        | 0.4423 | 273  | 4.3950          |
| 4.4026        | 0.4633 | 286  | 4.3332          |
| 4.3761        | 0.4844 | 299  | 4.2790          |
| 4.2027        | 0.5055 | 312  | 4.1886          |
| 4.1631        | 0.5265 | 325  | 4.1493          |
| 4.0923        | 0.5476 | 338  | 4.1405          |
| 4.1048        | 0.5687 | 351  | 4.0457          |
| 4.0592        | 0.5897 | 364  | 3.9616          |
| 4.0107        | 0.6108 | 377  | 3.9935          |
| 4.021         | 0.6318 | 390  | 3.8987          |
| 3.8899        | 0.6529 | 403  | 3.9228          |
| 3.8158        | 0.6740 | 416  | 3.8781          |
| 3.9124        | 0.6950 | 429  | 3.8955          |
| 3.8687        | 0.7161 | 442  | 3.8612          |
| 3.824         | 0.7371 | 455  | 3.8042          |
| 3.7742        | 0.7582 | 468  | 3.7946          |
| 3.7309        | 0.7793 | 481  | 3.7436          |
| 3.7528        | 0.8003 | 494  | 3.7428          |
| 3.7297        | 0.8214 | 507  | 3.7325          |
| 3.6943        | 0.8424 | 520  | 3.7126          |
| 3.6788        | 0.8635 | 533  | 3.7202          |
| 3.6632        | 0.8846 | 546  | 3.6981          |
| 3.7316        | 0.9056 | 559  | 3.6925          |
| 3.6737        | 0.9267 | 572  | 3.6602          |
| 3.6142        | 0.9478 | 585  | 3.6731          |
| 3.6347        | 0.9688 | 598  | 3.6691          |
| 3.6248        | 0.9899 | 611  | 3.6676          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
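
### Hyperparameters as `TrainingArguments` (sketch)

For reference, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction from the list, not the original training script; the dataset pipeline and the PEFT/Unsloth setup are omitted, and `output_dir` is only a placeholder.

```python
# Sketch: the reported hyperparameters expressed as TrainingArguments.
# Reconstructed from the card; not the original training configuration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_metamath_ortho",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 x 8 = 64 total train batch size
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
)
```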
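
## Loading the adapter (sketch)

Since the card gives no usage instructions, below is a minimal sketch for loading this PEFT adapter on top of the 4-bit base model. The adapter repository id `your-org/Mistral-7B-v0.3_metamath_ortho` is hypothetical; substitute the actual Hub path or a local directory. Loading the bnb-4bit base requires `bitsandbytes`.

```python
# Minimal sketch: load the 4-bit base, then apply this PEFT adapter.
# "your-org/Mistral-7B-v0.3_metamath_ortho" is a placeholder, not a
# repository id confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
adapter_id = "your-org/Mistral-7B-v0.3_metamath_ortho"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base repo ships a 4-bit quantization config; bitsandbytes must
# be installed for this load to succeed.
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("What is 12 * 7?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```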