---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_pct_ortho
  results: []
---

# Mistral-7B-v0.3_pct_ortho

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1305

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.145         | 0.0206 | 8    | 2.3112          |
| 10.5488       | 0.0413 | 16   | 9.3681          |
| 7.9067        | 0.0619 | 24   | 7.7434          |
| 7.8557        | 0.0825 | 32   | 7.7804          |
| 7.7959        | 0.1032 | 40   | 7.6185          |
| 7.6599        | 0.1238 | 48   | 7.6087          |
| 7.6244        | 0.1444 | 56   | 7.6251          |
| 7.5899        | 0.1651 | 64   | 7.7005          |
| 7.7052        | 0.1857 | 72   | 7.6484          |
| 7.713         | 0.2063 | 80   | 7.6468          |
| 7.7653        | 0.2270 | 88   | 7.7124          |
| 7.7345        | 0.2476 | 96   | 7.6673          |
| 7.7397        | 0.2682 | 104  | 7.6182          |
| 7.6695        | 0.2888 | 112  | 7.6591          |
| 7.6495        | 0.3095 | 120  | 7.6658          |
| 7.7849        | 0.3301 | 128  | 7.7384          |
| 7.6106        | 0.3507 | 136  | 7.5564          |
| 7.6111        | 0.3714 | 144  | 7.6163          |
| 7.7065        | 0.3920 | 152  | 7.6644          |
| 7.6184        | 0.4126 | 160  | 7.5686          |
| 7.616         | 0.4333 | 168  | 7.5656          |
| 7.5454        | 0.4539 | 176  | 7.5597          |
| 7.6258        | 0.4745 | 184  | 7.5779          |
| 7.5473        | 0.4952 | 192  | 7.4016          |
| 7.2947        | 0.5158 | 200  | 7.2932          |
| 7.2767        | 0.5364 | 208  | 7.2147          |
| 7.3113        | 0.5571 | 216  | 7.2121          |
| 7.259         | 0.5777 | 224  | 7.1758          |
| 7.0926        | 0.5983 | 232  | 7.1164          |
| 7.1931        | 0.6190 | 240  | 7.1261          |
| 7.2239        | 0.6396 | 248  | 7.3268          |
| 7.3554        | 0.6602 | 256  | 7.1475          |
| 7.1835        | 0.6809 | 264  | 7.3103          |
| 7.2527        | 0.7015 | 272  | 7.3083          |
| 7.2578        | 0.7221 | 280  | 7.0611          |
| 7.3073        | 0.7427 | 288  | 7.2281          |
| 7.0778        | 0.7634 | 296  | 7.0481          |
| 7.196         | 0.7840 | 304  | 7.0595          |
| 7.194         | 0.8046 | 312  | 7.1102          |
| 7.1961        | 0.8253 | 320  | 7.1123          |
| 7.1855        | 0.8459 | 328  | 7.1000          |
| 7.263         | 0.8665 | 336  | 7.1072          |
| 7.1925        | 0.8872 | 344  | 7.1302          |
| 7.1506        | 0.9078 | 352  | 7.1543          |
| 7.2592        | 0.9284 | 360  | 7.1034          |
| 7.1493        | 0.9491 | 368  | 7.1163          |
| 7.2687        | 0.9697 | 376  | 7.1285          |
| 7.2225        | 0.9903 | 384  | 7.1305          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
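For readers who want to reproduce the setup, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. This is an assumption-laden reconstruction, not the original training script: `output_dir` is a placeholder, and any argument not listed in the card is left at its library default.

```python
# Hedged reconstruction of the card's hyperparameters as TrainingArguments.
# Only the values listed under "Training hyperparameters" are known;
# everything else (including output_dir) is an assumption or a default.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_pct_ortho",  # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 8 per device x 8 accumulation = 64 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
    # Adam betas/epsilon match the card's optimizer line (also the defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```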
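As a usage note, the PEFT adapter produced by this run loads on top of the 4-bit base model named in the card. The snippet below is a minimal sketch rather than part of the original card: the adapter repo id (`your-username/Mistral-7B-v0.3_pct_ortho`) is a hypothetical placeholder, and `bitsandbytes` must be installed for the 4-bit base weights.

```python
# Minimal loading sketch; requires peft, transformers, and bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"  # 4-bit base model from the card
adapter_id = "your-username/Mistral-7B-v0.3_pct_ortho"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```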