---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_metamath_default
results: []
---
# Mistral-7B-v0.3_metamath_default
This model is a fine-tuned PEFT adapter of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit). The training dataset was not recorded by the Trainer; the model name suggests a MetaMath-style math dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0734
## Model description
This repository contains a PEFT adapter (not full model weights) trained with Unsloth on top of the 4-bit bitsandbytes quantization of Mistral-7B-v0.3. To use it, load the base model and apply the adapter on top, as sketched below.
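A minimal inference sketch. It assumes the adapter is published as `imdatta0/Mistral-7B-v0.3_metamath_default` (inferred from this card's title); adjust the repo ID if it differs.

```python
# Sketch: load the 4-bit base model, then apply this PEFT adapter on top.
# The adapter repo ID below is an assumption based on this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
adapter_id = "imdatta0/Mistral-7B-v0.3_metamath_default"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires bitsandbytes
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Question: What is 15% of 200?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```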
## Intended uses & limitations
The model name points to math word-problem solving in the MetaMath style. No benchmarks beyond validation loss are reported, so downstream accuracy is unverified and outputs should be checked before use.
## Training and evaluation data
The dataset was not recorded by the Trainer. The model name suggests a MetaMath-style dataset (possibly MetaMathQA), but this is unconfirmed; validation loss was measured on a held-out split every 13 steps (see the table below).
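If the run did use MetaMathQA (an assumption inferred from the model name, not confirmed by this card), inspecting the data would look like:

```python
# Sketch only: the actual training dataset is not recorded in this card.
# "meta-math/MetaMathQA" is a guess based on the "metamath" model name.
from datasets import load_dataset

ds = load_dataset("meta-math/MetaMathQA", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # one question/answer record
```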
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
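These settings map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch: `output_dir` is a placeholder, and the real run used Unsloth's training stack, so details may differ.

```python
# Sketch reconstructing the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_metamath_default",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 * 8 = 64 total train batch size
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
    adam_beta1=0.9,                 # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```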
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7559 | 0.0211 | 13 | 8.3985 |
| 9.4604 | 0.0421 | 26 | 6.8141 |
| 6.7353 | 0.0632 | 39 | 6.4223 |
| 6.4242 | 0.0842 | 52 | 6.2759 |
| 6.1115 | 0.1053 | 65 | 6.0333 |
| 5.9214 | 0.1264 | 78 | 5.8343 |
| 5.6735 | 0.1474 | 91 | 5.5846 |
| 5.557 | 0.1685 | 104 | 5.4916 |
| 5.3297 | 0.1896 | 117 | 5.2345 |
| 5.1963 | 0.2106 | 130 | 5.1310 |
| 5.1252 | 0.2317 | 143 | 5.0674 |
| 4.983 | 0.2527 | 156 | 4.9390 |
| 4.8933 | 0.2738 | 169 | 4.8252 |
| 4.7722 | 0.2949 | 182 | 4.7449 |
| 4.7722 | 0.3159 | 195 | 4.7386 |
| 4.6446 | 0.3370 | 208 | 4.6346 |
| 4.5823 | 0.3580 | 221 | 4.5544 |
| 4.576 | 0.3791 | 234 | 4.5238 |
| 4.5056 | 0.4002 | 247 | 4.6538 |
| 4.5501 | 0.4212 | 260 | 4.4766 |
| 4.5197 | 0.4423 | 273 | 4.4369 |
| 4.6259 | 0.4633 | 286 | 4.4561 |
| 4.546 | 0.4844 | 299 | 4.4278 |
| 4.3478 | 0.5055 | 312 | 4.3790 |
| 4.3754 | 0.5265 | 325 | 4.3635 |
| 4.2714 | 0.5476 | 338 | 4.3611 |
| 4.3724 | 0.5687 | 351 | 4.3629 |
| 4.2961 | 0.5897 | 364 | 4.2578 |
| 4.2806 | 0.6108 | 377 | 4.2863 |
| 4.3088 | 0.6318 | 390 | 4.2221 |
| 4.2165 | 0.6529 | 403 | 4.2158 |
| 4.1776 | 0.6740 | 416 | 4.1896 |
| 4.2615 | 0.6950 | 429 | 4.3146 |
| 4.2536 | 0.7161 | 442 | 4.2153 |
| 4.1308 | 0.7371 | 455 | 4.1701 |
| 4.1749 | 0.7582 | 468 | 4.1346 |
| 4.1219 | 0.7793 | 481 | 4.1276 |
| 4.136 | 0.8003 | 494 | 4.1162 |
| 4.1453 | 0.8214 | 507 | 4.1070 |
| 4.1025 | 0.8424 | 520 | 4.1167 |
| 4.1207 | 0.8635 | 533 | 4.0925 |
| 4.0847 | 0.8846 | 546 | 4.0926 |
| 4.1504 | 0.9056 | 559 | 4.0795 |
| 4.1211 | 0.9267 | 572 | 4.0711 |
| 4.038 | 0.9478 | 585 | 4.0763 |
| 4.0944 | 0.9688 | 598 | 4.0744 |
| 4.0771 | 0.9899 | 611 | 4.0734 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
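Since adapter loading can break across mismatched library versions, here is a quick sanity check (a sketch) that compares the local environment against the versions listed above:

```python
# Compare installed package versions against those used for training.
import importlib.metadata as md

expected = {
    "peft": "0.12.0",
    "transformers": "4.44.0",
    "torch": "2.4.0+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
for pkg, want in expected.items():
    try:
        have = md.version(pkg)
    except md.PackageNotFoundError:
        have = "not installed"
    flag = "OK" if have == want else f"differs (trained with {want})"
    print(f"{pkg} {have}: {flag}")
```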