# phi-3-mini-LoRA
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5586
## Model description
More information needed
## Intended uses & limitations
More information needed
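
Pending fuller documentation, the snippet below is a minimal inference sketch. It assumes the adapter is published as `esawtooth/phi-3-mini-LoRA`, that it loads on top of `microsoft/Phi-3-mini-4k-instruct` via PEFT, and that the standard Phi-3 chat template applies; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumed repo id and illustrative settings).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

base_model_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "esawtooth/phi-3-mini-LoRA"  # assumed adapter repository

# AutoPeftModelForCausalLM reads the base model recorded in the adapter
# config and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Phi-3-mini-4k-instruct is a chat model, so format the prompt with its chat template.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```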
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` appears after this list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
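
For reference, here is a sketch of a Hugging Face `TrainingArguments` configuration matching the values above. The output directory and the evaluation/logging cadence are illustrative assumptions, not values taken from the original run (the 250-step eval interval is inferred from the results table below).

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-3-mini-LoRA",      # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,     # effective train batch size: 2 * 4 = 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="steps",             # assumed: the table reports eval every 250 steps
    eval_steps=250,
    logging_steps=250,
)
# The default AdamW optimizer (betas=(0.9, 0.999), eps=1e-8) matches the values above.
```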
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7527        | 0.1131 | 250  | 0.6002          |
| 0.5924        | 0.2262 | 500  | 0.5809          |
| 0.5811        | 0.3393 | 750  | 0.5759          |
| 0.5827        | 0.4524 | 1000 | 0.5717          |
| 0.5767        | 0.5655 | 1250 | 0.5704          |
| 0.5711        | 0.6787 | 1500 | 0.5678          |
| 0.5691        | 0.7918 | 1750 | 0.5672          |
| 0.5635        | 0.9049 | 2000 | 0.5654          |
| 0.5712        | 1.0180 | 2250 | 0.5650          |
| 0.5611        | 1.1311 | 2500 | 0.5647          |
| 0.555         | 1.2442 | 2750 | 0.5631          |
| 0.5505        | 1.3573 | 3000 | 0.5628          |
| 0.5657        | 1.4704 | 3250 | 0.5624          |
| 0.563         | 1.5835 | 3500 | 0.5617          |
| 0.5577        | 1.6966 | 3750 | 0.5614          |
| 0.5578        | 1.8098 | 4000 | 0.5603          |
| 0.5552        | 1.9229 | 4250 | 0.5604          |
| 0.5514        | 2.0360 | 4500 | 0.5600          |
| 0.5473        | 2.1491 | 4750 | 0.5603          |
| 0.5573        | 2.2622 | 5000 | 0.5596          |
| 0.5423        | 2.3753 | 5250 | 0.5599          |
| 0.5579        | 2.4884 | 5500 | 0.5595          |
| 0.5403        | 2.6015 | 5750 | 0.5591          |
| 0.5475        | 2.7146 | 6000 | 0.5593          |
| 0.5477        | 2.8277 | 6250 | 0.5590          |
| 0.5438        | 2.9408 | 6500 | 0.5586          |
### Framework versions
- PEFT 0.12.0
- Transformers 4.43.1
- Pytorch 2.4.0a0+3bcc3cddb5.nv24.07
- Datasets 2.20.0
- Tokenizers 0.19.1