
phi-3-mini-LoRA-MEDQA-Extended-V3

This model is a LoRA fine-tuned version of microsoft/Phi-3-mini-4k-instruct. The training dataset is not documented in this card, although the model name suggests an extended MedQA-style medical QA corpus. It achieves the following results on the evaluation set:

  • Loss: 0.6233
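
Because this repository holds a PEFT adapter rather than full model weights, it must be loaded on top of the base checkpoint. Below is a minimal, untested sketch using transformers and peft; the dtype, device placement, prompt, and generation settings are illustrative assumptions, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "KrithikV/phi-3-mini-LoRA-MEDQA-Extended-V3"

# Load the frozen base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Phi-3-mini is an instruct model, so prompt through its chat template.
# The question below is only an example.
messages = [{"role": "user", "content": "What is the first-line treatment for hypertension?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```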

Model description

This repository contains only a PEFT LoRA adapter for microsoft/Phi-3-mini-4k-instruct, not full model weights; the base checkpoint must be loaded alongside it (see the usage sketch above). Further details are not provided.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
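
For reference, these settings map directly onto transformers.TrainingArguments. The sketch below is an assumed reconstruction; the output_dir is illustrative, and the logging/evaluation cadence is not recorded in the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-3-mini-LoRA-MEDQA-Extended-V3",  # assumed, not documented
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # 16 x 4 = 64 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # linear warmup over the first 10% of steps
    num_train_epochs=3,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the optimizer defaults,
    # so no explicit adam_beta1/adam_beta2/adam_epsilon overrides are needed.
)
```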

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7825        | 0.0882 | 200  | 0.6760          |
| 0.6593        | 0.1764 | 400  | 0.6488          |
| 0.6454        | 0.2646 | 600  | 0.6424          |
| 0.6424        | 0.3528 | 800  | 0.6382          |
| 0.6382        | 0.4410 | 1000 | 0.6358          |
| 0.6342        | 0.5292 | 1200 | 0.6340          |
| 0.6355        | 0.6174 | 1400 | 0.6327          |
| 0.6355        | 0.7055 | 1600 | 0.6315          |
| 0.6336        | 0.7937 | 1800 | 0.6307          |
| 0.6321        | 0.8819 | 2000 | 0.6298          |
| 0.6321        | 0.9701 | 2200 | 0.6291          |
| 0.6298        | 1.0583 | 2400 | 0.6286          |
| 0.6285        | 1.1465 | 2600 | 0.6280          |
| 0.6280        | 1.2347 | 2800 | 0.6275          |
| 0.6282        | 1.3229 | 3000 | 0.6271          |
| 0.6278        | 1.4111 | 3200 | 0.6267          |
| 0.6257        | 1.4993 | 3400 | 0.6264          |
| 0.6276        | 1.5875 | 3600 | 0.6260          |
| 0.6253        | 1.6757 | 3800 | 0.6256          |
| 0.6253        | 1.7639 | 4000 | 0.6253          |
| 0.6242        | 1.8521 | 4200 | 0.6250          |
| 0.6252        | 1.9402 | 4400 | 0.6247          |
| 0.6239        | 2.0284 | 4600 | 0.6246          |
| 0.6222        | 2.1166 | 4800 | 0.6244          |
| 0.6226        | 2.2048 | 5000 | 0.6242          |
| 0.6219        | 2.2930 | 5200 | 0.6241          |
| 0.6227        | 2.3812 | 5400 | 0.6240          |
| 0.6195        | 2.4694 | 5600 | 0.6239          |
| 0.6219        | 2.5576 | 5800 | 0.6237          |
| 0.6221        | 2.6458 | 6000 | 0.6236          |
| 0.6238        | 2.7340 | 6200 | 0.6235          |
| 0.6210        | 2.8222 | 6400 | 0.6234          |
| 0.6210        | 2.9104 | 6600 | 0.6234          |
| 0.6222        | 2.9986 | 6800 | 0.6233          |
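
The card does not record the LoRA adapter configuration itself (rank, alpha, dropout, target modules). Purely as a hypothetical illustration of how such an adapter is typically configured with PEFT, not a statement of what was actually used:

```python
from peft import LoraConfig

# Hypothetical values: the actual rank, alpha, dropout, and target modules
# used for this adapter are not documented in the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention projection names
    task_type="CAUSAL_LM",
)
```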

Framework versions

  • PEFT 0.12.0
  • Transformers 4.43.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1