genProj_mistral-7B

This model is a PEFT adapter fine-tuned from mistralai/Mistral-7B-v0.1 (the training dataset is not specified in this card). It achieves the following results on the evaluation set:

  • Loss: 0.6692

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.00025
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 5
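
For reference, these hyperparameters map onto a Hugging Face TrainingArguments configuration roughly as follows. This is a hedged sketch rather than the exact training script: the output_dir and the 100-step evaluation/logging cadence (inferred from the results table below) are assumptions, and the Adam settings listed above correspond to the Trainer defaults.

```python
# Sketch of a TrainingArguments setup matching the listed hyperparameters.
# output_dir and the eval/logging cadence are assumptions, not stated in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="genProj_mistral-7B",   # assumed; not stated in the card
    learning_rate=2.5e-4,              # 0.00025
    per_device_train_batch_size=4,     # train_batch_size
    per_device_eval_batch_size=8,      # eval_batch_size
    seed=42,
    lr_scheduler_type="constant",      # as listed; warmup_ratio reproduced below
    warmup_ratio=0.03,
    num_train_epochs=5,
    evaluation_strategy="steps",       # assumed from the 100-step cadence in the results table
    eval_steps=100,
    logging_steps=100,
)
```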

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0215        | 0.0634 | 100  | 1.2005          |
| 0.9657        | 0.1267 | 200  | 0.9706          |
| 0.8585        | 0.1901 | 300  | 0.9458          |
| 0.842         | 0.2535 | 400  | 0.8757          |
| 0.8587        | 0.3169 | 500  | 0.8758          |
| 0.8501        | 0.3802 | 600  | 0.7813          |
| 0.8453        | 0.4436 | 700  | 0.9362          |
| 0.789         | 0.5070 | 800  | 0.7702          |
| 0.7213        | 0.5703 | 900  | 0.7641          |
| 0.7335        | 0.6337 | 1000 | 0.7707          |
| 0.6989        | 0.6971 | 1100 | 0.7480          |
| 0.7739        | 0.7605 | 1200 | 0.7210          |
| 0.6785        | 0.8238 | 1300 | 0.7150          |
| 0.7198        | 0.8872 | 1400 | 0.7126          |
| 0.7726        | 0.9506 | 1500 | 0.6919          |
| 0.495         | 1.0139 | 1600 | 0.6501          |
| 0.5318        | 1.0773 | 1700 | 0.6376          |
| 0.5124        | 1.1407 | 1800 | 0.6373          |
| 0.5404        | 1.2041 | 1900 | 0.6379          |
| 0.5133        | 1.2674 | 2000 | 0.6633          |
| 0.516         | 1.3308 | 2100 | 0.6579          |
| 0.5092        | 1.3942 | 2200 | 0.6525          |
| 0.6288        | 1.4575 | 2300 | 0.6438          |
| 0.4935        | 1.5209 | 2400 | 0.6255          |
| 0.5334        | 1.5843 | 2500 | 0.6246          |
| 0.6021        | 1.6477 | 2600 | 0.6102          |
| 0.5625        | 1.7110 | 2700 | 0.6191          |
| 0.5425        | 1.7744 | 2800 | 0.6301          |
| 0.5302        | 1.8378 | 2900 | 0.6058          |
| 0.5443        | 1.9011 | 3000 | 0.6218          |
| 0.5023        | 1.9645 | 3100 | 0.6129          |
| 0.3902        | 2.0279 | 3200 | 0.6478          |
| 0.4046        | 2.0913 | 3300 | 0.6345          |
| 0.424         | 2.1546 | 3400 | 0.6489          |
| 0.47          | 2.2180 | 3500 | 0.6729          |
| 0.419         | 2.2814 | 3600 | 0.6524          |
| 0.433         | 2.3447 | 3700 | 0.6450          |
| 0.3993        | 2.4081 | 3800 | 0.6598          |
| 0.469         | 2.4715 | 3900 | 0.6608          |
| 0.4909        | 2.5349 | 4000 | 0.6856          |
| 0.4797        | 2.5982 | 4100 | 0.6924          |
| 0.4186        | 2.6616 | 4200 | 0.6857          |
| 0.5057        | 2.7250 | 4300 | 0.6717          |
| 0.4601        | 2.7883 | 4400 | 0.6723          |
| 0.4862        | 2.8517 | 4500 | 0.7063          |
| 0.4926        | 2.9151 | 4600 | 0.6399          |
| 0.4886        | 2.9785 | 4700 | 0.6538          |
| 0.4015        | 3.0418 | 4800 | 0.6485          |
| 0.3844        | 3.1052 | 4900 | 0.6756          |
| 0.4271        | 3.1686 | 5000 | 0.6801          |
| 0.4245        | 3.2319 | 5100 | 0.6789          |
| 0.4393        | 3.2953 | 5200 | 0.6881          |
| 0.436         | 3.3587 | 5300 | 0.6710          |
| 0.4717        | 3.4221 | 5400 | 0.6746          |
| 0.4221        | 3.4854 | 5500 | 0.7194          |
| 0.487         | 3.5488 | 5600 | 0.6693          |
| 0.4547        | 3.6122 | 5700 | 0.6742          |
| 0.4949        | 3.6755 | 5800 | 0.6795          |
| 0.4865        | 3.7389 | 5900 | 0.7108          |
| 0.5139        | 3.8023 | 6000 | 0.6612          |
| 0.4512        | 3.8657 | 6100 | 0.6799          |
| 0.5094        | 3.9290 | 6200 | 0.6759          |
| 0.4989        | 3.9924 | 6300 | 0.6649          |
| 0.3635        | 4.0558 | 6400 | 0.6683          |
| 0.3599        | 4.1191 | 6500 | 0.6765          |
| 0.3789        | 4.1825 | 6600 | 0.7041          |
| 0.3897        | 4.2459 | 6700 | 0.6771          |
| 0.3753        | 4.3093 | 6800 | 0.6831          |
| 0.389         | 4.3726 | 6900 | 0.6954          |
| 0.4111        | 4.4360 | 7000 | 0.7050          |
| 0.4016        | 4.4994 | 7100 | 0.6762          |
| 0.3798        | 4.5627 | 7200 | 0.7055          |
| 0.4121        | 4.6261 | 7300 | 0.6707          |
| 0.4127        | 4.6895 | 7400 | 0.6976          |
| 0.4523        | 4.7529 | 7500 | 0.6520          |
| 0.4035        | 4.8162 | 7600 | 0.7363          |
| 0.4152        | 4.8796 | 7700 | 0.7270          |
| 0.4615        | 4.9430 | 7800 | 0.6692          |

Framework versions

  • PEFT 0.11.0
  • Transformers 4.40.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
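
To run inference, load the base model and attach this adapter with PEFT. A minimal sketch, assuming the adapter at codewizardUV/genProj_mistral-7B applies cleanly to the stated base model; the dtype, device placement, and example prompt are illustrative choices, not part of this card:

```python
# Minimal sketch: load mistralai/Mistral-7B-v0.1 and attach this PEFT adapter.
# dtype/device settings and the prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,  # illustrative; pick what fits your hardware
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "codewizardUV/genProj_mistral-7B")

# Simple generation check.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```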