tinyllama_magiccoder_default

This model is a fine-tuned version of TinyLlama/TinyLlama_v1.1 (trained as a PEFT adapter; see Framework versions below) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4775
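For language models, mean cross-entropy loss converts directly to perplexity via exp(loss). A quick check using the figure reported above:

```python
import math

# Evaluation cross-entropy loss reported in this card
eval_loss = 1.4775

# Perplexity is the exponential of the mean cross-entropy loss
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 4.38
```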

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 1
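The schedule implied by these hyperparameters can be sketched in plain Python. The total step count (~153) is inferred from the training log below and is an assumption, not a value stated in the card:

```python
import math

def lr_at_step(step, total_steps=153, base_lr=1e-4, warmup_ratio=0.02):
    """Cosine schedule with linear warmup, mirroring the settings above.

    total_steps is inferred from the training log (last logged step: 152)
    and is an assumption, not a documented value.
    """
    warmup_steps = max(1, int(warmup_ratio * total_steps))  # ~3 steps here
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr
        return base_lr * step / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Effective batch size: per-device batch * gradient accumulation steps
effective_batch = 8 * 8  # = 64, matching total_train_batch_size above
```

Note that total_train_batch_size (64) is not an independent setting: it is the product of train_batch_size (8) and gradient_accumulation_steps (8).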

Training results

Training Loss Epoch Step Validation Loss
1.8283 0.0262 4 1.9099
1.8156 0.0523 8 1.8872
1.7063 0.0785 12 1.8112
1.5910 0.1047 16 1.6729
1.5878 0.1308 20 1.6188
1.5204 0.1570 24 1.6055
1.5278 0.1832 28 1.6151
1.6098 0.2093 32 1.6174
1.5112 0.2355 36 1.5811
1.6158 0.2617 40 1.5749
1.5373 0.2878 44 1.5431
1.5924 0.3140 48 1.5410
1.5528 0.3401 52 1.5142
1.5049 0.3663 56 1.5183
1.5983 0.3925 60 1.5109
1.5452 0.4186 64 1.5045
1.4746 0.4448 68 1.4973
1.4949 0.4710 72 1.4907
1.4510 0.4971 76 1.4963
1.5701 0.5233 80 1.4952
1.5791 0.5495 84 1.4858
1.4840 0.5756 88 1.4869
1.4175 0.6018 92 1.4846
1.4127 0.6280 96 1.4826
1.4919 0.6541 100 1.4814
1.4907 0.6803 104 1.4830
1.4656 0.7065 108 1.4812
1.4957 0.7326 112 1.4795
1.4742 0.7588 116 1.4785
1.4694 0.7850 120 1.4759
1.5036 0.8111 124 1.4754
1.4752 0.8373 128 1.4762
1.3607 0.8635 132 1.4766
1.5251 0.8896 136 1.4768
1.3971 0.9158 140 1.4773
1.4457 0.9419 144 1.4771
1.4743 0.9681 148 1.4769
1.4915 0.9943 152 1.4775
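The validation-loss column above can be scanned programmatically; the minimum (1.4754 at step 124) falls slightly before the end of training, so the final checkpoint is not the best-scoring one:

```python
# (step, validation loss) pairs transcribed from the table above
log = [
    (4, 1.9099), (8, 1.8872), (12, 1.8112), (16, 1.6729), (20, 1.6188),
    (24, 1.6055), (28, 1.6151), (32, 1.6174), (36, 1.5811), (40, 1.5749),
    (44, 1.5431), (48, 1.5410), (52, 1.5142), (56, 1.5183), (60, 1.5109),
    (64, 1.5045), (68, 1.4973), (72, 1.4907), (76, 1.4963), (80, 1.4952),
    (84, 1.4858), (88, 1.4869), (92, 1.4846), (96, 1.4826), (100, 1.4814),
    (104, 1.4830), (108, 1.4812), (112, 1.4795), (116, 1.4785), (120, 1.4759),
    (124, 1.4754), (128, 1.4762), (132, 1.4766), (136, 1.4768), (140, 1.4773),
    (144, 1.4771), (148, 1.4769), (152, 1.4775),
]

# Find the checkpoint with the lowest validation loss
best_step, best_loss = min(log, key=lambda pair: pair[1])
print(best_step, best_loss)  # step 124, loss 1.4754
```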

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
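The PEFT entry above (together with the "Adapter" listing below) indicates this repo ships an adapter rather than full model weights. A minimal sketch of the LoRA-style update such adapters apply at merge time, with made-up shapes, rank, and alpha for illustration (this is the underlying math, not PEFT's actual API):

```python
import numpy as np

rng = np.random.default_rng(42)

d, r = 16, 4       # hidden size and adapter rank (illustrative values)
alpha = 8          # LoRA scaling hyperparameter (illustrative value)

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # trainable low-rank factor A
B = np.zeros((d, r))          # B starts at zero, so W' == W initially

# Merged weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

assert np.allclose(W_merged, W)  # zero-initialized B leaves the base intact
```

At inference time with PEFT, the adapter is loaded on top of the TinyLlama/TinyLlama_v1.1 base model rather than merged by hand.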

Model tree for imdatta0/tinyllama_magiccoder_default

  • Adapter of TinyLlama/TinyLlama_v1.1