# gemma-7b_alpaca-clean_l0.0002_64
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b). The training dataset is not recorded in the card metadata, although the repository name suggests a cleaned Alpaca instruction dataset. It achieves the following results on the evaluation set:
- Loss: 2.2948
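
Assuming this is the standard per-token cross-entropy loss, it corresponds to an evaluation perplexity of exp(2.2948) ≈ 9.9.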
## Model description
More information needed
## Intended uses & limitations
More information needed
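
The framework versions below indicate that this repository holds a PEFT adapter rather than full model weights (the `_64` suffix likely denotes the LoRA rank). A minimal inference sketch, assuming a standard LoRA adapter hosted under this repo id and an Alpaca-style prompt template (both are assumptions; the card does not document usage):

```python
# Minimal sketch: load the base model, attach this PEFT adapter, and generate.
# The dtype choice and the Alpaca-style prompt template are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b"
adapter_id = "alexander-hm/gemma-7b_alpaca-clean_l0.0002_64"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach fine-tuned weights
model.eval()

prompt = (
    "### Instruction:\nExplain gradient accumulation in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For standalone deployment, the adapter can be folded into the base weights with `model = model.merge_and_unload()`, at the cost of losing the small-adapter footprint.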
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10000
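
These settings map directly onto `transformers.TrainingArguments`; note that `total_train_batch_size` is a derived value (train_batch_size × gradient_accumulation_steps = 1 × 16 = 16), not a separate knob, and the listed Adam betas/epsilon are the optimizer defaults. A minimal sketch, assuming the run used the standard `Trainer` (the actual training script is not published):

```python
# Sketch reconstructing the run configuration from the hyperparameters above.
# Dataset and model wiring are omitted; only the listed values are filled in.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-7b_alpaca-clean_l0.0002_64",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # effective batch size: 1 * 16 = 16
    lr_scheduler_type="constant",    # note: a plain constant schedule ignores
    warmup_ratio=0.03,               # warmup_ratio; "constant_with_warmup" honors it
    max_steps=10000,
)
```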
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1466        | 0.0003 | 1    | 2.6456          |
| 2.2897        | 0.0590 | 187  | 1.8654          |
| 1.4546        | 0.1179 | 374  | 1.8646          |
| 1.1608        | 0.1769 | 561  | 1.8576          |
| 2.1812        | 0.2359 | 748  | 1.9192          |
| 1.9201        | 0.2949 | 935  | 1.8343          |
| 1.4152        | 0.3538 | 1122 | 1.8069          |
| 1.2418        | 0.4128 | 1309 | 1.8154          |
| 2.1073        | 0.4718 | 1496 | 1.7860          |
| 1.7073        | 0.5307 | 1683 | 1.7823          |
| 1.3343        | 0.5897 | 1870 | 1.7854          |
| 1.2886        | 0.6487 | 2057 | 1.8016          |
| 2.6903        | 0.7077 | 2244 | 1.7907          |
| 1.8329        | 0.7666 | 2431 | 1.7798          |
| 1.2104        | 0.8256 | 2618 | 1.7796          |
| 1.2292        | 0.8846 | 2805 | 1.7953          |
| 2.3654        | 0.9436 | 2992 | 1.7791          |
| 0.9446        | 1.0025 | 3179 | 1.7860          |
| 2.8000        | 1.0615 | 3366 | 1.8287          |
| 1.2969        | 1.1205 | 3553 | 1.8370          |
| 1.1514        | 1.1794 | 3740 | 1.8155          |
| 0.9721        | 1.2384 | 3927 | 1.8249          |
| 2.2142        | 1.2974 | 4114 | 1.8456          |
| 1.0431        | 1.3564 | 4301 | 1.8362          |
| 0.9909        | 1.4153 | 4488 | 1.8122          |
| 1.1657        | 1.4743 | 4675 | 1.8486          |
| 2.2592        | 1.5333 | 4862 | 1.8264          |
| 1.3508        | 1.5922 | 5049 | 1.8099          |
| 0.9854        | 1.6512 | 5236 | 1.8185          |
| 1.1068        | 1.7102 | 5423 | 1.8429          |
| 2.0478        | 1.7692 | 5610 | 1.8474          |
| 1.0497        | 1.8281 | 5797 | 1.8336          |
| 1.0125        | 1.8871 | 5984 | 1.8025          |
| 2.8623        | 1.9461 | 6171 | 1.8461          |
| 0.6782        | 2.0050 | 6358 | 1.8869          |
| 0.8107        | 2.0640 | 6545 | 2.0083          |
| 1.5273        | 2.1230 | 6732 | 2.0355          |
| 0.8085        | 2.1820 | 6919 | 1.9945          |
| 0.8308        | 2.2409 | 7106 | 1.9152          |
| 1.0441        | 2.2999 | 7293 | 2.1163          |
| 1.2238        | 2.3589 | 7480 | 2.0499          |
| 0.9381        | 2.4178 | 7667 | 2.0011          |
| 0.8260        | 2.4768 | 7854 | 1.9196          |
| 1.2730        | 2.5358 | 8041 | 2.0879          |
| 1.4716        | 2.5948 | 8228 | 1.9994          |
| 0.8203        | 2.6537 | 8415 | 1.9708          |
| 0.7646        | 2.7127 | 8602 | 1.9228          |
| 1.8707        | 2.7717 | 8789 | 2.0330          |
| 1.2341        | 2.8307 | 8976 | 1.9844          |
| 0.8499        | 2.8896 | 9163 | 1.9616          |
| 0.8405        | 2.9486 | 9350 | 1.9436          |
| 0.5003        | 3.0076 | 9537 | 2.1620          |
| 0.5453        | 3.0665 | 9724 | 2.1553          |
| 1.4864        | 3.1255 | 9911 | 2.3530          |
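
Validation loss reaches its minimum (≈1.779 at step 2992, near the end of epoch 1) and drifts upward through epochs 2 and 3, a pattern consistent with overfitting under the constant learning-rate schedule; the reported evaluation loss of 2.2948 reflects the end of the 10000-step run rather than the best checkpoint.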
### Framework versions
- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1