# gemma-7b_alpaca-clean_l0.0002_32-32

This model is a fine-tuned version of google/gemma-7b on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.2862
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10000
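
The training script itself is not included in this repository. As a rough sketch, here is how the hyperparameters above would map onto `transformers.TrainingArguments` with the standard `Trainer` API; the output directory, optimizer string, and anything not listed above are assumptions, not documented settings.

```python
# Sketch only: reconstructs the documented hyperparameters. Everything else
# (output_dir, optimizer implementation, dataset handling) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-7b_alpaca-clean_l0.0002_32-32",  # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # 1 x 16 = total_train_batch_size of 16
    max_steps=10_000,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,               # note: a "constant" scheduler applies no warmup
    seed=0,
    optim="adamw_torch",             # Adam-style, betas=(0.9, 0.999), eps=1e-8
)
```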
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1466        | 0.0003 | 1    | 2.6494          |
| 2.2743        | 0.0590 | 187  | 1.8818          |
| 1.4643        | 0.1179 | 374  | 1.8742          |
| 1.1845        | 0.1769 | 561  | 1.9893          |
| 2.2425        | 0.2359 | 748  | 1.9409          |
| 1.9557        | 0.2949 | 935  | 1.8788          |
| 1.448         | 0.3538 | 1122 | 1.8471          |
| 1.2879        | 0.4128 | 1309 | 1.8804          |
| 2.2375        | 0.4718 | 1496 | 1.8555          |
| 1.778         | 0.5307 | 1683 | 1.8499          |
| 1.4082        | 0.5897 | 1870 | 1.8627          |
| 1.3452        | 0.6487 | 2057 | 1.8682          |
| 2.8115        | 0.7077 | 2244 | 1.8803          |
| 1.8711        | 0.7666 | 2431 | 1.8475          |
| 1.2821        | 0.8256 | 2618 | 1.8560          |
| 1.2943        | 0.8846 | 2805 | 1.8666          |
| 2.535         | 0.9436 | 2992 | 1.8579          |
| 0.9723        | 1.0025 | 3179 | 1.8711          |
| 2.962         | 1.0615 | 3366 | 1.9227          |
| 1.3686        | 1.1205 | 3553 | 1.9320          |
| 1.1434        | 1.1794 | 3740 | 1.9103          |
| 1.0128        | 1.2384 | 3927 | 1.9004          |
| 2.2098        | 1.2974 | 4114 | 1.9571          |
| 1.0847        | 1.3564 | 4301 | 1.9256          |
| 1.0635        | 1.4153 | 4488 | 1.9156          |
| 1.242         | 1.4743 | 4675 | 1.9359          |
| 2.2656        | 1.5333 | 4862 | 1.9373          |
| 1.4033        | 1.5922 | 5049 | 1.9102          |
| 1.066         | 1.6512 | 5236 | 1.9053          |
| 1.214         | 1.7102 | 5423 | 1.9475          |
| 2.0875        | 1.7692 | 5610 | 1.9373          |
| 1.1555        | 1.8281 | 5797 | 1.9202          |
| 1.0816        | 1.8871 | 5984 | 1.9039          |
| 2.9213        | 1.9461 | 6171 | 1.9437          |
| 0.7327        | 2.0050 | 6358 | 1.9802          |
| 0.9288        | 2.0640 | 6545 | 2.1237          |
| 1.4847        | 2.1230 | 6732 | 2.2272          |
| 0.8673        | 2.1820 | 6919 | 2.0954          |
| 0.8972        | 2.2409 | 7106 | 2.0114          |
| 1.171         | 2.2999 | 7293 | 2.2171          |
| 1.3381        | 2.3589 | 7480 | 2.1423          |
| 1.0032        | 2.4178 | 7667 | 2.0822          |
| 0.8967        | 2.4768 | 7854 | 1.9955          |
| 1.3569        | 2.5358 | 8041 | 2.1730          |
| 1.552         | 2.5948 | 8228 | 2.0954          |
| 0.9403        | 2.6537 | 8415 | 2.0874          |
| 0.8441        | 2.7127 | 8602 | 1.9917          |
| 2.0487        | 2.7717 | 8789 | 2.1445          |
| 1.3355        | 2.8307 | 8976 | 2.0624          |
| 0.9621        | 2.8896 | 9163 | 2.0430          |
| 0.9307        | 2.9486 | 9350 | 2.0186          |
| 0.6211        | 3.0076 | 9537 | 2.2474          |
| 0.6472        | 3.0665 | 9724 | 2.1474          |
| 1.5749        | 3.1255 | 9911 | 2.3950          |
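
Validation loss bottoms out around 1.85 within the first epoch and trends upward through epochs 2 and 3 (ending at 2.3950), so checkpoints from early in training may generalize better than the final one.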
### Framework versions

- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
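
To use the adapter, load the base model and apply the PEFT weights on top. A minimal sketch, assuming this repository hosts a standard PEFT (LoRA) adapter for google/gemma-7b:

```python
# Loading sketch (not from the original card): assumes a standard PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "alexander-hm/gemma-7b_alpaca-clean_l0.0002_32-32"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# The prompt format is an assumption based on the Alpaca-style dataset name.
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The "32-32" suffix in the model name likely refers to the LoRA rank and alpha, but this is not documented in the card.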