# gemma-7b_alpaca-clean_l0.0002_32-8-8-8-8

This model is a fine-tuned version of google/gemma-7b. The training dataset is not documented in this card, although the model name suggests alpaca-clean. It achieves the following results on the evaluation set:

- Loss: 2.3532 (equivalent to a perplexity of exp(2.3532) ≈ 10.5)
## Model description

More information needed
## Intended uses & limitations

More information needed
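The card does not specify intended uses. As a minimal, unofficial sketch of how a PEFT checkpoint like this is typically loaded for inference: the adapter repository id is taken from this card, while the dtype, device placement, Alpaca-style prompt format, and generation settings are illustrative assumptions.

```python
# Minimal sketch (assumptions noted in comments): load the PEFT adapter on top
# of the google/gemma-7b base model and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-7b"
adapter_id = "alexander-hm/gemma-7b_alpaca-clean_l0.0002_32-8-8-8-8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption; the card does not state a dtype
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach adapter weights
model.eval()

# Alpaca-style prompt format is an assumption based on the model name.
prompt = "### Instruction:\nSummarize what a LoRA adapter does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```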
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10000
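
As a point of reference, the list above maps onto transformers `TrainingArguments` roughly as sketched below. Only values stated on the card are set; the dataset pipeline, model preparation, and the LoRA configuration (the `32-8-8-8-8` suffix in the model name is not documented here) are omitted.

```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments.
# Everything not listed on the card is left at library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-7b_alpaca-clean_l0.0002_32-8-8-8-8",  # assumed output name
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # effective (total) train batch size: 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # recorded on the card; a constant schedule ignores warmup
    max_steps=10_000,
    optim="adamw_torch",  # Adam(W) with betas=(0.9, 0.999), eps=1e-8 by default
)
```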
### Training results

Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.1466 | 0.0003 | 1 | 2.9296 |
2.3939 | 0.0590 | 187 | 2.0299 |
1.5836 | 0.1179 | 374 | 1.9817 |
1.3175 | 0.1769 | 561 | 1.9971 |
2.3523 | 0.2359 | 748 | 2.1284 |
2.1619 | 0.2949 | 935 | 2.0387 |
1.6171 | 0.3538 | 1122 | 2.0279 |
1.4277 | 0.4128 | 1309 | 2.0409 |
2.4139 | 0.4718 | 1496 | 2.1133 |
1.9443 | 0.5307 | 1683 | 2.0527 |
1.6019 | 0.5897 | 1870 | 2.0364 |
1.4827 | 0.6487 | 2057 | 2.0594 |
2.8911 | 0.7077 | 2244 | 2.1275 |
2.0816 | 0.7666 | 2431 | 2.0610 |
1.4489 | 0.8256 | 2618 | 2.0408 |
1.4928 | 0.8846 | 2805 | 2.0754 |
2.8961 | 0.9436 | 2992 | 2.1001 |
1.1546 | 1.0025 | 3179 | 2.0630 |
2.9013 | 1.0615 | 3366 | 2.1973 |
1.6045 | 1.1205 | 3553 | 2.1100 |
1.4034 | 1.1794 | 3740 | 2.0966 |
1.2669 | 1.2384 | 3927 | 2.1248 |
2.3861 | 1.2974 | 4114 | 2.1561 |
1.3788 | 1.3564 | 4301 | 2.1148 |
1.2945 | 1.4153 | 4488 | 2.0826 |
1.4735 | 1.4743 | 4675 | 2.1314 |
2.4479 | 1.5333 | 4862 | 2.1525 |
1.6459 | 1.5922 | 5049 | 2.0996 |
1.2684 | 1.6512 | 5236 | 2.1000 |
1.4872 | 1.7102 | 5423 | 2.1830 |
2.395 | 1.7692 | 5610 | 2.1637 |
1.4388 | 1.8281 | 5797 | 2.1094 |
1.3474 | 1.8871 | 5984 | 2.0809 |
3.1565 | 1.9461 | 6171 | 2.2983 |
1.0147 | 2.0050 | 6358 | 2.1396 |
1.2512 | 2.0640 | 6545 | 2.2280 |
2.0456 | 2.1230 | 6732 | 2.3183 |
1.2343 | 2.1820 | 6919 | 2.2138 |
1.2557 | 2.2409 | 7106 | 2.1965 |
1.5886 | 2.2999 | 7293 | 2.2894 |
1.7202 | 2.3589 | 7480 | 2.2589 |
1.4203 | 2.4178 | 7667 | 2.1865 |
1.2808 | 2.4768 | 7854 | 2.1714 |
1.6484 | 2.5358 | 8041 | 2.3307 |
2.126 | 2.5948 | 8228 | 2.2027 |
1.3388 | 2.6537 | 8415 | 2.2036 |
1.1905 | 2.7127 | 8602 | 2.1663 |
2.423 | 2.7717 | 8789 | 2.2298 |
1.8212 | 2.8307 | 8976 | 2.1926 |
1.3779 | 2.8896 | 9163 | 2.1547 |
1.3081 | 2.9486 | 9350 | 2.1397 |
0.9656 | 3.0076 | 9537 | 2.2575 |
1.0841 | 3.0665 | 9724 | 2.2428 |
1.975 | 3.1255 | 9911 | 2.4409 |
### Framework versions

- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1