---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-2b-it-laacks
  results: []
---

# gemma-2b-it-laacks

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:

- Loss: 2.0521
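
Because this repository contains a PEFT adapter rather than full model weights, it has to be loaded on top of the `google/gemma-2b` base model. Below is a minimal, hedged inference sketch; the adapter path is an assumption and should be replaced with the actual Hub repository or a local checkpoint directory.

```python
# Hedged inference sketch: apply this PEFT adapter to the google/gemma-2b base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"
adapter_id = "gemma-2b-it-laacks"  # assumption: replace with the real Hub path or local dir

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain what parameter-efficient fine-tuning is."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```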

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 2500
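
For reference, here is a minimal sketch of how these hyperparameters map onto a TRL `SFTTrainer` run. The placeholder dataset, LoRA settings, and sequence length are assumptions for illustration, not values taken from this card.

```python
# Hedged training sketch reproducing the hyperparameters listed above.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder dataset; the actual "generator" training data is not described on this card.
train_dataset = Dataset.from_dict({"text": ["Example instruction-style training text."]})

args = TrainingArguments(
    output_dir="gemma-2b-it-laacks",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective total train batch size of 16
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=2500,
)

# Illustrative LoRA settings; the adapter's actual PEFT configuration is not listed here.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
    peft_config=peft_config,
)
trainer.train()
```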

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1593        | 0.2908 | 100  | 2.9485          |
| 2.5908        | 0.5816 | 200  | 2.4415          |
| 2.3649        | 0.8724 | 300  | 2.3041          |
| 2.2468        | 1.1632 | 400  | 2.2207          |
| 2.1819        | 1.4540 | 500  | 2.1656          |
| 2.1336        | 1.7448 | 600  | 2.1341          |
| 2.1159        | 2.0356 | 700  | 2.1147          |
| 2.0967        | 2.3264 | 800  | 2.1016          |
| 2.0911        | 2.6172 | 900  | 2.0917          |
| 2.0663        | 2.9080 | 1000 | 2.0843          |
| 2.057         | 3.1988 | 1100 | 2.0781          |
| 2.0521        | 3.4896 | 1200 | 2.0732          |
| 2.0585        | 3.7804 | 1300 | 2.0691          |
| 2.0546        | 4.0712 | 1400 | 2.0659          |
| 2.048         | 4.3621 | 1500 | 2.0629          |
| 2.0428        | 4.6529 | 1600 | 2.0606          |
| 2.0339        | 4.9437 | 1700 | 2.0587          |
| 2.0295        | 5.2345 | 1800 | 2.0572          |
| 2.037         | 5.5253 | 1900 | 2.0558          |
| 2.0279        | 5.8161 | 2000 | 2.0545          |
| 2.0322        | 6.1069 | 2100 | 2.0535          |
| 2.0344        | 6.3977 | 2200 | 2.0528          |
| 2.0197        | 6.6885 | 2300 | 2.0525          |
| 2.0332        | 6.9793 | 2400 | 2.0521          |
| 2.0242        | 7.2701 | 2500 | 2.0521          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.19.0
- Tokenizers 0.19.1