---
license: gemma
base_model: google/gemma-2-2b
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: collapse_gemma-2-2b_hs2_accumulatesubsample_iter2_sftsd1
    results: []
---

collapse_gemma-2-2b_hs2_accumulatesubsample_iter2_sftsd1

This model is a fine-tuned version of google/gemma-2-2b on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1158
  • Num Input Tokens Seen: 5428320
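
For quick experimentation, here is a minimal generation sketch. The Hub repo id is assumed from the model name and author shown on this card and may need adjusting to your actual path:

```python
# Minimal generation sketch. The repo id below is an assumption based on
# this card's title/author; replace it with your local path if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RylanSchaeffer/collapse_gemma-2-2b_hs2_accumulatesubsample_iter2_sftsd1"  # assumed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # Gemma-2 weights are commonly run in bf16
    device_map="auto",           # requires `accelerate`
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```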

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 8e-06
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 1
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 1
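
The exact trainer invocation is not included in this card; the sketch below maps the hyperparameters above onto `transformers.TrainingArguments` (model and dataset wiring are omitted, and the output directory name is assumed):

```python
# A configuration sketch reproducing the hyperparameters listed above.
# The output_dir is an assumption; model/dataset setup is not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="collapse_gemma-2-2b_hs2_accumulatesubsample_iter2_sftsd1",
    learning_rate=8e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=1,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 effective train batch size
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```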

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|---------------|--------|------|-----------------|-------------------|
| No log        | 0      | 0    | 1.3909          | 0                 |
| 1.5252        | 0.0528 | 5    | 1.2603          | 290304            |
| 1.2884        | 0.1056 | 10   | 1.1731          | 575792            |
| 1.2154        | 0.1584 | 15   | 1.1435          | 868776            |
| 1.1343        | 0.2112 | 20   | 1.1179          | 1155912           |
| 1.0689        | 0.2640 | 25   | 1.1161          | 1444904           |
| 1.1257        | 0.3168 | 30   | 1.1142          | 1735312           |
| 1.0078        | 0.3696 | 35   | 1.1146          | 2027704           |
| 1.0816        | 0.4224 | 40   | 1.1123          | 2321120           |
| 0.9497        | 0.4752 | 45   | 1.1184          | 2612344           |
| 1.0082        | 0.5281 | 50   | 1.1232          | 2907048           |
| 0.9434        | 0.5809 | 55   | 1.1215          | 3190304           |
| 0.8983        | 0.6337 | 60   | 1.1277          | 3476720           |
| 0.8889        | 0.6865 | 65   | 1.1243          | 3763904           |
| 0.8076        | 0.7393 | 70   | 1.1252          | 4049648           |
| 0.8421        | 0.7921 | 75   | 1.1192          | 4333672           |
| 0.8074        | 0.8449 | 80   | 1.1250          | 4624536           |
| 0.8398        | 0.8977 | 85   | 1.1192          | 4918568           |
| 0.8359        | 0.9505 | 90   | 1.1157          | 5198200           |
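
Validation loss plateaus around 1.11–1.13 after roughly 1.2M input tokens while training loss keeps falling. A quick plotting sketch using the table's values:

```python
# Plot validation loss against input tokens seen, using the values
# from the training-results table above.
import matplotlib.pyplot as plt

tokens_seen = [0, 290304, 575792, 868776, 1155912, 1444904, 1735312,
               2027704, 2321120, 2612344, 2907048, 3190304, 3476720,
               3763904, 4049648, 4333672, 4624536, 4918568, 5198200]
val_loss = [1.3909, 1.2603, 1.1731, 1.1435, 1.1179, 1.1161, 1.1142,
            1.1146, 1.1123, 1.1184, 1.1232, 1.1215, 1.1277, 1.1243,
            1.1252, 1.1192, 1.1250, 1.1192, 1.1157]

plt.plot(tokens_seen, val_loss, marker="o")
plt.xlabel("Input tokens seen")
plt.ylabel("Validation loss")
plt.title("collapse_gemma-2-2b iter2 sftsd1: eval loss vs. tokens seen")
plt.show()
```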

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
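
A quick sanity check that a local environment matches these pinned versions (assumes the packages are already installed):

```python
# Print installed versions next to the versions pinned in this card.
import datasets
import tokenizers
import torch
import transformers

for mod, expected in [(transformers, "4.44.0"), (torch, "2.4.0"),
                      (datasets, "2.20.0"), (tokenizers, "0.19.1")]:
    print(f"{mod.__name__}: installed {mod.__version__}, card pins {expected}")
```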