---
base_model: google/gemma-2-9b
library_name: peft
license: gemma
tags:
  - unsloth
  - generated_from_trainer
model-index:
  - name: gemma-2-9b_pct_ortho_r32
    results: []
---

# gemma-2-9b_pct_ortho_r32

This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 9.3620
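
Because `library_name` is `peft`, this repository holds adapter weights rather than a full checkpoint, so they must be attached to the base model at load time. Below is a minimal loading sketch; the repo id `imdatta0/gemma-2-9b_pct_ortho_r32` is inferred from the model name and should be replaced with the adapter's actual Hub location.

```python
# Minimal loading sketch; the adapter repo id below is an assumption
# inferred from the model name (adjust to the actual location).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; use what your hardware supports
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

# Attach the PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "imdatta0/gemma-2-9b_pct_ortho_r32")
model.eval()
```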

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
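
These settings map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction rather than the original training script: `output_dir` is a placeholder, the dataset/model wiring is omitted, and "Adam with betas=(0.9,0.999) and epsilon=1e-08" corresponds to the default AdamW optimizer in this stack.

```python
# Hedged reconstruction of the hyperparameters above; dataset, model, and
# Trainer wiring are omitted, and output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-2-9b_pct_ortho_r32",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=64,  # 1 device x batch 1 x 64 steps = 64 effective
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
    # betas=(0.9, 0.999) and eps=1e-8 are the defaults of the AdamW optimizer used here
)
```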

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3352        | 0.0206 | 8    | 2.9038          |
| 11.3417       | 0.0412 | 16   | 11.9083         |
| 11.9918       | 0.0618 | 24   | 11.9774         |
| 11.9549       | 0.0824 | 32   | 11.9675         |
| 11.974        | 0.1030 | 40   | 11.9736         |
| 11.9403       | 0.1236 | 48   | 11.9468         |
| 11.9321       | 0.1442 | 56   | 11.8809         |
| 11.876        | 0.1648 | 64   | 11.8218         |
| 11.7886       | 0.1854 | 72   | 11.7345         |
| 11.6471       | 0.2060 | 80   | 11.6236         |
| 11.5982       | 0.2266 | 88   | 11.3718         |
| 11.7088       | 0.2472 | 96   | 11.6792         |
| 11.7296       | 0.2678 | 104  | 11.6883         |
| 11.6508       | 0.2885 | 112  | 11.4420         |
| 10.7655       | 0.3091 | 120  | 8.8174          |
| 8.5075        | 0.3297 | 128  | 9.0568          |
| 8.912         | 0.3503 | 136  | 9.4162          |
| 11.0052       | 0.3709 | 144  | 10.3473         |
| 9.103         | 0.3915 | 152  | 9.6451          |
| 8.9631        | 0.4121 | 160  | 8.6492          |
| 9.9634        | 0.4327 | 168  | 9.4401          |
| 9.814         | 0.4533 | 176  | 10.4748         |
| 10.507        | 0.4739 | 184  | 10.1910         |
| 9.6613        | 0.4945 | 192  | 9.2201          |
| 9.0448        | 0.5151 | 200  | 10.3913         |
| 9.4984        | 0.5357 | 208  | 8.5434          |
| 7.4393        | 0.5563 | 216  | 8.4350          |
| 10.0883       | 0.5769 | 224  | 10.2584         |
| 10.7162       | 0.5975 | 232  | 10.6899         |
| 10.4785       | 0.6181 | 240  | 10.4417         |
| 10.023        | 0.6387 | 248  | 9.6244          |
| 9.2272        | 0.6593 | 256  | 8.9308          |
| 9.1518        | 0.6799 | 264  | 9.2269          |
| 9.1733        | 0.7005 | 272  | 9.2434          |
| 9.3347        | 0.7211 | 280  | 9.2831          |
| 9.468         | 0.7417 | 288  | 9.1046          |
| 8.9402        | 0.7623 | 296  | 9.0102          |
| 9.1051        | 0.7829 | 304  | 9.2617          |
| 9.2223        | 0.8035 | 312  | 9.3921          |
| 9.3359        | 0.8241 | 320  | 9.3277          |
| 9.1508        | 0.8447 | 328  | 9.2755          |
| 9.5364        | 0.8654 | 336  | 9.3031          |
| 9.4429        | 0.8860 | 344  | 9.3229          |
| 9.3958        | 0.9066 | 352  | 9.3408          |
| 9.3778        | 0.9272 | 360  | 9.3577          |
| 9.1859        | 0.9478 | 368  | 9.3607          |
| 9.4256        | 0.9684 | 376  | 9.3622          |
| 9.3454        | 0.9890 | 384  | 9.3620          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1