---
base_model: deepseek-ai/deepseek-coder-1.3b-base
datasets:
  - generator
library_name: peft
license: other
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: sft_small
    results: []
---


# sft_small

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 3.9430
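For intuition, the reported loss can be converted into a perplexity, assuming it is the mean per-token cross-entropy (the Hugging Face `Trainer` default):

```python
import math

# Assumes the eval loss above is mean per-token cross-entropy,
# as the Hugging Face Trainer reports by default.
eval_loss = 3.9430
perplexity = math.exp(eval_loss)
print(f"perplexity ~= {perplexity:.1f}")  # roughly 51.6
```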

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
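The list above can be expressed as a plain dictionary keyed by the corresponding `transformers.TrainingArguments` field names (a sketch for reference only; the original training script is not included in this card):

```python
# Hyperparameters from the list above, mapped to their
# transformers.TrainingArguments field names. This is a sketch,
# not the original training script.
training_args = {
    "learning_rate": 1.41e-05,
    "per_device_train_batch_size": 1,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 1,
}
```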

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.437         | 0.1   | 1    | 3.9886          |
| 4.2418        | 0.2   | 2    | 3.9798          |
| 4.3555        | 0.3   | 3    | 3.9724          |
| 3.1845        | 0.4   | 4    | 3.9651          |
| 3.0337        | 0.5   | 5    | 3.9591          |
| 4.6927        | 0.6   | 6    | 3.9539          |
| 4.5557        | 0.7   | 7    | 3.9497          |
| 3.9312        | 0.8   | 8    | 3.9464          |
| 4.0857        | 0.9   | 9    | 3.9441          |
| 4.1279        | 1.0   | 10   | 3.9430          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1