---
language:
  - en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-tiny
    results: []
---

# openai/whisper-tiny

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the [pphuc25/EngMed](https://huggingface.co/datasets/pphuc25/EngMed) dataset. It achieves the following results on the evaluation set:

- Loss: 3.4853
- Wer: 66.7045
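For context on the metric above: word error rate (WER) is the word-level edit distance between the model's transcript and the reference, divided by the number of reference words (here reported as a percentage, so 66.7 means roughly two errors for every three reference words). A minimal sketch of the computation, using a plain dynamic-programming edit distance rather than any particular evaluation library:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substitution ("has" -> "had") and one deletion ("a") over a
# 5-word reference: WER = 100 * 2 / 5 = 40.0
print(wer("the patient has a fever", "the patient had fever"))  # 40.0
```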

## Model description

More information needed

## Intended uses & limitations

More information needed
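The checkpoint can be used for English speech transcription like any Whisper model on the Hub. A sketch of inference via the `transformers` pipeline API; the repository id below is a placeholder (this card does not state the checkpoint's id), so substitute the actual repo name:

```python
from transformers import pipeline

# "your-username/whisper-tiny-engmed" is a placeholder repository id;
# replace it with the actual fine-tuned checkpoint's id on the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-engmed",
)

# Transcribe a local audio file (path is illustrative).
result = asr("sample_recording.wav")
print(result["text"])
```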

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
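The linear scheduler above ramps the learning rate from 0 to its peak over the 100 warmup steps, then decays it linearly to 0 by the final step (50,120 total, i.e. 20 epochs of 2,506 steps each, per the results table below). A pure-Python sketch of that schedule, mirroring the standard warmup-then-linear-decay formula:

```python
def linear_schedule_lr(step: int,
                       base_lr: float = 1e-4,
                       warmup_steps: int = 100,
                       total_steps: int = 50_120) -> float:
    """LR under linear warmup followed by linear decay to zero,
    matching lr_scheduler_type=linear with 100 warmup steps and the
    total step count implied by the results table (20 x 2506)."""
    if step < warmup_steps:
        # Ramp up from 0 to base_lr over the warmup window.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr (at warmup end) to 0 (at total_steps).
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))

print(linear_schedule_lr(50))      # mid-warmup: 5e-05
print(linear_schedule_lr(100))     # peak: 0.0001
print(linear_schedule_lr(50_120))  # end of training: 0.0
```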

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.2768        | 1.0   | 2506  | 1.8614          | 66.7187 |
| 0.9077        | 2.0   | 5012  | 1.9243          | 66.6383 |
| 0.6044        | 3.0   | 7518  | 2.0816          | 71.3474 |
| 0.4112        | 4.0   | 10024 | 2.2672          | 69.5064 |
| 0.2788        | 5.0   | 12530 | 2.4746          | 65.6728 |
| 0.1736        | 6.0   | 15036 | 2.6423          | 68.1196 |
| 0.096         | 7.0   | 17542 | 2.7603          | 66.7897 |
| 0.0632        | 8.0   | 20048 | 2.9008          | 68.4746 |
| 0.046         | 9.0   | 22554 | 3.0145          | 69.8850 |
| 0.0338        | 10.0  | 25060 | 3.0977          | 66.8749 |
| 0.0203        | 11.0  | 27566 | 3.1614          | 67.5186 |
| 0.0207        | 12.0  | 30072 | 3.2117          | 65.2847 |
| 0.011         | 13.0  | 32578 | 3.3028          | 66.4253 |
| 0.007         | 14.0  | 35084 | 3.3854          | 68.1102 |
| 0.0071        | 15.0  | 37590 | 3.3962          | 66.8702 |
| 0.0041        | 16.0  | 40096 | 3.4312          | 66.8323 |
| 0.0043        | 17.0  | 42602 | 3.4244          | 66.5294 |
| 0.0036        | 18.0  | 45108 | 3.4340          | 66.8512 |
| 0.0019        | 19.0  | 47614 | 3.4810          | 67.6558 |
| 0.0003        | 20.0  | 50120 | 3.4853          | 66.7045 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1