
whisper3

This model is a fine-tuned version of openai/whisper-tiny.en on the tiny dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 0.5509
  • WER: 26.9488 (word error rate, in percent)
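
Since the card does not yet include a usage snippet, here is a minimal inference sketch using the transformers pipeline API. The checkpoint id khaingsmon/whisper3 comes from the model tree at the bottom of this card; the audio path is a placeholder.

```python
# Minimal inference sketch for this checkpoint.
# "sample.wav" is a placeholder path, not a file shipped with this repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="khaingsmon/whisper3",
)

print(asr("sample.wav")["text"])
```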

Model description

This is a fine-tuned checkpoint of openai/whisper-tiny.en (roughly 37.8M parameters, stored as float32 safetensors). More information needed.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 300
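
As a reference, here is a hedged sketch of how the values above map onto transformers' Seq2SeqTrainingArguments. The output_dir is a placeholder, and the Adam betas/epsilon listed above are the library defaults, so they are left implicit.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir is a placeholder; Adam betas=(0.9, 0.999) and epsilon=1e-08
# are the transformers defaults, so no explicit optimizer arguments are needed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper3",            # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=300,
)
```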

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.8281        | 0.2778 | 10   | 3.7929          | 80.4009 |
| 3.2090        | 0.5556 | 20   | 3.0014          | 68.3742 |
| 2.1066        | 0.8333 | 30   | 1.7613          | 63.9198 |
| 0.9963        | 1.1111 | 40   | 0.8741          | 52.4340 |
| 0.6922        | 1.3889 | 50   | 0.7009          | 35.8256 |
| 0.5816        | 1.6667 | 60   | 0.6238          | 31.1486 |
| 0.5684        | 1.9444 | 70   | 0.5698          | 35.4757 |
| 0.4270        | 2.2222 | 80   | 0.5380          | 27.2669 |
| 0.4395        | 2.5000 | 90   | 0.5162          | 32.7394 |
| 0.3861        | 2.7778 | 100  | 0.4953          | 24.5307 |
| 0.3745        | 3.0556 | 110  | 0.4837          | 24.6262 |
| 0.2487        | 3.3333 | 120  | 0.4733          | 23.5762 |
| 0.2343        | 3.6111 | 130  | 0.4652          | 24.9443 |
| 0.2429        | 3.8889 | 140  | 0.4581          | 24.0853 |
| 0.1286        | 4.1667 | 150  | 0.4673          | 24.2762 |
| 0.1304        | 4.4444 | 160  | 0.4698          | 31.7213 |
| 0.1361        | 4.7222 | 170  | 0.4690          | 33.0894 |
| 0.1447        | 5.0000 | 180  | 0.4812          | 24.6580 |
| 0.0617        | 5.2778 | 190  | 0.4871          | 29.9395 |
| 0.0617        | 5.5556 | 200  | 0.4884          | 24.8489 |
| 0.0577        | 5.8333 | 210  | 0.4998          | 26.8533 |
| 0.0380        | 6.1111 | 220  | 0.5007          | 24.8489 |
| 0.0269        | 6.3889 | 230  | 0.5123          | 27.1397 |
| 0.0321        | 6.6667 | 240  | 0.5005          | 23.3535 |
| 0.0296        | 6.9444 | 250  | 0.5332          | 31.8804 |
| 0.0207        | 7.2222 | 260  | 0.5237          | 30.0668 |
| 0.0215        | 7.5000 | 270  | 0.5223          | 25.5488 |
| 0.0198        | 7.7778 | 280  | 0.5157          | 30.1941 |
| 0.0273        | 8.0556 | 290  | 0.5290          | 27.5533 |
| 0.0197        | 8.3333 | 300  | 0.5509          | 26.9488 |
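
The WER column above is a percentage. As an illustration of how such values are typically computed, here is a sketch using the evaluate library; the reference and prediction strings are invented examples, not drawn from this model's evaluation set.

```python
# Illustrative WER computation with the evaluate library.
# The strings below are made-up examples, not this model's eval data.
import evaluate

wer_metric = evaluate.load("wer")

references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jump over the lazy dog"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")  # expressed as a percentage, as in the table above
```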

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1.dev0
  • Tokenizers 0.19.1

Model tree for khaingsmon/whisper3

Fine-tuned from openai/whisper-tiny.en