
whisper-tiny-fi

This model is a fine-tuned version of openai/whisper-tiny on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7638
  • WER: 309.7839
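
For reference, a minimal transcription sketch using the transformers automatic-speech-recognition pipeline; the repo id mmtg/whisper-tiny-fi is taken from this page, and the audio path is a placeholder. Given the final WER above 300%, transcriptions from this checkpoint are unlikely to be usable as-is.

```python
# Minimal inference sketch (assumes ffmpeg is available for audio decoding).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mmtg/whisper-tiny-fi",  # repo id as shown on this page
)

result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```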

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
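
As a placeholder until this section is filled in, a hedged sketch of loading Common Voice 11.0 with the datasets library; the Finnish ("fi") configuration is only an inference from the model name and the split choice is an assumption, neither confirmed by this card.

```python
# Hedged sketch: the "fi" configuration and the split are assumptions, not confirmed by the card.
from datasets import load_dataset, Audio

common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "fi",           # assumed language configuration (the model name ends in -fi)
    split="train",  # assumed split
)               # note: the dataset is gated on the Hub and may require accepting its terms

# Whisper models expect 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
print(common_voice)
```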

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
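
For readers reconstructing the setup, the values above map onto a Seq2SeqTrainingArguments configuration roughly as sketched below; this is inferred from the reported hyperparameters, not the original training script, and output_dir is a placeholder.

```python
# Hedged reconstruction of the reported hyperparameters; not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-fi",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam with betas=(0.9, 0.999) and eps=1e-8 matches the Trainer's default optimizer settings.
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                       # "Native AMP" mixed precision
)
```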

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.986 | 0.3690 | 100 | 1.5979 | 83.1116 |
| 0.755 | 0.7380 | 200 | 0.7632 | 82.2813 |
| 0.57 | 1.1070 | 300 | 0.7001 | 75.4128 |
| 0.517 | 1.4760 | 400 | 0.6558 | 76.1110 |
| 0.4948 | 1.8450 | 500 | 0.6328 | 71.7426 |
| 0.3598 | 2.2140 | 600 | 0.6191 | 69.7519 |
| 0.3708 | 2.5830 | 700 | 0.6093 | 71.5067 |
| 0.3379 | 2.9520 | 800 | 0.5944 | 70.6010 |
| 0.2184 | 3.3210 | 900 | 0.5993 | 69.8085 |
| 0.2335 | 3.6900 | 1000 | 0.5836 | 69.1197 |
| 0.1763 | 4.0590 | 1100 | 0.5925 | 69.6292 |
| 0.1648 | 4.4280 | 1200 | 0.5940 | 72.7805 |
| 0.1471 | 4.7970 | 1300 | 0.5947 | 74.0542 |
| 0.0922 | 5.1661 | 1400 | 0.6138 | 72.4974 |
| 0.0989 | 5.5351 | 1500 | 0.6071 | 73.5541 |
| 0.095 | 5.9041 | 1600 | 0.6121 | 75.1392 |
| 0.0554 | 6.2731 | 1700 | 0.6237 | 76.0732 |
| 0.0606 | 6.6421 | 1800 | 0.6240 | 79.8000 |
| 0.0544 | 7.0111 | 1900 | 0.6418 | 83.9419 |
| 0.0372 | 7.3801 | 2000 | 0.6391 | 91.3105 |
| 0.0414 | 7.7491 | 2100 | 0.6471 | 81.3850 |
| 0.0223 | 8.1181 | 2200 | 0.6521 | 104.4249 |
| 0.0256 | 8.4871 | 2300 | 0.6587 | 104.8684 |
| 0.0233 | 8.8561 | 2400 | 0.6669 | 119.1056 |
| 0.0159 | 9.2251 | 2500 | 0.6907 | 107.2271 |
| 0.0162 | 9.5941 | 2600 | 0.6879 | 140.2585 |
| 0.0156 | 9.9631 | 2700 | 0.6933 | 185.6024 |
| 0.01 | 10.3321 | 2800 | 0.6958 | 259.4584 |
| 0.0099 | 10.7011 | 2900 | 0.7037 | 205.2363 |
| 0.0074 | 11.0701 | 3000 | 0.7080 | 246.1836 |
| 0.0074 | 11.4391 | 3100 | 0.7141 | 240.3906 |
| 0.0074 | 11.8081 | 3200 | 0.7159 | 196.5185 |
| 0.0053 | 12.1771 | 3300 | 0.7246 | 216.1242 |
| 0.0057 | 12.5461 | 3400 | 0.7310 | 215.3033 |
| 0.0056 | 12.9151 | 3500 | 0.7343 | 232.3521 |
| 0.0044 | 13.2841 | 3600 | 0.7374 | 234.0976 |
| 0.0047 | 13.6531 | 3700 | 0.7420 | 248.5989 |
| 0.0046 | 14.0221 | 3800 | 0.7482 | 245.2684 |
| 0.0041 | 14.3911 | 3900 | 0.7480 | 270.2236 |
| 0.0038 | 14.7601 | 4000 | 0.7481 | 294.0466 |
| 0.0037 | 15.1292 | 4100 | 0.7547 | 263.7513 |
| 0.0037 | 15.4982 | 4200 | 0.7551 | 280.0359 |
| 0.0035 | 15.8672 | 4300 | 0.7568 | 270.1198 |
| 0.0032 | 16.2362 | 4400 | 0.7574 | 286.9327 |
| 0.0032 | 16.6052 | 4500 | 0.7611 | 286.9516 |
| 0.0035 | 16.9742 | 4600 | 0.7618 | 309.7368 |
| 0.0032 | 17.3432 | 4700 | 0.7632 | 298.6508 |
| 0.0031 | 17.7122 | 4800 | 0.7632 | 304.3778 |
| 0.0029 | 18.0812 | 4900 | 0.7637 | 304.8306 |
| 0.003 | 18.4502 | 5000 | 0.7638 | 309.7839 |
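
The WER values above appear to be percentages, and WER can exceed 100 when the hypothesis contains more insertions than the reference has words; the steady rise late in training is consistent with the model producing long, repetitive outputs. A minimal sketch of the metric with the evaluate library, assuming that is how WER was computed here:

```python
# Hedged sketch of WER computation; the evaluate library is an assumption, not confirmed by this card.
import evaluate

wer_metric = evaluate.load("wer")

references = ["hello world"]                     # 2-word reference (illustrative)
predictions = ["hello hello world world world"]  # hypothesis with 3 extra insertions

# compute() returns a fraction; multiplying by 100 gives percentages like those reported above.
# 3 insertions against a 2-word reference -> 150% WER, so values above 100 are possible.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```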

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1
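
To compare a local environment against the versions listed above, a small check sketch:

```python
# Prints installed library versions for comparison with the list above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```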