
whisper-small-enhanced-hindi

This model is a fine-tuned version of openai/whisper-small; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a minimal inference sketch follows the results):

  • Loss: 2.3627
  • WER: 95.1323
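
Since the card does not include usage instructions, here is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The audio file path is a placeholder, and nothing about preprocessing or decoding options is taken from the card itself.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Chenxi-Chelsea-Liu/whisper-small-enhanced-hindi",
)

# "audio.wav" is a placeholder path; the pipeline resamples the input as needed.
result = asr("audio.wav")
print(result["text"])
```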

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of the equivalent Seq2SeqTrainingArguments follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 3000
  • mixed_precision_training: Native AMP
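
The training script itself is not part of this card; the following is a sketch of how these hyperparameters could be expressed as Seq2SeqTrainingArguments in transformers. The output directory and the evaluation/saving cadence are assumptions, not values documented here.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: maps the listed hyperparameters onto Trainer arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-enhanced-hindi",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```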

Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0111        | 0.61  | 50   | 2.6830          | 173.1800 |
| 2.1003        | 1.22  | 100  | 2.0362          | 114.2054 |
| 1.884         | 1.83  | 150  | 1.8518          | 196.2390 |
| 1.7085        | 2.44  | 200  | 1.7161          | 104.2625 |
| 1.5726        | 3.05  | 250  | 1.5840          | 99.8011  |
| 1.3908        | 3.66  | 300  | 1.4421          | 98.4524  |
| 1.2463        | 4.27  | 350  | 1.3700          | 95.3484  |
| 1.2047        | 4.88  | 400  | 1.3240          | 97.6050  |
| 1.069         | 5.49  | 450  | 1.2952          | 96.8442  |
| 1.0265        | 6.1   | 500  | 1.2916          | 95.0026  |
| 0.9474        | 6.71  | 550  | 1.2525          | 94.6222  |
| 0.8052        | 7.32  | 600  | 1.2687          | 94.3109  |
| 0.7971        | 7.93  | 650  | 1.2595          | 92.5990  |
| 0.6551        | 8.54  | 700  | 1.2944          | 93.0140  |
| 0.6156        | 9.15  | 750  | 1.3545          | 93.7749  |
| 0.5445        | 9.76  | 800  | 1.3548          | 93.5328  |
| 0.4183        | 10.37 | 850  | 1.4263          | 93.6798  |
| 0.4311        | 10.98 | 900  | 1.4321          | 94.1726  |
| 0.3244        | 11.59 | 950  | 1.4985          | 94.0169  |
| 0.2604        | 12.2  | 1000 | 1.6052          | 93.9997  |
| 0.2403        | 12.8  | 1050 | 1.6366          | 94.4665  |
| 0.1669        | 13.41 | 1100 | 1.7287          | 94.4752  |
| 0.1648        | 14.02 | 1150 | 1.7606          | 96.0661  |
| 0.1184        | 14.63 | 1200 | 1.8454          | 94.9507  |
| 0.086         | 15.24 | 1250 | 1.9234          | 94.8037  |
| 0.0845        | 15.85 | 1300 | 1.9631          | 94.7000  |
| 0.0504        | 16.46 | 1350 | 2.0461          | 94.1121  |
| 0.0525        | 17.07 | 1400 | 2.1092          | 93.9478  |
| 0.0381        | 17.68 | 1450 | 2.1563          | 95.0718  |
| 0.0261        | 18.29 | 1500 | 2.2302          | 94.3109  |
| 0.0282        | 18.9  | 1550 | 2.2574          | 94.0775  |
| 0.0209        | 19.51 | 1600 | 2.2939          | 94.7432  |
| 0.0187        | 20.12 | 1650 | 2.3627          | 95.1323  |
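
The WER column is reported as a percentage. As a point of reference only, this is a sketch of how such a score is typically computed with the evaluate library; the example strings are illustrative and do not come from the actual evaluation set.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative Hindi strings only; real scoring uses the held-out transcripts.
predictions = ["यह एक परीक्षण वाक्य है"]
references = ["यह एक परीक्षा वाक्य है"]

# evaluate returns a fraction; the table above reports it scaled to percent.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```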

Framework versions

  • Transformers 4.37.0.dev0
  • PyTorch 1.12.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0