
whisper-small-ur

This model is a fine-tuned version of openai/whisper-small; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.0029
  • WER: 76.1310
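
A minimal usage sketch with the Transformers ASR pipeline is shown below. The audio path is a placeholder, and forcing Urdu decoding is an assumption based on the "ur" suffix in the model name; it is not confirmed by this card.

```python
# Minimal sketch: transcribe an audio file with this checkpoint via the
# Transformers pipeline API. "sample.wav" is a placeholder path, and the
# language/task settings are assumptions (Urdu transcription).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hadiqa123/whisper-small-ur",
)

result = asr(
    "sample.wav",  # placeholder: any 16 kHz mono audio file
    generate_kwargs={"language": "urdu", "task": "transcribe"},
)
print(result["text"])
```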

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 700
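
The hyperparameters above map onto a standard Seq2SeqTrainingArguments configuration; a sketch is given below. The output directory and the evaluation cadence are assumptions (the results table suggests evaluation every 100 steps), not values taken from the original training script.

```python
# Sketch of Seq2SeqTrainingArguments matching the listed hyperparameters.
# output_dir and the evaluation settings are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ur",      # assumed path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=100,
    max_steps=700,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults.
    evaluation_strategy="steps",          # assumed: eval every 100 steps per the table
    eval_steps=100,
    predict_with_generate=True,
)
```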

Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|---------------|-------|------|-----------------|----------|
| 1.1678        | 2.27  | 100  | 1.0347          | 110.8789 |
| 0.3171        | 4.55  | 200  | 0.2896          | 54.6101  |
| 0.1408        | 6.82  | 300  | 0.0993          | 98.5782  |
| 0.0447        | 9.09  | 400  | 0.0261          | 38.4102  |
| 0.0101        | 11.36 | 500  | 0.0083          | 74.2783  |
| 0.004         | 13.64 | 600  | 0.0035          | 72.9642  |
| 0.0031        | 15.91 | 700  | 0.0029          | 76.1310  |
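
The WER column can be reproduced with the `evaluate` library as sketched below; the reference and prediction strings are placeholders, since the evaluation data is not specified in this card.

```python
# Sketch: computing word error rate with the `evaluate` library.
# The transcripts below are placeholders, not the actual evaluation data.
import evaluate

wer_metric = evaluate.load("wer")

references = ["placeholder reference transcript"]
predictions = ["placeholder predicted transcript"]

# Multiplied by 100 to match the percentage-style WER reported in the table.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```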

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.1
  • Tokenizers 0.13.3