whisper-tiny-hi

This model is a fine-tuned version of openai/whisper-tiny on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 2.2091
  • WER: 86.2832
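
A minimal transcription sketch, assuming the checkpoint is available on the Hub under the repo id Ja-le/whisper-tiny-hi and that the "-hi" suffix denotes Hindi; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Ja-le/whisper-tiny-hi")

# Whisper expects 16 kHz audio; the pipeline resamples file inputs itself.
# "hindi" is an assumption based on the "-hi" suffix in the model name.
result = asr(
    "sample.wav",  # placeholder path
    generate_kwargs={"language": "hindi", "task": "transcribe"},
)
print(result["text"])
```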

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching training-arguments sketch appears after the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
  • mixed_precision_training: Native AMP
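
The list above maps directly onto Transformers training arguments. A minimal sketch, assuming Seq2SeqTrainer was used (as is typical for Whisper fine-tuning); output_dir is a placeholder, and fp16=True stands in for "Native AMP":

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
# so they are not set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hi",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=500,
    fp16=True,  # "Native AMP" mixed-precision training
)
```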

Training results

Training Loss   Epoch   Step   Validation Loss   WER
3.5332          3.57    25     1.7565            89.3805
0.3504          7.14    50     1.6160            90.2655
0.0551          10.71   75     1.8524            93.3628
0.0285          14.29   100    1.9261            123.0088
0.0220          17.86   125    2.0688            92.9204
0.0075          21.43   150    2.0535            89.3805
0.0054          25.0    175    2.1533            86.7257
0.0016          28.57   200    2.1682            91.1504
0.0010          32.14   225    2.2014            87.6106
0.0010          35.71   250    2.1406            87.1681
0.0037          39.29   275    2.1968            88.0531
0.0012          42.86   300    2.1761            107.0796
0.0004          46.43   325    2.1874            88.0531
0.0003          50.0    350    2.2005            87.1681
0.0003          53.57   375    2.2018            87.1681
0.0002          57.14   400    2.2041            87.1681
0.0002          60.71   425    2.2055            86.7257
0.0002          64.29   450    2.2072            86.2832
0.0002          67.86   475    2.2089            87.1681
0.0002          71.43   500    2.2091            86.2832
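
The WER column is the word error rate in percent; it can exceed 100 (as in the 123.0088 row) because insertions can push the error count above the number of reference words. A minimal sketch of how such a score is typically computed with the evaluate library; the prediction and reference strings are hypothetical:

```python
import evaluate

# WER = (substitutions + deletions + insertions) / reference word count.
wer_metric = evaluate.load("wer")

predictions = ["this is a hypothetical transcript"]  # hypothetical model output
references = ["this is a hypothetical reference"]    # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```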

Framework versions

  • Transformers 4.37.0.dev0
  • PyTorch 2.1.0+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0