
whisper-medium-r22-e_v231109

This model is a fine-tuned version of openai/whisper-medium on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3784
  • WER: 100.0
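
Despite the headline WER of 100.0, you can load the checkpoint and inspect its transcriptions directly. A minimal usage sketch, assuming the standard transformers ASR pipeline and the repository id kujirahand/whisper-medium-r22-e shown in this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="kujirahand/whisper-medium-r22-e",
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.wav")  # "sample.wav" is a placeholder path
print(result["text"])
```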

Model description

This is openai/whisper-medium, a 764M-parameter encoder-decoder Transformer for speech recognition (stored here as F32 safetensors), fine-tuned on Common Voice 11.0. Further details of the fine-tuning setup are not documented.

Intended uses & limitations

The model is intended for automatic speech recognition (audio-to-text transcription) on speech similar to its Common Voice 11.0 training data. Note that the reported evaluation WER of 100.0 indicates the decoded transcripts diverge almost entirely from the references as scored, which may reflect either poor transcription quality or a scoring artifact (for example, WER is computed over whitespace-separated words, so languages written without spaces can score near 100 even for reasonable output). Verify transcription quality on your own audio before relying on this checkpoint.

Training and evaluation data

The model was fine-tuned and evaluated on the Common Voice 11.0 dataset. The language configuration, split sizes, and preprocessing are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5
  • training_steps: 800
  • mixed_precision_training: Native AMP
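
These values map one-to-one onto the standard transformers Seq2SeqTrainingArguments. A minimal sketch of the configuration implied by the list above (output_dir is a placeholder; Adam with the listed betas and epsilon is the transformers default optimizer):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-r22-e",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=800,
    fp16=True,  # "Native AMP" mixed-precision training
)
```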

Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.0027        | 0.06  | 10   | 3.8236          | 29.3631  |
| 2.668         | 0.12  | 20   | 1.8668          | 27.7539  |
| 1.5247        | 0.18  | 30   | 1.0451          | 25.8273  |
| 0.7177        | 0.24  | 40   | 0.3820          | 100.0    |
| 0.342         | 0.3   | 50   | 0.3398          | 100.0    |
| 0.331         | 0.36  | 60   | 0.3243          | 100.0340 |
| 0.3139        | 0.42  | 70   | 0.3175          | 100.0227 |
| 0.291         | 0.48  | 80   | 0.2983          | 100.0340 |
| 0.3178        | 0.54  | 90   | 0.2907          | 100.0340 |
| 0.2516        | 0.6   | 100  | 0.2933          | 100.0567 |
| 0.3004        | 0.66  | 110  | 0.2860          | 100.0907 |
| 0.2923        | 0.72  | 120  | 0.2962          | 100.1587 |
| 0.3067        | 0.78  | 130  | 0.2887          | 100.0340 |
| 0.2967        | 0.84  | 140  | 0.2802          | 100.0    |
| 0.3059        | 0.9   | 150  | 0.2734          | 100.0    |
| 0.2465        | 0.96  | 160  | 0.2686          | 100.0    |
| 0.1953        | 1.02  | 170  | 0.2677          | 100.0793 |
| 0.1611        | 1.08  | 180  | 0.2665          | 100.0453 |
| 0.1548        | 1.14  | 190  | 0.2644          | 100.0    |
| 0.1379        | 1.2   | 200  | 0.2781          | 100.0    |
| 0.1593        | 1.27  | 210  | 0.2765          | 100.0    |
| 0.1266        | 1.33  | 220  | 0.2805          | 100.0    |
| 0.1407        | 1.39  | 230  | 0.2669          | 100.0567 |
| 0.1301        | 1.45  | 240  | 0.2708          | 100.0793 |
| 0.1546        | 1.51  | 250  | 0.2713          | 100.0793 |
| 0.1447        | 1.57  | 260  | 0.2723          | 100.0793 |
| 0.1762        | 1.63  | 270  | 0.2689          | 100.0    |
| 0.148         | 1.69  | 280  | 0.2693          | 100.0680 |
| 0.1468        | 1.75  | 290  | 0.2682          | 100.0340 |
| 0.1747        | 1.81  | 300  | 0.2688          | 100.0340 |
| 0.106         | 1.87  | 310  | 0.2606          | 100.0    |
| 0.1517        | 1.93  | 320  | 0.2606          | 100.0    |
| 0.143         | 1.99  | 330  | 0.2644          | 100.0    |
| 0.085         | 2.05  | 340  | 0.2644          | 100.0    |
| 0.0733        | 2.11  | 350  | 0.2840          | 100.0    |
| 0.0606        | 2.17  | 360  | 0.2879          | 100.0    |
| 0.071         | 2.23  | 370  | 0.2851          | 100.0    |
| 0.0518        | 2.29  | 380  | 0.2975          | 100.0    |
| 0.068         | 2.35  | 390  | 0.2936          | 100.0    |
| 0.0553        | 2.41  | 400  | 0.3062          | 100.0    |
| 0.049         | 2.47  | 410  | 0.3019          | 100.0    |
| 0.0621        | 2.53  | 420  | 0.3021          | 100.0    |
| 0.0593        | 2.59  | 430  | 0.2941          | 100.0    |
| 0.0604        | 2.65  | 440  | 0.2960          | 100.0    |
| 0.0711        | 2.71  | 450  | 0.2996          | 100.0    |
| 0.0643        | 2.77  | 460  | 0.2907          | 100.0    |
| 0.0554        | 2.83  | 470  | 0.2902          | 100.0    |
| 0.0595        | 2.89  | 480  | 0.2992          | 100.0    |
| 0.0693        | 2.95  | 490  | 0.2936          | 99.8527  |
| 0.0411        | 3.01  | 500  | 0.2937          | 100.0    |
| 0.0192        | 3.07  | 510  | 0.3174          | 100.0    |
| 0.0105        | 3.13  | 520  | 0.3468          | 100.0    |
| 0.0339        | 3.19  | 530  | 0.3439          | 100.0    |
| 0.0222        | 3.25  | 540  | 0.3571          | 100.0    |
| 0.0372        | 3.31  | 550  | 0.3393          | 100.0    |
| 0.0219        | 3.37  | 560  | 0.3468          | 100.0    |
| 0.0223        | 3.43  | 570  | 0.3341          | 100.0    |
| 0.0239        | 3.49  | 580  | 0.3393          | 100.0    |
| 0.0322        | 3.55  | 590  | 0.3378          | 100.0    |
| 0.0299        | 3.61  | 600  | 0.3296          | 100.0    |
| 0.0223        | 3.67  | 610  | 0.3367          | 100.0    |
| 0.0234        | 3.73  | 620  | 0.3345          | 100.0    |
| 0.0191        | 3.8   | 630  | 0.3395          | 100.0    |
| 0.0207        | 3.86  | 640  | 0.3439          | 100.0    |
| 0.0258        | 3.92  | 650  | 0.3440          | 100.0    |
| 0.0209        | 3.98  | 660  | 0.3442          | 100.0    |
| 0.0164        | 4.04  | 670  | 0.3551          | 100.0    |
| 0.0067        | 4.1   | 680  | 0.3559          | 100.0    |
| 0.0094        | 4.16  | 690  | 0.3628          | 100.0    |
| 0.0096        | 4.22  | 700  | 0.3661          | 100.0    |
| 0.0073        | 4.28  | 710  | 0.3682          | 100.0    |
| 0.0106        | 4.34  | 720  | 0.3717          | 100.0    |
| 0.0067        | 4.4   | 730  | 0.3749          | 100.0    |
| 0.005         | 4.46  | 740  | 0.3785          | 100.0    |
| 0.0101        | 4.52  | 750  | 0.3803          | 100.0    |
| 0.0084        | 4.58  | 760  | 0.3784          | 100.0    |
| 0.0079        | 4.64  | 770  | 0.3770          | 100.0    |
| 0.0038        | 4.7   | 780  | 0.3772          | 100.0    |
| 0.0057        | 4.76  | 790  | 0.3780          | 100.0    |
| 0.0103        | 4.82  | 800  | 0.3784          | 100.0    |
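
WER can meet or exceed 100% when the total number of insertions, deletions, and substitutions reaches the number of reference words, which is consistent with the values above. For reference, a minimal sketch of how such a metric is typically computed with a Whisper Trainer, assuming the evaluate library and the base model's processor (both are assumptions, not documented in this card):

```python
import evaluate
from transformers import WhisperProcessor

# Assumption: the processor matching the base model is used for decoding.
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    """Decode predictions and labels, then report WER as a percentage."""
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    # Replace the -100 padding sentinel so the labels can be decoded.
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)

    wer = 100 * wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
```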

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.7.dev0
  • Tokenizers 0.14.1
