---
library_name: transformers
language:
  - nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper Large V2
    results: []
---

# Whisper Large V2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified Dutch dataset. It achieves the following results on the evaluation set:

- Loss: 0.4599
- Wer: 24.2092

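The Wer figures in this card are word error rates expressed as percentages. For reference, WER is the word-level edit (Levenshtein) distance between the hypothesis and the reference, divided by the number of reference words. A minimal pure-Python sketch (the Dutch example sentence is invented for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("de" -> "een") out of six reference words:
print(round(100 * word_error_rate("de kat zat op de mat",
                                  "de kat zat op een mat"), 2))  # 16.67
```

Production evaluation typically uses a tested implementation such as the `jiwer` package rather than a hand-rolled function; the sketch above only illustrates the metric being reported.
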
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5

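A linear scheduler with warmup ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly to 0 over the remaining steps. A minimal sketch of that schedule, assuming the 900 total optimizer steps shown in the results table (this mirrors the shape of `transformers`' linear schedule with warmup, not its exact implementation):

```python
def linear_schedule_lr(step: int, base_lr: float = 3e-05,
                       warmup_steps: int = 20, total_steps: int = 900) -> float:
    """Learning rate at a given optimizer step: linear warmup, linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(20))   # 3e-05  (peak, end of warmup)
print(linear_schedule_lr(460))  # 1.5e-05 (halfway through decay)
```
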
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8662        | 0.0833 | 15   | 0.5819          | 33.0573 |
| 0.5255        | 0.1667 | 30   | 0.4760          | 37.3151 |
| 0.4103        | 0.25   | 45   | 0.4493          | 28.9104 |
| 0.4344        | 0.3333 | 60   | 0.4362          | 25.2990 |
| 0.3882        | 0.4167 | 75   | 0.4279          | 37.7798 |
| 0.402         | 0.5    | 90   | 0.4190          | 40.5043 |
| 0.4105        | 0.5833 | 105  | 0.4136          | 42.3254 |
| 0.4126        | 0.6667 | 120  | 0.3959          | 24.2446 |
| 0.3672        | 0.75   | 135  | 0.3956          | 36.5367 |
| 0.4401        | 0.8333 | 150  | 0.3805          | 24.8744 |
| 0.3698        | 0.9167 | 165  | 0.3834          | 44.2526 |
| 0.3728        | 1.0    | 180  | 0.3742          | 42.0447 |
| 0.1642        | 1.0833 | 195  | 0.3942          | 27.6555 |
| 0.1904        | 1.1667 | 210  | 0.3793          | 25.7967 |
| 0.1801        | 1.25   | 225  | 0.3859          | 23.6879 |
| 0.1693        | 1.3333 | 240  | 0.3934          | 25.1928 |
| 0.1839        | 1.4167 | 255  | 0.3853          | 29.6629 |
| 0.19          | 1.5    | 270  | 0.3763          | 27.3489 |
| 0.1977        | 1.5833 | 285  | 0.3764          | 21.0436 |
| 0.1922        | 1.6667 | 300  | 0.3719          | 30.1040 |
| 0.185         | 1.75   | 315  | 0.3716          | 25.9736 |
| 0.1873        | 1.8333 | 330  | 0.3671          | 22.8127 |
| 0.1802        | 1.9167 | 345  | 0.3621          | 21.2582 |
| 0.1931        | 2.0    | 360  | 0.3662          | 24.4262 |
| 0.0848        | 2.0833 | 375  | 0.3989          | 34.8949 |
| 0.0823        | 2.1667 | 390  | 0.3888          | 23.3718 |
| 0.0817        | 2.25   | 405  | 0.3914          | 22.8057 |
| 0.0952        | 2.3333 | 420  | 0.3784          | 23.8530 |
| 0.0961        | 2.4167 | 435  | 0.3917          | 33.5315 |
| 0.0954        | 2.5    | 450  | 0.3822          | 20.7959 |
| 0.0909        | 2.5833 | 465  | 0.3877          | 22.4282 |
| 0.084         | 2.6667 | 480  | 0.3878          | 26.7025 |
| 0.0769        | 2.75   | 495  | 0.3890          | 21.8597 |
| 0.0879        | 2.8333 | 510  | 0.3899          | 24.0724 |
| 0.0835        | 2.9167 | 525  | 0.3788          | 20.9327 |
| 0.0845        | 3.0    | 540  | 0.3807          | 26.7379 |
| 0.0383        | 3.0833 | 555  | 0.4227          | 24.4333 |
| 0.0408        | 3.1667 | 570  | 0.4173          | 31.6868 |
| 0.0393        | 3.25   | 585  | 0.4202          | 21.5413 |
| 0.035         | 3.3333 | 600  | 0.4141          | 23.4355 |
| 0.034         | 3.4167 | 615  | 0.4193          | 24.1927 |
| 0.0383        | 3.5    | 630  | 0.4160          | 27.6319 |
| 0.0295        | 3.5833 | 645  | 0.4243          | 26.9644 |
| 0.0323        | 3.6667 | 660  | 0.4201          | 25.1221 |
| 0.0337        | 3.75   | 675  | 0.4195          | 26.5445 |
| 0.0328        | 3.8333 | 690  | 0.4229          | 23.7775 |
| 0.0344        | 3.9167 | 705  | 0.4213          | 23.9025 |
| 0.0257        | 4.0    | 720  | 0.4209          | 23.6643 |
| 0.0134        | 4.0833 | 735  | 0.4392          | 22.7372 |
| 0.0113        | 4.1667 | 750  | 0.4556          | 21.2511 |
| 0.0122        | 4.25   | 765  | 0.4596          | 21.6899 |
| 0.0117        | 4.3333 | 780  | 0.4652          | 21.7890 |
| 0.0111        | 4.4167 | 795  | 0.4637          | 21.6946 |
| 0.0115        | 4.5    | 810  | 0.4627          | 23.1571 |
| 0.0127        | 4.5833 | 825  | 0.4567          | 24.0040 |
| 0.0108        | 4.6667 | 840  | 0.4592          | 23.0415 |
| 0.0107        | 4.75   | 855  | 0.4610          | 23.4661 |
| 0.0094        | 4.8333 | 870  | 0.4602          | 25.0112 |
| 0.0104        | 4.9167 | 885  | 0.4599          | 24.6621 |
| 0.0125        | 5.0    | 900  | 0.4599          | 24.2092 |

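Note that the final checkpoint (validation loss 0.4599, WER 24.2092) is not the best row in the table: validation loss bottoms out at 0.3621 (step 345) and WER at 20.7959 (step 450). A small sketch of selecting those rows, with only the three relevant rows copied from the table:

```python
# (step, validation_loss, wer) rows copied from the training results table
rows = [
    (345, 0.3621, 21.2582),
    (450, 0.3822, 20.7959),
    (900, 0.4599, 24.2092),  # final checkpoint
]
best_by_loss = min(rows, key=lambda r: r[1])
best_by_wer = min(rows, key=lambda r: r[2])
print(best_by_loss[0], best_by_wer[0])  # 345 450
```
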
### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1