
whisper-medium-lug-only

This model is a fine-tuned version of openai/whisper-medium on the generator dataset. It achieves the following results on the evaluation set (a short usage sketch follows the results):

  • Loss: 0.1551
  • WER: 9.7662
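
The checkpoint loads like any other Whisper fine-tune. Below is a minimal usage sketch with the transformers ASR pipeline; `audio.wav` is a hypothetical placeholder path, and the card itself does not prescribe an inference recipe.

```python
# Minimal sketch: transcribing a local audio file with the transformers
# ASR pipeline. "audio.wav" is a hypothetical placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="akera/whisper-medium-lug-only",
)

result = asr("audio.wav")
print(result["text"])
```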

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 8000
  • mixed_precision_training: Native AMP
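
These settings map directly onto transformers' Seq2SeqTrainingArguments. The sketch below is a reconstruction from the list above, not the published training script; `output_dir` is hypothetical, and the Adam betas and epsilon are left at the library defaults, which match the listed values.

```python
# Hedged reconstruction of the listed hyperparameters; the actual training
# script for this run is not published.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-lug-only",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
    # so no explicit override is needed.
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=8000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```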

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) |
|---------------|--------|------|-----------------|---------|
| 1.1474        | 0.025  | 200  | 0.7380          | 71.3893 |
| 0.7879        | 0.05   | 400  | 0.4461          | 44.7043 |
| 0.6541        | 0.075  | 600  | 0.3394          | 32.3246 |
| 0.5203        | 0.1    | 800  | 0.2949          | 26.5475 |
| 0.509         | 0.125  | 1000 | 0.2774          | 24.2091 |
| 0.4753        | 0.15   | 1200 | 0.2505          | 20.4952 |
| 0.4726        | 0.175  | 1400 | 0.2375          | 20.7703 |
| 0.4145        | 0.2    | 1600 | 0.2313          | 18.2944 |
| 0.418         | 0.225  | 1800 | 0.2265          | 18.8446 |
| 0.4032        | 0.25   | 2000 | 0.2267          | 18.7070 |
| 0.3797        | 0.275  | 2200 | 0.2184          | 16.2311 |
| 0.3773        | 0.3    | 2400 | 0.2084          | 14.4429 |
| 0.3497        | 0.325  | 2600 | 0.1993          | 15.2682 |
| 0.3657        | 0.35   | 2800 | 0.1951          | 15.4058 |
| 0.3686        | 0.375  | 3000 | 0.1882          | 13.2050 |
| 0.3363        | 0.4    | 3200 | 0.1848          | 14.3054 |
| 0.3286        | 0.425  | 3400 | 0.1769          | 13.8927 |
| 0.3193        | 0.45   | 3600 | 0.1786          | 12.5172 |
| 0.3352        | 0.475  | 3800 | 0.1758          | 11.9670 |
| 0.3182        | 0.5    | 4000 | 0.1737          | 13.3425 |
| 0.2967        | 0.525  | 4200 | 0.1699          | 12.9298 |
| 0.3078        | 0.55   | 4400 | 0.1719          | 12.3796 |
| 0.2788        | 0.575  | 4600 | 0.1663          | 12.2421 |
| 0.2302        | 1.0075 | 4800 | 0.1678          | 11.4168 |
| 0.2109        | 1.0325 | 5000 | 0.1696          | 11.1417 |
| 0.1932        | 1.0575 | 5200 | 0.1713          | 11.2792 |
| 0.2128        | 1.0825 | 5400 | 0.1663          | 12.6547 |
| 0.2269        | 1.1075 | 5600 | 0.1621          | 12.2421 |
| 0.2324        | 1.1325 | 5800 | 0.1581          | 11.2792 |
| 0.2083        | 1.1575 | 6000 | 0.1579          | 11.1417 |
| 0.2156        | 1.1825 | 6200 | 0.1543          | 10.4539 |
| 0.2113        | 1.2075 | 6400 | 0.1551          | 9.7662  |
| 0.2235        | 1.2325 | 6600 | 0.1550          | 10.5915 |
| 0.2137        | 1.2575 | 6800 | 0.1537          | 10.4539 |
| 0.1989        | 1.2825 | 7000 | 0.1536          | 9.9037  |
| 0.2014        | 1.3075 | 7200 | 0.1515          | 10.1788 |
| 0.2109        | 1.3325 | 7400 | 0.1488          | 10.3164 |
| 0.1975        | 1.3575 | 7600 | 0.1500          | 10.5915 |
| 0.1754        | 1.3825 | 7800 | 0.1494          | 10.0413 |
| 0.182         | 1.4075 | 8000 | 0.1487          | 10.0413 |
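
The WER column is a percentage. For reference, the sketch below shows how such values are typically computed with the evaluate library; the prediction and reference strings are hypothetical placeholders, and the actual evaluation code for this run is not published.

```python
# Minimal sketch: word error rate via the `evaluate` library, scaled to a
# percentage as in the table above. Strings are hypothetical placeholders.
import evaluate

wer = evaluate.load("wer")
predictions = ["this is a test"]   # hypothetical model transcript
references = ["this is the test"]  # hypothetical ground-truth transcript
print(100 * wer.compute(predictions=predictions, references=references))  # 25.0
```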

Framework versions

  • Transformers 4.41.0.dev0
  • PyTorch 2.2.0
  • Datasets 2.16.1
  • Tokenizers 0.19.1