---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: torgo_tiny_finetune_M03_frozen_encoder
    results: []
---

# torgo_tiny_finetune_M03_frozen_encoder

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TORGO dysarthric speech dataset (speaker M03). It achieves the following results on the evaluation set (a loading sketch follows below):

- Loss: 0.3051
- Wer: 41.5959
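
The checkpoint can be loaded with the `transformers` ASR pipeline. A minimal inference sketch, assuming the model is published on the Hub as `jindaznb/torgo_tiny_finetune_M03_frozen_encoder` (the repo id is inferred from this card's name):

```python
# Minimal inference sketch; the repo id below is an assumption based on
# this model card's name and may need to be adjusted.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_M03_frozen_encoder",
)

# Whisper expects 16 kHz audio; the pipeline resamples input files for you.
print(asr("sample.wav")["text"])
```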

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
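
The model name indicates the Whisper encoder was kept frozen during fine-tuning. Below is a hypothetical sketch of how that is typically done with `transformers`; it is not the author's confirmed training code:

```python
# Hypothetical sketch of freezing the Whisper encoder before fine-tuning,
# as the model name suggests; not the author's confirmed training script.
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Disable gradients for every encoder parameter so only the decoder
# is updated during fine-tuning.
for param in model.model.encoder.parameters():
    param.requires_grad = False
```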

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
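
For reference, a sketch of `Seq2SeqTrainingArguments` mirroring the list above; `output_dir` and the evaluation settings are assumptions (the results table below logs every 500 steps), and Adam with the listed betas and epsilon is the Trainer default:

```python
# Sketch matching the listed hyperparameters; output_dir and the
# evaluation settings are assumptions, not taken from the original run.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_M03_frozen_encoder",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumed: the results table evaluates in steps
    eval_steps=500,               # assumed from the 500-step intervals below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
# so no explicit optimizer arguments are needed.
```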

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7806        | 0.85  | 500   | 0.2631          | 52.1222 |
| 0.0945        | 1.71  | 1000  | 0.2804          | 34.4652 |
| 0.071         | 2.56  | 1500  | 0.2464          | 22.5806 |
| 0.0455        | 3.41  | 2000  | 0.2476          | 21.3073 |
| 0.0335        | 4.27  | 2500  | 0.2581          | 21.2224 |
| 0.0253        | 5.12  | 3000  | 0.2617          | 25.0424 |
| 0.0177        | 5.97  | 3500  | 0.2898          | 26.4007 |
| 0.0127        | 6.83  | 4000  | 0.3068          | 24.5331 |
| 0.0111        | 7.68  | 4500  | 0.2925          | 41.9355 |
| 0.0087        | 8.53  | 5000  | 0.3179          | 23.2598 |
| 0.0064        | 9.39  | 5500  | 0.2884          | 29.8812 |
| 0.0056        | 10.24 | 6000  | 0.2952          | 35.4839 |
| 0.0037        | 11.09 | 6500  | 0.2956          | 26.4007 |
| 0.0035        | 11.95 | 7000  | 0.2839          | 27.3345 |
| 0.0028        | 12.8  | 7500  | 0.2975          | 28.3531 |
| 0.0019        | 13.65 | 8000  | 0.3129          | 42.3599 |
| 0.0018        | 14.51 | 8500  | 0.2932          | 31.5789 |
| 0.0015        | 15.36 | 9000  | 0.3047          | 32.0883 |
| 0.0008        | 16.21 | 9500  | 0.3071          | 37.4363 |
| 0.0008        | 17.06 | 10000 | 0.3081          | 39.8981 |
| 0.0006        | 17.92 | 10500 | 0.3064          | 39.5586 |
| 0.0003        | 18.77 | 11000 | 0.3052          | 40.2377 |
| 0.0002        | 19.62 | 11500 | 0.3051          | 41.5959 |
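
The Wer column is the word error rate in percent. A sketch of how such values are commonly computed with the `evaluate` library (the exact metric script used for this run is an assumption):

```python
# Sketch of a percentage WER computation with the `evaluate` library;
# the strings below are placeholder examples, not data from this run.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox"]        # hypothetical model output
references = ["the quick brown fox jumps"]   # hypothetical ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # reported as a percentage, e.g. 41.5959 above
```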

### Framework versions

- Transformers 4.32.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.13.3