---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: torgo_tiny_finetune_M04_frozen_encoder
    results: []
---

# torgo_tiny_finetune_M04_frozen_encoder

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset (the model name suggests the TORGO dysarthric-speech corpus, speaker M04, with the encoder frozen during fine-tuning). It achieves the following results on the evaluation set:

- Loss: 0.2842
- WER: 39.5586
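The card omits a usage example; a minimal inference sketch with the `transformers` pipeline follows. The repo id `jindaznb/torgo_tiny_finetune_M04_frozen_encoder` and the audio path are assumptions, not confirmed by this card.

```python
# Minimal inference sketch; repo id and audio file are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_M04_frozen_encoder",  # assumed repo id
)

# The pipeline decodes the file and resamples it to the model's 16 kHz input.
print(asr("sample.wav")["text"])
```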

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch reproducing them follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
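As a hedged sketch (not the card's actual training script), these settings map onto `Seq2SeqTrainingArguments` as shown below. The `freeze_encoder()` call is inferred from the model name, the evaluation cadence from the results table, and the listed Adam betas/epsilon are the library defaults, so they need no explicit arguments.

```python
# Sketch only: mirrors the listed hyperparameters with the standard Trainer API.
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.freeze_encoder()  # "frozen_encoder": only decoder weights are updated (inferred)

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_M04_frozen_encoder",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",
    eval_steps=500,  # matches the 500-step cadence in the results table
)
```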

### Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7695        | 0.84  | 500   | 0.2502          | 52.2920 |
| 0.0895        | 1.69  | 1000  | 0.2592          | 39.9830 |
| 0.069         | 2.53  | 1500  | 0.2494          | 22.3260 |
| 0.0465        | 3.37  | 2000  | 0.2667          | 29.6265 |
| 0.0311        | 4.22  | 2500  | 0.2489          | 20.4584 |
| 0.0241        | 5.06  | 3000  | 0.2731          | 23.1749 |
| 0.0156        | 5.9   | 3500  | 0.2608          | 30.3056 |
| 0.0127        | 6.75  | 4000  | 0.2944          | 25.2971 |
| 0.0102        | 7.59  | 4500  | 0.2818          | 25.8913 |
| 0.008         | 8.43  | 5000  | 0.2610          | 25.1273 |
| 0.0079        | 9.27  | 5500  | 0.2632          | 24.6180 |
| 0.0054        | 10.12 | 6000  | 0.2776          | 29.4567 |
| 0.0047        | 10.96 | 6500  | 0.2758          | 28.0985 |
| 0.003         | 11.8  | 7000  | 0.2744          | 26.9949 |
| 0.0033        | 12.65 | 7500  | 0.2875          | 22.0713 |
| 0.0022        | 13.49 | 8000  | 0.2842          | 34.7199 |
| 0.0019        | 14.33 | 8500  | 0.2776          | 29.7963 |
| 0.0012        | 15.18 | 9000  | 0.2850          | 35.2292 |
| 0.0012        | 16.02 | 9500  | 0.2770          | 28.9474 |
| 0.0006        | 16.86 | 10000 | 0.2797          | 56.3667 |
| 0.0006        | 17.71 | 10500 | 0.2807          | 37.0119 |
| 0.0002        | 18.55 | 11000 | 0.2849          | 36.7572 |
| 0.0002        | 19.39 | 11500 | 0.2842          | 39.5586 |
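The WER values above are percentages: (substitutions + deletions + insertions) / reference words × 100. A quick illustration of the metric with the `evaluate` library, on hypothetical strings:

```python
# Hypothetical strings, just to show how the reported WER is computed.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["the cat sat on mat"],
    references=["the cat sat on the mat"],
)
print(f"WER: {100 * wer:.4f}")  # 1 deletion / 6 reference words -> 16.6667
```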

### Framework versions

- Transformers 4.32.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.13.3