
torgo_tiny_finetune_M01_frozen_encoder

This model is a fine-tuned version of openai/whisper-tiny on an unspecified dataset (the card does not name it; the model name suggests the TORGO dysarthric speech corpus, speaker M01, with the encoder frozen during fine-tuning). It achieves the following results on the evaluation set (a minimal usage sketch follows the results):

  • Loss: 0.2864
  • WER: 45.6706
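The card has no usage section, so here is a minimal inference sketch, assuming the standard transformers ASR pipeline; the audio path is a placeholder, and Whisper expects 16 kHz audio (the pipeline resamples file inputs automatically):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_M01_frozen_encoder",
)

# "sample.wav" is a placeholder path for a speech recording.
result = asr("sample.wav")
print(result["text"])
```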

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto transformers training code follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 20
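A hedged sketch of how these hyperparameters map onto Seq2SeqTrainingArguments in transformers 4.32. The dataset, preprocessing, data collator, and trainer wiring are omitted because the card does not specify them; the output directory and step-based evaluation schedule are assumptions (the latter inferred from the 500-step cadence in the results table), and the frozen encoder is inferred from the model name via the freeze_encoder() helper that transformers' Whisper implementation provides:

```python
from transformers import (
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
# The model name suggests the encoder was kept frozen during fine-tuning.
model.freeze_encoder()

# Mirrors the hyperparameters listed above; Adam betas/epsilon match the
# Trainer defaults, so they need no explicit arguments here.
training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_M01_frozen_encoder",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumed: table evaluates every 500 steps
    eval_steps=500,
)
```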

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.768         | 0.85  | 500   | 0.2601          | 25.8913 |
| 0.0914        | 1.7   | 1000  | 0.2569          | 99.8302 |
| 0.0699        | 2.55  | 1500  | 0.2626          | 39.3039 |
| 0.042         | 3.4   | 2000  | 0.2691          | 26.0611 |
| 0.0336        | 4.24  | 2500  | 0.2619          | 25.4669 |
| 0.0229        | 5.09  | 3000  | 0.2613          | 29.2020 |
| 0.0166        | 5.94  | 3500  | 0.2525          | 30.0509 |
| 0.0112        | 6.79  | 4000  | 0.2843          | 30.7301 |
| 0.0113        | 7.64  | 4500  | 0.2862          | 25.8913 |
| 0.0085        | 8.49  | 5000  | 0.2726          | 29.5416 |
| 0.0059        | 9.34  | 5500  | 0.2782          | 35.6537 |
| 0.0052        | 10.19 | 6000  | 0.2971          | 39.6435 |
| 0.0041        | 11.04 | 6500  | 0.2886          | 26.9949 |
| 0.0043        | 11.88 | 7000  | 0.2952          | 29.2869 |
| 0.0031        | 12.73 | 7500  | 0.2858          | 34.3803 |
| 0.0022        | 13.58 | 8000  | 0.2844          | 35.9083 |
| 0.0019        | 14.43 | 8500  | 0.2749          | 33.7861 |
| 0.0013        | 15.28 | 9000  | 0.2882          | 41.3413 |
| 0.0014        | 16.13 | 9500  | 0.2817          | 44.3973 |
| 0.0008        | 16.98 | 10000 | 0.2872          | 39.7284 |
| 0.0006        | 17.83 | 10500 | 0.2846          | 41.8506 |
| 0.0003        | 18.68 | 11000 | 0.2900          | 45.2462 |
| 0.0003        | 19.52 | 11500 | 0.2864          | 45.6706 |
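Validation WER in the table is lowest (roughly 25–26) between about epochs 3 and 8 and drifts upward as training loss approaches zero, a pattern consistent with overfitting; the final checkpoint reports 45.67. For reference, a minimal sketch of computing WER with the evaluate library, which is one common way to produce this metric (the transcripts below are placeholders):

```python
import evaluate

# Word error rate, as reported in the table above.
wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox"]        # placeholder model transcript
references = ["the quick brown fox jumps"]   # placeholder ground truth

# evaluate returns a fraction; the table reports percentages, so scale by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```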

Framework versions

  • Transformers 4.32.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.14.7
  • Tokenizers 0.13.3