---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: torgo_tiny_finetune_F03_frozen_encoder
    results: []
---

# torgo_tiny_finetune_F03_frozen_encoder

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TORGO dysarthric-speech dataset (speaker F03, per the model name), with the encoder frozen during fine-tuning. It achieves the following results on the evaluation set:

- Loss: 0.0487
- Wer: 34.9794
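
The card ships no usage snippet, but as a standard fine-tuned Whisper checkpoint it should load through the `transformers` speech-recognition pipeline. A minimal sketch follows; the repository id is assumed from the model name and may need adjusting:

```python
# Minimal sketch: transcribing an audio file with this checkpoint.
# The repo id "jindaznb/torgo_tiny_finetune_F03_frozen_encoder" is
# assumed from the model name; substitute a local path or the actual
# Hub id if it differs.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_F03_frozen_encoder",
)

# Whisper expects 16 kHz mono audio; the pipeline resamples files for you.
result = asr("sample.wav")
print(result["text"])
```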

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto the Trainer API follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
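
A minimal sketch of how these settings translate to `Seq2SeqTrainingArguments`, with the encoder frozen as the model name indicates. Dataset preparation, the data collator, and the `compute_metrics` function are omitted; the evaluation cadence is an assumption read off the 500-step intervals in the results table below:

```python
# Sketch only, not the author's original script.
from transformers import (
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.freeze_encoder()  # "frozen_encoder": only the decoder is updated

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_F03_frozen_encoder",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas/epsilon above are the defaults
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumption: eval every 500 steps, per the table
    eval_steps=500,
    predict_with_generate=True,   # needed so eval decodes text for WER
)
```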

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7886        | 0.85  | 500   | 0.0571          | 14.9520 |
| 0.0987        | 1.69  | 1000  | 0.0536          | 50.3429 |
| 0.0695        | 2.54  | 1500  | 0.0480          | 4.3896  |
| 0.0479        | 3.39  | 2000  | 0.0534          | 7.9561  |
| 0.0314        | 4.24  | 2500  | 0.0542          | 5.0754  |
| 0.0239        | 5.08  | 3000  | 0.0438          | 5.0754  |
| 0.0173        | 5.93  | 3500  | 0.0399          | 7.8189  |
| 0.0122        | 6.78  | 4000  | 0.0402          | 7.4074  |
| 0.0099        | 7.63  | 4500  | 0.0384          | 5.0754  |
| 0.0091        | 8.47  | 5000  | 0.0380          | 4.6639  |
| 0.0077        | 9.32  | 5500  | 0.0400          | 9.6022  |
| 0.0057        | 10.17 | 6000  | 0.0361          | 8.0933  |
| 0.0043        | 11.02 | 6500  | 0.0377          | 15.9122 |
| 0.0028        | 11.86 | 7000  | 0.0338          | 15.6379 |
| 0.0026        | 12.71 | 7500  | 0.0407          | 16.7353 |
| 0.0025        | 13.56 | 8000  | 0.0404          | 16.3237 |
| 0.0022        | 14.41 | 8500  | 0.0387          | 13.3059 |
| 0.0014        | 15.25 | 9000  | 0.0373          | 19.4787 |
| 0.0012        | 16.1  | 9500  | 0.0414          | 25.2401 |
| 0.0006        | 16.95 | 10000 | 0.0475          | 28.3951 |
| 0.0004        | 17.8  | 10500 | 0.0435          | 30.3155 |
| 0.0004        | 18.64 | 11000 | 0.0480          | 32.0988 |
| 0.0002        | 19.49 | 11500 | 0.0487          | 34.9794 |
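
The Wer column is a percentage. As a minimal sketch of how such a score can be computed with the `evaluate` library (the prediction and reference strings below are placeholders, not data from this run):

```python
# Sketch: computing word error rate (WER) in the percentage style
# used by the table above. evaluate's "wer" metric returns a fraction,
# so multiply by 100.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox"]        # placeholder model output
references = ["the quick brown fox jumps"]   # placeholder ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```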

### Framework versions

- Transformers 4.32.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.13.3