---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: torgo_tiny_finetune_F04_frozen_encoder
    results: []
---

# torgo_tiny_finetune_F04_frozen_encoder

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset (the model name suggests TORGO speaker F04, with the encoder frozen during fine-tuning). It achieves the following results on the evaluation set:

- Loss: 0.2948
- Wer: 46.1800
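
The checkpoint can be loaded like any other Whisper model. A minimal usage sketch, assuming the repository id is `jindaznb/torgo_tiny_finetune_F04_frozen_encoder` (inferred from the model name above) and that `speech.wav` stands in for a 16 kHz mono recording:

```python
from transformers import pipeline

# Hypothetical repository id, inferred from the model name above.
asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_F04_frozen_encoder",
)

# "speech.wav" is a placeholder path; any audio file ffmpeg can decode works.
print(asr("speech.wav")["text"])
```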

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
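
A sketch of how these hyperparameters could map onto `Seq2SeqTrainingArguments`, with the encoder frozen as the model name implies. The dataset wiring and trainer setup are omitted, and this is not necessarily the author's exact script:

```python
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.freeze_encoder()  # keep encoder weights fixed; only the decoder is trained

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_F04_frozen_encoder",  # assumed output dir
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
)
```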

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7886        | 0.85  | 500   | 0.2527          | 38.2003 |
| 0.0987        | 1.69  | 1000  | 0.2771          | 51.7827 |
| 0.0695        | 2.54  | 1500  | 0.2463          | 38.6248 |
| 0.0479        | 3.39  | 2000  | 0.2699          | 26.8251 |
| 0.0314        | 4.24  | 2500  | 0.2857          | 23.2598 |
| 0.0239        | 5.08  | 3000  | 0.2698          | 23.6842 |
| 0.0173        | 5.93  | 3500  | 0.2771          | 25.2122 |
| 0.0122        | 6.78  | 4000  | 0.2733          | 26.7402 |
| 0.0099        | 7.63  | 4500  | 0.2812          | 26.5705 |
| 0.0091        | 8.47  | 5000  | 0.2773          | 23.4295 |
| 0.0077        | 9.32  | 5500  | 0.2839          | 30.5603 |
| 0.0057        | 10.17 | 6000  | 0.2722          | 23.7691 |
| 0.0043        | 11.02 | 6500  | 0.2959          | 34.3803 |
| 0.0028        | 11.86 | 7000  | 0.2783          | 33.0221 |
| 0.0026        | 12.71 | 7500  | 0.3000          | 32.7674 |
| 0.0025        | 13.56 | 8000  | 0.2865          | 32.6825 |
| 0.0022        | 14.41 | 8500  | 0.2946          | 38.8795 |
| 0.0014        | 15.25 | 9000  | 0.2858          | 38.3701 |
| 0.0012        | 16.1  | 9500  | 0.2953          | 63.8370 |
| 0.0006        | 16.95 | 10000 | 0.2928          | 42.9542 |
| 0.0004        | 17.8  | 10500 | 0.2910          | 43.7182 |
| 0.0004        | 18.64 | 11000 | 0.2947          | 44.8217 |
| 0.0002        | 19.49 | 11500 | 0.2948          | 46.1800 |
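
The Wer column above is a word error rate expressed as a percentage. A small sketch of how such a figure is typically computed with the `evaluate` library (the strings below are made-up examples, not TORGO data):

```python
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["the quick brown fox"],
    references=["the quick brown fox jumps"],
)
print(100 * wer)  # scale the fraction to a percentage, matching the table
```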

## Framework versions

- Transformers 4.32.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.13.3