---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-tinyfinacial
    results: []
---

# whisper-tinyfinacial

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset (reported as "None" in the auto-generated card). It achieves the following results on the evaluation set:

- Loss: 4.1540
- Wer: 154.4944
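A word error rate above 100% is possible because WER divides the word-level edit distance (substitutions + deletions + insertions) by the number of *reference* words, so a hypothesis with many insertions can accumulate more errors than the reference has words. A minimal, self-contained sketch (the example strings are hypothetical, not this model's actual output):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# A hypothesis much longer than the reference yields WER > 1.0 (i.e. > 100%).
print(wer("net income rose", "the net income figure rose sharply this quarter"))  # → 5/3 ≈ 1.667
```

In practice the `evaluate` library's `wer` metric (backed by `jiwer`) computes the same quantity.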

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 600
- mixed_precision_training: Native AMP
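These hyperparameters correspond roughly to the following `Seq2SeqTrainingArguments` configuration. This is a sketch, not the original training script: `output_dir` and the per-device batch-size mapping are assumptions, and the Adam betas/epsilon listed above are the `transformers` defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tinyfinacial",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=600,       # training_steps: 600
    fp16=True,           # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=50,       # matches the 50-step evaluation cadence below
)
```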

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 25.0  | 50   | 1.4261          | 66.8539  |
| No log        | 50.0  | 100  | 1.3916          | 86.5169  |
| No log        | 75.0  | 150  | 1.6553          | 165.1685 |
| No log        | 100.0 | 200  | 2.6574          | 134.8315 |
| No log        | 125.0 | 250  | 2.7460          | 142.6966 |
| No log        | 150.0 | 300  | 3.4242          | 157.3034 |
| No log        | 175.0 | 350  | 3.7021          | 165.7303 |
| No log        | 200.0 | 400  | 3.9109          | 168.5393 |
| No log        | 225.0 | 450  | 4.0157          | 198.8764 |
| 3.7169        | 250.0 | 500  | 4.1466          | 164.6067 |
| 3.7169        | 275.0 | 550  | 4.1483          | 152.8090 |
| 3.7169        | 300.0 | 600  | 4.1540          | 154.4944 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1