---
language:
  - fr
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-tiny
    results: []
---

# openai/whisper-tiny

This model is a fine-tuned version of openai/whisper-tiny on the pphuc25/FranceMed dataset. It achieves the following results on the evaluation set:

- Loss: 1.9080
- Wer: 56.1584
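The WER (word error rate) reported above is the word-level edit distance between the model's transcript and the reference, divided by the reference length, expressed as a percentage. A minimal sketch of the metric (the actual training run likely used a library implementation such as `evaluate`'s `wer`; this standalone version is for illustration only):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count, in %."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (or match)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, dropping one word from a six-word reference yields a WER of about 16.67%.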

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
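With 215 steps per epoch (from the results table below) and 20 epochs, training runs for 4,300 optimizer steps. The linear schedule then ramps the learning rate from 0 to 1e-4 over the first 100 warmup steps and decays it linearly back to 0 by step 4,300. A small sketch mirroring the semantics of this schedule (not the Transformers implementation itself):

```python
def linear_schedule_lr(step: int, base_lr: float = 1e-4,
                       warmup_steps: int = 100, total_steps: int = 4300) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0
    # decay linearly from base_lr (at end of warmup) to 0 (at total_steps)
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For instance, the rate peaks at 1e-4 at step 100 and is half that (5e-5) midway through warmup.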

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.3284        | 1.0   | 215  | 1.3160          | 94.5748 |
| 0.8758        | 2.0   | 430  | 1.3129          | 84.3842 |
| 0.5092        | 3.0   | 645  | 1.3872          | 57.4047 |
| 0.295         | 4.0   | 860  | 1.4638          | 70.3079 |
| 0.1646        | 5.0   | 1075 | 1.5513          | 69.0616 |
| 0.0939        | 6.0   | 1290 | 1.6665          | 60.6305 |
| 0.0585        | 7.0   | 1505 | 1.7200          | 54.8387 |
| 0.0411        | 8.0   | 1720 | 1.6980          | 54.1789 |
| 0.0295        | 9.0   | 1935 | 1.7985          | 56.0850 |
| 0.023         | 10.0  | 2150 | 1.8064          | 57.1114 |
| 0.0106        | 11.0  | 2365 | 1.8492          | 54.3255 |
| 0.011         | 12.0  | 2580 | 1.8559          | 57.2581 |
| 0.0087        | 13.0  | 2795 | 1.8624          | 59.5308 |
| 0.0069        | 14.0  | 3010 | 1.8581          | 57.1114 |
| 0.0028        | 15.0  | 3225 | 1.8776          | 58.1378 |
| 0.002         | 16.0  | 3440 | 1.8993          | 56.5982 |
| 0.0023        | 17.0  | 3655 | 1.8775          | 56.5249 |
| 0.0011        | 18.0  | 3870 | 1.9031          | 56.8915 |
| 0.0009        | 19.0  | 4085 | 1.9053          | 55.9384 |
| 0.0009        | 20.0  | 4300 | 1.9080          | 56.1584 |
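Note that the final checkpoint (WER 56.1584 at epoch 20) is not the best one in the table: the lowest validation WER, 54.1789, was reached at epoch 8, and validation loss rises steadily after epoch 3, suggesting overfitting. A quick way to confirm this from the table's numbers:

```python
# (epoch, validation WER) pairs copied from the results table above
results = [(1, 94.5748), (2, 84.3842), (3, 57.4047), (4, 70.3079),
           (5, 69.0616), (6, 60.6305), (7, 54.8387), (8, 54.1789),
           (9, 56.0850), (10, 57.1114), (11, 54.3255), (12, 57.2581),
           (13, 59.5308), (14, 57.1114), (15, 58.1378), (16, 56.5982),
           (17, 56.5249), (18, 56.8915), (19, 55.9384), (20, 56.1584)]

# Best checkpoint = row with minimum validation WER
best_epoch, best_wer = min(results, key=lambda r: r[1])
print(best_epoch, best_wer)  # → 8 54.1789
```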

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1