# Whisper Small Ar - Martha
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.5854
- Wer: 70.2071
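The Wer figure above is a word error rate expressed as a percentage. In practice it is computed with a library such as `jiwer` or Hugging Face `evaluate`; as a rough illustration of what the metric measures, here is a self-contained sketch using word-level Levenshtein distance:

```python
# Word error rate (WER): word-level edit distance between a reference
# transcript and a hypothesis, divided by the reference length, times 100.
# Minimal sketch; real evaluations use jiwer or the evaluate library,
# which also handle text normalization.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Rolling single-row DP: d[j] is the edit distance between the
    # first i reference words and the first j hypothesis words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j - 1] + 1,      # insertion
                       cur + 1,           # deletion
                       prev + (r != h))   # substitution (or exact match)
            prev = cur
    return 100.0 * d[len(hyp)] / len(ref)  # percentage, as reported above

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```

Note that WER can exceed 100 when the hypothesis contains more errors than the reference has words, as in the early checkpoints of the training table below.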
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
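One detail worth noting in these hyperparameters: `lr_scheduler_warmup_steps` equals `training_steps`, so the linear schedule spends the entire 500-step run in warmup and never reaches its decay phase. A minimal sketch of that schedule, modeled on the semantics of `get_linear_schedule_with_warmup` from `transformers` (the function name below is hypothetical):

```python
# Linear LR schedule with warmup, mirroring the semantics of
# transformers' get_linear_schedule_with_warmup. With the values from
# this card (warmup_steps == training_steps == 500), the learning rate
# ramps linearly toward base_lr for the whole run.
def linear_lr(step: int, base_lr: float = 1e-05,
              warmup_steps: int = 500, total_steps: int = 500) -> float:
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup
    # linear decay from base_lr down to 0 after warmup
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # 0.0
print(linear_lr(250))  # halfway through warmup: 5e-06
```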
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9692        | 0.14  | 125  | 1.3372          | 173.0952 |
| 0.5716        | 0.29  | 250  | 0.9058          | 148.6795 |
| 0.3297        | 0.43  | 375  | 0.5825          | 63.6709  |
| 0.3083        | 0.57  | 500  | 0.5854          | 70.2071  |
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2