# Whisper Small Ar - Martha
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5854
- Wer: 70.2071
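
As a usage illustration (not part of the original card), the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline. The repository id and audio path below are placeholders assumed for the example; substitute the actual model id.

```python
# Minimal inference sketch; "Martha-987/whisper-small-ar" is a placeholder
# repository id and "sample.wav" a placeholder audio file, not taken from the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Martha-987/whisper-small-ar",  # assumed repository id
)

# Transcribe a local audio file and print the recognized text.
result = asr("sample.wav")
print(result["text"])
```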
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
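
The hyperparameters above correspond roughly to the following `Seq2SeqTrainingArguments` sketch. This is an assumed reconstruction: `output_dir`, the evaluation schedule, and `predict_with_generate` are placeholders or inferences, not values stated in this card.

```python
# Hedged reconstruction of the training configuration; output_dir and the
# evaluation schedule are assumptions (evaluation every 125 steps is inferred
# from the results table), not values stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ar",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=500,
    fp16=True,                         # Native AMP mixed precision
    evaluation_strategy="steps",       # assumed
    eval_steps=125,                    # inferred from the results table
    predict_with_generate=True,        # needed to compute WER during evaluation
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults.
```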
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9692        | 0.14  | 125  | 1.3372          | 173.0952 |
| 0.5716        | 0.29  | 250  | 0.9058          | 148.6795 |
| 0.3297        | 0.43  | 375  | 0.5825          | 63.6709  |
| 0.3083        | 0.57  | 500  | 0.5854          | 70.2071  |
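
The Wer column is the word error rate in percent. A minimal sketch of how such a score can be computed with the `evaluate` library is shown below; the prediction and reference strings are illustrative only.

```python
# Illustrative WER computation with the `evaluate` library; the strings below
# are dummy examples, not data from this model's evaluation set.
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["transcribed hypothesis text"],
    references=["reference transcript text"],
)
print(f"WER: {wer:.4f}")
```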
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2