Finetune Whisper on Frisian and English
This model is a fine-tuned version of distil-small.en on the mozilla-foundation/common_voice_6_fy_NL dataset. Its results on the evaluation set are shown in the training results table below.
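As a quick way to try the model, here is a minimal transcription sketch using the transformers pipeline. The repo id and audio file name below are hypothetical placeholders; substitute the actual Hub checkpoint for this model.

```python
# Minimal inference sketch; the model id is hypothetical -- replace it with
# the actual Hub repo id of this fine-tuned checkpoint.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/distil-small-fy",  # hypothetical repo id
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local 16 kHz Frisian audio clip (hypothetical file name).
print(asr("sample_fy.wav")["text"])
```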
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
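The card names the mozilla-foundation/common_voice_6_fy_NL dataset. Below is a hedged loading sketch, assuming this corresponds to the fy-NL configuration of mozilla-foundation/common_voice_6_1 on the Hub (a gated dataset that requires accepting its terms and logging in):

```python
# Hedged assumption: the card's dataset maps to the fy-NL config of
# mozilla-foundation/common_voice_6_1, which is gated on the Hub.
from datasets import Audio, load_dataset

cv_fy = load_dataset("mozilla-foundation/common_voice_6_1", "fy-NL", split="train")
# Whisper feature extractors expect 16 kHz input audio.
cv_fy = cv_fy.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_fy[0]["sentence"])
```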
Training procedure

Training hyperparameters

The following hyperparameters were used during training:
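The original hyperparameter list has not survived in this copy of the card. Purely as a hedged illustration, the sketch below shows what a comparable Seq2SeqTrainingArguments configuration might look like: only the 500-step evaluation interval and the 5000-step horizon are read off the results table that follows, and every other value is an assumption, not the recorded setting.

```python
# Illustrative sketch only -- not the card's actual hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-distil-small-fy",  # hypothetical output path
    per_device_train_batch_size=16,          # assumption
    learning_rate=1e-5,                      # assumption (common for Whisper fine-tuning)
    warmup_steps=500,                        # assumption
    max_steps=5000,                          # matches the last step in the results table
    fp16=True,                               # assumption
    eval_strategy="steps",                   # older transformers releases name this evaluation_strategy
    eval_steps=500,                          # matches the table's evaluation cadence
    save_steps=500,                          # assumption
    predict_with_generate=True,              # WER must be computed on generated text
    metric_for_best_model="wer",             # assumption
    greater_is_better=False,                 # lower WER is better
)
```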
Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.9504        | 0.5348 | 500  | 1.0939          | 57.4122 |
| 0.4656        | 1.0695 | 1000 | 0.8241          | 45.7316 |
| 0.4533        | 1.6043 | 1500 | 0.7285          | 41.3474 |
| 0.1745        | 2.1390 | 2000 | 0.6875          | 37.7009 |
| 0.1701        | 2.6738 | 2500 | 0.6261          | 34.7603 |
| 0.0709        | 3.2086 | 3000 | 0.6566          | 33.4415 |
| 0.0731        | 3.7433 | 3500 | 0.5880          | 30.5650 |
| 0.0234        | 4.2781 | 4000 | 0.5949          | 28.8754 |
| 0.0192        | 4.8128 | 4500 | 0.5799          | 27.7063 |
| 0.0038        | 5.3476 | 5000 | 0.5755          | 26.9114 |
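The Wer column reports word error rate as a percentage. For reference, this is how the metric can be computed with the evaluate library; the reference and hypothesis strings below are made up purely for illustration.

```python
# WER as a percentage, matching the scale used in the results table above.
import evaluate

wer_metric = evaluate.load("wer")
references = ["it is in moaie dei"]         # made-up Frisian reference
predictions = ["it is in moaie dei hjoed"]  # made-up model output
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%")                   # one inserted word out of five -> 20.00%
```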