# Whisper Small Ru ORD 0.7 PEFT LoRA - Mizoru
This model is a PEFT LoRA adapter fine-tuned from openai/whisper-small on the ORD_0.7 dataset. It achieves the following results on the evaluation set (a metric-computation sketch follows the list):
- Loss: 1.4241
- WER: 74.5818
- CER: 38.6379
- Clean WER: 60.9733
- Clean CER: 30.4133
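The WER and CER figures can be reproduced with the `evaluate` library. This is a minimal sketch; the "clean" variants are assumed here to be the same metrics computed on normalized transcripts, since the card does not document the normalization:

```python
import evaluate

# Word- and character-error-rate metrics (backed by jiwer).
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["привет мир"]        # model transcriptions (illustrative)
references = ["привет мир всем"]    # reference transcripts (illustrative)

# Both metrics return a fraction; multiplied by 100 they match the
# percentage-style numbers reported above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")

# Assumption: "Clean" WER/CER would be computed the same way on
# normalized text (e.g. lowercased, punctuation stripped).
```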
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
- mixed_precision_training: Native AMP
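A minimal sketch of a matching PEFT LoRA training setup follows. The LoRA rank, alpha, dropout, and target modules are assumptions (the card does not document the adapter configuration), and the datasets/collator are placeholders for prepared ORD_0.7 splits:

```python
from transformers import (
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Hypothetical adapter settings: r, lora_alpha, dropout, and target
# modules are NOT documented in this card; these are common Whisper-LoRA choices.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
)
model = get_peft_model(base, lora_config)

# Hyperparameters from the list above; the Adam betas/epsilon and the
# linear scheduler match the Trainer defaults.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ru-ord",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=50,
    max_steps=2000,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,                    # Native AMP mixed-precision training
    remove_unused_columns=False,  # keep audio features for the PEFT-wrapped model
    label_names=["labels"],
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,   # prepared ORD_0.7 train split (not shown here)
    eval_dataset=eval_ds,     # prepared ORD_0.7 eval split (not shown here)
    data_collator=collator,   # speech seq2seq padding collator (not shown here)
)
trainer.train()
```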
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER | Clean WER | Clean CER |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:---------:|:---------:|
| 1.3116 | 1.0 | 196 | 1.2595 | 75.2427 | 37.5133 | 59.8253 | 30.3495 |
| 1.1944 | 2.0 | 392 | 1.2362 | 75.3239 | 38.0798 | 61.7830 | 31.1456 |
| 1.137 | 3.0 | 588 | 1.2391 | 76.7957 | 39.3375 | 62.8324 | 31.6562 |
| 1.0293 | 4.0 | 784 | 1.2472 | 74.9783 | 38.2671 | 58.8861 | 30.9703 |
| 0.9835 | 5.0 | 980 | 1.2720 | 72.7755 | 37.0461 | 58.5421 | 28.9427 |
| 0.92 | 6.0 | 1176 | 1.2871 | 72.1532 | 37.6018 | 62.6024 | 30.6224 |
| 0.849 | 7.0 | 1372 | 1.3214 | 72.6281 | 37.3213 | 58.2328 | 29.4501 |
| 0.7708 | 8.0 | 1568 | 1.3527 | 72.0761 | 37.4703 | 60.3471 | 29.9594 |
| 0.7397 | 9.0 | 1764 | 1.3923 | 74.3780 | 38.5124 | 60.8670 | 30.1533 |
| 0.6614 | 10.0 | 1960 | 1.4241 | 74.5818 | 38.6379 | 60.9733 | 30.4133 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
## Model tree for mizoru/whisper-small-ru-ORD_0.7_peft_0.2

Base model: openai/whisper-small
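Since this repository contains only the LoRA adapter, inference loads the base model first and applies the adapter on top. A minimal sketch, assuming standard PEFT usage with Whisper's Russian-transcription prompt tokens:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "mizoru/whisper-small-ru-ORD_0.7_peft_0.2")
model.eval()

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="russian", task="transcribe"
)

# Placeholder waveform: replace with real 16 kHz mono speech
# (e.g. loaded via librosa or torchaudio).
audio = np.zeros(16000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```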