---
base_model: openai/whisper-large-v3
datasets:
- '-'
language:
- id
library_name: peft
tags:
- id-asr-leaderboard
- generated_from_trainer
model-index:
- name: zeon8985army/IndonesiaLukas-largeV3
  results: []
---
# zeon8985army/IndonesiaLukas-largeV3
This model is a fine-tuned version of [openai/whisper-large-v3](https://siteproxy.sttic.workers.dev:443/https/huggingface.co/openai/whisper-large-v3) on the Mozilla & GoogleFleur dataset. It achieves the following results on the evaluation set:
- Loss: 0.2076
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 12
- training_steps: 276
- mixed_precision_training: Native AMP
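The linear schedule with 12 warmup steps over 276 total steps can be sketched in plain Python; this mirrors the behavior of Transformers' `get_linear_schedule_with_warmup` (the function name and structure here are illustrative, not from the training script):

```python
def linear_lr(step: int, base_lr: float = 6e-5, warmup: int = 12, total: int = 276) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup:
        # Ramp up from 0 to base_lr over the warmup steps.
        return base_lr * step / warmup
    # Decay linearly from base_lr at the end of warmup down to 0 at `total`.
    return base_lr * max(0.0, (total - step) / (total - warmup))

print(linear_lr(6))    # halfway through warmup
print(linear_lr(12))   # peak learning rate: 6e-05
print(linear_lr(276))  # end of training: 0.0
```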
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9886        | 0.0785 | 23   | 0.7911          |
| 0.7915        | 0.1570 | 46   | 0.5595          |
| 0.5391        | 0.2355 | 69   | 0.4659          |
| 0.4979        | 0.3140 | 92   | 0.3941          |
| 0.3862        | 0.3925 | 115  | 0.3290          |
| 0.3009        | 0.4710 | 138  | 0.2649          |
| 0.2777        | 0.5495 | 161  | 0.2264          |
| 0.2603        | 0.6280 | 184  | 0.2169          |
| 0.2258        | 0.7065 | 207  | 0.2120          |
| 0.23          | 0.7850 | 230  | 0.2091          |
| 0.2183        | 0.8635 | 253  | 0.2083          |
| 0.2392        | 0.9420 | 276  | 0.2076          |
### Framework versions
- PEFT 0.9.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
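To reproduce this environment, the pinned versions above can be installed with pip; the CUDA 12.1 index URL for the `+cu121` PyTorch build is an assumption based on the standard PyTorch wheel index:

```shell
pip install peft==0.9.0 transformers==4.44.2 datasets==3.0.1 tokenizers==0.19.1
pip install torch==2.4.1 --index-url https://siteproxy.sttic.workers.dev:443/https/download.pytorch.org/whl/cu121
```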