---
library_name: transformers
language:
  - ta
license: apache-2.0
base_model: Singhamarjeet8130/whisper-medium-hi
tags:
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_13_0
metrics:
  - wer
model-index:
  - name: Whisper Medium Hi ta - Amarjeet
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 13.0
          type: mozilla-foundation/common_voice_13_0
          config: ta
          split: test
          args: 'config: ta, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 37.38483391323612
---

Whisper Medium Hi ta - Amarjeet

This model is a fine-tuned version of Singhamarjeet8130/whisper-medium-hi on the Tamil (ta) subset of the Common Voice 13.0 dataset. It achieves the following results on the evaluation set (see the inference sketch below the list):

  • Loss: 0.1794
  • WER: 37.3848
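The card does not include a usage snippet; the sketch below shows one way to run inference with the transformers pipeline. The repository id passed to model= is a placeholder (the card names only the base model), and pinning the language/task via generate_kwargs is an assumption, not something the card states.

```python
# Minimal inference sketch, not an official usage snippet for this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Singhamarjeet8130/whisper-medium-ta",  # placeholder repo id (assumption)
)

# Transcribe a local Tamil audio file; forcing language/task is optional but
# avoids Whisper auto-detecting the wrong language on short clips.
result = asr(
    "sample_ta.wav",
    generate_kwargs={"language": "ta", "task": "transcribe"},
)
print(result["text"])
```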

Model description

This is a fine-tune of Singhamarjeet8130/whisper-medium-hi (a Whisper-medium checkpoint) for Tamil (ta) automatic speech recognition, trained on Common Voice 13.0.

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned on the Tamil (ta) configuration of mozilla-foundation/common_voice_13_0, and the reported metrics are computed on its test split (see the loading sketch below).
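The sketch below shows one way to load the evaluation split with the datasets library. Common Voice 13.0 is gated, so accepting the dataset terms and being logged in to the Hugging Face Hub is assumed.

```python
# Sketch of loading the Tamil test split used for evaluation.
from datasets import load_dataset, Audio

cv_test = load_dataset("mozilla-foundation/common_voice_13_0", "ta", split="test")

# Whisper feature extractors expect 16 kHz mono audio.
cv_test = cv_test.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_test)
```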

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
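The hyperparameters above map onto Seq2SeqTrainingArguments as sketched below. The output_dir and the 1000-step evaluation cadence (read off the results table) are assumptions; the remaining values come directly from the list.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ta",  # hypothetical output directory (assumption)
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",             # assumption: evaluation every 1000 steps,
    eval_steps=1000,                   # matching the results table below
)
```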

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 0.133         | 0.2894 | 1000 | 0.2347          | 45.2937 |
| 0.1146        | 0.5787 | 2000 | 0.2040          | 41.4025 |
| 0.099         | 0.8681 | 3000 | 0.1835          | 38.8261 |
| 0.0652        | 1.1574 | 4000 | 0.1794          | 37.3848 |
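The card does not state how WER was computed; a common approach with the evaluate library (an assumption, with toy strings standing in for real transcripts) is:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical predictions/references for illustration only.
predictions = ["this is a test transcription"]
references = ["this is the test transcription"]

# Reported values are percentages, so multiply the raw WER by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")  # one substitution over five words -> 20.00
```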

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.21.0