Whisper Hindi Large-v2
This model is a fine-tuned version of openai/whisper-large-v2 on Hindi data drawn from multiple publicly available ASR corpora. It was fine-tuned as part of the Whisper fine-tuning sprint.
NOTE: The code used to train this model is available for re-use in the whisper-finetune repository.
Usage
To evaluate this model on an entire dataset, the evaluation scripts available in the whisper-finetune repository can be used. The same repository also provides scripts for faster inference using whisper-jax.
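As a rough illustration of such dataset-level evaluation, here is a minimal sketch built on the Hugging Face datasets and evaluate libraries. The dataset name (google/fleurs, config hi_in), split, and field names are assumptions made for illustration rather than the repository's actual evaluation setup, which additionally handles details such as text normalisation:
>>> import torch
>>> import evaluate
>>> from datasets import load_dataset, Audio
>>> from transformers import pipeline
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-hindi-large-v2", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> # resample to the 16 kHz sampling rate expected by Whisper
>>> dataset = load_dataset("google/fleurs", "hi_in", split="test").cast_column("audio", Audio(sampling_rate=16000))
>>> predictions = [transcribe(sample["audio"]["array"])["text"] for sample in dataset]
>>> references = [sample["transcription"] for sample in dataset]
>>> wer = evaluate.load("wer")
>>> print("WER (%): ", 100 * wer.compute(predictions=predictions, references=references))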
To transcribe a single audio file with this model, the following code snippet can be used:
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-hindi-large-v2", chunk_length_s=30, device=device)
>>> # pin the task to Hindi transcription so the model neither auto-detects the language nor translates
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
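If segment-level timestamps are also needed, the same pipeline can return them; return_timestamps is a standard option of the transformers ASR pipeline, though its interaction with manually set forced_decoder_ids can vary across transformers versions. A hedged sketch:
>>> # optional: segment-level timestamps from the same pipeline
>>> result = transcribe(audio, return_timestamps=True)
>>> print(result["chunks"])  # list of {"timestamp": (start, end), "text": ...} segments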
For faster inference of Whisper models, the whisper-jax library can be used. Please follow the necessary installation steps mentioned in the whisper-jax repository before using the following code snippet:
>>> from whisper_jax import FlaxWhisperPipline  # note: 'Pipline' is the class name as spelled in the whisper-jax library
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-hindi-large-v2", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
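On TPUs and recent GPUs, the pipeline can additionally be run in half precision for extra speed. The dtype argument below is part of the FlaxWhisperPipline constructor, but bfloat16 support is a hardware assumption; keep the default float32 if outputs degrade.
>>> import jax.numpy as jnp
>>> # half-precision variant of the pipeline constructor above
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-hindi-large-v2", dtype=jnp.bfloat16, batch_size=16)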
Training and evaluation data
Training Data:
Evaluation Data: google/fleurs (test set) and mozilla-foundation/common_voice_11_0 (test set), per the self-reported evaluation results below.
Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto transformers training arguments follows the list):
- learning_rate: 7.5e-06
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25000
- training_steps: 57000 (initially set to 116255)
- mixed_precision_training: True
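As a point of reference, the values above map roughly onto Seq2SeqTrainingArguments from the transformers library as sketched below. The argument names are from transformers, output_dir is a placeholder, and the actual whisper-finetune scripts may configure training differently.
>>> from transformers import Seq2SeqTrainingArguments
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="./whisper-hindi-large-v2",  # placeholder path
...     learning_rate=7.5e-6,
...     per_device_train_batch_size=8,
...     per_device_eval_batch_size=24,
...     seed=22,
...     optim="adamw_bnb_8bit",  # 8-bit AdamW via bitsandbytes
...     lr_scheduler_type="linear",
...     warmup_steps=25000,
...     max_steps=57000,
...     fp16=True,  # mixed precision training
... )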
Acknowledgement
This work was done at Speech Lab, IIT Madras.
The compute resources for this work were funded by the "Bhashini: National Language Translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
Evaluation results
- WER on google/fleurs test set (self-reported): 6.80
- WER on mozilla-foundation/common_voice_11_0 test set (self-reported): 10.98