Dataset: satarupa22/indic-en-bn
How to use satarupa22/Wishper-small-asr-bn with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="satarupa22/Wishper-small-asr-bn")
```

```python
# Or load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("satarupa22/Wishper-small-asr-bn")
model = AutoModelForSpeechSeq2Seq.from_pretrained("satarupa22/Wishper-small-asr-bn")
```

This model is a fine-tuned version of openai/whisper-small on the Indic Bengali dataset. On the evaluation set, the final checkpoint (step 8000) reaches a validation loss of 0.0514 and a WER of 17.39, as shown in the training results table below.
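Whisper checkpoints expect 16 kHz mono audio. The pipeline resamples automatically when given a file path, but a raw NumPy array must already be at 16 kHz. A minimal sketch of resampling before inference (the 8 kHz source clip and the linear-interpolation resampler are illustrative assumptions, not part of this model card; in practice use `librosa` or `torchaudio`):

```python
import numpy as np

def resample_linear(audio: np.ndarray, src_rate: int, dst_rate: int = 16_000) -> np.ndarray:
    """Resample a 1-D waveform via linear interpolation (sketch only)."""
    duration = len(audio) / src_rate
    n_out = int(round(duration * dst_rate))
    src_t = np.arange(len(audio)) / src_rate
    dst_t = np.arange(n_out) / dst_rate
    return np.interp(dst_t, src_t, audio)

# Hypothetical 1-second 440 Hz tone recorded at 8 kHz, brought up to 16 kHz
clip = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000).astype(np.float32)
resampled = resample_linear(clip, src_rate=8000)

# The resampled array can then be passed to the pipeline, e.g.:
# text = pipe({"array": resampled, "sampling_rate": 16000})["text"]
```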
Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.
The following results were logged during training:

| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.1073 | 0.3811 | 1000 | 0.0995 | 31.1461 |
| 0.0831 | 0.7622 | 2000 | 0.0781 | 26.3300 |
| 0.0525 | 1.1433 | 3000 | 0.0689 | 24.3359 |
| 0.0435 | 1.5244 | 4000 | 0.0605 | 21.2507 |
| 0.0391 | 1.9055 | 5000 | 0.0543 | 19.3544 |
| 0.018 | 2.2866 | 6000 | 0.0564 | 18.5793 |
| 0.0175 | 2.6677 | 7000 | 0.0519 | 18.1804 |
| 0.0063 | 3.0488 | 8000 | 0.0514 | 17.3903 |
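The WER column reports word error rate as a percentage: the word-level edit distance (substitutions + deletions + insertions) between hypothesis and reference, divided by the reference word count. A minimal sketch of the metric (the English sample strings are illustrative; dedicated libraries such as `jiwer` are typically used instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six words
```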
Base model: openai/whisper-small