---
language:
  - ml
tags:
  - audio
  - automatic-speech-recognition
  - vegam
license: mit
datasets:
  - google/fleurs
  - thennal/IMaSC
  - mozilla-foundation/common_voice_11_0
library_name: ctranslate2
---

# vegam-whisper-medium-ml-fp16 (വേഗം)

This model supports float16 quantization only.

Note: Model file size is 1.53 GB.

This is a conversion of thennal/whisper-medium-ml to the CTranslate2 model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.
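
It can also be loaded directly with the CTranslate2 Python API. The sketch below follows CTranslate2's documented Whisper example; the audio path, the openai/whisper-medium processor checkpoint, and running on GPU are assumptions rather than part of this repository:

```python
# Sketch: using the converted weights directly with ctranslate2.models.Whisper.
# Requires: pip install ctranslate2 transformers librosa
import ctranslate2
import librosa
import transformers

# Load and resample the audio, then compute log-Mel features with the Whisper processor.
audio, _ = librosa.load("audio.mp3", sr=16000, mono=True)
processor = transformers.WhisperProcessor.from_pretrained("openai/whisper-medium")  # assumed base processor
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)

# Load the converted model (directory downloaded as described in the Installation section).
model = ctranslate2.models.Whisper(
    "vegam-whisper-medium-ml-fp16", device="cuda", compute_type="float16"
)

# Detect the language, build the decoding prompt, and transcribe the 30-second window.
language, probability = model.detect_language(features)[0][0]
prompt = processor.tokenizer.convert_tokens_to_ids(
    ["<|startoftranscript|>", language, "<|transcribe|>", "<|notimestamps|>"]
)
results = model.generate(features, [prompt])
print(processor.decode(results[0].sequences_ids[0]))
```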

## Installation

- Install faster-whisper:

  ```bash
  pip install faster-whisper
  ```

- Install git-lfs. Note that git-lfs is only needed to download the model from Hugging Face:

  ```bash
  apt-get install git-lfs
  ```

- Download the model weights (an alternative without git-lfs is sketched just after this list):

  ```bash
  git lfs install
  git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
  ```
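
If installing git-lfs is not an option, the weights can also be downloaded with the huggingface_hub Python client. A minimal sketch; the target directory name simply matches the path used in the examples below:

```python
# Sketch: download the model repository without git-lfs using huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="kurianbenoy/vegam-whisper-medium-ml-fp16",
    local_dir="vegam-whisper-medium-ml-fp16",  # assumed local directory used in the Usage section
)
```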

## Usage

```python
from faster_whisper import WhisperModel

model_path = "vegam-whisper-medium-ml-fp16"

# Run on GPU with FP16
model = WhisperModel(model_path, device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

## Example

```python
from faster_whisper import WhisperModel

model_path = "vegam-whisper-medium-ml-fp16"

model = WhisperModel(model_path, device="cuda", compute_type="float16")

segments, info = model.transcribe("00b38e80-80b8-4f70-babf-566e848879fc.webm", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

Output:

```
Detected language 'ta' with probability 0.353516
[0.00s -> 4.74s] പാലം കടുക്കുവോളം നാരായണ പാലം കടന്നാലൊ കൂരായണ
```

Note: The audio file 00b38e80-80b8-4f70-babf-566e848879fc.webm is from the Malayalam Speech Corpus and is stored along with the model weights.
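
Because the clip above is Malayalam but automatic detection picked Tamil with low confidence, it can help to pin the language explicitly. This is a suggestion rather than part of the original example; `language` is a standard argument of faster-whisper's `transcribe`:

```python
# Force Malayalam decoding instead of relying on automatic language detection.
segments, info = model.transcribe(
    "00b38e80-80b8-4f70-babf-566e848879fc.webm",
    beam_size=5,
    language="ml",
)
```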

## Conversion Details

This conversion was made possible by the wonderful CTranslate2 library, leveraging the Transformers converter for OpenAI Whisper. The original model was converted with the following command:

```bash
ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-fp16 \
    --quantization float16
```
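
The same converter can produce other precisions if needed. A hypothetical example, not an artifact published in this repository, with a made-up output directory name:

```bash
# Hypothetical: build an int8/float16 mixed-precision variant instead of pure float16.
ct2-transformers-converter --model thennal/whisper-medium-ml \
    --output_dir vegam-whisper-medium-ml-int8-fp16 \
    --quantization int8_float16
```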

## Many Thanks to

- Creators of CTranslate2 and faster-whisper
- Thennal D K
- Santhosh Thottingal