---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
- yue
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper large-v3-turbo model for CTranslate2

This repository contains the conversion of [Whisper large-v3-turbo](https://github.com/openai/whisper) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format, so it can be used with projects such as [faster-whisper](https://github.com/SYSTRAN/faster-whisper). A sketch of how such a conversion can be reproduced is included at the end of this card.

## Example

The snippet below downloads the converted model from the Hugging Face Hub and transcribes an audio file on CPU with `int8` quantization (a GPU variant is sketched at the end of this card):

```python
from huggingface_hub import snapshot_download
from faster_whisper import WhisperModel

repo_id = "jootanehorror/faster-whisper-large-v3-turbo-ct2"
local_dir = "faster-whisper-large-v3-turbo-ct2"

# Download the converted model files from the Hugging Face Hub.
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="model")

# Load the model on CPU with int8 quantization.
model = WhisperModel(local_dir, device="cpu", compute_type="int8")

# Transcribe an audio file; segments are generated lazily as you iterate over them.
segments, info = model.transcribe("sample.mp3")

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

## More information

**For more information about the model, see its [official GitHub page](https://github.com/openai/whisper).**
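## Running on GPU

The example above runs on CPU. On a machine with a CUDA-capable GPU, `float16` computation is usually much faster. The following is a minimal sketch, assuming the model has already been downloaded to the same local directory as in the example and that the required GPU libraries (cuBLAS, cuDNN) are installed:

```python
from faster_whisper import WhisperModel

# Assumes the model was already downloaded to this directory (see the example above)
# and that a CUDA-capable GPU with cuBLAS/cuDNN is available.
model = WhisperModel("faster-whisper-large-v3-turbo-ct2", device="cuda", compute_type="float16")

# beam_size and language are optional transcribe() parameters; "en" is only an illustration.
segments, info = model.transcribe("sample.mp3", beam_size=5, language="en")

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

On CPU, `int8` (as in the main example) keeps memory usage low; on GPU, `float16` or `int8_float16` are common choices for `compute_type`.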
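## Conversion sketch

The exact command used to produce this repository is not documented here, so the following is only a sketch of how a Whisper checkpoint is typically converted to the CTranslate2 format with CTranslate2's Transformers converter. The source checkpoint name, the copied files, and the `float16` quantization are assumptions, not a record of how this repository was built.

```python
from ctranslate2.converters import TransformersConverter

# Assumed source checkpoint and options; the actual settings used for this
# repository are not documented in the card.
converter = TransformersConverter(
    "openai/whisper-large-v3-turbo",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("faster-whisper-large-v3-turbo-ct2", quantization="float16")
```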