---
license: apache-2.0
language:
- sw
tags:
- hf-asr-leaderboard
- generated-from-trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
---

## Model

* Name: Whisper Large-v2 Swahili
* Description: Fine-tuned Whisper weights for the speech-to-text task.
* Dataset:
  - Train and validation splits of the Swahili subset of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).
  - Train, validation and test splits of the Swahili subset of [Google Fleurs](https://huggingface.co/datasets/google/fleurs/).
* Performance: **19.887087 WER**

## Weights

* Date of release: 12.09.2022
* Size:
* License: MIT

## Usage

To use these weights with Hugging Face's `transformers` library, you can do the following:

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-large-v2-sw")
```
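
For end-to-end transcription, the model is typically paired with the matching `WhisperProcessor` for feature extraction and token decoding. The following is a minimal sketch, assuming `librosa` is installed and that `sample.wav` is a placeholder path for your own recording:

```python
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "hedronstone/whisper-large-v2-sw"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Load and resample the audio to 16 kHz mono, as expected by Whisper.
# "sample.wav" is a placeholder; substitute your own file.
audio, _ = librosa.load("sample.wav", sr=16000, mono=True)

# Convert the waveform to log-Mel input features.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Force Swahili transcription so the model neither translates nor mis-detects the language.
forced_ids = processor.get_decoder_prompt_ids(language="swahili", task="transcribe")
generated_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)

text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```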