---
language: lt
tags:
  - audio
  - automatic-speech-recognition
  - voxpopuli
datasets:
  - voxpopuli
license: cc-by-nc-4.0
inference: false
---

# Wav2Vec2-base-VoxPopuli

Facebook's Wav2Vec2 base model, pretrained only on Lithuanian (`lt`) using 14.4k hours of unlabeled data from the VoxPopuli corpus.

The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
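
Below is a minimal sketch of using the pretrained checkpoint as a speech feature extractor, including resampling the input to 16kHz. The repository id `facebook/wav2vec2-base-lt-voxpopuli` and the file name `example.wav` are assumptions for illustration; adjust them to your setup.

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_name = "facebook/wav2vec2-base-lt-voxpopuli"  # assumed repository id

# If the checkpoint does not ship a preprocessor config, instantiate
# Wav2Vec2FeatureExtractor() with its default arguments instead.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

# Load an audio file and resample it to 16kHz, the rate the model was pretrained on.
waveform, sample_rate = torchaudio.load("example.wav")  # hypothetical input file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = feature_extractor(
    waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    # Contextual speech representations, shape (batch, time, hidden_size)
    hidden_states = model(**inputs).last_hidden_state
```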

Note: This model does not have a tokenizer, as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled Lithuanian (`lt`) speech data. Check out this blog for a more detailed explanation of how to fine-tune the model.
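
The following is a hedged sketch of the fine-tuning setup described above: build a character-level CTC tokenizer from labeled Lithuanian transcriptions, then load the pretrained checkpoint into a CTC head. The vocabulary, file names, and repository id are illustrative assumptions, not part of this model card.

```python
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# Hypothetical character vocabulary derived from a labeled Lithuanian corpus;
# a real vocabulary would cover the full alphabet of the transcriptions.
vocab = {"<pad>": 0, "<unk>": 1, "|": 2, "a": 3, "b": 4}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="<unk>", pad_token="<pad>", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16_000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=False,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-lt-voxpopuli",  # assumed repository id
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
# From here, fine-tune the model on labeled (audio, transcription) pairs,
# e.g. with the Hugging Face Trainer and a CTC data collator.
```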

Paper: VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation

Authors: Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux from Facebook AI.

See the official website for more information.