# wav2vec2-xls-r-300m-lm-hebrew
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m (fine-tuning dataset not specified), with an n-gram language model added to the decoder following [Boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram).
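For context, the n-gram boosting described in that post wraps the checkpoint's CTC tokenizer in a pyctcdecode decoder backed by a KenLM file. A minimal sketch of that wiring, assuming a fine-tuned acoustic checkpoint and a pre-built ARPA file (both paths below are placeholders, not files from this repository):

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM
from pyctcdecode import build_ctcdecoder

# hypothetical fine-tuned checkpoint, before the LM is attached
processor = Wav2Vec2Processor.from_pretrained("path/to/finetuned-checkpoint")

# labels must be sorted by token id so decoder columns line up with the CTC logits
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# attach a KenLM n-gram model; "5gram.arpa" is a placeholder path
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="5gram.arpa")

# bundle feature extractor, tokenizer, and LM decoder into one processor
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-xls-r-300m-lm-hebrew")
```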
## Usage
Check the companion package: https://github.com/imvladikon/wav2vec2-hebrew

or use it with 🤗 Transformers directly:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "imvladikon/wav2vec2-xls-r-300m-lm-hebrew"

# stream a single Hebrew test sample from the FLEURS dataset
sample_iter = iter(load_dataset("google/fleurs", "he_il", split="test", streaming=True))
sample = next(sample_iter)

# resample to the 16 kHz rate the model expects
resampled_audio = F.resample(
    torch.tensor(sample["audio"]["array"]),
    sample["audio"]["sampling_rate"],
    16_000,
).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

# the processor bundles an n-gram LM decoder, so decoding runs on the raw logits
transcription = processor.batch_decode(logits.numpy()).text
print(transcription)
```
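Alternatively, a sketch using the high-level pipeline API, which in recent transformers versions picks up the bundled LM decoder automatically (this reuses the 16 kHz `resampled_audio` array from the snippet above):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="imvladikon/wav2vec2-xls-r-300m-lm-hebrew")
print(asr(resampled_audio)["text"])
```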
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
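For reference, these settings map onto Hugging Face TrainingArguments roughly as below; this is a sketch only (output_dir is a placeholder, and the CTC model setup and data collator are omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-lm-hebrew",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # 64 x 2 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```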
### Training results

### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0