wav2vec2-large-xls-r-300m-hsb

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Upper Sorbian (hsb) subset of the common_voice_11_0 dataset.
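
A minimal inference sketch, assuming the checkpoint exposes the standard Wav2Vec2 CTC interfaces from transformers and that the input audio is 16 kHz mono (the file path and the use of librosa for loading are illustrative assumptions, not part of the original card):

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "TiMauzi/wav2vec2-large-xls-r-300m-hsb"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder path; XLS-R models expect 16 kHz mono audio.
waveform, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame,
# then collapse repeats and blanks in batch_decode.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```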

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0006
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 1
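
As a hedged sketch, these values map onto transformers.TrainingArguments roughly as below; the output directory is an illustrative assumption, and Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default optimizer, so it needs no explicit argument:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hsb",  # assumed, not stated in the card
    learning_rate=6e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=1,
)
```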

Training results

| Step | Training Loss | Validation Loss | WER      |
|-----:|--------------:|----------------:|---------:|
|  200 |      4.408800 |        2.981909 | 0.979601 |
|  400 |      1.089200 |        0.900384 | 0.821805 |
|  600 |      0.166900 |        0.946962 | 0.755920 |
|  800 |      0.091500 |        0.877633 | 0.682767 |
| 1000 |      0.064100 |        0.883517 | 0.657913 |
| 1200 |      0.053000 |        0.865288 | 0.630715 |
| 1400 |      0.037800 |        0.867455 | 0.615475 |
| 1600 |      0.028900 |        0.834865 | 0.590621 |
| 1800 |      0.023800 |        0.845873 | 0.589215 |
| 2000 |      0.019600 |        0.830817 | 0.561313 |
| 2200 |      0.016300 |        0.836810 | 0.560610 |
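The WER column above can be reproduced with the `evaluate` library; the sketch below is illustrative, and the `predictions` and `references` lists are hypothetical placeholders rather than data from the card:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["dobry dźeń"]       # model transcriptions (illustrative)
references = ["dobry dźeń wšěm"]   # ground-truth transcripts (illustrative)

# WER = (substitutions + insertions + deletions) / reference word count
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.3f}")  # e.g. the final checkpoint above reaches ~0.561
```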

Framework versions

  • Transformers 4.32.1
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3