---
tags:
- generated_from_trainer
datasets:
- evanarlian/common_voice_11_0_id_filtered
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-164m-id
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: evanarlian/common_voice_11_0_id_filtered
      type: evanarlian/common_voice_11_0_id_filtered
    metrics:
    - name: Wer
      type: wer
      value: 0.2990499031454663
---

# wav2vec2-xls-r-164m-id

This model is a fine-tuned version of [evanarlian/wav2vec2-xls-r-164m-id](https://huggingface.co/evanarlian/wav2vec2-xls-r-164m-id) on the evanarlian/common_voice_11_0_id_filtered dataset. It achieves the following results on the evaluation set:

- Loss: 0.3510
- Wer: 0.2990
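
As a usage illustration (not part of the original card), the checkpoint can be loaded through the standard `transformers` automatic-speech-recognition pipeline. The audio path below is a placeholder, and 16 kHz mono input is assumed, as is usual for wav2vec2-style models:

```python
from transformers import pipeline

# Minimal inference sketch: load the fine-tuned checkpoint by its repo id.
asr = pipeline(
    "automatic-speech-recognition",
    model="evanarlian/wav2vec2-xls-r-164m-id",
)

# "sample.wav" is a placeholder path to a 16 kHz Indonesian speech clip.
print(asr("sample.wav")["text"])
```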

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 50.0
- mixed_precision_training: Native AMP
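
For readers who want to reproduce this configuration, the list above maps onto Hugging Face `TrainingArguments` roughly as sketched below. The output directory is a placeholder, the rest of the `Trainer` setup (model, datasets, data collator) is omitted, and Adam with betas=(0.9, 0.999) and epsilon=1e-08 corresponds to the Trainer's default optimizer settings:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in this card.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-164m-id",  # placeholder output path
    learning_rate=5e-05,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=50.0,
    fp16=True,  # "Native AMP" mixed precision
)
```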

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.089         | 1.84  | 2000  | 0.3205          | 0.3168 |
| 0.0882        | 3.67  | 4000  | 0.3243          | 0.3203 |
| 0.0868        | 5.51  | 6000  | 0.3272          | 0.3183 |
| 0.0926        | 7.35  | 8000  | 0.3365          | 0.3209 |
| 0.0943        | 9.18  | 10000 | 0.3400          | 0.3221 |
| 0.0979        | 11.02 | 12000 | 0.3269          | 0.3192 |
| 0.09          | 12.86 | 14000 | 0.3384          | 0.3164 |
| 0.0877        | 14.69 | 16000 | 0.3284          | 0.3183 |
| 0.0808        | 16.53 | 18000 | 0.3366          | 0.3189 |
| 0.0835        | 18.37 | 20000 | 0.3306          | 0.3156 |
| 0.08          | 20.2  | 22000 | 0.3384          | 0.3133 |
| 0.0806        | 22.04 | 24000 | 0.3307          | 0.3109 |
| 0.0749        | 23.88 | 26000 | 0.3493          | 0.3118 |
| 0.073         | 25.71 | 28000 | 0.3479          | 0.3088 |
| 0.0754        | 27.55 | 30000 | 0.3482          | 0.3109 |
| 0.0697        | 29.38 | 32000 | 0.3515          | 0.3090 |
| 0.07          | 31.22 | 34000 | 0.3532          | 0.3101 |
| 0.0672        | 33.06 | 36000 | 0.3668          | 0.3086 |
| 0.0713        | 34.89 | 38000 | 0.3560          | 0.3048 |
| 0.0637        | 36.73 | 40000 | 0.3522          | 0.3028 |
| 0.0695        | 38.57 | 42000 | 0.3407          | 0.3014 |
| 0.0657        | 40.4  | 44000 | 0.3456          | 0.3025 |
| 0.0598        | 42.24 | 46000 | 0.3498          | 0.3013 |
| 0.059         | 44.08 | 48000 | 0.3563          | 0.3012 |
| 0.0645        | 45.91 | 50000 | 0.3514          | 0.3002 |
| 0.0595        | 47.75 | 52000 | 0.3545          | 0.3000 |
| 0.064         | 49.59 | 54000 | 0.3510          | 0.2990 |
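
The Wer column above is word error rate on the validation set. As a hedged illustration (not taken from the original training code), a WER of this kind can be computed with the `evaluate` library; the prediction and reference strings below are placeholders:

```python
import evaluate

# Word error rate between model transcripts and reference transcripts.
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["contoh transkrip model"],      # placeholder hypothesis
    references=["contoh transkrip referensi"],   # placeholder reference
)
print(wer)
```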

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2