# wav2vec2_common_voice_accents_3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.0042
## Model description
More information needed
## Intended uses & limitations
More information needed
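Since usage is not documented, here is a minimal inference sketch. It assumes the checkpoint is a CTC speech-recognition model (the standard wav2vec2-xls-r fine-tuning objective) and uses a hypothetical repo id; substitute the actual one.

```python
# Minimal inference sketch. Assumptions: the model is a CTC ASR checkpoint and
# "username/wav2vec2_common_voice_accents_3" is a hypothetical repo id.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "username/wav2vec2_common_voice_accents_3"  # hypothetical
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLS-R models expect 16 kHz mono audio; resample if needed.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```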
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
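These settings map directly onto a `transformers` `TrainingArguments` configuration. The sketch below is a reconstruction under that assumption (the data preprocessing and training script themselves are not documented here); with a per-device train batch size of 48 on 8 GPUs, the reported total train batch size of 384 follows.

```python
# Sketch of the listed hyperparameters as a TrainingArguments config.
# Launch across 8 GPUs (e.g. via torch.distributed.launch) to reach the
# reported total train batch size of 48 * 8 = 384.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2_common_voice_accents_3",  # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_steps=500,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed precision
)
```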
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.584         | 1.27  | 400  | 1.1439          |
| 0.481         | 2.55  | 800  | 0.1986          |
| 0.2384        | 3.82  | 1200 | 0.1060          |
| 0.1872        | 5.1   | 1600 | 0.1016          |
| 0.158         | 6.37  | 2000 | 0.0942          |
| 0.1427        | 7.64  | 2400 | 0.0646          |
| 0.1306        | 8.92  | 2800 | 0.0612          |
| 0.1197        | 10.19 | 3200 | 0.0423          |
| 0.1129        | 11.46 | 3600 | 0.0381          |
| 0.1054        | 12.74 | 4000 | 0.0326          |
| 0.0964        | 14.01 | 4400 | 0.0293          |
| 0.0871        | 15.29 | 4800 | 0.0239          |
| 0.0816        | 16.56 | 5200 | 0.0168          |
| 0.0763        | 17.83 | 5600 | 0.0202          |
| 0.0704        | 19.11 | 6000 | 0.0224          |
| 0.0669        | 20.38 | 6400 | 0.0208          |
| 0.063         | 21.66 | 6800 | 0.0074          |
| 0.0585        | 22.93 | 7200 | 0.0126          |
| 0.0548        | 24.2  | 7600 | 0.0086          |
| 0.0512        | 25.48 | 8000 | 0.0080          |
| 0.0487        | 26.75 | 8400 | 0.0052          |
| 0.0455        | 28.03 | 8800 | 0.0062          |
| 0.0433        | 29.3  | 9200 | 0.0042          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6