model_phoneme_onSet2

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how metrics in this layout are typically computed follows the list):

  • Loss: 0.1464
  • 0 Precision: 1.0
  • 0 Recall: 0.9677
  • 0 F1-score: 0.9836
  • 0 Support: 31
  • 1 Precision: 0.9167
  • 1 Recall: 1.0
  • 1 F1-score: 0.9565
  • 1 Support: 22
  • 2 Precision: 1.0
  • 2 Recall: 0.9333
  • 2 F1-score: 0.9655
  • 2 Support: 30
  • 3 Precision: 0.9333
  • 3 Recall: 1.0
  • 3 F1-score: 0.9655
  • 3 Support: 14
  • Accuracy: 0.9691
  • Macro avg Precision: 0.9625
  • Macro avg Recall: 0.9753
  • Macro avg F1-score: 0.9678
  • Macro avg Support: 97
  • Weighted avg Precision: 0.9715
  • Weighted avg Recall: 0.9691
  • Weighted avg F1-score: 0.9693
  • Weighted avg Support: 97
  • Wer (word error rate): 0.1380
  • Mtrix (confusion matrix; the first row lists the class labels, and each subsequent row gives the true-class label followed by counts per predicted class): [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 28, 1], [3, 0, 0, 0, 14]]
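
The per-class figures above follow the layout of a scikit-learn classification report, and the Mtrix entry has the shape of its confusion matrix. As a minimal sketch (the class labels 0–3 come from this card; the predictions and references below are invented purely for illustration), such numbers are typically produced like this:

```python
# Illustrative only: per-class metrics and a confusion matrix in the
# layout shown above, computed with scikit-learn.
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical references/predictions for the four classes 0-3.
y_true = [0, 0, 1, 2, 3, 2]
y_pred = [0, 1, 1, 2, 3, 2]

# Per-class precision/recall/F1/support plus macro and weighted averages,
# matching the fields reported in the evaluation results.
print(classification_report(y_true, y_pred, labels=[0, 1, 2, 3]))

# Rows are true classes, columns are predicted classes -- the same layout
# as the Mtrix entry above, minus its leading label row/column.
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3]))
```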

Model description

More information needed

Intended uses & limitations

More information needed
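
No usage details are documented. Purely as an illustrative sketch: fine-tunes of wav2vec2-large-xlsr-53 are usually CTC models, and such a checkpoint would be loaded roughly as follows. The hub id and the CTC head are assumptions, not confirmed by this card:

```python
# Assumption-heavy sketch: the architecture and repository id below are
# guesses based on the base model, not stated in this card.
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "model_phoneme_onSet2"  # hypothetical id; replace with the real path
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# The processor expects a 16 kHz mono waveform as a 1-D float array.
speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # (batch, time, vocab)
predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids))
```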

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto code follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 70
  • mixed_precision_training: Native AMP
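
As a rough sketch only, these values map onto `transformers.TrainingArguments` (Transformers 4.25-era API) as shown below; `output_dir` is a placeholder and the surrounding `Trainer` wiring is omitted:

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./model_phoneme_onSet2",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,        # 8 * 2 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=70,
    fp16=True,                            # "Native AMP" mixed precision
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit argument here.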

Training results

| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3.9429 | 4.16 | 100 | 3.3748 | 0.0 | 0.0 | 0.0 | 31 | 0.0 | 0.0 | 0.0 | 22 | 0.0 | 0.0 | 0.0 | 30 | 0.1011 | 0.6429 | 0.1748 | 14 | 0.0928 | 0.0253 | 0.1607 | 0.0437 | 97 | 0.0146 | 0.0928 | 0.0252 | 97 | 0.9980 | [[0, 1, 2, 3], [0, 0, 0, 0, 31], [1, 0, 0, 0, 22], [2, 3, 0, 0, 27], [3, 5, 0, 0, 9]] |
| 3.3504 | 8.33 | 200 | 3.1724 | 0.0 | 0.0 | 0.0 | 31 | 0.0 | 0.0 | 0.0 | 22 | 0.0 | 0.0 | 0.0 | 30 | 0.1011 | 0.6429 | 0.1748 | 14 | 0.0928 | 0.0253 | 0.1607 | 0.0437 | 97 | 0.0146 | 0.0928 | 0.0252 | 97 | 0.9980 | [[0, 1, 2, 3], [0, 0, 0, 0, 31], [1, 0, 0, 0, 22], [2, 3, 0, 0, 27], [3, 5, 0, 0, 9]] |
| 3.155 | 12.49 | 300 | 3.1448 | 0.0 | 0.0 | 0.0 | 31 | 0.0 | 0.0 | 0.0 | 22 | 0.0 | 0.0 | 0.0 | 30 | 0.1011 | 0.6429 | 0.1748 | 14 | 0.0928 | 0.0253 | 0.1607 | 0.0437 | 97 | 0.0146 | 0.0928 | 0.0252 | 97 | 0.9980 | [[0, 1, 2, 3], [0, 0, 0, 0, 31], [1, 0, 0, 0, 22], [2, 3, 0, 0, 27], [3, 5, 0, 0, 9]] |
| 3.0282 | 16.65 | 400 | 2.9990 | 0.0 | 0.0 | 0.0 | 31 | 0.2268 | 1.0 | 0.3697 | 22 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 14 | 0.2268 | 0.0567 | 0.25 | 0.0924 | 97 | 0.0514 | 0.2268 | 0.0839 | 97 | 1.0 | [[0, 1, 2, 3], [0, 0, 31, 0, 0], [1, 0, 22, 0, 0], [2, 0, 30, 0, 0], [3, 0, 14, 0, 0]] |
| 2.744 | 20.82 | 500 | 2.6658 | 0.8378 | 1.0 | 0.9118 | 31 | 0.3889 | 0.6364 | 0.4828 | 22 | 0.4583 | 0.3667 | 0.4074 | 30 | 0.0 | 0.0 | 0.0 | 14 | 0.5773 | 0.4213 | 0.5008 | 0.4505 | 97 | 0.4977 | 0.5773 | 0.5269 | 97 | 1.0 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 6, 14, 2, 0], [2, 0, 19, 11, 0], [3, 0, 3, 11, 0]] |
| 2.2503 | 24.98 | 600 | 2.0915 | 0.9677 | 0.9677 | 0.9677 | 31 | 0.8571 | 0.8182 | 0.8372 | 22 | 0.875 | 0.9333 | 0.9032 | 30 | 0.9231 | 0.8571 | 0.8889 | 14 | 0.9072 | 0.9057 | 0.8941 | 0.8993 | 97 | 0.9075 | 0.9072 | 0.9068 | 97 | 0.9609 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 1, 18, 2, 1], [2, 0, 2, 28, 0], [3, 0, 0, 2, 12]] |
| 1.8687 | 29.16 | 700 | 1.7109 | 1.0 | 0.9355 | 0.9667 | 31 | 0.7857 | 1.0 | 0.88 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 1.0 | 0.8571 | 0.9231 | 14 | 0.9381 | 0.9464 | 0.9315 | 0.9338 | 97 | 0.9514 | 0.9381 | 0.9404 | 97 | 0.9373 | [[0, 1, 2, 3], [0, 29, 2, 0, 0], [1, 0, 22, 0, 0], [2, 0, 2, 28, 0], [3, 0, 2, 0, 12]] |
| 1.4444 | 33.33 | 800 | 1.3295 | 1.0 | 0.9677 | 0.9836 | 31 | 0.88 | 1.0 | 0.9362 | 22 | 1.0 | 0.9667 | 0.9831 | 30 | 1.0 | 0.9286 | 0.9630 | 14 | 0.9691 | 0.97 | 0.9657 | 0.9664 | 97 | 0.9728 | 0.9691 | 0.9697 | 97 | 0.9142 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 29, 0], [3, 0, 1, 0, 13]] |
| 0.95 | 37.49 | 900 | 0.8782 | 1.0 | 1.0 | 1.0 | 31 | 0.9167 | 1.0 | 0.9565 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 0.9286 | 0.9286 | 0.9286 | 14 | 0.9691 | 0.9613 | 0.9655 | 0.9627 | 97 | 0.9708 | 0.9691 | 0.9692 | 97 | 0.8545 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 28, 1], [3, 0, 1, 0, 13]] |
| 0.5303 | 41.65 | 1000 | 0.4750 | 1.0 | 1.0 | 1.0 | 31 | 0.9167 | 1.0 | 0.9565 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 1.0 | 1.0 | 1.0 | 14 | 0.9794 | 0.9792 | 0.9833 | 0.9805 | 97 | 0.9811 | 0.9794 | 0.9795 | 97 | 0.6026 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 22, 0, 0], [2, 0, 2, 28, 0], [3, 0, 0, 0, 14]] |
| 0.3054 | 45.82 | 1100 | 0.2919 | 0.9688 | 1.0 | 0.9841 | 31 | 0.9130 | 0.9545 | 0.9333 | 22 | 1.0 | 0.9 | 0.9474 | 30 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9588 | 0.9538 | 0.9636 | 0.9576 | 97 | 0.9607 | 0.9588 | 0.9586 | 97 | 0.2373 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 1, 21, 0, 0], [2, 0, 2, 27, 1], [3, 0, 0, 0, 14]] |
| 0.1609 | 49.98 | 1200 | 0.1727 | 1.0 | 1.0 | 1.0 | 31 | 0.88 | 1.0 | 0.9362 | 22 | 1.0 | 0.9 | 0.9474 | 30 | 0.9286 | 0.9286 | 0.9286 | 14 | 0.9588 | 0.9521 | 0.9571 | 0.9530 | 97 | 0.9625 | 0.9588 | 0.9589 | 97 | 0.1646 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 22, 0, 0], [2, 0, 2, 27, 1], [3, 0, 1, 0, 13]] |
| 0.1204 | 54.16 | 1300 | 0.1430 | 1.0 | 1.0 | 1.0 | 31 | 0.9167 | 1.0 | 0.9565 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 0.9286 | 0.9286 | 0.9286 | 14 | 0.9691 | 0.9613 | 0.9655 | 0.9627 | 97 | 0.9708 | 0.9691 | 0.9692 | 97 | 0.1370 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 28, 1], [3, 0, 1, 0, 13]] |
| 0.0924 | 58.33 | 1400 | 0.1494 | 0.9677 | 0.9677 | 0.9677 | 31 | 0.9130 | 0.9545 | 0.9333 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9588 | 0.9535 | 0.9639 | 0.9580 | 97 | 0.9603 | 0.9588 | 0.9589 | 97 | 0.1581 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 1, 21, 0, 0], [2, 0, 1, 28, 1], [3, 0, 0, 0, 14]] |
| 0.0596 | 62.49 | 1500 | 0.1484 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9167 | 1.0 | 0.9565 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9691 | 0.9625 | 0.9753 | 0.9678 | 97 | 0.9715 | 0.9691 | 0.9693 | 97 | 0.1370 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 28, 1], [3, 0, 0, 0, 14]] |
| 0.0592 | 66.65 | 1600 | 0.1464 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9167 | 1.0 | 0.9565 | 22 | 1.0 | 0.9333 | 0.9655 | 30 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9691 | 0.9625 | 0.9753 | 0.9678 | 97 | 0.9715 | 0.9691 | 0.9693 | 97 | 0.1380 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 22, 0, 0], [2, 0, 1, 28, 1], [3, 0, 0, 0, 14]] |
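
The Wer column above is a word error rate over the decoded token sequences. This card does not say which implementation was used; assuming the Hugging Face `evaluate` library, the calculation looks roughly like this (the strings below are invented, not from this model's evaluation set):

```python
# Illustrative WER computation with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["0 1 2 3", "2 2 1"]  # hypothetical decoded outputs
references  = ["0 1 2 3", "2 3 1"]  # hypothetical ground truth
print(wer_metric.compute(predictions=predictions, references=references))
```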

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.13.0+cu116
  • Datasets 2.8.0
  • Tokenizers 0.13.2