
vakyansh-wav2vec2-punjabi-pam-10-audio-abuse-feature

This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7070
  • Accuracy: 0.7112
  • Macro F1-score: 0.7112
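
The card does not include usage code. The sketch below is a minimal, hedged example of loading the checkpoint with the Transformers Auto classes for audio classification; it assumes the checkpoint carries a standard wav2vec2 sequence-classification head and expects 16 kHz mono audio, and the file path `clip.wav` is a placeholder.

```python
# Minimal usage sketch (not from the original card). Assumes the checkpoint
# exposes a standard audio-classification head; adjust if the model is instead
# used purely as a feature extractor.
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "callmesan/vakyansh-wav2vec2-punjabi-pam-10-audio-abuse-feature"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# "clip.wav" is a placeholder path; wav2vec2-style models expect 16 kHz mono audio.
speech, _ = librosa.load("clip.wav", sr=16_000, mono=True)

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```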

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
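
The training script itself is not published. As a rough guide only, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and the evaluation interval is inferred from the 10-step spacing in the results table rather than stated in the card.

```python
# Hedged sketch: how the listed hyperparameters could be expressed as
# TrainingArguments. This is a reconstruction, not the author's script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vakyansh-wav2vec2-punjabi-pam-10-audio-abuse-feature",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = total train batch size of 64
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # assumption, inferred from the results table
    eval_steps=10,
)
```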

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.7326 | 0.77 | 10 | 6.7224 | 0.0 | 0.0 |
| 6.682 | 1.54 | 20 | 6.5714 | 0.2888 | 0.0298 |
| 6.523 | 2.31 | 30 | 6.3268 | 0.4877 | 0.3619 |
| 6.247 | 3.08 | 40 | 6.0039 | 0.4768 | 0.3229 |
| 6.0644 | 3.85 | 50 | 5.7134 | 0.4796 | 0.3241 |
| 5.7866 | 4.62 | 60 | 5.4332 | 0.4796 | 0.3241 |
| 5.5229 | 5.38 | 70 | 5.2026 | 0.4796 | 0.3241 |
| 5.2712 | 6.15 | 80 | 4.9856 | 0.4796 | 0.3241 |
| 5.1268 | 6.92 | 90 | 4.7918 | 0.4796 | 0.3241 |
| 4.9768 | 7.69 | 100 | 4.5999 | 0.4796 | 0.3241 |
| 4.7137 | 8.46 | 110 | 4.3958 | 0.4796 | 0.3241 |
| 4.5863 | 9.23 | 120 | 4.1988 | 0.4796 | 0.3241 |
| 4.3386 | 10.0 | 130 | 3.9983 | 0.4796 | 0.3241 |
| 4.1936 | 10.77 | 140 | 3.7938 | 0.4796 | 0.3241 |
| 3.9752 | 11.54 | 150 | 3.5906 | 0.4796 | 0.3241 |
| 3.9035 | 12.31 | 160 | 3.3854 | 0.4796 | 0.3241 |
| 3.652 | 13.08 | 170 | 3.1907 | 0.4796 | 0.3241 |
| 3.3045 | 13.85 | 180 | 2.9781 | 0.4796 | 0.3241 |
| 3.135 | 14.62 | 190 | 2.7764 | 0.4796 | 0.3241 |
| 2.9589 | 15.38 | 200 | 2.5827 | 0.4796 | 0.3241 |
| 2.7405 | 16.15 | 210 | 2.3901 | 0.4796 | 0.3241 |
| 2.5482 | 16.92 | 220 | 2.2042 | 0.4796 | 0.3241 |
| 2.4126 | 17.69 | 230 | 2.0318 | 0.4796 | 0.3241 |
| 2.2721 | 18.46 | 240 | 1.8672 | 0.4796 | 0.3241 |
| 2.0507 | 19.23 | 250 | 1.7156 | 0.4796 | 0.3241 |
| 1.8895 | 20.0 | 260 | 1.5721 | 0.4796 | 0.3241 |
| 1.7304 | 20.77 | 270 | 1.4453 | 0.4796 | 0.3241 |
| 1.5756 | 21.54 | 280 | 1.3330 | 0.4796 | 0.3241 |
| 1.4961 | 22.31 | 290 | 1.2238 | 0.6594 | 0.6321 |
| 1.4065 | 23.08 | 300 | 1.1468 | 0.6621 | 0.6356 |
| 1.4168 | 23.85 | 310 | 1.0636 | 0.6839 | 0.6632 |
| 1.1788 | 24.62 | 320 | 0.9818 | 0.7411 | 0.7325 |
| 1.06 | 25.38 | 330 | 0.9203 | 0.7466 | 0.7438 |
| 1.0021 | 26.15 | 340 | 0.8806 | 0.7629 | 0.7629 |
| 1.0249 | 26.92 | 350 | 0.8698 | 0.6894 | 0.6690 |
| 0.8521 | 27.69 | 360 | 0.7970 | 0.7602 | 0.7562 |
| 0.8504 | 28.46 | 370 | 0.7724 | 0.7602 | 0.7602 |
| 0.7939 | 29.23 | 380 | 0.7440 | 0.7466 | 0.7461 |
| 0.7805 | 30.0 | 390 | 0.7283 | 0.7520 | 0.7511 |
| 0.6974 | 30.77 | 400 | 0.7311 | 0.7384 | 0.7377 |
| 0.7533 | 31.54 | 410 | 0.7270 | 0.7112 | 0.6979 |
| 0.7528 | 32.31 | 420 | 0.6796 | 0.7357 | 0.7298 |
| 0.6679 | 33.08 | 430 | 0.6834 | 0.7357 | 0.7357 |
| 0.6732 | 33.85 | 440 | 0.6851 | 0.7248 | 0.7248 |
| 0.6001 | 34.62 | 450 | 0.6585 | 0.7548 | 0.7530 |
| 0.6731 | 35.38 | 460 | 0.6727 | 0.7411 | 0.7380 |
| 0.5601 | 36.15 | 470 | 0.6688 | 0.7330 | 0.7321 |
| 0.5488 | 36.92 | 480 | 0.6879 | 0.7439 | 0.7420 |
| 0.5892 | 37.69 | 490 | 0.6809 | 0.7166 | 0.7148 |
| 0.5651 | 38.46 | 500 | 0.6877 | 0.7193 | 0.7181 |
| 0.5595 | 39.23 | 510 | 0.6874 | 0.7221 | 0.7218 |
| 0.4983 | 40.0 | 520 | 0.6789 | 0.7166 | 0.7166 |
| 0.4869 | 40.77 | 530 | 0.6912 | 0.7221 | 0.7221 |
| 0.5352 | 41.54 | 540 | 0.7038 | 0.7003 | 0.6984 |
| 0.444 | 42.31 | 550 | 0.6778 | 0.7330 | 0.7321 |
| 0.5637 | 43.08 | 560 | 0.6873 | 0.6975 | 0.6973 |
| 0.5065 | 43.85 | 570 | 0.6736 | 0.7411 | 0.7405 |
| 0.4921 | 44.62 | 580 | 0.6859 | 0.7384 | 0.7381 |
| 0.4426 | 45.38 | 590 | 0.6995 | 0.7221 | 0.7221 |
| 0.4679 | 46.15 | 600 | 0.6967 | 0.7275 | 0.7271 |
| 0.6099 | 46.92 | 610 | 0.7145 | 0.6948 | 0.6947 |
| 0.4779 | 47.69 | 620 | 0.7026 | 0.7139 | 0.7139 |
| 0.4743 | 48.46 | 630 | 0.7036 | 0.7139 | 0.7137 |
| 0.4687 | 49.23 | 640 | 0.7060 | 0.7112 | 0.7112 |
| 0.459 | 50.0 | 650 | 0.7070 | 0.7112 | 0.7112 |
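
The metric code used for this run is likewise not included in the card. A `compute_metrics` function along the following lines (a sketch, assuming scikit-learn) would produce the accuracy and macro F1-score columns above when passed to the Trainer.

```python
# Hedged sketch of a Trainer-compatible compute_metrics function yielding
# accuracy and macro F1; not the author's original code.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_f1": f1_score(labels, preds, average="macro"),
    }
```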

Framework versions

  • Transformers 4.33.0
  • Pytorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.13.3