
vakyansh-wav2vec2-hindi-him-4200-audio-abuse-feature

This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 on an unspecified dataset. It achieves the following results on the evaluation set (a hedged usage sketch follows the results below):

  • Loss: 0.6541
  • Accuracy: 0.6938
  • Macro F1-score: 0.6938
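
As an illustration of how this checkpoint could be used locally, the sketch below loads it through the transformers audio-classification pipeline. This is a minimal sketch, not part of the original card: the repository id is taken from the model page, the checkpoint is assumed to expose a standard audio-classification head expecting 16 kHz mono audio (the wav2vec2 default), and the file name is a placeholder.

```python
# Minimal local-inference sketch (assumed: standard audio-classification head,
# 16 kHz mono input, repository id as published on the model page).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="callmesan/vakyansh-wav2vec2-hindi-him-4200-audio-abuse-feature",
)

# "sample.wav" is a placeholder; any 16 kHz mono Hindi clip would do.
for prediction in classifier("sample.wav"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```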

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
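
As a hedged illustration, these values map onto the Hugging Face TrainingArguments API roughly as follows; only the numeric values come from the list above, while the output directory and the evaluation strategy are assumptions.

```python
# Sketch only: the reported hyperparameters expressed as TrainingArguments.
# output_dir and evaluation_strategy are assumed, not stated in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vakyansh-wav2vec2-hindi-him-4200-audio-abuse-feature",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",    # assumed
)
```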

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|---------------|-------|------|-----------------|----------|----------------|
| 6.7199 | 0.77 | 10 | 6.7179 | 0.0 | 0.0 |
| 6.699 | 1.54 | 20 | 6.6606 | 0.1192 | 0.0134 |
| 6.6237 | 2.31 | 30 | 6.5410 | 0.4499 | 0.0852 |
| 6.4723 | 3.08 | 40 | 6.3277 | 0.5014 | 0.2226 |
| 6.2904 | 3.85 | 50 | 6.0526 | 0.5041 | 0.3351 |
| 6.0074 | 4.62 | 60 | 5.7219 | 0.5041 | 0.3351 |
| 5.7233 | 5.38 | 70 | 5.3914 | 0.5041 | 0.3351 |
| 5.4426 | 6.15 | 80 | 5.0902 | 0.5041 | 0.3351 |
| 5.1776 | 6.92 | 90 | 4.8584 | 0.5041 | 0.3351 |
| 5.0097 | 7.69 | 100 | 4.6328 | 0.5041 | 0.3351 |
| 4.7851 | 8.46 | 110 | 4.4098 | 0.5041 | 0.3351 |
| 4.6801 | 9.23 | 120 | 4.2064 | 0.5041 | 0.3351 |
| 4.4144 | 10.0 | 130 | 3.9980 | 0.5041 | 0.3351 |
| 4.1631 | 10.77 | 140 | 3.7914 | 0.5041 | 0.3351 |
| 4.0093 | 11.54 | 150 | 3.5793 | 0.5041 | 0.3351 |
| 3.7803 | 12.31 | 160 | 3.3708 | 0.5041 | 0.3351 |
| 3.645 | 13.08 | 170 | 3.1635 | 0.5041 | 0.3351 |
| 3.3334 | 13.85 | 180 | 2.9564 | 0.5041 | 0.3351 |
| 3.0942 | 14.62 | 190 | 2.7570 | 0.5041 | 0.3351 |
| 3.0844 | 15.38 | 200 | 2.5619 | 0.5041 | 0.3351 |
| 2.749 | 16.15 | 210 | 2.3733 | 0.5041 | 0.3351 |
| 2.5448 | 16.92 | 220 | 2.1896 | 0.5041 | 0.3351 |
| 2.3636 | 17.69 | 230 | 2.0160 | 0.5041 | 0.3351 |
| 2.1303 | 18.46 | 240 | 1.8579 | 0.5041 | 0.3351 |
| 2.1702 | 19.23 | 250 | 1.7136 | 0.5041 | 0.3351 |
| 1.8911 | 20.0 | 260 | 1.5809 | 0.5041 | 0.3351 |
| 1.7695 | 20.77 | 270 | 1.4511 | 0.5041 | 0.3351 |
| 1.5466 | 21.54 | 280 | 1.3433 | 0.5041 | 0.3351 |
| 1.4228 | 22.31 | 290 | 1.2479 | 0.5041 | 0.3351 |
| 1.4089 | 23.08 | 300 | 1.1632 | 0.5041 | 0.3351 |
| 1.2252 | 23.85 | 310 | 1.0900 | 0.5041 | 0.3351 |
| 1.2236 | 24.62 | 320 | 1.0268 | 0.5041 | 0.3351 |
| 1.0727 | 25.38 | 330 | 0.9742 | 0.5041 | 0.3351 |
| 1.0036 | 26.15 | 340 | 0.9273 | 0.5041 | 0.3351 |
| 0.95 | 26.92 | 350 | 0.8892 | 0.5041 | 0.3351 |
| 0.9304 | 27.69 | 360 | 0.8592 | 0.5041 | 0.3351 |
| 0.9426 | 28.46 | 370 | 0.8355 | 0.5041 | 0.3351 |
| 0.8967 | 29.23 | 380 | 0.8136 | 0.5041 | 0.3351 |
| 0.862 | 30.0 | 390 | 0.7942 | 0.5041 | 0.3351 |
| 0.8609 | 30.77 | 400 | 0.7799 | 0.5041 | 0.3351 |
| 0.8013 | 31.54 | 410 | 0.7667 | 0.5041 | 0.3351 |
| 0.7845 | 32.31 | 420 | 0.7572 | 0.5041 | 0.3351 |
| 0.77 | 33.08 | 430 | 0.7425 | 0.5041 | 0.3351 |
| 0.7952 | 33.85 | 440 | 0.7309 | 0.5041 | 0.3351 |
| 0.7433 | 34.62 | 450 | 0.7119 | 0.7046 | 0.7036 |
| 0.7791 | 35.38 | 460 | 0.7002 | 0.7019 | 0.6992 |
| 0.7409 | 36.15 | 470 | 0.6932 | 0.7073 | 0.7029 |
| 0.7233 | 36.92 | 480 | 0.6887 | 0.6911 | 0.6904 |
| 0.716 | 37.69 | 490 | 0.6820 | 0.6992 | 0.6986 |
| 0.6994 | 38.46 | 500 | 0.6821 | 0.6883 | 0.6881 |
| 0.6899 | 39.23 | 510 | 0.6701 | 0.6938 | 0.6930 |
| 0.7014 | 40.0 | 520 | 0.6683 | 0.6965 | 0.6965 |
| 0.7252 | 40.77 | 530 | 0.6632 | 0.7019 | 0.7011 |
| 0.6763 | 41.54 | 540 | 0.6650 | 0.6883 | 0.6882 |
| 0.7115 | 42.31 | 550 | 0.6615 | 0.6829 | 0.6829 |
| 0.6703 | 43.08 | 560 | 0.6628 | 0.6911 | 0.6908 |
| 0.6757 | 43.85 | 570 | 0.6626 | 0.6938 | 0.6935 |
| 0.6672 | 44.62 | 580 | 0.6545 | 0.7046 | 0.7043 |
| 0.6518 | 45.38 | 590 | 0.6559 | 0.6965 | 0.6965 |
| 0.6661 | 46.15 | 600 | 0.6610 | 0.6802 | 0.6800 |
| 0.679 | 46.92 | 610 | 0.6580 | 0.6856 | 0.6856 |
| 0.7079 | 47.69 | 620 | 0.6558 | 0.6911 | 0.6911 |
| 0.7103 | 48.46 | 630 | 0.6536 | 0.6992 | 0.6992 |
| 0.6613 | 49.23 | 640 | 0.6536 | 0.6965 | 0.6965 |
| 0.6326 | 50.0 | 650 | 0.6541 | 0.6938 | 0.6938 |
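
The Accuracy and Macro F1-score columns above are standard classification metrics. Below is a minimal sketch of a compute_metrics callback that would report them through the Trainer; the use of scikit-learn and the exact metric keys are assumptions, not taken from the card.

```python
# Sketch of a compute_metrics callback producing the two reported metrics.
# Assumes logits over discrete class labels and scikit-learn as a dependency.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "macro_f1": f1_score(labels, predictions, average="macro"),
    }
```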

Framework versions

  • Transformers 4.33.0
  • PyTorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.13.3