wav2vec2-base-finetuned-mednames

This model is a fine-tuned version of facebook/wav2vec2-base on the audiofolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1443
  • Accuracy: 1.0
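
Since the card reports an accuracy metric, the checkpoint appears to carry an audio-classification head on top of wav2vec2-base. A minimal inference sketch, assuming the model is published on the Hub under santhosh-4000/wav2vec2-base-finetuned-mednames and that the input is a 16 kHz mono recording (the file path is a placeholder):

```python
# Minimal sketch: audio classification with the transformers pipeline.
# Assumes the Hub id below is correct and the audio is a 16 kHz mono recording.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="santhosh-4000/wav2vec2-base-finetuned-mednames",
)

# "sample.wav" is a placeholder path to a recording of a spoken name.
predictions = classifier("sample.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```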

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map to TrainingArguments follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 5
  • eval_batch_size: 5
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 20
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
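
For reference, a hedged sketch of how these values could be expressed as transformers.TrainingArguments for the Trainer; the output directory and evaluation strategy are assumptions, and the total train batch size of 20 falls out of batch size 5 × gradient accumulation 4:

```python
# Sketch only: the hyperparameters listed above expressed as TrainingArguments.
# Model instantiation, dataset wiring, and the output directory are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-mednames",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 5 * 4 = 20
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    evaluation_strategy="epoch",     # assumption: per-epoch eval, matching the results table
)
```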

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2789        | 1.0   | 36   | 2.2639          | 0.2222   |
| 2.1925        | 2.0   | 72   | 2.1404          | 0.3833   |
| 1.8763        | 3.0   | 108  | 1.7607          | 0.5556   |
| 1.5035        | 4.0   | 144  | 1.3599          | 0.7667   |
| 1.0598        | 5.0   | 180  | 0.9607          | 0.8778   |
| 0.8303        | 6.0   | 216  | 0.7327          | 0.8722   |
| 0.6418        | 7.0   | 252  | 0.5544          | 0.8778   |
| 0.4481        | 8.0   | 288  | 0.4377          | 0.9722   |
| 0.3438        | 9.0   | 324  | 0.2963          | 0.9833   |
| 0.1831        | 10.0  | 360  | 0.1443          | 1.0      |
| 0.1129        | 11.0  | 396  | 0.0890          | 1.0      |
| 0.0877        | 12.0  | 432  | 0.0664          | 1.0      |
| 0.0657        | 13.0  | 468  | 0.0509          | 1.0      |
| 0.0528        | 14.0  | 504  | 0.0406          | 1.0      |
| 0.0428        | 15.0  | 540  | 0.0388          | 1.0      |
| 0.0369        | 16.0  | 576  | 0.0526          | 0.9944   |
| 0.0858        | 17.0  | 612  | 0.0539          | 0.9944   |
| 0.0294        | 18.0  | 648  | 0.0231          | 1.0      |
| 0.0272        | 19.0  | 684  | 0.0204          | 1.0      |
| 0.0247        | 20.0  | 720  | 0.0188          | 1.0      |
| 0.0227        | 21.0  | 756  | 0.0172          | 1.0      |
| 0.0211        | 22.0  | 792  | 0.0161          | 1.0      |
| 0.0195        | 23.0  | 828  | 0.0152          | 1.0      |
| 0.0684        | 24.0  | 864  | 0.0144          | 1.0      |
| 0.0175        | 25.0  | 900  | 0.0139          | 1.0      |
| 0.0756        | 26.0  | 936  | 0.0134          | 1.0      |
| 0.017         | 27.0  | 972  | 0.0131          | 1.0      |
| 0.0163        | 28.0  | 1008 | 0.0128          | 1.0      |
| 0.0158        | 29.0  | 1044 | 0.0126          | 1.0      |
| 0.0159        | 30.0  | 1080 | 0.0126          | 1.0      |

Framework versions

  • Transformers 4.28.1
  • Pytorch 1.11.0+cu102
  • Datasets 2.11.0
  • Tokenizers 0.13.3