
mms-zeroshot-300m-bem

This model is a fine-tuned version of mms-meta/mms-zeroshot-300m on the BEMBASPEECH - BEM dataset (Bemba; ISO 639-3: bem). It achieves the following results on the evaluation set:

  • Loss: 0.1787
  • WER (word error rate): 0.3583

Model description

This is a 316M-parameter (float32, safetensors) checkpoint of MMS Zero-shot 300m fine-tuned for Bemba automatic speech recognition. Further details are not yet documented.

Intended uses & limitations

More information needed
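Pending fuller documentation, below is a minimal inference sketch. It assumes the checkpoint exposes the standard Wav2Vec2 CTC interface used by the MMS family and 16 kHz mono input (the usual Wav2Vec2 convention); the repo id csikasote/mms-zeroshot-300m-bem comes from this page, and the silent audio array is a placeholder.

```python
# Minimal inference sketch (assumes the standard Wav2Vec2ForCTC + processor
# interface used by the MMS family; verify against the checkpoint config).
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "csikasote/mms-zeroshot-300m-bem"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Input must be a 1-D float array sampled at 16 kHz (Wav2Vec2 convention).
audio = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```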

Training and evaluation data

The model was fine-tuned and evaluated on the BEMBASPEECH - BEM dataset (see above); further details on splits and preprocessing are not yet documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10.0
  • mixed_precision_training: Native AMP
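For reference, here is a minimal sketch of how the values above map onto transformers.TrainingArguments. The output_dir is a hypothetical placeholder; everything else mirrors the list.

```python
# Sketch: the hyperparameters above expressed as transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms-zeroshot-300m-bem",  # hypothetical path, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```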

Training results

| Training Loss | Epoch  | Step  | Validation Loss | WER    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.6629        | 0.1778 | 500   | 0.3540          | 0.5421 |
| 0.6579        | 0.3556 | 1000  | 0.2588          | 0.4883 |
| 0.591         | 0.5334 | 1500  | 0.2552          | 0.4720 |
| 0.5467        | 0.7112 | 2000  | 0.2370          | 0.4542 |
| 0.5405        | 0.8890 | 2500  | 0.2376          | 0.4556 |
| 0.5027        | 1.0669 | 3000  | 0.2234          | 0.4307 |
| 0.5001        | 1.2447 | 3500  | 0.2176          | 0.4213 |
| 0.4962        | 1.4225 | 4000  | 0.2199          | 0.4205 |
| 0.486         | 1.6003 | 4500  | 0.2145          | 0.4167 |
| 0.47          | 1.7781 | 5000  | 0.2159          | 0.4169 |
| 0.4557        | 1.9559 | 5500  | 0.2099          | 0.4135 |
| 0.4514        | 2.1337 | 6000  | 0.2091          | 0.4100 |
| 0.4539        | 2.3115 | 6500  | 0.2038          | 0.4016 |
| 0.439         | 2.4893 | 7000  | 0.2041          | 0.4025 |
| 0.4378        | 2.6671 | 7500  | 0.2002          | 0.3916 |
| 0.4347        | 2.8450 | 8000  | 0.1961          | 0.3911 |
| 0.4278        | 3.0228 | 8500  | 0.1995          | 0.3923 |
| 0.4117        | 3.2006 | 9000  | 0.1959          | 0.3892 |
| 0.4149        | 3.3784 | 9500  | 0.1926          | 0.3859 |
| 0.4148        | 3.5562 | 10000 | 0.1958          | 0.3804 |
| 0.4009        | 3.7340 | 10500 | 0.1930          | 0.3790 |
| 0.4174        | 3.9118 | 11000 | 0.1955          | 0.3823 |
| 0.4012        | 4.0896 | 11500 | 0.1950          | 0.3812 |
| 0.3974        | 4.2674 | 12000 | 0.1934          | 0.3773 |
| 0.3943        | 4.4452 | 12500 | 0.1845          | 0.3720 |
| 0.4071        | 4.6230 | 13000 | 0.1920          | 0.3839 |
| 0.3968        | 4.8009 | 13500 | 0.1867          | 0.3743 |
| 0.3795        | 4.9787 | 14000 | 0.1872          | 0.3713 |
| 0.3856        | 5.1565 | 14500 | 0.1869          | 0.3737 |
| 0.3706        | 5.3343 | 15000 | 0.1903          | 0.3766 |
| 0.3784        | 5.5121 | 15500 | 0.1861          | 0.3683 |
| 0.3777        | 5.6899 | 16000 | 0.1866          | 0.3713 |
| 0.3861        | 5.8677 | 16500 | 0.1812          | 0.3637 |
| 0.3711        | 6.0455 | 17000 | 0.1842          | 0.3667 |
| 0.374         | 6.2233 | 17500 | 0.1815          | 0.3618 |
| 0.3539        | 6.4011 | 18000 | 0.1815          | 0.3647 |
| 0.3625        | 6.5789 | 18500 | 0.1785          | 0.3589 |
| 0.3599        | 6.7568 | 19000 | 0.1795          | 0.3621 |
| 0.3654        | 6.9346 | 19500 | 0.1822          | 0.3624 |
| 0.3693        | 7.1124 | 20000 | 0.1792          | 0.3612 |
| 0.3519        | 7.2902 | 20500 | 0.1800          | 0.3675 |
| 0.3553        | 7.4680 | 21000 | 0.1808          | 0.3640 |
| 0.3451        | 7.6458 | 21500 | 0.1808          | 0.3620 |
| 0.3558        | 7.8236 | 22000 | 0.1794          | 0.3610 |
| 0.3595        | 8.0014 | 22500 | 0.1772          | 0.3576 |
| 0.3404        | 8.1792 | 23000 | 0.1788          | 0.3581 |
| 0.3593        | 8.3570 | 23500 | 0.1782          | 0.3580 |
| 0.3471        | 8.5349 | 24000 | 0.1797          | 0.3606 |
| 0.3497        | 8.7127 | 24500 | 0.1778          | 0.3588 |
| 0.3398        | 8.8905 | 25000 | 0.1775          | 0.3583 |
| 0.3444        | 9.0683 | 25500 | 0.1796          | 0.3586 |
| 0.3366        | 9.2461 | 26000 | 0.1785          | 0.3574 |
| 0.3434        | 9.4239 | 26500 | 0.1781          | 0.3592 |
| 0.3426        | 9.6017 | 27000 | 0.1786          | 0.3593 |
| 0.3496        | 9.7795 | 27500 | 0.1787          | 0.3590 |
| 0.334         | 9.9573 | 28000 | 0.1788          | 0.3588 |
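WER is the word error rate: insertions, deletions, and substitutions divided by the number of reference words. A minimal sketch of reproducing the metric with the evaluate library, given paired transcripts (the Bemba strings below are hypothetical placeholders):

```python
# Sketch: computing WER with the `evaluate` library on paired transcripts.
import evaluate

wer_metric = evaluate.load("wer")

references = ["umutima wandi"]   # hypothetical Bemba reference transcript
predictions = ["umutima wandi"]  # hypothetical model output

print(wer_metric.compute(references=references, predictions=predictions))
# 0.0 for a perfect match; the final evaluation above reports ~0.3583
```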

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
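When reproducing results, it may help to confirm the local environment matches these pins; note that Transformers 4.45.0.dev0 was a development build, so an exact match may require installing from source. A minimal version check:

```python
# Sketch: verify installed versions against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.45.0.dev0",
    "torch": "2.4.0+cu121",
    "datasets": "2.21.0",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else f"got {installed[name]}"
    print(f"{name}: expected {want} -> {status}")
```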
