
tuanio/md_d_l2_arctic

This model is a fine-tuned version of microsoft/wavlm-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8258
  • WER: 0.6289
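
WER (word error rate) is the word-level edit distance between hypothesis and reference, divided by the number of reference words, so 0.6289 means roughly 63 errors per 100 reference words. A minimal self-contained sketch of the metric (the card does not say which library produced the number above, so this is illustrative, not the exact evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 reference words:
print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ≈ 0.333
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why the early-epoch values in the table below go above 1.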

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 20
  • eval_batch_size: 20
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 40
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
  • mixed_precision_training: Native AMP
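
Two of the values above are derived rather than set directly: the effective batch size comes from the per-device batch size times the accumulation steps, and the warmup length comes from the warmup ratio times the total optimizer steps (7,900 per the results table). A small sketch of that arithmetic (the dict keys mirror the card; they are not the exact training-script variables, which are not given):

```python
# Hyperparameters as listed above (illustrative names).
hparams = {
    "learning_rate": 2e-05,
    "train_batch_size": 20,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_warmup_ratio": 0.1,
    "num_epochs": 100,
}

# Effective (total) train batch size = per-device batch size * accumulation steps.
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 40, matching total_train_batch_size above

# The results table ends at step 7900 after 100 epochs, so a warmup ratio
# of 0.1 implies 790 linear-warmup steps before the linear decay begins.
total_steps = 7900
warmup_steps = int(total_steps * hparams["lr_scheduler_warmup_ratio"])
print(warmup_steps)  # 790
```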

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
| --- | --- | --- | --- | --- |
| 14.3657 | 1.27 | 100 | 9.1210 | 2.4375 |
| 4.382 | 2.53 | 200 | 3.4219 | 1.0 |
| 3.2514 | 3.8 | 300 | 2.7881 | 0.9980 |
| 2.4508 | 5.06 | 400 | 1.8000 | 0.7380 |
| 1.6168 | 6.33 | 500 | 1.1315 | 0.9678 |
| 1.1212 | 7.59 | 600 | 0.8749 | 1.0658 |
| 0.8953 | 8.86 | 700 | 0.7655 | 0.9655 |
| 0.7684 | 10.13 | 800 | 0.6687 | 0.7621 |
| 0.6661 | 11.39 | 900 | 0.6319 | 0.6756 |
| 0.6306 | 12.66 | 1000 | 0.6196 | 0.6963 |
| 0.5759 | 13.92 | 1100 | 0.5875 | 0.5965 |
| 0.5417 | 15.19 | 1200 | 0.5780 | 0.6528 |
| 0.528 | 16.46 | 1300 | 0.5798 | 0.6539 |
| 0.4857 | 17.72 | 1400 | 0.5569 | 0.5725 |
| 0.4655 | 18.99 | 1500 | 0.5500 | 0.5755 |
| 0.4526 | 20.25 | 1600 | 0.5583 | 0.5776 |
| 0.4287 | 21.52 | 1700 | 0.5557 | 0.5610 |
| 0.4149 | 22.78 | 1800 | 0.5575 | 0.5748 |
| 0.3983 | 24.05 | 1900 | 0.5649 | 0.6003 |
| 0.4001 | 25.32 | 2000 | 0.5674 | 0.5976 |
| 0.3649 | 26.58 | 2100 | 0.5797 | 0.5805 |
| 0.3711 | 27.85 | 2200 | 0.5839 | 0.6546 |
| 0.3547 | 29.11 | 2300 | 0.5735 | 0.5904 |
| 0.3402 | 30.38 | 2400 | 0.5699 | 0.5426 |
| 0.3414 | 31.65 | 2500 | 0.5700 | 0.5421 |
| 0.3255 | 32.91 | 2600 | 0.5745 | 0.5663 |
| 0.3093 | 34.18 | 2700 | 0.5958 | 0.5932 |
| 0.315 | 35.44 | 2800 | 0.5934 | 0.5906 |
| 0.31 | 36.71 | 2900 | 0.6072 | 0.6011 |
| 0.3026 | 37.97 | 3000 | 0.6038 | 0.5760 |
| 0.2802 | 39.24 | 3100 | 0.6080 | 0.5777 |
| 0.2835 | 40.51 | 3200 | 0.6062 | 0.5744 |
| 0.2585 | 41.77 | 3300 | 0.6225 | 0.5784 |
| 0.2699 | 43.04 | 3400 | 0.6226 | 0.5665 |
| 0.2785 | 44.3 | 3500 | 0.6240 | 0.5714 |
| 0.2689 | 45.57 | 3600 | 0.6295 | 0.5649 |
| 0.2514 | 46.84 | 3700 | 0.6425 | 0.5421 |
| 0.2433 | 48.1 | 3800 | 0.6668 | 0.6068 |
| 0.2403 | 49.37 | 3900 | 0.6563 | 0.5750 |
| 0.2287 | 50.63 | 4000 | 0.6696 | 0.5933 |
| 0.2366 | 51.9 | 4100 | 0.6739 | 0.5731 |
| 0.2295 | 53.16 | 4200 | 0.6809 | 0.6091 |
| 0.2274 | 54.43 | 4300 | 0.6875 | 0.5914 |
| 0.2178 | 55.7 | 4400 | 0.6899 | 0.5949 |
| 0.2176 | 56.96 | 4500 | 0.6925 | 0.5828 |
| 0.2064 | 58.23 | 4600 | 0.7009 | 0.5985 |
| 0.2081 | 59.49 | 4700 | 0.7013 | 0.5996 |
| 0.2093 | 60.76 | 4800 | 0.7257 | 0.6086 |
| 0.2024 | 62.03 | 4900 | 0.7215 | 0.6003 |
| 0.1999 | 63.29 | 5000 | 0.7333 | 0.6091 |
| 0.2064 | 64.56 | 5100 | 0.7530 | 0.6397 |
| 0.186 | 65.82 | 5200 | 0.7542 | 0.6349 |
| 0.186 | 67.09 | 5300 | 0.7416 | 0.6270 |
| 0.1807 | 68.35 | 5400 | 0.7549 | 0.6352 |
| 0.1784 | 69.62 | 5500 | 0.7506 | 0.5844 |
| 0.1824 | 70.89 | 5600 | 0.7611 | 0.6253 |
| 0.1769 | 72.15 | 5700 | 0.7713 | 0.5927 |
| 0.1843 | 73.42 | 5800 | 0.7720 | 0.5956 |
| 0.1709 | 74.68 | 5900 | 0.7805 | 0.6258 |
| 0.1691 | 75.95 | 6000 | 0.7865 | 0.6282 |
| 0.1701 | 77.22 | 6100 | 0.7808 | 0.6218 |
| 0.1735 | 78.48 | 6200 | 0.7790 | 0.5966 |
| 0.1746 | 79.75 | 6300 | 0.7949 | 0.6431 |
| 0.1745 | 81.01 | 6400 | 0.8126 | 0.6285 |
| 0.1605 | 82.28 | 6500 | 0.8113 | 0.6195 |
| 0.1579 | 83.54 | 6600 | 0.7977 | 0.6155 |
| 0.1704 | 84.81 | 6700 | 0.8017 | 0.6140 |
| 0.1659 | 86.08 | 6800 | 0.8147 | 0.6279 |
| 0.166 | 87.34 | 6900 | 0.8088 | 0.6350 |
| 0.1539 | 88.61 | 7000 | 0.8053 | 0.6164 |
| 0.1589 | 89.87 | 7100 | 0.8189 | 0.6357 |
| 0.1559 | 91.14 | 7200 | 0.8152 | 0.6258 |
| 0.1564 | 92.41 | 7300 | 0.8191 | 0.6245 |
| 0.158 | 93.67 | 7400 | 0.8255 | 0.6333 |
| 0.1595 | 94.94 | 7500 | 0.8184 | 0.6206 |
| 0.1638 | 96.2 | 7600 | 0.8230 | 0.6364 |
| 0.1629 | 97.47 | 7700 | 0.8245 | 0.6312 |
| 0.1531 | 98.73 | 7800 | 0.8226 | 0.6267 |
| 0.1572 | 100.0 | 7900 | 0.8258 | 0.6289 |
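
Validation loss bottoms out around epoch 19 and then climbs steadily while training loss keeps falling, so the final checkpoint (WER 0.6289) is not the best one: the lowest validation WER in the log is 0.5421, reached twice. Picking the best checkpoint from such a log can be sketched as follows (only a few representative rows are copied here, purely for illustration):

```python
# (epoch, step, val_loss, wer) — a handful of rows copied from the table above.
log = [
    (13.92, 1100, 0.5875, 0.5965),
    (21.52, 1700, 0.5557, 0.5610),
    (31.65, 2500, 0.5700, 0.5421),
    (46.84, 3700, 0.6425, 0.5421),
    (100.0, 7900, 0.8258, 0.6289),
]

# Select by lowest WER; break ties by the earlier step, which also
# has the lower validation loss here.
best = min(log, key=lambda row: (row[3], row[1]))
print(best)  # step 2500, WER 0.5421
```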

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Model size: 193M params (F32, Safetensors)
