
bigmorning/whisper_4_with_init_sun_syl_wd_0__0070

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the training and evaluation sets:

  • Train Loss: 0.2729
  • Train Accuracy: 0.0338
  • Train Wermet: 0.0595
  • Train Wermet Syl: 0.0677
  • Validation Loss: 1.1911
  • Validation Accuracy: 0.0207
  • Validation Wermet: 0.3247
  • Validation Wermet Syl: 0.2886
  • Epoch: 69
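
The checkpoint can be loaded through 🤗 Transformers. The sketch below is illustrative, not part of this card's training code; it assumes the hub repo id shown and the TensorFlow classes implied by the framework versions listed further down, and expects a 1-D float waveform sampled at 16 kHz:

```python
# Sketch: load the fine-tuned checkpoint and transcribe one waveform.
# The repo id and the use of the TF model class are assumptions.
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

MODEL_ID = "bigmorning/whisper_4_with_init_sun_syl_wd_0__0070"


def transcribe(waveform, sampling_rate=16000):
    """Return the transcription of a 1-D float waveform array."""
    processor = WhisperProcessor.from_pretrained(MODEL_ID)
    model = TFWhisperForConditionalGeneration.from_pretrained(MODEL_ID)
    # Convert raw audio to log-mel input features, then decode greedily.
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="tf")
    generated = model.generate(inputs.input_features)
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```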

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
  • training_precision: float32
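
The optimizer entry above reads like the serialized config of the `AdamWeightDecay` TF optimizer shipped with Transformers. A sketch of rebuilding it from those hyperparameters (assuming the Transformers/TensorFlow versions listed under framework versions; the legacy `decay` field is omitted since it is 0.0):

```python
from transformers import AdamWeightDecay  # TF optimizer bundled with Transformers

# Rebuild the optimizer from the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.0,
)
```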

Training results

| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|---|---|---|---|---|---|---|---|---|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
| 0.3219 | 0.0333 | 0.0765 | 0.0850 | 1.1830 | 0.0207 | 0.3258 | 0.2907 | 65 |
| 0.3035 | 0.0335 | 0.0725 | 0.0811 | 1.1840 | 0.0207 | 0.3261 | 0.2904 | 66 |
| 0.3522 | 0.0330 | 0.0745 | 0.0826 | 1.2107 | 0.0206 | 0.3299 | 0.2955 | 67 |
| 0.3001 | 0.0335 | 0.0663 | 0.0749 | 1.1810 | 0.0207 | 0.3264 | 0.2909 | 68 |
| 0.2729 | 0.0338 | 0.0595 | 0.0677 | 1.1911 | 0.0207 | 0.3247 | 0.2886 | 69 |
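
The Wermet columns appear to track a word-error-rate-style metric (plain and syllable-level). For reference, a minimal word error rate can be computed with a standard edit-distance over word sequences; the `wer` helper below is an illustrative sketch, not the metric implementation used during this training run:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `wer("a b c d", "a x c")` counts one substitution and one deletion against four reference words, giving 0.5.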

Framework versions

  • Transformers 4.34.0.dev0
  • TensorFlow 2.13.0
  • Tokenizers 0.13.3