# wav2vec2-base-ogma-phoneme

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: nan
- Cer: 1.0
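For context, a CER of 1.0 means the character-level edit distance between predictions and references equals the total reference length, i.e. essentially nothing was transcribed correctly. CER can also exceed 1.0 when the hypotheses contain many insertions, as in the early epochs of the training log. A minimal sketch of the metric (illustration only; the card does not state which CER implementation was used):

```python
# Character error rate (CER) = Levenshtein edit distance over
# characters, divided by the number of reference characters.

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    return edit_distance(ref, hyp) / len(ref)

print(cer("kat", "kat"))  # 0.0 -> perfect transcription
print(cer("kat", "xyz"))  # 1.0 -> every reference character wrong
```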
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
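No usage example is provided on the card. A minimal inference sketch, assuming the checkpoint was exported with a processor alongside a standard CTC head (the usual layout for wav2vec2 fine-tuning; not confirmed here) and that inputs are 16 kHz mono audio:

```python
# Hedged sketch: greedy CTC decoding with this checkpoint.
# The repo id comes from the model tree; the processor and the
# 16 kHz input assumption are conventions, not confirmed by the card.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo = "colerobertson/wav2vec2-base-ogma-phoneme"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Placeholder waveform: 1 s of noise at 16 kHz. Replace with real audio.
audio = np.random.randn(16_000).astype("float32")

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```

Given the reported CER of 1.0, outputs from this checkpoint are unlikely to be usable transcriptions.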
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50
- mixed_precision_training: Native AMP
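The hyperparameters above can be restated as a `TrainingArguments` sketch (parameter names follow the transformers Trainer API; `output_dir` is an assumption, and the Adam betas/epsilon shown are the Trainer defaults, which match the values listed):

```python
# Config sketch only: reconstructs the listed hyperparameters in
# Trainer form. output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-ogma-phoneme",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
)
```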
### Training results

| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 67.3394 | 1.0 | 5 | 62.1245 | 6.7376 |
| 73.2076 | 2.0 | 10 | nan | 6.7376 |
| -257055208286820768.0000 | 3.0 | 15 | nan | 6.7376 |
| 64.2241 | 4.0 | 20 | nan | 6.7376 |
| 65.3601 | 5.0 | 25 | 62.1245 | 6.7376 |
| 64.2295 | 6.0 | 30 | 62.1157 | 6.7376 |
| 45.425 | 7.0 | 35 | 62.1251 | 6.7376 |
| 50.6118 | 8.0 | 40 | nan | 6.7178 |
| 64.6394 | 9.0 | 45 | 62.0582 | 6.6188 |
| 48.7615 | 10.0 | 50 | nan | 6.6188 |
| 54.5817 | 11.0 | 55 | nan | 6.4950 |
| 48.1198 | 12.0 | 60 | nan | 6.4950 |
| 56.9202 | 13.0 | 65 | nan | 6.3465 |
| 57.3656 | 14.0 | 70 | nan | 6.4406 |
| 68.163 | 15.0 | 75 | 61.7497 | 6.2129 |
| 56.806 | 16.0 | 80 | nan | 6.2129 |
| 69.1218 | 17.0 | 85 | 61.6119 | 5.7574 |
| 55.5282 | 18.0 | 90 | 61.5413 | 5.4158 |
| -6752.4055 | 19.0 | 95 | 61.2303 | 4.9257 |
| 64.744 | 20.0 | 100 | 60.9641 | 4.4455 |
| 66.7382 | 21.0 | 105 | 60.3274 | 3.5198 |
| -21060.9078 | 22.0 | 110 | nan | 3.5198 |
| 51.2619 | 23.0 | 115 | 59.9896 | 3.1089 |
| 51.398 | 24.0 | 120 | nan | 2.7772 |
| 63.6242 | 25.0 | 125 | 59.3321 | 2.5149 |
| 59.6308 | 26.0 | 130 | 58.7697 | 2.1931 |
| 62.0615 | 27.0 | 135 | nan | 1.8366 |
| -46.2037 | 28.0 | 140 | 57.9474 | 1.7475 |
| 60.5632 | 29.0 | 145 | 57.5041 | 1.5941 |
| 55.4431 | 30.0 | 150 | 56.7507 | 1.4307 |
| 40.8661 | 31.0 | 155 | 56.6063 | 1.4059 |
| 63.784 | 32.0 | 160 | 56.1097 | 1.2327 |
| 42.2708 | 33.0 | 165 | nan | 1.2327 |
| 53.7813 | 34.0 | 170 | nan | 1.2426 |
| 57.459 | 35.0 | 175 | 55.8894 | 1.2228 |
| 58.9998 | 36.0 | 180 | nan | 1.0 |
| 0.0 | 37.0 | 185 | nan | 1.0 |
| 0.0 | 38.0 | 190 | nan | 1.0 |
| 0.0 | 39.0 | 195 | nan | 1.0 |
| 0.0 | 40.0 | 200 | nan | 1.0 |
| 0.0 | 41.0 | 205 | nan | 1.0 |
| 0.0 | 42.0 | 210 | nan | 1.0 |
| 0.0 | 43.0 | 215 | nan | 1.0 |
| 0.0 | 44.0 | 220 | nan | 1.0 |
| 0.0 | 45.0 | 225 | nan | 1.0 |
| 0.0 | 46.0 | 230 | nan | 1.0 |
| 0.0 | 47.0 | 235 | nan | 1.0 |
| 0.0 | 48.0 | 240 | nan | 1.0 |
| 0.0 | 49.0 | 245 | nan | 1.0 |
| 0.0 | 50.0 | 250 | nan | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
## Model tree for colerobertson/wav2vec2-base-ogma-phoneme

Base model: [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)