DylanonWic committed
Commit • 2163556
1 Parent(s): 76a661f

update model card README.md

README.md CHANGED
@@ -12,10 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-asr-th
 
-This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on
-It achieves the following results on the evaluation set:
-- Loss: 3.7473
-- Cer: 0.9999
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
 
 ## Model description
 
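The card itself carries no usage example; below is a minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint like this one. The repo id is an assumption inferred from the model name and committer, not something stated in the diff.

```python
# Minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint.
# The repo id is an assumption inferred from the model name; replace it
# with the actual Hugging Face Hub id.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "DylanonWic/wav2vec2-large-asr-th"  # assumed, not stated in the card

processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

# XLSR-53-based models expect 16 kHz mono input.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: per-frame argmax, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```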
@@ -34,32 +31,16 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 5e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps:
+- lr_scheduler_warmup_steps: 200
 - training_steps: 1000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Cer    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 4.2296        | 0.56  | 100  | 4.0607          | 0.9999 |
-| 3.6029        | 1.13  | 200  | 3.9200          | 0.9999 |
-| 3.6003        | 1.69  | 300  | 3.7882          | 0.9999 |
-| 3.6346        | 2.26  | 400  | 3.7473          | 0.9999 |
-| 3.6041        | 2.82  | 500  | 3.9651          | 0.9999 |
-| 3.5491        | 3.39  | 600  | 3.9053          | 0.9999 |
-| 3.568         | 3.95  | 700  | 3.7021          | 0.9999 |
-| 3.5773        | 4.52  | 800  | 3.6694          | 0.9999 |
-| 3.5367        | 5.08  | 900  | 3.6426          | 0.9999 |
-| 3.5112        | 5.65  | 1000 | 3.6260          | 0.9999 |
-
-
 ### Framework versions
 
 - Transformers 4.26.1
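For readers reproducing the setup, the hyperparameters listed in the hunk above map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction under stated assumptions, not the author's actual training script; `output_dir` and anything not listed in the card are placeholders.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Reconstruction only; output_dir and all omitted options are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-asr-th",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,              # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=1000,              # training_steps: 1000
    fp16=True,                   # "Native AMP" mixed precision
)
```

With `max_steps` set, training stops after 1000 optimizer steps regardless of epoch count, which matches the step-indexed results table that this commit removes.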
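The card evaluates with character error rate (CER): the character-level edit distance between prediction and reference, normalized by reference length, so 0 is perfect and values near 1.0 mean nearly every character differs from the reference. A minimal sketch of computing it with the `evaluate` library (its `cer` metric depends on `jiwer`); the example strings are hypothetical:

```python
# Computing CER with the `evaluate` library; the strings are hypothetical.
import evaluate

cer = evaluate.load("cer")
predictions = ["สวัสดีครับ"]  # hypothetical model output
references = ["สวัสดีคับ"]    # hypothetical reference transcript

print(cer.compute(predictions=predictions, references=references))
```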