DylanonWic committed on
Commit fe8245b
1 Parent(s): b8bbf5a

update model card README.md

Files changed (1)
  1. README.md +6 -25
README.md CHANGED
@@ -1,9 +1,6 @@
 ---
-license: apache-2.0
 tags:
 - generated_from_trainer
-metrics:
-- wer
 model-index:
 - name: wav2vec2-large-asr-th-2
   results: []
@@ -14,11 +11,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-asr-th-2
 
-This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5485
-- Wer: 0.6163
-- Cer: 0.1880
+This model was trained from scratch on the None dataset.
 
 ## Model description
 
@@ -37,30 +30,18 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 20
+- learning_rate: 0.0002
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 40
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- training_steps: 3000
+- lr_scheduler_warmup_steps: 200
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
-| 3.7562        | 0.39  | 500  | 3.6183          | 1.0    | 0.9999 |
-| 2.3585        | 0.79  | 1000 | 1.6482          | 0.9893 | 0.4815 |
-| 1.1946        | 1.18  | 1500 | 0.7852          | 0.7434 | 0.2404 |
-| 1.0082        | 1.58  | 2000 | 0.6294          | 0.6566 | 0.2069 |
-| 0.9679        | 1.97  | 2500 | 0.5688          | 0.6272 | 0.1922 |
-| 0.9845        | 2.37  | 3000 | 0.5485          | 0.6163 | 0.1880 |
-
-
 ### Framework versions
 
 - Transformers 4.27.3
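The updated hyperparameter list maps one-to-one onto 🤗 Transformers `TrainingArguments` (the `generated_from_trainer` tag indicates the card was emitted by the `Trainer`). Below is a minimal sketch of arguments matching the new values; `output_dir` and the surrounding training script are assumptions for illustration, not part of this commit:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters in the updated card.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-asr-th-2",  # assumed; matches the model name
    learning_rate=2e-4,                    # learning_rate: 0.0002
    per_device_train_batch_size=16,        # train_batch_size: 16
    per_device_eval_batch_size=8,          # eval_batch_size: 8
    seed=42,                               # seed: 42
    gradient_accumulation_steps=2,         # 16 * 2 = total_train_batch_size 32
    adam_beta1=0.9,                        # Adam betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                     # epsilon: 1e-08
    lr_scheduler_type="linear",
    warmup_steps=200,                      # lr_scheduler_warmup_steps: 200
    max_steps=4000,                        # training_steps: 4000
    fp16=True,                             # mixed_precision_training: Native AMP
)
```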
 
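This commit also drops the `wer` metric entry and the old evaluation numbers (Loss 0.5485, WER 0.6163, CER 0.1880). If scores need to be regenerated for a future revision, here is a sketch using the `evaluate` library; the choice of tooling is an assumption (the card does not record how the original numbers were computed), and the transcripts below are placeholders:

```python
import evaluate

# Assumed tooling: the card does not say how the original WER/CER were computed.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Placeholder transcripts; in practice, decode the eval split with the model.
predictions = ["สวัสดี ครับ"]
references = ["สวัสดี ครับ ผม"]

# WER splits on whitespace, so Thai text must be word-segmented first;
# CER compares character by character and needs no segmentation.
print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```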