DylanonWic committed on
Commit bc7a9dd
1 Parent(s): 19a8b25

update model card README.md

Files changed (1)
  1. README.md +2 -20
README.md CHANGED
@@ -2,8 +2,6 @@
 license: apache-2.0
 tags:
 - generated_from_trainer
-metrics:
-- wer
 model-index:
 - name: wav2vec2-large-asr-th
   results: []
@@ -14,10 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-asr-th
 
-This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 3.7418
-- Wer: 1.0
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
 
 ## Model description
 
@@ -40,25 +35,12 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 100
+- lr_scheduler_warmup_steps: 200
 - training_steps: 1000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:---:|
-| 3.9266 | 2.27 | 200 | 3.8144 | 1.0 |
-| 3.7284 | 4.54 | 400 | 3.7418 | 1.0 |
-| 3.7024 | 6.81 | 600 | 3.7100 | 1.0 |
-| 3.3622 | 9.09 | 800 | 3.1187 | 1.0 |
-| 2.7545 | 11.36 | 1000 | 2.5473 | 1.0 |
-
-
 ### Framework versions
 
 - Transformers 4.26.1
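For reference, here is a minimal sketch (not part of this commit) of how the hyperparameters in the updated card map onto `transformers.TrainingArguments`. The `output_dir` is an assumption, the learning rate is not visible in this hunk, and the Adam betas/epsilon listed in the card are the library defaults:

```python
# Sketch only: reconstructs the card's listed hyperparameters as
# TrainingArguments. output_dir is an assumption; learning_rate is
# not shown in this diff hunk, so it is left at the default here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-asr-th",  # assumed; matches the model name
    per_device_train_batch_size=16,      # train_batch_size: 16
    per_device_eval_batch_size=16,       # eval_batch_size: 16
    seed=42,                             # seed: 42
    lr_scheduler_type="linear",          # lr_scheduler_type: linear
    warmup_steps=200,                    # lr_scheduler_warmup_steps: 200
    max_steps=1000,                      # training_steps: 1000
    fp16=True,                           # mixed_precision_training: Native AMP
    # Adam betas=(0.9,0.999) and epsilon=1e-08 are the library defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so nothing to override.
)
```

Note that the commit also drops `gradient_accumulation_steps: 2` and `total_train_batch_size: 32` from the card, so the documented effective batch size falls from 32 to 16.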
 
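And a hypothetical usage sketch for the resulting checkpoint. The repo id `DylanonWic/wav2vec2-large-asr-th` is inferred from the committer and model names and is not stated in the card:

```python
# Hypothetical usage sketch; the repo id below is an assumption inferred
# from the committer and model names, not confirmed by the card.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "DylanonWic/wav2vec2-large-asr-th"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Placeholder input: 1 second of 16 kHz silence; replace with real audio.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```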