DylanonWic committed
Commit 99b0d21
Parent: 99c9ee9

update model card README.md

Files changed (1): README.md (+4, -16)
README.md CHANGED
@@ -12,10 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-asr-th
 
-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 5.0687
-- Cer: 0.9999
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
 
 ## Model description
 
@@ -34,7 +31,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 0.0001
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
@@ -43,21 +40,12 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps: 1500
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Cer    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 5.2461        | 4.39  | 400  | 5.0687          | 0.9999 |
-| 4.0581        | 8.79  | 800  | 4.0428          | 0.9999 |
-| 3.962         | 13.19 | 1200 | 3.8773          | 0.9999 |
-
-
 ### Framework versions
 
-- Transformers 4.26.0
+- Transformers 4.26.1
 - Pytorch 1.13.1+cu116
 - Datasets 2.9.0
 - Tokenizers 0.13.2
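
For reference, the updated hyperparameter list maps directly onto Hugging Face `TrainingArguments`, which is how auto-generated cards like this one are typically produced. The sketch below is a minimal reconstruction under that assumption: only the hyperparameter values come from the README; the `output_dir` is a hypothetical path, and the dataset, processor, and model wiring are omitted.

```python
# Minimal sketch of the training configuration implied by the updated card.
# Only the hyperparameter values come from the README; output_dir and the
# surrounding Trainer/data setup are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-asr-th",  # hypothetical output path
    learning_rate=1e-4,                  # raised from 1e-05 in this commit
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,                      # lowered from 1500 in this commit
    adam_beta1=0.9,                      # Adam betas/epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                           # "Native AMP" mixed-precision training
)
```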
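Since the card now names `facebook/wav2vec2-large-xlsr-53` as the base model, loading the fine-tuned checkpoint for inference would follow the usual `Wav2Vec2ForCTC` pattern. The repo id below is an assumption inferred from the committer and model name, and the 16 kHz input rate is the standard for XLSR-style models; verify both against the actual repository before use.

```python
# Hedged inference sketch; the repo id is an assumption, not confirmed by the card.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "DylanonWic/wav2vec2-large-asr-th"  # assumed from committer + model name

processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Placeholder waveform: one second of 16 kHz silence; replace with real audio.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```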