nrshoudi committed on
Commit
e6aa491
1 Parent(s): 3d67ae0

End of training

Files changed (3)
  1. README.md +18 -11
  2. adapter_model.bin +2 -2
  3. training_args.bin +2 -2
README.md CHANGED
@@ -13,9 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Whisper-small-Arabic-phoneme
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unknown dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1381
+- Loss: 0.2262
 
 ## Model description
 
@@ -35,26 +35,33 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.001
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 6
+- eval_batch_size: 6
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 3
+- num_epochs: 10
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.5284        | 1.0   | 228  | 3.2308          |
-| 0.1412        | 2.0   | 456  | 0.1778          |
-| 0.0842        | 3.0   | 684  | 0.1381          |
+| 0.0713        | 1.0   | 539  | 0.2005          |
+| 0.0577        | 2.0   | 1078 | 0.2035          |
+| 0.0479        | 3.0   | 1617 | 0.1821          |
+| 0.0336        | 4.0   | 2156 | 0.2092          |
+| 0.0191        | 5.0   | 2695 | 0.1887          |
+| 0.0143        | 6.0   | 3234 | 0.1927          |
+| 0.0104        | 7.0   | 3773 | 0.1955          |
+| 0.0059        | 8.0   | 4312 | 0.2194          |
+| 0.004         | 9.0   | 4851 | 0.2193          |
+| 0.0024        | 10.0  | 5390 | 0.2262          |
 
 
 ### Framework versions
 
-- Transformers 4.34.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.5
+- Transformers 4.34.1
+- Pytorch 2.1.0+cu118
+- Datasets 2.14.6
 - Tokenizers 0.14.1
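The step counts in the new training table are internally consistent: 539 optimizer steps per epoch times 10 epochs gives the final step 5390. Assuming one optimizer step per batch and no gradient accumulation (the card does not state an accumulation setting), 539 steps at train_batch_size 6 implies roughly 3,229 to 3,234 training examples, while the earlier run's 228 steps at batch size 8 implies at most 1,824, so the dataset or the effective batch setup evidently changed between runs. A quick sanity-check sketch:

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # One optimizer step per batch; a final partial batch still counts
    # as a step. Assumes no gradient accumulation.
    return math.ceil(num_examples / batch_size)

# 539 steps/epoch at batch size 6 is consistent with 3229..3234 examples
# (hypothetical dataset size; the card does not publish it).
assert steps_per_epoch(3234, 6) == 539
assert steps_per_epoch(3229, 6) == 539

# The earlier run: 228 steps/epoch at batch size 8 implies at most
# 228 * 8 = 1824 examples under the same assumption.
assert steps_per_epoch(1824, 8) == 228

# Final logged step matches epochs * steps_per_epoch.
assert 10 * 539 == 5390
```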
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3564b1931782a7a38eca2b4b9432688f6870d51f67516c7c19dfe95ce88da092
-size 63056269
+oid sha256:2b7193ed715cc8c1bc6beda4d30075960617643b589c4d4924783b0b4cd5d9a2
+size 63056714
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:17d3c2bd9c82e8f7279738c79a6df86ce89cd3de75154d7cfe0dc47707e1d9ff
-size 4283
+oid sha256:4d0d8c0b45295c57356f37a5212f3f41412949032fb90c6dd2dc317c6c344309
+size 4728
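Both .bin diffs above are Git LFS pointer files, not the binaries themselves: each is a short text file whose `version`, `oid`, and `size` lines identify the real object stored out of band. A minimal parser sketch for this pointer format (field names as shown in the diffs above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields.

    Each non-empty line has the form "<key> <value>", e.g.
    "size 4728" or "oid sha256:<hex digest>".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4d0d8c0b45295c57356f37a5212f3f41412949032fb90c6dd2dc317c6c344309
size 4728
"""
info = parse_lfs_pointer(pointer)
assert info["version"] == "https://git-lfs.github.com/spec/v1"
assert info["oid"].startswith("sha256:")
assert int(info["size"]) == 4728
```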