kingmhd1519 committed
Commit 0d5dd5b
1 Parent(s): 4c1170e

End of training
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.5531
+ - Loss: 0.5176
 
 ## Model description
 
@@ -35,7 +35,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 0.001
+ - learning_rate: 0.0001
 - train_batch_size: 4
 - eval_batch_size: 2
 - seed: 42
@@ -44,21 +44,32 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
- - training_steps: 300
+ - training_steps: 1500
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-------:|:----:|:---------------:|
- | 0.8648 | 3.5556 | 100 | 0.8506 |
- | 0.6876 | 7.1111 | 200 | 0.6333 |
- | 0.5763 | 10.6667 | 300 | 0.5531 |
+ | 0.611 | 3.5556 | 100 | 0.5611 |
+ | 0.55 | 7.1111 | 200 | 0.5361 |
+ | 0.5435 | 10.6667 | 300 | 0.5158 |
+ | 0.5081 | 14.2222 | 400 | 0.4987 |
+ | 0.4918 | 17.7778 | 500 | 0.5124 |
+ | 0.4851 | 21.3333 | 600 | 0.4984 |
+ | 0.4783 | 24.8889 | 700 | 0.5027 |
+ | 0.4721 | 28.4444 | 800 | 0.4964 |
+ | 0.4595 | 32.0 | 900 | 0.5092 |
+ | 0.4524 | 35.5556 | 1000 | 0.5169 |
+ | 0.4528 | 39.1111 | 1100 | 0.5130 |
+ | 0.4423 | 42.6667 | 1200 | 0.5114 |
+ | 0.4401 | 46.2222 | 1300 | 0.5175 |
+ | 0.439 | 49.7778 | 1400 | 0.5202 |
+ | 0.4357 | 53.3333 | 1500 | 0.5176 |
 
 
 ### Framework versions
 
 - Transformers 4.46.3
 - Pytorch 2.5.1+cu121
- - Datasets 3.1.0
 - Tokenizers 0.20.3
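
For reference, here is a minimal sketch of how the hyperparameters listed in the updated card map onto `transformers` training arguments (SpeechT5 TTS fine-tuning recipes typically use `Seq2SeqTrainingArguments`). Only values stated in the card are filled in; `output_dir` and the evaluation cadence are assumptions, and anything not listed is left at its default.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the settings from the updated card; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts-finetuned",   # placeholder name
    learning_rate=1e-4,                    # changed from 1e-3 in this commit
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",                   # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1500,                        # changed from 300 in this commit
    fp16=True,                             # "Native AMP" mixed precision
    eval_strategy="steps",                 # the results table reports eval every 100 steps
    eval_steps=100,
)
```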
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:ecd2ce38048cd3f6d91ca2ad809554f1bd9be72afc07fa53ff2acef63af08876
+ oid sha256:c41b568ec266e55563b7571b1baac0d8f35d399288b5adf796ec54e9d85ede5e
 size 577789320
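
`model.safetensors` is tracked with Git LFS, so only the pointer's `oid sha256:…` changes here (the size stays at 577789320 bytes). As a minimal sketch, a locally downloaded copy can be checked against the new pointer like this (the file path is an assumption):

```python
import hashlib
import os

def matches_lfs_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Check a local file against the oid/size recorded in a Git LFS pointer."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values from the updated pointer above; "model.safetensors" is a placeholder path.
print(matches_lfs_pointer(
    "model.safetensors",
    "c41b568ec266e55563b7571b1baac0d8f35d399288b5adf796ec54e9d85ede5e",
    577789320,
))
```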
runs/Dec10_09-26-28_9e944507ed53/events.out.tfevents.1733822791.9e944507ed53.231.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:e5a6a725167fd69ab7ca0bc541d186d22a6b2f52cc3729bc0f25eea405aa2b18
- size 23385
+ oid sha256:7ae5bdef3148904e63f64a2b4e29b751e6a20f876bc407f2d67db79dafb7adf3
+ size 23739
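
The updated `events.out.tfevents.*` file holds the TensorBoard logs behind the training-results table above. A minimal sketch for reading them back with `tensorboard` (the scalar tag `eval/loss` follows the usual `transformers` Trainer naming and is an assumption here; list the actual tags first if unsure):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Run directory from this commit.
run_dir = "runs/Dec10_09-26-28_9e944507ed53"

ea = EventAccumulator(run_dir)
ea.Reload()                      # parses the events.out.tfevents.* file(s)

print(ea.Tags()["scalars"])      # show which scalar tags were actually logged

# "eval/loss" is the Trainer's usual validation-loss tag; adjust if it differs.
for event in ea.Scalars("eval/loss"):
    print(event.step, event.value)
```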