binisha committed · verified
Commit f600e82 · 1 Parent(s): 3a6ff56

Model save

Files changed (2):
  1. README.md +19 -19
  2. generation_config.json +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3870
+- Loss: 0.3836
 
 ## Model description
 
@@ -41,7 +41,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - gradient_accumulation_steps: 8
 - total_train_batch_size: 32
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
 - training_steps: 1500
@@ -51,26 +51,26 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch   | Step | Validation Loss |
 |:-------------:|:-------:|:----:|:---------------:|
-| 0.5982        | 2.7586  | 100  | 0.5087          |
-| 0.5189        | 5.5172  | 200  | 0.4784          |
-| 0.5032        | 8.2759  | 300  | 0.4659          |
-| 0.4652        | 11.0345 | 400  | 0.4518          |
-| 0.4476        | 13.7931 | 500  | 0.4489          |
-| 0.4243        | 16.5517 | 600  | 0.4324          |
-| 0.4176        | 19.3103 | 700  | 0.4115          |
-| 0.3981        | 22.0690 | 800  | 0.4019          |
-| 0.3887        | 24.8276 | 900  | 0.4002          |
-| 0.3903        | 27.5862 | 1000 | 0.4059          |
-| 0.3747        | 30.3448 | 1100 | 0.4010          |
-| 0.3656        | 33.1034 | 1200 | 0.3929          |
-| 0.3636        | 35.8621 | 1300 | 0.3899          |
-| 0.3605        | 38.6207 | 1400 | 0.3900          |
-| 0.3648        | 41.3793 | 1500 | 0.3870          |
+| 0.6028        | 2.7586  | 100  | 0.5187          |
+| 0.5195        | 5.5172  | 200  | 0.4851          |
+| 0.5075        | 8.2759  | 300  | 0.4708          |
+| 0.462         | 11.0345 | 400  | 0.4609          |
+| 0.4429        | 13.7931 | 500  | 0.4294          |
+| 0.4303        | 16.5517 | 600  | 0.4249          |
+| 0.4172        | 19.3103 | 700  | 0.4184          |
+| 0.402         | 22.0690 | 800  | 0.4077          |
+| 0.3898        | 24.8276 | 900  | 0.3975          |
+| 0.3966        | 27.5862 | 1000 | 0.4197          |
+| 0.3773        | 30.3448 | 1100 | 0.3955          |
+| 0.3658        | 33.1034 | 1200 | 0.3878          |
+| 0.3644        | 35.8621 | 1300 | 0.3878          |
+| 0.3622        | 38.6207 | 1400 | 0.3841          |
+| 0.3671        | 41.3793 | 1500 | 0.3836          |
 
 
 ### Framework versions
 
-- Transformers 4.44.2
+- Transformers 4.46.2
 - Pytorch 2.5.0+cu121
 - Datasets 3.1.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.3
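For reference, here is a minimal inference sketch for a SpeechT5 fine-tune like the one this commit updates, using the standard `transformers` SpeechT5 API. The repo id `binisha/speecht5-finetuned` is a hypothetical placeholder (the commit page does not name the repository), and the all-zeros speaker embedding stands in for a real 512-dim x-vector:

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Hypothetical repo id -- the commit does not name the model repository.
repo_id = "binisha/speecht5-finetuned"

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
# Standard HiFi-GAN vocoder released alongside microsoft/speecht5_tts.
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from a fine-tuned SpeechT5.", return_tensors="pt")
# SpeechT5 conditions generation on a 512-dim x-vector speaker embedding;
# zeros are only a placeholder -- substitute a real embedding for natural speech.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```

Output quality depends heavily on the speaker embedding; in practice one is extracted with a speaker-verification model such as speechbrain/spkrec-xvect-voxceleb rather than zeroed out.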
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.46.2"
 }
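The `generation_config.json` change simply tracks the Transformers upgrade (4.44.2 to 4.46.2) recorded in the README's framework versions. A minimal sketch of how these fields surface at load time, again assuming the hypothetical repo id from above:

```python
from transformers import GenerationConfig

# Hypothetical repo id; the commit page does not name the repository.
gen_cfg = GenerationConfig.from_pretrained("binisha/speecht5-finetuned")
print(gen_cfg.max_length)            # 1876
print(gen_cfg.transformers_version)  # "4.46.2" after this commit
```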