pipeline_tag: text-to-speech
---

# F5-TTS Hindi 24KHz Model
This is a Hindi Text-to-Speech model trained from scratch using the [F5 architecture](https://arxiv.org/abs/2410.06885).
Clone the following GitHub repo and refer to the README: https://github.com/rumo
The model was trained on 8x A100 40GB GPUs for close to a week. We would like to thank [CDAC](https://cdac.in/) for providing the compute resources.

We used the "small" configuration (151M parameters) described in the F5 paper for training.
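As a sanity check, a quoted parameter count like 151M can be reproduced after loading a checkpoint by summing the element counts of all trainable tensors. A minimal sketch using PyTorch (the `dummy` module below is a hypothetical stand-in, not the actual F5 "small" backbone):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Sum the element counts of all trainable tensors in the model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical stand-in module; in practice, load the released F5 "small"
# checkpoint and pass that model instead.
dummy = nn.Sequential(nn.Linear(512, 1024), nn.Linear(1024, 512))
print(f"{count_parameters(dummy):,} trainable parameters")
# → 1,050,112 trainable parameters
```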
### Training Data

We used the Hindi subsets of the [IndicTTS](https://www.tsdconference.org/tsd2016/download/cbblr16-850.pdf) and [IndicVoices-R](https://arxiv.org/pdf/2409.05356) datasets to train this model.