jonathanjordan21 committed on
Commit
cc90e1e
1 Parent(s): 83da92a

Update README.md

Files changed (1)
  1. README.md +12 -71
README.md CHANGED
@@ -1,71 +1,12 @@
- # Global Prosody Style Transfer Without Text Transcriptions
-
- This repository provides a PyTorch implementation of [AutoPST](https://arxiv.org/abs/2106.08519), which enables unsupervised global prosody conversion without text transcriptions.
-
- This is a short video that explains the main concepts of our work. If you find this work useful and use it in your research, please consider citing our paper.
-
- [![SpeechSplit](./assets/cover.png)](https://youtu.be/wow2DRuJ69c/)
-
- ```
- @InProceedings{pmlr-v139-qian21b,
- title = {Global Prosody Style Transfer Without Text Transcriptions},
- author = {Qian, Kaizhi and Zhang, Yang and Chang, Shiyu and Xiong, Jinjun and Gan, Chuang and Cox, David and Hasegawa-Johnson, Mark},
- booktitle = {Proceedings of the 38th International Conference on Machine Learning},
- pages = {8650--8660},
- year = {2021},
- editor = {Meila, Marina and Zhang, Tong},
- volume = {139},
- series = {Proceedings of Machine Learning Research},
- month = {18--24 Jul},
- publisher = {PMLR},
- url = {http://proceedings.mlr.press/v139/qian21b.html}
- }
-
- ```
-
-
- ## Audio Demo
-
- The audio demo for AutoPST can be found [here](https://auspicious3000.github.io/AutoPST-Demo/)
-
- ## Dependencies
- - Python 3.6
- - Numpy
- - Scipy
- - PyTorch == v1.6.0
- - librosa
- - pysptk
- - soundfile
- - wavenet_vocoder ```pip install wavenet_vocoder==0.1.1```
- for more information, please refer to https://github.com/r9y9/wavenet_vocoder
-
-
- ## To Run Demo
-
- Download [pre-trained models](https://drive.google.com/file/d/1ji3Bk6YGvXkPqFu1hLOAJp_SKw-vHGrp/view?usp=sharing) to ```assets```
-
- Download the same WaveNet vocoder model as in [AutoVC](https://github.com/auspicious3000/autovc) to ```assets```
-
- The fast and high-quality hifi-gan v1 (https://github.com/jik876/hifi-gan) pre-trained model is now available [here.](https://drive.google.com/file/d/1n76jHs8k1sDQ3Eh5ajXwdxuY_EZw4N9N/view?usp=sharing)
-
- Please refer to [AutoVC](https://github.com/auspicious3000/autovc) if you have any problems with the vocoder part, because they share the same vocoder scripts.
-
- Run ```demo.ipynb```
-
-
- ## To Train
-
- Download [training data](https://drive.google.com/file/d/1H1dyA80qREKLHybqnYaqBRRsacIdFbnE/view?usp=sharing) to ```assets```.
- The provided training data is very small for code verification purpose only.
- Please use the scripts to prepare your own data for training.
-
- 1. Prepare training data: ```python prepare_train_data.py```
-
- 2. Train 1st Stage: ```python main_1.py```
-
- 3. Train 2nd Stage: ```python main_2.py```
-
-
- ## Final Words
-
- This project is part of an ongoing research. We hope this repo is useful for your research. If you need any help or have any suggestions on improving the framework, please raise an issue and we will do our best to get back to you as soon as possible.
 
+ ---
+ title: Tts Rvc Autopst
+ emoji: 💬
+ colorFrom: yellow
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 4.36.1
+ app_file: app.py
+ pinned: false
+ ---
+
+ An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).