kevinwang676 committed on
Commit
17d6678
1 Parent(s): 788274f

Upload 9 files

pretrained_models/CosyVoice-300M-Instruct/README.md ADDED
@@ -0,0 +1,150 @@
+ # CosyVoice
+ ## 👉🏻 [CosyVoice Demos](https://fun-audio-llm.github.io/) 👈🏻
+ [[CosyVoice Paper](https://fun-audio-llm.github.io/pdf/CosyVoice_v1.pdf)][[CosyVoice Studio](https://www.modelscope.cn/studios/iic/CosyVoice-300M)][[CosyVoice Code](https://github.com/FunAudioLLM/CosyVoice)]
+
+ For `SenseVoice`, visit the [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice) and [SenseVoice space](https://www.modelscope.cn/studios/iic/SenseVoice).
+
+ ## Install
+
+ **Clone and install**
+
+ - Clone the repo
+ ``` sh
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
+ # If cloning the submodules fails due to network problems, run the following command until it succeeds
+ cd CosyVoice
+ git submodule update --init --recursive
+ ```
+
+ - Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
+ - Create a Conda env:
+
+ ``` sh
+ conda create -n cosyvoice python=3.8
+ conda activate cosyvoice
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
+
+ # If you encounter sox compatibility issues
+ # ubuntu
+ sudo apt-get install sox libsox-dev
+ # centos
+ sudo yum install sox sox-devel
+ ```
+
+ **Model download**
+
+ We strongly recommend that you download our pretrained `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models, along with the `speech_kantts_ttsfrd` resource.
+
+ If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.
+
+ ``` python
+ # Model download via the ModelScope SDK
+ from modelscope import snapshot_download
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
+ snapshot_download('iic/speech_kantts_ttsfrd', local_dir='pretrained_models/speech_kantts_ttsfrd')
+ ```
+
+ ``` sh
+ # Model download via git; make sure git lfs is installed
+ mkdir -p pretrained_models
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
+ git clone https://www.modelscope.cn/iic/speech_kantts_ttsfrd.git pretrained_models/speech_kantts_ttsfrd
+ ```
+
+ Unzip the `ttsfrd` resource and install the `ttsfrd` package
+ ``` sh
+ cd pretrained_models/speech_kantts_ttsfrd/
+ unzip resource.zip -d .
+ pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
+ ```
+
+ **Basic Usage**
+
+ For zero_shot/cross_lingual inference, please use the `CosyVoice-300M` model.
+ For sft inference, please use the `CosyVoice-300M-SFT` model.
+ For instruct inference, please use the `CosyVoice-300M-Instruct` model.
+ First, add `third_party/AcademiCodec` and `third_party/Matcha-TTS` to your `PYTHONPATH`.
+
+ ``` sh
+ export PYTHONPATH=third_party/AcademiCodec:third_party/Matcha-TTS
+ ```
+
+ ``` python
+ from cosyvoice.cli.cosyvoice import CosyVoice
+ from cosyvoice.utils.file_utils import load_wav
+ import torchaudio
+
+ cosyvoice = CosyVoice('speech_tts/CosyVoice-300M-SFT')
+ # sft usage: synthesize Chinese text with the built-in '中文女' (Chinese female) speaker
+ print(cosyvoice.list_avaliable_spks())
+ output = cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女')
+ torchaudio.save('sft.wav', output['tts_speech'], 22050)
+
+ cosyvoice = CosyVoice('speech_tts/CosyVoice-300M')
+ # zero_shot usage: clone the voice in the 16 kHz prompt audio; the second argument is the prompt transcript
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ output = cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k)
+ torchaudio.save('zero_shot.wav', output['tts_speech'], 22050)
+ # cross_lingual usage: synthesize text in a language different from the prompt's
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ output = cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k)
+ torchaudio.save('cross_lingual.wav', output['tts_speech'], 22050)
+
+ cosyvoice = CosyVoice('speech_tts/CosyVoice-300M-Instruct')
+ # instruct usage: a natural-language instruction controls the speaking style; <strong> tags mark emphasis
+ output = cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.')
+ torchaudio.save('instruct.wav', output['tts_speech'], 22050)
+ ```
+
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+ The web demo supports sft/zero_shot/cross_lingual/instruct inference.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change speech_tts/CosyVoice-300M to speech_tts/CosyVoice-300M-SFT for sft inference, or to speech_tts/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir speech_tts/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+ You can get familiar with CosyVoice by following this recipe; a launch sketch is shown below.
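
A minimal sketch of kicking off the recipe. The `subprocess` wrapper below is purely illustrative and not part of the repo; the canonical entry point is simply running `run.sh` from the recipe directory:

``` python
# Illustrative only: launch the LibriTTS recipe from Python.
# The recipe itself is driven by examples/libritts/cosyvoice/run.sh.
import subprocess

subprocess.run(['bash', 'run.sh'], cwd='examples/libritts/cosyvoice', check=True)
```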
+
+ **Build for deployment**
+
+ Optionally, if you want to use gRPC for service deployment, run the following steps; otherwise, skip this section.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change speech_tts/CosyVoice-300M to speech_tts/CosyVoice-300M-Instruct if you want to use instruct inference
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python && python3 server.py --port 50000 --max_conc 4 --model_dir speech_tts/CosyVoice-300M && sleep infinity"
+ python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
+ ## Discussion & Communication
+
+ You can discuss directly on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official DingTalk chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
pretrained_models/CosyVoice-300M-Instruct/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
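
This and the other model files below are Git LFS pointers: the three-line stub records the sha256 `oid` and byte `size` of the real object, which `git lfs` fetches on clone/pull. A minimal sketch (not part of the repo) of verifying a downloaded file against its pointer:

``` python
# Hypothetical integrity check: hash a downloaded model file and compare it
# with the sha256 oid recorded in its Git LFS pointer file.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

expected = 'a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73'
path = 'pretrained_models/CosyVoice-300M-Instruct/campplus.onnx'
assert sha256_of(path) == expected, 'checksum mismatch'
```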
pretrained_models/CosyVoice-300M-Instruct/configuration.json ADDED
@@ -0,0 +1 @@
+ {"framework":"Pytorch","task":"text-to-speech"}
pretrained_models/CosyVoice-300M-Instruct/cosyvoice.yaml ADDED
@@ -0,0 +1,197 @@
+ # set random seed, so that you may reproduce your result.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 22050
+ text_encoder_input_size: 512
+ llm_input_size: 1024
+ llm_output_size: 1024
+ spk_embed_dim: 192
+
+ # model params
+ # for all classes/functions included in this repo, we use !name: or !new: for initialization, so that users can find every corresponding class/function from this single yaml.
+ # for system/third_party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.TransformerLM
+     text_encoder_input_size: !ref <text_encoder_input_size>
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     text_token_size: 51866
+     speech_token_size: 4096
+     length_normalized_loss: True
+     lsm_weight: 0
+     spk_embed_dim: !ref <spk_embed_dim>
+     text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         input_size: !ref <text_encoder_input_size>
+         output_size: 1024
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0
+         normalize_before: True
+         input_layer: 'linear'
+         pos_enc_layer_type: 'rel_pos_espnet'
+         selfattention_layer_type: 'rel_selfattn'
+         use_cnn_module: False
+         macaron_style: False
+         use_dynamic_chunk: False
+         use_dynamic_left_chunk: False
+         static_chunk_size: 1
+     llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
+         input_size: !ref <llm_input_size>
+         output_size: !ref <llm_output_size>
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 14
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0
+         input_layer: 'linear_legacy'
+         pos_enc_layer_type: 'rel_pos_espnet'
+         selfattention_layer_type: 'rel_selfattn'
+         static_chunk_size: 1
+
+ flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: 'mel'
+     vocab_size: 4096
+     input_frame_rate: 50
+     only_mask_loss: True
+     encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: 'linear'
+         pos_enc_layer_type: 'rel_pos_espnet'
+         selfattention_layer_type: 'rel_selfattn'
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+     length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
+         channels: 80
+         sampling_ratios: [1, 1, 1, 1]
+     decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: 'euler'
+                 t_scheduler: 'cosine'
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: 'l1'
+         estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256, 256]
+             dropout: 0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: 'gelu'
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 8]
+     upsample_kernel_sizes: [16, 16]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # processor functions
+ parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:whisper.tokenizer.get_tokenizer
+     multilingual: True
+     num_languages: 100
+     language: 'en'
+     task: 'transcribe'
+ allowed_special: 'all'
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 0
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ feat_extractor: !name:matcha.utils.audio.mel_spectrogram
+     n_fft: 1024
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 256
+     win_size: 1024
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500  # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: 'dynamic'
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding
+
+ # dataset processor pipeline
+ data_pipeline: [
+     !ref <parquet_opener>,
+     !ref <tokenize>,
+     !ref <filter>,
+     !ref <resample>,
+     !ref <compute_fbank>,
+     !ref <parse_embedding>,
+     !ref <shuffle>,
+     !ref <sort>,
+     !ref <batch>,
+     !ref <padding>,
+ ]
+
+ # train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 0.001
+     scheduler: warmuplr
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
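
The `!new:`, `!name:`, `!ref`, and `!apply:` tags above are HyperPyYAML extensions, so a config like this is loaded with `hyperpyyaml` rather than plain PyYAML. A minimal loading sketch, assuming the `hyperpyyaml` package is installed and the model directory above has been downloaded:

``` python
# Minimal sketch: materialize the config. !new: entries come back as
# constructed objects, !name: entries as partially-applied callables.
from hyperpyyaml import load_hyperpyyaml

with open('pretrained_models/CosyVoice-300M-Instruct/cosyvoice.yaml', 'r') as f:
    configs = load_hyperpyyaml(f)

llm, flow, hift = configs['llm'], configs['flow'], configs['hift']
print(type(llm).__name__, type(flow).__name__, type(hift).__name__)
```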
pretrained_models/CosyVoice-300M-Instruct/flow.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd80b089444a95e52956c57cdf177d7f6017a5af13b8a697717628a1d2be6b55
+ size 419900943
pretrained_models/CosyVoice-300M-Instruct/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e679b6ca1eff71187ffb4f3ab0444935594cdcc20a9bd12afad111ef8d6012
+ size 81896716
pretrained_models/CosyVoice-300M-Instruct/llm.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef23893d68b93406eae04719a3b800488921e08b394624e9a5ba69bea3a59a13
+ size 1242994771
pretrained_models/CosyVoice-300M-Instruct/speech_tokenizer_v1.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23b5a723ed9143aebfd9ffda14ac4c21231f31c35ef837b6a13bb9e5488abb1e
+ size 522624269
pretrained_models/CosyVoice-300M-Instruct/spk2info.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:652d571b2efec1be6dc14345c2bae52eb41affe4b5d3fa4174548e059bd633b4
+ size 1317821