ILG2021 committed · Commit 7f8566b · Parent: a3fa472 · "update"

README.md CHANGED
@@ -1,516 +1,10 @@
---
<<<<<<< HEAD
tasks:
- auto-speech-recognition
domain:
- audio
model-type:
- Non-autoregressive
frameworks:
- pytorch
backbone:
- transformer/conformer
metrics:
- CER
license: Apache License 2.0
language:
- cn
tags:
- Paraformer
- Alibaba
- INTERSPEECH 2022
datasets:
  train:
  - 60,000 hour industrial Mandarin task
  test:
  - AISHELL-1 dev/test
  - AISHELL-2 dev_android/dev_ios/dev_mic/test_android/test_ios/test_mic
  - WenetSpeech dev/test_meeting/test_net
  - SpeechIO TIOBE
  - 60,000 hour industrial Mandarin task
indexing:
  results:
  - task:
      name: Automatic Speech Recognition
    dataset:
      name: 60,000 hour industrial Mandarin task
      type: audio # optional
      args: 16k sampling rate, 8404 characters # optional
    metrics:
    - type: CER
      value: 8.53% # float
      description: greedy search, without lm, avg.
      args: default
    - type: RTF
      value: 0.0251 # float
      description: GPU inference on V100
      args: batch_size=1
widgets:
- task: auto-speech-recognition
  model_revision: v1.2.1
  inputs:
  - type: audio
    name: input
    title: Audio
  examples:
  - name: 1
    title: Example 1
    inputs:
    - name: input
      data: git://example/asr_example.wav
  inferencespec:
    cpu: 8 # number of CPUs
    memory: 4096
finetune-support: True
---
=======
language:
- zh
metrics:
- cer
---
This is a funasr_onnx model export from the paraformer-large-long model.
>>>>>>> 0db1cd07d5b3dbd29668070160a371f18ddb5d8f

# Highlights
- The Paraformer-large long-audio model integrates VAD, ASR, punctuation, and timestamp prediction. It can directly transcribe audio that is several hours long and outputs punctuated text with timestamps:
  - ASR model: the [Paraformer-large model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) is a non-autoregressive speech recognition model that achieves SOTA results on several public Chinese datasets; it can be quickly fine-tuned, customized, and used for inference via ModelScope.
  - Hotword version: the [Paraformer-large hotword model](https://www.modelscope.cn/models/damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/summary) supports hotword customization, boosting the recall and accuracy of the hotwords in a user-provided list.

## Release Notes

- March 17, 2023: [funasr-0.3.0](https://github.com/alibaba-damo-academy/FunASR/tree/main), modelscope-1.4.1
  - Improvements:
    - Added a GPU runtime, [nv-triton](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/triton_gpu): Paraformer models can be conveniently exported from ModelScope and deployed as a Triton service. Measured on a single V100 GPU: RTF 0.0032, throughput 300; see the [benchmark](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/triton_gpu#performance-benchmark).
    - Added a quantized CPU [runtime](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export) that exports quantized ONNX and libtorch models from ModelScope. Measured on a CPU-8369B, quantization improves RTF by 50% (0.00438 -> 0.00226) and doubles throughput (228 -> 442); see the [benchmark](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python).
    - [Added a C++ gRPC deployment solution](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/grpc): combined with the C++ [onnxruntime](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/onnxruntime) runtime and [quantization](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export), performance doubles compared with the Python runtime.
    - [16k VAD model](https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) and [8k VAD model](https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-8k-common/summary): the ModelScope pipeline now supports streaming inference, accepting audio chunks as short as 10 ms; see [usage](https://github.com/alibaba-damo-academy/FunASR/discussions/236).
    - Improved the [punctuation model](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary): punctuation accuracy improves in subjective evaluation (absolute F-score gain 55.6 -> 56.5).
    - Added a real-time subtitle [demo](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/grpc) built on the gRPC service, using a 2-pass recognition setup: the [streaming Paraformer model](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary) produces the on-screen text and the [offline Paraformer-large model](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) corrects the recognition result.
  - New models:
    - [16k streaming Paraformer model](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary): accepts streaming audio input for real-time speech recognition; see [usage](https://github.com/alibaba-damo-academy/FunASR/discussions/241). It can be deployed behind the gRPC service to provide real-time subtitles.
    - [Streaming punctuation model](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727/summary): punctuates streaming ASR output, invoked at VAD endpoints. Combined with a real-time ASR model it enables readable real-time subtitles; see [usage](https://github.com/alibaba-damo-academy/FunASR/discussions/238).
    - [TP-Aligner timestamp model](https://www.modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary): takes audio and its transcript and outputs character-level timestamps, on par with the Kaldi FA model (60.3 ms vs. 69.3 ms); it can be freely combined with ASR models; see [usage](https://github.com/alibaba-damo-academy/FunASR/discussions/246).
    - Finance-domain model, [8k Paraformer-large-3445vocab](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-8k-finance-vocab3445/summary): fine-tuned on 1,000 hours of data; recognition on finance test sets improves by a relative 5%, and domain keyword recall by a relative 7%.
    - Audio/video-domain model, [16k Paraformer-large-3445vocab](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-audio_and_video-vocab3445/summary): fine-tuned on 10,000 hours of data; recognition on audio/video test sets improves by a relative 8%.
    - [8k speaker verification model](https://www.modelscope.cn/models/damo/speech_xvector_sv-en-us-callhome-8k-spk6135-pytorch/summary): an English speaker verification model trained on CallHome, also usable for voiceprint feature extraction.
    - Speaker diarization models, [16k SOND Chinese model](https://www.modelscope.cn/models/damo/speech_diarization_sond-zh-cn-alimeeting-16k-n16k4-pytorch/summary) and [8k SOND English model](https://www.modelscope.cn/models/damo/speech_diarization_sond-en-us-callhome-8k-n16k4-pytorch/summary): best performance on AliMeeting and CallHome, with DERs of 4.46% and 11.13% respectively.
    - UniASR unified streaming/offline models:
      [16k UniASR Burmese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-my-16k-common-vocab696-pytorch/summary), [16k UniASR Hebrew](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-he-16k-common-vocab1085-pytorch/summary), [16k UniASR Urdu](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ur-16k-common-vocab877-pytorch/summary), [8k UniASR Chinese finance domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-finance-vocab3445-online/summary), [16k UniASR Chinese audio/video domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-audio_and_video-vocab3445-online/summary).

- Historical release notes: [detailed versions](https://github.com/alibaba-damo-academy/FunASR/releases)
  - Key models:
    - [MFCCA multi-channel multi-speaker recognition model](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary)
    - Punctuation models:
      [general Chinese punctuation model](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary)
    - Speaker verification models:
      [speaker verification model](https://www.modelscope.cn/models/damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch/summary)
    - VAD models:
      [16k voice activity detection (VAD) model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary),
      [8k voice activity detection (VAD) model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-8k-common/summary)
    - Offline Paraformer models:
      [16k Paraformer-large Chinese/English](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary),
      [16k Paraformer-large hotword](https://www.modelscope.cn/models/damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/summary),
      [16k Paraformer-large long-audio](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary),
      [16k Paraformer Chinese](https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8358-tensorflow1/summary),
      [16k Paraformer-large Chinese](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8358-tensorflow1/summary),
      [8k Paraformer Chinese](https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1/summary),
      [small on-device Paraformer command-word model](https://www.modelscope.cn/models/damo/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch/summary)
    - UniASR unified streaming/offline models:
      [UniASR Chinese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-online/summary),
      [UniASR Chinese dialects](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-cn-dialect-16k-vocab8358-tensorflow1-online/summary),
      [16k UniASR Minnan](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-minnan-16k-common-vocab3825/summary),
      [16k UniASR French](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fr-16k-common-vocab3472-tensorflow1-online/summary),
      [16k UniASR German](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-de-16k-common-vocab3690-tensorflow1-online/summary),
      [16k UniASR Vietnamese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-vi-16k-common-vocab1001-pytorch-online/summary),
      [16k UniASR Persian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fa-16k-common-vocab1257-pytorch-online/summary),
      [16k UniASR-large Chinese](https://modelscope.cn/models/damo/speech_UniASR-large_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-offline/summary),
      [16k UniASR Japanese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ja-16k-common-vocab93-tensorflow1-online/summary),
      [16k UniASR Indonesian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-id-16k-common-vocab1067-tensorflow1-online/summary),
      [16k UniASR Portuguese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-pt-16k-common-vocab1617-tensorflow1-online/summary),
      [16k UniASR English](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-en-16k-common-vocab1080-tensorflow1-online/summary),
      [16k UniASR Russian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ru-16k-common-vocab1664-tensorflow1-online/summary),
      [16k UniASR Korean](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ko-16k-common-vocab6400-tensorflow1-online/summary),
      [16k UniASR Spanish](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-es-16k-common-vocab3445-tensorflow1-online/summary),
      [16k UniASR Cantonese (Simplified)](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-cantonese-CHS-16k-common-vocab1468-tensorflow1-online/files),
      [8k UniASR Chinese vocab8358](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-common-vocab8358-tensorflow1-offline/summary),
      [8k UniASR streaming model](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online/summary)
    - Unsupervised pre-trained models:
      [Chinese unsupervised pre-trained Data2vec model](https://www.modelscope.cn/models/damo/speech_data2vec_pretrain-zh-cn-aishell2-16k-pytorch/summary),
      [Paraformer model with unsupervised Data2vec pre-training](https://www.modelscope.cn/models/damo/speech_data2vec_pretrain-paraformer-zh-cn-aishell2-16k/summary)

## Introduction

Paraformer is an efficient non-autoregressive end-to-end speech recognition framework proposed by the speech team of Alibaba DAMO Academy. This project provides the general-purpose Chinese Paraformer recognition model, trained on tens of thousands of hours of industrial-grade labeled audio, which ensures strong general-purpose recognition quality. The model can be applied to scenarios such as voice input methods, voice navigation, and intelligent meeting minutes.

<p align="center">
<img src="fig/struct.png" alt="Paraformer model architecture" width="500" />
</p>

As the figure above shows, Paraformer consists of five parts: Encoder, Predictor, Sampler, Decoder, and the loss function. The Encoder can adopt different network structures, such as self-attention, Conformer, or SAN-M. The Predictor is a two-layer FFN that predicts the number of target characters and extracts the acoustic embedding corresponding to each target character. The Sampler has no learnable parameters; from the acoustic embeddings and the target embeddings it produces semantically informed feature embeddings. The Decoder is similar to that of an autoregressive model but models bidirectionally (autoregressive decoders are unidirectional). Besides cross-entropy (CE) and the discriminative MWER objective, the loss function also includes the MAE objective for the Predictor.

Its key ideas are (a toy illustration of the first follows this list):
- Predictor module: a Continuous Integrate-and-Fire (CIF) based predictor extracts the acoustic embedding corresponding to each target character, allowing a more accurate prediction of the number of target characters in the speech.
- Sampler: through sampling, the acoustic embeddings and target-character embeddings are transformed into semantically informed feature embeddings, which work with the bidirectional Decoder to strengthen the model's context modeling.
- An MWER training criterion based on negative sampling.
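
As a rough intuition for CIF (an illustrative toy sketch, not the project's actual implementation): per-frame weights are accumulated, and each time the accumulator crosses a threshold of 1.0 a token boundary "fires", turning the weighted frames seen so far into that token's acoustic embedding.

```python
import numpy as np

def cif(frames: np.ndarray, alphas: np.ndarray, threshold: float = 1.0):
    """Toy CIF: frames is (T, D) encoder output, alphas is (T,) non-negative weights."""
    embeddings = []
    acc = 0.0                              # integrated weight so far
    accum_emb = np.zeros(frames.shape[1])  # weighted sum of frames so far
    for h, a in zip(frames, alphas):
        if acc + a < threshold:            # keep integrating
            acc += a
            accum_emb = accum_emb + a * h
        else:                              # fire: split this frame's weight at the boundary
            left = threshold - acc
            embeddings.append(accum_emb + left * h)
            acc = a - left                 # the remainder opens the next token
            accum_emb = acc * h
    return np.stack(embeddings) if embeddings else np.zeros((0, frames.shape[1]))

# The number of fired tokens approximates sum(alphas), i.e. the predicted character count.
```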

For more details, see:
- Paper: [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317)
- Paper walkthrough (in Chinese): [Paraformer: 高识别率、高计算效率的单轮非自回归端到端语音识别模型](https://mp.weixin.qq.com/s/xQ87isj5_wxWiQs4qUXtVw)

## How to Use the Model and Train Your Own

The pre-trained model provided here is a general-domain recognition model trained on large-scale data. Developers can further customize it for their own domain using ModelScope's fine-tuning functionality or the project's GitHub repository, [FunASR](https://github.com/alibaba-damo-academy/FunASR).

### Developing in a Notebook

For development work we particularly recommend using a Notebook for offline processing. Log in to your ModelScope account and click the "Open in Notebook" button at the top right of the model page; on first use you will be prompted to link an Alibaba Cloud account. After linking, choose your compute resources and create an instance; once the instance is ready, open the development environment and start making calls.

#### Inference with ModelScope

- Supported audio input formats:
  - wav file path, e.g.: data/test/audios/asr_example.wav
  - pcm file path, e.g.: data/test/audios/asr_example.pcm
  - wav file URL, e.g.: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav
  - wav binary data (bytes), e.g. read from a file or recorded from a microphone (see the sketch after the URL example below)
  - already-decoded audio, e.g. audio, rate = soundfile.read("asr_example_zh.wav"), of type numpy.ndarray or torch.Tensor
  - a wav.scp file in the following format:

```sh
cat wav.scp
asr_example1  data/test/audios/asr_example1.wav
asr_example2  data/test/audios/asr_example2.wav
...
```

- If the input is a wav file URL, the API can be called as in the following example:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    vad_model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    vad_model_revision="v1.1.8",
    punc_model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
    punc_model_revision="v1.1.6")

rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_vad_punc_example.wav')
print(rec_result)
```
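
- For wav bytes input, a minimal sketch (an illustration of the bytes format listed above, not an example from the original README):

```python
# Read raw wav bytes (e.g. from a file or a microphone buffer) and pass them directly.
with open('asr_example_zh.wav', 'rb') as f:
    audio_bytes = f.read()
rec_result = inference_pipeline(audio_in=audio_bytes)
print(rec_result)
```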

- If the input audio is in pcm format, pass the sampling rate via the audio_fs argument, e.g.:

```python
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_vad_punc_example.pcm', audio_fs=16000)
```

- If the input audio is a local wav file, the API can be called as follows:

```python
rec_result = inference_pipeline(audio_in='asr_vad_punc_example.wav')
```

- If the input is a wav.scp file (note: the file name must end with .scp), add the output_dir argument to write the recognition results to files:

```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    vad_model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    vad_model_revision="v1.1.8",
    punc_model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
    punc_model_revision="v1.1.6",
    output_dir='./output_dir')

inference_pipeline(audio_in="wav.scp")
```

The output directory is structured as follows:

```sh
tree output_dir/
output_dir/
└── 1best_recog
    ├── rtf
    ├── score
    ├── text
    └── time_stamp

1 directory, 4 files
```

rtf: statistics of inference time

score: scores of the recognition paths

text: speech recognition results

time_stamp: timestamp results

- If the input is already-decoded audio, the API can be called as follows:

```python
import soundfile

waveform, sample_rate = soundfile.read("asr_vad_punc_example.wav")
rec_result = inference_pipeline(audio_in=waveform)
```

- Free combination of the ASR, VAD, and PUNC models

The VAD and punctuation (PUNC) models can be combined freely as needed:
```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    vad_model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    vad_model_revision="v1.1.8",
    punc_model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
    punc_model_revision="v1.1.6",
)
```
To add an LM during decoding, include the configuration lm_model='damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch', as sketched below.
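
A minimal sketch of that configuration (assuming lm_model is accepted as a pipeline keyword in the same way as vad_model and punc_model; only the LM name comes from the original text):

```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    lm_model='damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch',  # Transformer-LM for shallow fusion
)
```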

The long-audio version integrates the VAD, ASR, and punctuation models. To run without the VAD or punctuation model, set vad_model="" or punc_model=""; see the [documentation](https://github.com/alibaba-damo-academy/FunASR/discussions/134) for details. For example:
```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    vad_model='',
    punc_model='',
)
```

The long-audio version outputs timestamps by default. To disable them, pass param_dict['use_timestamp'] = False:
```python
param_dict = {'use_timestamp': False}
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_vad_punc_example.wav', param_dict=param_dict)
```

#### Fine-tuning with ModelScope

- Fine-tuning on a ModelScope dataset:

Take the [AISHELL-1](https://www.modelscope.cn/datasets/speech_asr/speech_asr_aishell1_trainsets/summary) dataset as an example. The full dataset has been uploaded to ModelScope and can be found by its English name (speech_asr_aishell1_trainsets):

```python
import os
from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer
from modelscope.msdatasets.audio.asr_dataset import ASRDataset

def modelscope_finetune(params):
    if not os.path.exists(params.output_dir):
        os.makedirs(params.output_dir, exist_ok=True)
    # dataset split ["train", "validation"]
    ds_dict = ASRDataset.load(params.data_path, namespace='speech_asr')
    kwargs = dict(
        model=params.model,
        data_dir=ds_dict,
        dataset_type=params.dataset_type,
        work_dir=params.output_dir,
        batch_bins=params.batch_bins,
        max_epoch=params.max_epoch,
        lr=params.lr)
    trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
    trainer.train()


if __name__ == '__main__':
    from funasr.utils.modelscope_param import modelscope_args
    params = modelscope_args(model="damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch")
    params.output_dir = "./checkpoint"                  # directory for saved checkpoints
    params.data_path = "speech_asr_aishell1_trainsets"  # dataset path: a dataset uploaded to ModelScope, or local data
    params.dataset_type = "small"                       # "small" for small datasets; use "large" for more than 1000 hours
    params.batch_bins = 2000                            # batch size: fbank frames if dataset_type="small", milliseconds if dataset_type="large"
    params.max_epoch = 50                               # maximum number of training epochs
    params.lr = 0.00005                                 # learning rate

    modelscope_finetune(params)
```

Save the code above as a py file (e.g. finetune.py) and run it directly with python finetune.py; for multi-GPU training, use a command like:

```sh
CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch --nproc_per_node 2 finetune.py > log.txt 2>&1
```

- Fine-tuning on a private dataset:

Simply set data_path to the location of your data:
```python
params.data_path = "speech_asr_aishell1_trainsets"
```
Prepare the private dataset in the following layout:
```sh
tree ./example_data/
./example_data/
├── validation
│   ├── text
│   └── wav.scp
└── train
    ├── text
    └── wav.scp
2 directories, 4 files
```

The text files contain the audio transcriptions and the wav.scp files contain absolute paths to the wav audio, for example:

```sh
cat ./example_data/text
BAC009S0002W0122 而 对 楼 市 成 交 抑 制 作 用 最 大 的 限 购
BAC009S0002W0123 也 成 为 地 方 政 府 的 眼 中 钉
english_example_1 hello world
english_example_2 go swim 去 游 泳

cat ./example_data/wav.scp
BAC009S0002W0122 /mnt/data/wav/train/S0002/BAC009S0002W0122.wav
BAC009S0002W0123 /mnt/data/wav/train/S0002/BAC009S0002W0123.wav
english_example_1 /mnt/data/wav/train/S0002/english_example_1.wav
english_example_2 /mnt/data/wav/train/S0002/english_example_2.wav
```

### Developing on a Local Machine

#### Fine-tuning and inference with ModelScope

Customized fine-tuning and inference on ModelScope datasets and private datasets are supported, used in the same way as in the Notebook.

#### Fine-tuning and inference with FunASR

The FunASR framework supports training & fine-tuning of the industrial-grade speech recognition models open-sourced on ModelScope, making ASR research and production easier for researchers and developers. It is open-sourced on GitHub: https://github.com/alibaba-damo-academy/FunASR . If you run into any problems, feel free to [contact us](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/images/dingding.jpg).

#### Installing FunASR

- Install FunASR and ModelScope ([details](https://github.com/alibaba-damo-academy/FunASR/wiki)):

```sh
pip install "modelscope[audio_asr]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip install --editable ./
```
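
A quick sanity check after installation (a suggestion added here, not part of the original instructions):

```sh
python -c "import funasr, modelscope; print('ok')"
```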
#### Inference with FunASR

The following uses a private dataset as an example of how to run inference and fine-tuning with Paraformer-large in the FunASR framework.

```sh
cd egs_modelscope/paraformer/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch
python infer.py
```

#### Fine-tuning with FunASR
```sh
cd egs_modelscope/paraformer/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch
python finetune.py
```

Multi-GPU fine-tuning works the same way as in the Notebook.

## Benchmark

Optimized with large-scale data and a large model, Paraformer achieves the current SOTA results on a series of speech recognition benchmarks. Below are results on the academic datasets AISHELL-1, AISHELL-2, and WenetSpeech, and on the white-box test sets of the open SpeechIO TIOBE benchmark. On the Chinese ASR evaluation tasks commonly used in academia, it substantially outperforms the results in currently published papers and models trained on the individual closed datasets alone. These are the results of the [Paraformer-large model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-aishell1-vocab8404-pytorch/summary) without the VAD and punctuation models.

### AISHELL-1

| AISHELL-1 test   | w/o LM | w/ LM |
|:----------------:|:------:|:-----:|
| Espnet           | 4.90   | 4.70  |
| Wenet            | 4.61   | 4.36  |
| K2               | -      | 4.26  |
| Blockformer      | 4.29   | 4.05  |
| Paraformer-large | 1.95   | 1.68  |

### AISHELL-2

|                  | dev_ios | test_android | test_ios | test_mic |
|:----------------:|:-------:|:------------:|:--------:|:--------:|
| Espnet           | 5.40    | 6.10         | 5.70     | 6.10     |
| WeNet            | -       | -            | 5.39     | -        |
| Paraformer-large | 2.80    | 3.13         | 2.85     | 3.06     |

### WenetSpeech

|                  | dev  | test_meeting | test_net |
|:----------------:|:----:|:------------:|:--------:|
| Espnet           | 9.70 | 15.90        | 8.80     |
| WeNet            | 8.60 | 17.34        | 9.26     |
| K2               | 7.76 | 13.41        | 8.71     |
| Paraformer-large | 3.57 | 6.97         | 6.74     |

### [SpeechIO TIOBE](https://github.com/SpeechColab/Leaderboard)

With the Transformer-LM applied via shallow fusion, Paraformer-large achieves the current SOTA results on the white-box test sets of the open SpeechIO TIOBE benchmark. The [Transformer-LM model](https://modelscope.cn/models/damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch/summary) is now open-sourced on ModelScope. The results without LM and with the Transformer-LM are shown below:

- Decode config w/o LM:
  - Decode without LM
  - Beam size: 1
- Decode config w/ LM:
  - Decode with [Transformer-LM](https://modelscope.cn/models/damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch/summary)
  - Beam size: 10
  - LM weight: 0.15

| testset               | w/o LM | w/ LM |
|:---------------------:|:------:|:-----:|
| SPEECHIO_ASR_ZH00001  | 0.49   | 0.35  |
| SPEECHIO_ASR_ZH00002  | 3.23   | 2.86  |
| SPEECHIO_ASR_ZH00003  | 1.13   | 0.80  |
| SPEECHIO_ASR_ZH00004  | 1.33   | 1.10  |
| SPEECHIO_ASR_ZH00005  | 1.41   | 1.18  |
| SPEECHIO_ASR_ZH00006  | 5.25   | 4.85  |
| SPEECHIO_ASR_ZH00007  | 5.51   | 4.97  |
| SPEECHIO_ASR_ZH00008  | 3.69   | 3.18  |
| SPEECHIO_ASR_ZH00009  | 3.02   | 2.78  |
| SPEECHIO_ASR_ZH000010 | 3.35   | 2.99  |
| SPEECHIO_ASR_ZH000011 | 1.54   | 1.25  |
| SPEECHIO_ASR_ZH000012 | 2.06   | 1.68  |
| SPEECHIO_ASR_ZH000013 | 2.57   | 2.25  |
| SPEECHIO_ASR_ZH000014 | 3.86   | 3.08  |
| SPEECHIO_ASR_ZH000015 | 3.34   | 2.67  |

## Usage and Scope

Runtime environment:
- Currently runs only on Linux-x86_64; Mac and Windows are not supported.

Usage:
- Direct inference: decode input audio directly and output the target text.
- Fine-tuning: load the pre-trained model and continue training on private or open data.

Scope and target scenarios:
- Suitable for offline speech recognition scenarios such as transcribing recorded audio; it performs even better with GPU inference. There is no limit on the input audio length, which can be several hours.

## Model Limitations and Possible Biases

Differences in the feature-extraction pipeline and tools, as well as in training tools, can introduce small differences in CER (<0.1%); differences in the GPU inference environment lead to different RTF values.

## Related Papers and Citation

```bibtex
@inproceedings{gao2022paraformer,
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
  booktitle={INTERSPEECH},
  year={2022}
}
```

The original model link is:
https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
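
Since the new README describes a funasr_onnx export, a minimal usage sketch may help (an assumption based on the funasr_onnx package's Paraformer runtime, not part of the original README; the local model_dir path is hypothetical):

```python
from funasr_onnx import Paraformer

# Hypothetical path to the downloaded ONNX export of this model.
model_dir = './speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404'
model = Paraformer(model_dir, batch_size=1)

result = model(['asr_example_zh.wav'])  # list of wav paths
print(result)
```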

model.pb DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd7c1c6fa7f499377d238b3f3790eb2f6f16318a0e33e33ae50936725f4d8388
size 900702648