---
license: apache-2.0
metrics:
- cer
base_model:
- openai/whisper-large-v3-turbo
pipeline_tag: automatic-speech-recognition
library_name: transformers
---

## Welcome
If you find this model helpful, please *like* it and star our repositories at https://github.com/LianjiaTech/BELLE and https://github.com/shuaijiang/Whisper-Finetune.

# Belle-whisper-large-v3-turbo-zh
Belle-whisper-large-v3-turbo-zh is whisper-large-v3-turbo fine-tuned to enhance Chinese speech recognition.
It demonstrates a **24-64%** relative improvement over whisper-large-v3-turbo on Chinese ASR benchmarks, including AISHELL-1, AISHELL-2, WenetSpeech, and HKUST.

As with Belle-whisper-large-v3-zh-punct, the punctuation marks come from the model [punc_ct-transformer_cn-en-common-vocab471067-large](https://www.modelscope.cn/models/iic/punc_ct-transformer_cn-en-common-vocab471067-large/)
and were added to the training datasets.

## Usage
```python
from transformers import pipeline

transcriber = pipeline(
  "automatic-speech-recognition",
  model="BELLE-2/Belle-whisper-large-v3-turbo-zh"
)

# Force Chinese transcription so the language is not auto-detected
transcriber.model.config.forced_decoder_ids = (
  transcriber.tokenizer.get_decoder_prompt_ids(
    language="zh",
    task="transcribe"
  )
)

transcription = transcriber("my_audio.wav")
```
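
For audio longer than 30 seconds, the pipeline can transcribe in chunks. The sketch below uses the standard `transformers` pipeline options `chunk_length_s`, `return_timestamps`, and `generate_kwargs` (an alternative way to pin the language); the audio path is a placeholder.

```python
from transformers import pipeline

# A sketch for long-form transcription; these are generic transformers
# pipeline options, not settings prescribed by this model card.
transcriber = pipeline(
  "automatic-speech-recognition",
  model="BELLE-2/Belle-whisper-large-v3-turbo-zh",
  chunk_length_s=30,  # split long audio into 30-second chunks
)

result = transcriber(
  "my_long_audio.wav",  # placeholder path
  return_timestamps=True,  # also return segment-level timestamps
  generate_kwargs={"language": "zh", "task": "transcribe"},
)

print(result["text"])
for chunk in result["chunks"]:
  print(chunk["timestamp"], chunk["text"])
```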

## Fine-tuning
|       Model      |  (Re)Sample Rate   |                      Training Datasets         | Fine-tuning (full or PEFT) |
|:----------------:|:-------:|:----------------------------------------------------------:|:-----------:|
| Belle-whisper-large-v3-turbo-zh | 16 kHz | [AISHELL-1](https://openslr.magicdatatech.com/resources/33/), [AISHELL-2](https://www.aishelltech.com/aishell_2), [WenetSpeech](https://wenet.org.cn/WenetSpeech/), [HKUST](https://catalog.ldc.upenn.edu/LDC2005S15) | [full fine-tuning](https://github.com/shuaijiang/Whisper-Finetune) |


If you want to fine-tune the model on your own datasets, please refer to the [github repo](https://github.com/shuaijiang/Whisper-Finetune); a minimal sketch of the standard training loop follows.
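
The sketch below follows the usual Hugging Face Whisper fine-tuning recipe, not the exact configuration from the Whisper-Finetune repo. `my_dataset` is a placeholder for your own 16 kHz corpus with `audio` and `sentence` columns, and the hyperparameters are illustrative.

```python
import torch
from dataclasses import dataclass
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

# Start from the released checkpoint.
model_name = "BELLE-2/Belle-whisper-large-v3-turbo-zh"
processor = WhisperProcessor.from_pretrained(model_name, language="zh", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_name)

def prepare(batch):
    # 16 kHz waveform -> log-Mel input features; transcript -> label token ids.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

@dataclass
class DataCollatorSpeechSeq2Seq:
    processor: WhisperProcessor

    def __call__(self, features):
        batch = self.processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt",
        )
        labels = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features],
            return_tensors="pt",
        )
        # Mask padding so it is ignored by the cross-entropy loss.
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100
        )
        return batch

# `my_dataset` is a placeholder: a datasets.Dataset with "audio" (16 kHz)
# and "sentence" columns; supply your own data here.
train_dataset = my_dataset.map(prepare, remove_columns=my_dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="./belle-whisper-finetuned",  # placeholder output path
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=4000,
    fp16=torch.cuda.is_available(),
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorSpeechSeq2Seq(processor),
)
trainer.train()
```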

    
## CER(%) ↓ 
|      Model       |  Language Tag   | aishell_1_test(↓) |aishell_2_test(↓)| wenetspeech_net(↓) | wenetspeech_meeting(↓) | HKUST_dev(↓)|   
|:----------------:|:-------:|:-----------:|:-----------:|:--------:|:-----------:|:-------:|
| whisper-large-v3 | Chinese |  8.085 | 5.475  |  11.72   |  20.15 | 28.597 |
| whisper-large-v3-turbo | Chinese |  8.639 | 6.014  |  13.507   |  20.313 | 37.324 |
| Belle-whisper-large-v3-turbo-zh | Chinese |   3.070    | 4.114  |   10.230    | 13.357 | 18.944 |

It is worth noting that, compared with whisper-large-v3 and whisper-large-v3-turbo, Belle-whisper-large-v3-turbo-zh achieves a substantial CER reduction on every benchmark.
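
For reference, the CER metric above can be computed with the Hugging Face `evaluate` library (which wraps `jiwer`); the transcripts below are illustrative, not taken from the benchmarks.

```python
import evaluate

# Character error rate: character-level edit distance divided by
# the number of reference characters.
cer = evaluate.load("cer")

references = ["今天天气真好"]    # ground-truth transcript (illustrative)
predictions = ["今天天气真好吗"]  # model output (illustrative)

score = cer.compute(predictions=predictions, references=references)
print(f"CER: {score:.3%}")  # one insertion over six reference characters
```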


## Citation

Please cite our paper and GitHub repositories when using our code, data, or model.

```
@misc{BELLE,
  author = {BELLEGroup},
  title = {BELLE: Be Everyone's Large Language model Engine},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```