---
language: "en"
tags:
- Robust ASR
- Speech Enhancement
- PyTorch
license: "apache-2.0"
datasets:
- Voicebank
- DEMAND
metrics:
- WER
- PESQ
- eSTOI
---
# 1D CNN + Transformer Trained w/ Mimic Loss
This repository provides all the necessary tools to perform speech enhancement and
robust ASR training (in English) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model's performance on the Voicebank-DEMAND test set is:
| Release | Test PESQ | Test eSTOI | Valid WER | Test WER |
|:-----------:|:-----:| :-----:|:----:|:---------:|
| 21-03-08 | 2.92 | 85.2 | 3.20 | 2.96 |
## Pipeline description
The mimic loss training system consists of three steps:
1. A perceptual model is pre-trained on clean speech features of the
same type used by the enhancement masking system.
2. An enhancement model is trained with mimic loss, using the
pre-trained perceptual model (see the sketch below).
3. A large ASR model pre-trained on LibriSpeech is fine-tuned
using the enhancement front-end.
The enhancement and ASR models can be used together or
independently.
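As a rough illustration of step 2, here is a minimal, hypothetical sketch of a mimic-style loss in PyTorch: a spectral mapping term plus a term that asks the enhanced features to mimic the frozen perceptual model's response to clean speech. The function and tensor names (`mimic_loss`, `enhanced_spec`, `clean_spec`, `perceptual_model`) are illustrative only, not the exact recipe used to train this model.

```python
import torch
import torch.nn.functional as F

def mimic_loss(enhanced_spec, clean_spec, perceptual_model, alpha=1.0):
    # Hypothetical sketch: spectral mapping loss between enhanced and clean spectra
    spectral_loss = F.l1_loss(enhanced_spec, clean_spec)

    # Mimic term: match the frozen perceptual model's features on enhanced vs. clean speech.
    # The clean-speech targets carry no gradient; gradients flow through the enhanced
    # branch back into the enhancement model only.
    with torch.no_grad():
        clean_feats = perceptual_model(clean_spec)
    enhanced_feats = perceptual_model(enhanced_spec)
    mimic_term = F.mse_loss(enhanced_feats, clean_feats)

    return spectral_loss + alpha * mimic_term
```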
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
## Pretrained Usage
To use the mimic-loss-trained model for enhancement, use the following simple code:
```python
import torchaudio
from speechbrain.pretrained import SpectralMaskEnhancement
# Load the pretrained enhancement model
enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/mtl-mimic-voicebank",
    savedir="pretrained_models/mtl-mimic-voicebank",
)

# Enhance the example noisy file
enhanced = enhance_model.enhance_file("speechbrain/mtl-mimic-voicebank/example.wav")
# Saving enhanced signal on disk
torchaudio.save('enhanced.wav', enhanced.unsqueeze(0).cpu(), 16000)
```
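If the audio is already loaded as a tensor, the model also exposes `enhance_batch`. A minimal sketch, assuming 16 kHz mono input and a hypothetical file name `noisy.wav`:

```python
import torch
import torchaudio
from speechbrain.pretrained import SpectralMaskEnhancement

enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/mtl-mimic-voicebank",
    savedir="pretrained_models/mtl-mimic-voicebank",
)

# Load a noisy waveform (expected to be 16 kHz, mono), shape [1, time]
noisy, fs = torchaudio.load("noisy.wav")

# Enhance a batch of signals; `lengths` holds the relative length of each item
enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.0]))
torchaudio.save("enhanced.wav", enhanced.cpu(), fs)
```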
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
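For example, a sketch of loading the same model on a GPU (assuming a CUDA device is available):

```python
from speechbrain.pretrained import SpectralMaskEnhancement

# Pass run_opts to from_hparams to place the model on the GPU
enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/mtl-mimic-voicebank",
    savedir="pretrained_models/mtl-mimic-voicebank",
    run_opts={"device": "cuda"},
)
```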
## Referencing Mimic Loss
If you find mimic loss useful, please cite:
```
@inproceedings{bagchi2018spectral,
title={Spectral Feature Mapping with Mimic Loss for Robust Speech Recognition},
author={Bagchi, Deblin and Plantinga, Peter and Stiff, Adam and Fosler-Lussier, Eric},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year={2018}
}
```
## Referencing SpeechBrain
If you find SpeechBrain useful, please cite:
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
    howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
- Website: https://speechbrain.github.io/
- GitHub: https://github.com/speechbrain/speechbrain