speechbrainteam committed
Commit c106fda
Parent: bbbfa22
Update README.md

README.md CHANGED
@@ -61,6 +61,28 @@ torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 16000)
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
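
For context, a minimal end-to-end sketch of GPU inference; the model identifier `speechbrain/sepformer-whamr16k` and the input file `mixture.wav` are assumptions here, so substitute the source and audio path from the usage example above:
```python
import torchaudio
from speechbrain.pretrained import SepformerSeparation as separator

# Load the pretrained separator on the GPU by passing run_opts to from_hparams
# (source and savedir below are assumptions; reuse the ones from the usage example above).
model = separator.from_hparams(
    source="speechbrain/sepformer-whamr16k",
    savedir="pretrained_models/sepformer-whamr16k",
    run_opts={"device": "cuda"},
)

# Separate a mixture ("mixture.wav" is a placeholder path) and move the
# estimated sources back to the CPU before saving them at 16 kHz.
est_sources = model.separate_file(path="mixture.wav")
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 16000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 16000)
```
The only change compared to CPU inference is the `run_opts={"device":"cuda"}` argument; everything else stays the same.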

### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it (a quick sanity check is shown after these steps):
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```

3. Run Training:
```bash
cd recipes/WHAMandWHAMR/separation/
python train.py hparams/sepformer-whamr.yaml --data_folder=your_data_folder --sample_rate=16000
```
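
If the installation in step 2 succeeded, the package should be importable before you launch training; this is just an optional sanity check, not part of the official recipe:
```bash
# Optional: confirm the editable SpeechBrain install is importable
python -c "import speechbrain"
```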

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1QiQhp1vi5t4UfNpNETA48_OmPiXnUy8O?usp=sharing).

### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.