---
license: cc-by-nc-sa-4.0
---

## Description

This model separates reverb and delay effects from vocals. It can also partially separate backing harmonies, but not completely. Random high-cut filtering was applied after the reverb and delay effects in the dataset, so the model's treatment of high frequencies is not particularly aggressive.
You can listen to examples of this model's output here!

## How to use the model?

Try it with ZFTurbo's Music-Source-Separation-Training
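As a rough, minimal sketch (not part of the original card): download a config/checkpoint pair from this repo with `huggingface_hub` and pass them to that repository's `inference.py`. The repo id, folder names, and CLI flags below are assumptions based on the usual layout of Music-Source-Separation-Training and may need adjusting for your setup or version.

```python
# Hedged sketch: run ZFTurbo's Music-Source-Separation-Training inference with the
# 256_8_4 checkpoint from this repo. Repo id, paths, and flags are assumptions.
import subprocess

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

REPO_ID = "Sucial/dereverb-echo_mel_band_roformer"  # replace with this repo's actual id

config = hf_hub_download(REPO_ID, "config_dereverb-echo_mel_band_roformer.yaml")
ckpt = hf_hub_download(REPO_ID, "dereverb-echo_mel_band_roformer_sdr_10.0169.ckpt")

# inference.py comes from https://github.com/ZFTurbo/Music-Source-Separation-Training
subprocess.run(
    [
        "python", "inference.py",
        "--model_type", "mel_band_roformer",
        "--config_path", config,
        "--start_check_point", ckpt,
        "--input_folder", "input_vocals/",  # folder with wet vocal .wav files
        "--store_dir", "separated/",        # output folder for dry/other stems
    ],
    check=True,
)
```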

## Model

- Configs_256_8_4: config_dereverb-echo_mel_band_roformer.yaml
- Model_256_8_4: dereverb-echo_mel_band_roformer_sdr_10.0169.ckpt
  - Instr dry SDR: 13.1507, Instr other SDR: 6.8830, Metric avg SDR: 10.0169

- Configs_128_4_4: config_dereverb-echo_128_4_4_mel_band_roformer.yaml
- Model_128_4_4: dereverb-echo_128_4_4_mel_band_roformer_sdr_dry_12.4235.ckpt
  - Instr dry SDR: 12.4235
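As a side note (my own check, not from the card): the reported "Metric avg SDR" is consistent with the arithmetic mean of the per-instrument SDRs, which is easy to verify.

```python
# Per-instrument validation SDRs reported above for the 256_8_4 model.
sdr = {"dry": 13.1507, "other": 6.8830}

# Assumed aggregation: simple arithmetic mean over the two instruments.
avg_sdr = sum(sdr.values()) / len(sdr)
assert abs(avg_sdr - 10.0169) < 1e-3  # consistent with the reported Metric avg SDR
print(avg_sdr)
```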

Instruments: [dry, other]
Finetuned from: model_mel_band_roformer_ep_3005_sdr_11.4360.ckpt
Datasets:

- Training datasets: 270 songs from Opencpop and GTSinger
- Validation datasets: 30 songs from my own collection
- All random reverb and delay effects are generated by this Python script and organized into the MUSDB18 dataset format (a rough sketch of this kind of augmentation follows below).
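The linked script is not reproduced here; as a rough illustration of the augmentation described above (random reverb and delay followed by a random high cut on dry vocals), here is a hedged sketch using the `pedalboard` library. Parameter ranges, effect choices, and file paths are assumptions, not the values actually used for this dataset.

```python
# Illustrative sketch only (NOT the author's generation script): apply a random
# reverb + delay to a dry vocal, then a random high-cut filter, and save the result.
import random

from pedalboard import Delay, LowpassFilter, Pedalboard, Reverb
from pedalboard.io import AudioFile

def make_wet_version(dry_path: str, wet_path: str) -> None:
    with AudioFile(dry_path) as f:
        dry = f.read(f.frames)  # shape: (channels, samples)
        sr = f.samplerate

    board = Pedalboard([
        Reverb(room_size=random.uniform(0.2, 0.9), wet_level=random.uniform(0.1, 0.5)),
        Delay(delay_seconds=random.uniform(0.05, 0.4),
              feedback=random.uniform(0.1, 0.5),
              mix=random.uniform(0.1, 0.4)),
        # Random high cut, echoing the description above, so the model does not
        # learn an overly aggressive treatment of high frequencies.
        LowpassFilter(cutoff_frequency_hz=random.uniform(8_000, 16_000)),
    ])
    wet = board(dry, sr)

    with AudioFile(wet_path, "w", samplerate=sr, num_channels=wet.shape[0]) as f:
        f.write(wet)

make_wet_version("dry_vocal.wav", "vocal_with_reverb_delay.wav")
```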

## Training log

Training logs: train.log
The image below shows the TensorBoard visualization of the training log, generated by this script.

(Figure: TensorBoard training-log visualization)
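For readers who want to reproduce a similar plot, here is a hypothetical sketch (not the linked script) that scrapes per-epoch SDR values from `train.log` and writes them to TensorBoard. The log-line format assumed by the regex is only a guess.

```python
# Hypothetical log-parsing sketch; the real train.log format may differ.
import re

from torch.utils.tensorboard import SummaryWriter

pattern = re.compile(r"Epoch\s+(\d+).*?sdr dry:\s*([\d.]+)", re.IGNORECASE)
writer = SummaryWriter("runs/dereverb-echo")

with open("train.log", encoding="utf-8") as f:
    for line in f:
        match = pattern.search(line)
        if match:
            epoch, sdr_dry = int(match.group(1)), float(match.group(2))
            writer.add_scalar("val/sdr_dry", sdr_dry, epoch)

writer.close()  # then run: tensorboard --logdir runs
```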

## Thanks