HypothesesParadise

This repo releases the Robust HyPoradise dataset from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."

GitHub: https://github.com/YUCHEN005/RobustGER

Model: https://huggingface.co/PeacefulData/RobustGER

Data: This repo

UPDATE (Apr-18-2024): We have released the training data, which follows the same format as the test data. Due to their large size, the uploaded training data do not contain the speech features. Instead, we provide a script named add_speech_feats_to_train_data.py to generate them from raw speech (.wav). In the script, you need to specify how the raw speech path is derived from the utterance ID (a hypothetical sketch of such a mapping follows below). The required speech data are available here: CHiME-4, VB-DEMAND, LS-FreeSound, NOIZEUS.
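
For illustration only, here is a minimal sketch of the utterance-ID-to-path mapping the script asks you to fill in. The corpus roots and the ID format below are assumptions, not the actual format used in the data; adapt them to your local layout.

```python
# Hypothetical sketch of the utterance-id -> wav-path mapping to fill in
# inside add_speech_feats_to_train_data.py. The roots and the id format
# shown here are illustrative assumptions; adjust them to your setup.
import os

SPEECH_ROOTS = {
    "chime4": "/path/to/CHiME-4/audio",
    "vb_demand": "/path/to/VB-DEMAND/noisy_wav",
    "ls_freesound": "/path/to/LS-FreeSound/wav",
    "noizeus": "/path/to/NOIZEUS/wav",
}

def utt_id_to_wav_path(utt_id: str) -> str:
    """Map a (hypothetical) utterance id like 'chime4-F05_440C0207_BUS' to a .wav file."""
    corpus, _, name = utt_id.partition("-")
    return os.path.join(SPEECH_ROOTS[corpus], name + ".wav")
```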

IMPORTANT: The speech features are so large because Whisper requires a fixed input length of 30 seconds, which is much longer than most utterances. Please take the following step to remove this constraint before running add_speech_feats_to_train_data.py:

  • Modify the whisper model code x = (x + self.positional_embedding).to(x.dtype) to be x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype) (see the sketch below)
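
For reference, here is a minimal sketch of that patch, assuming the standard openai/whisper code where this line lives in AudioEncoder.forward in whisper/model.py (in some whisper versions an assert on the input shape sits just above this line and may need relaxing as well):

```python
# In <your-path>/whisper/whisper/model.py, inside AudioEncoder.forward:

# original (assumes the mel input always spans the fixed 30-second window):
#   x = (x + self.positional_embedding).to(x.dtype)

# patched (slices the positional embedding to the actual input length, so
# shorter utterances no longer have to be zero-padded out to 30 s):
x = (x + self.positional_embedding[: x.shape[1], :]).to(x.dtype)
```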

UPDATE (Apr-29-2024): To support customization, we release the script generate_robust_hp.py for users to generate train/test data from their own ASR datasets. We also release two components required for generation: the "my_jiwer" package and the "decoding.py" script. To summarize, you will need to complete the following three steps before running generate_robust_hp.py (a sketch of steps 2 and 3 follows the list):

  • Modify the whisper model code x = (x + self.positional_embedding).to(x.dtype) to be x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype) (same patch as above)
  • Specify the absolute path of the "my_jiwer" directory in generate_robust_hp.py (via sys.path.append())
  • Put our whisper decoding script "decoding.py" under your locally installed whisper directory "<your-path>/whisper/whisper"
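
As a rough illustration of steps 2 and 3 (all paths below are placeholders):

```python
# Step 2: near the top of generate_robust_hp.py, make the bundled "my_jiwer"
# package importable by appending its absolute path before importing it.
import sys
sys.path.append("/absolute/path/to/my_jiwer")  # placeholder path

# Step 3 is a file copy done in the shell rather than in Python, e.g.:
#   cp decoding.py <your-path>/whisper/whisper/decoding.py
# which places the provided script over whisper's stock decoding module.
```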

If you find this work related or useful for your research, please kindly consider citing our ICLR 2024 paper. Thank you.

@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}