---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
- generative error correction
- large language model
- LLaMA
metrics:
- wer
datasets:
- PeacefulData/Robust-HyPoradise
---

This repo releases the trained LLaMA-adapter weights from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."

**GitHub:** https://github.com/YUCHEN005/RobustGER

**Data:** https://huggingface.co/datasets/PeacefulData/Robust-HyPoradise

**Model:** This repo

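As a minimal sketch of getting started with the released weights, the adapter checkpoint can be fetched with `huggingface_hub` and inspected as below. The `repo_id` and `filename` are hypothetical placeholders, not confirmed by this card; check this repo's file listing and the GitHub code for the exact names and loading logic.

```python
# Minimal sketch of fetching and inspecting the adapter checkpoint.
# NOTE: repo_id and filename are hypothetical placeholders; check the
# repo's file listing for the actual names.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="PeacefulData/RobustGER",  # hypothetical: replace with this repo's ID
    filename="adapter_weights.pth",    # hypothetical: replace with the actual checkpoint file
)

# LLaMA-adapter checkpoints are typically plain PyTorch state dicts.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"{len(state_dict)} entries, e.g. {list(state_dict)[:3]}")
```

See the GitHub repo above for the full inference pipeline that applies these adapter weights to a LLaMA backbone.
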
If you find this work useful for your research, please kindly cite our ICLR 2024 paper. Thank you.

```bib
@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```