FunAudioLLM is a framework designed to enhance natural voice interactions between humans and large language models (LLMs). At its core are two innovative models: SenseVoice for high-precision multilingual speech recognition, emotion recognition, and audio event detection; and CosyVoice for natural speech generation with multi-language, timbre, and emotion control. SenseVoice delivers exceptionally low latency and supports over 50 languages, while CosyVoice excels in multi-lingual voice generation, zero-shot voice generation, cross-lingual voice cloning, and instruction-following capabilities. The models behind SenseVoice and CosyVoice have been open-sourced on ModelScope and Hugging Face, with the corresponding training, inference, and fine-tuning code released on GitHub. By integrating these models with LLMs, FunAudioLLM enables applications such as speech translation, emotional voice chat, interactive podcasts, and expressive audiobook narration, thereby pushing the boundaries of voice interaction technology.
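Since the inference code for both models is released openly, a short transcription sketch gives a sense of how SenseVoice fits into such a pipeline. The snippet below assumes the FunASR AutoModel interface and the iic/SenseVoiceSmall checkpoint as published on ModelScope; exact model IDs, arguments, and output formats may differ between releases, so treat it as illustrative rather than definitive.

```python
# Minimal sketch: multilingual transcription with SenseVoice via FunASR's AutoModel.
# Model ID, arguments, and postprocessing follow the public SenseVoice examples
# and should be checked against the release you actually install.
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess

# Load the SenseVoice-Small checkpoint (assumed ModelScope model ID).
model = AutoModel(
    model="iic/SenseVoiceSmall",
    trust_remote_code=True,
    device="cuda:0",
)

# Transcribe an audio file; SenseVoice also emits language, emotion, and
# audio-event tags inline, which the postprocessor renders as readable text.
result = model.generate(
    input="example.wav",
    language="auto",   # or a specific code such as "zh", "en", "ja", "ko"
    use_itn=True,      # inverse text normalization (punctuation, numerals)
)
print(rich_transcription_postprocess(result[0]["text"]))
```

In a full FunAudioLLM-style application, the text produced here would be passed to an LLM, and the LLM's reply would be synthesized back to speech with CosyVoice to close the voice-interaction loop.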