---
license: mit
---

## Model Overview

The Melody Guided Music Generation (MG²) model uses melody to guide music generation, achieving strong results with a simple architecture and minimal resource requirements. MG² aligns melody with audio waveforms and text descriptions through a multimodal alignment module, and conditions its diffusion module on the learned melody representations. This enables MG² to generate music that matches the style of a given audio clip and reflects the content of a text description.
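To make this flow concrete, here is a minimal, hypothetical PyTorch sketch of melody-guided conditioning. All module names, feature sizes, and the toy fixed-step denoising loop are illustrative assumptions, not MG²'s released implementation.

```python
import torch
import torch.nn as nn

DIM = 512  # assumed shared embedding dimension

class AlignmentModule(nn.Module):
    """Projects melody, audio, and text features into one shared space (assumed)."""
    def __init__(self):
        super().__init__()
        self.melody_proj = nn.Linear(128, DIM)   # hypothetical melody feature size
        self.audio_proj = nn.Linear(1024, DIM)   # hypothetical audio encoder output size
        self.text_proj = nn.Linear(768, DIM)     # hypothetical text encoder output size

    def forward(self, melody, audio, text):
        # Training would pull the three projections together (e.g., contrastively);
        # at generation time only the melody embedding is needed as a condition.
        return self.melody_proj(melody), self.audio_proj(audio), self.text_proj(text)

class ConditionedDenoiser(nn.Module):
    """Toy stand-in for the diffusion module, conditioned on the melody embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM * 2, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, noisy_latent, melody_emb):
        return self.net(torch.cat([noisy_latent, melody_emb], dim=-1))

align, denoiser = AlignmentModule(), ConditionedDenoiser()
melody_emb, _, _ = align(torch.randn(1, 128), torch.randn(1, 1024), torch.randn(1, 768))

# Toy sampler: start from noise and iteratively denoise under melody guidance.
latent = torch.randn(1, DIM)
for _ in range(50):
    latent = latent - 0.1 * denoiser(latent, melody_emb)
```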

## Demo

Explore the capabilities of the MG² model through an online demo:

- **Demo Link:** Model Demo
- **Instructions:** Enter a text description, then click "Generate" to hear the music produced by the model.

## GitHub Repository

The code and additional resources for the MG² model are available in the project's GitHub repository.

## Integration with Transformers and Hugging Face Hub

We are currently working on integrating MG² into the Hugging Face Transformers library and making it available on the Hugging Face Hub 🤗.
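Once the weights are published, downloading them might look like the sketch below. The repository id `ManzhenWei/MG2` is an assumption; check the model page for the actual identifier and loading instructions once the integration lands.

```python
# Hypothetical download sketch; the repo id below is an assumption, not confirmed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("ManzhenWei/MG2")  # fetches the model files locally
print(f"Model files downloaded to: {local_dir}")
```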

Tip: To generate high-quality music with MG², craft detailed, descriptive prompts that provide rich context and specific musical elements such as instrumentation, tempo, mood, and genre; for example, "A gentle piano melody with soft string accompaniment, slow tempo, evoking a quiet rainy evening."

## Paper

- **Title:** "Melody Is All You Need For Music Generation"
- **Authors:** Shaopeng Wei, Manzhen Wei, Haoyu Wang, Yu Zhao, Gang Kou
- **Year:** 2024
- **arXiv:** https://arxiv.org/abs/2409.20196

## Citation

    @article{wei2024melodyneedmusicgeneration,
      title={Melody Is All You Need For Music Generation},
      author={Shaopeng Wei and Manzhen Wei and Haoyu Wang and Yu Zhao and Gang Kou},
      year={2024},
      eprint={2409.20196},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2409.20196},
    }