# MelodyMaster V1
MelodyMaster V1 is an AI music generation model based on Meta's MusicGen-medium architecture. It generates high-quality music from text descriptions.
## Model Description
- Model Architecture: MusicGen (1.5B parameters)
- Base Model: facebook/musicgen-medium
- Task: Text-to-Music Generation
- Output: 32kHz audio samples
- Max Duration: 30 seconds
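The 30-second cap above follows from MusicGen's token budget: the model emits roughly 50 audio tokens per second, so clip length maps linearly to the `max_new_tokens` argument used in the Usage snippet below. A minimal sketch of the conversion (the helper name is ours, not part of the model API):

```python
TOKENS_PER_SECOND = 50  # MusicGen's audio token frame rate

def tokens_for_duration(seconds: float) -> int:
    """Token budget to pass as max_new_tokens for a clip of the given length."""
    return int(seconds * TOKENS_PER_SECOND)

print(tokens_for_duration(30))  # 1500, the value used in the Usage example
```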
## Usage
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Load model and processor
model = MusicgenForConditionalGeneration.from_pretrained("opentunesai/melodymasterv1")
processor = AutoProcessor.from_pretrained("opentunesai/melodymasterv1")

# Process text and generate music
inputs = processor(
    text=["happy rock song with electric guitar"],
    padding=True,
    return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=1500)
```
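`audio_values` is a batch of raw waveforms with shape `(batch, channels, samples)`. To listen to the result, write it to a WAV file; this is a minimal sketch following the standard Transformers MusicGen export pattern (the output filename is our choice):

```python
import scipy.io.wavfile

# The audio encoder's config holds the output sampling rate (32 kHz)
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("melody_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```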
## Example Prompts
- "An upbeat electronic dance track with a strong beat"
- "A peaceful piano melody with soft strings"
- "A rock song with electric guitar and drums"
- "Jazz trio with piano, bass and drums"
## Demo
Try the model in our Gradio Demo Space
## License
Apache 2.0
## Acknowledgments
Based on Meta's MusicGen model. Original model card: facebook/musicgen-medium
## Citation
```bibtex
@article{copet2023simple,
  title={Simple and Controllable Music Generation},
  author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
  year={2023},
  journal={arXiv preprint arXiv:2306.05284},
}
```