Model Information

LLAMUsic is a fine-tuned version of the Llama 3.2 3B instruction-tuned generative model (text in/text out).

Model Developers: Marco Onorato, Riccardo Preite, Niccolò Monaco

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported.

Llama 3.2 Model Family: All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

Model Release Date: Dec 20, 2024

Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

License: MIT License. Please use this model responsibly.

Feedback: You can contact us at info.llamusic@gmail.com.

Intended Use

Intended Use Cases: Llama 3.2 is intended for personal and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.

Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

How to use

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

import torch
from transformers import pipeline

model_id = "marcoonorato91/LLAMUsic"

# Load the model in bfloat16 and place it automatically across available devices
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are LLAMUsic, an artificial intelligence expert of music."},
    {"role": "user", "content": "Write a guitar tab in the style of Metallica and include lyrics."},
]

outputs = pipe(
    messages,
    max_new_tokens=4000,
)

# The pipeline returns the full conversation; the last message is the assistant's reply
print(outputs[0]["generated_text"][-1])
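If you prefer finer control, the same conversation can be run with the Auto classes and generate(), as mentioned above. The following is a minimal sketch that assumes the checkpoint ships a chat template, which is standard for Llama 3.2 instruction-tuned models:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcoonorato91/LLAMUsic"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are LLAMUsic, an artificial intelligence expert of music."},
    {"role": "user", "content": "Write a guitar tab in the style of Metallica and include lyrics."},
]

# Render the chat template and move the prompt to the model's device
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=4000)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))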

Use with ollama

Please follow the instructions here to install Ollama.

Then you can pull the model from the public llamusic Ollama hub.

Two models are available: the standard version and a Q4_K_M quantized version.
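As a sketch of what local inference then looks like, you can call the pulled model through the official ollama Python client (pip install ollama). The model tag below is a placeholder; substitute the exact name listed on the llamusic Ollama hub:

import ollama

messages = [
    {"role": "system", "content": "You are LLAMUsic, an artificial intelligence expert of music."},
    {"role": "user", "content": "Write a guitar tab in the style of Metallica and include lyrics."},
]

# "llamusic" is a placeholder tag; use the exact model name from the hub,
# whether the standard or the Q4_K_M quantized variant.
response = ollama.chat(model="llamusic", messages=messages)

print(response["message"]["content"])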
