# mlx-community/Llama-3.2-3B-Fluxed

The model mlx-community/Llama-3.2-3B-Fluxed was converted to MLX format from VincentGOURBIN/Llama-3.2-3B-Fluxed using mlx-lm version 0.19.3.
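
For reference, conversions like this are typically performed with mlx-lm's `convert` helper. The snippet below is a minimal sketch, not the exact command used for this card; the output directory name is a hypothetical example.

```python
# Minimal sketch of converting a Hugging Face checkpoint to MLX format
# with mlx-lm's convert helper. The mlx_path value is an arbitrary example.
from mlx_lm import convert

convert(
    "VincentGOURBIN/Llama-3.2-3B-Fluxed",   # source Hugging Face repo
    mlx_path="Llama-3.2-3B-Fluxed-mlx",     # local output directory (hypothetical)
)
```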

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model_id = "mlx-community/Llama-3.2-3B-Fluxed"

model, tokenizer = load(model_id)

user_need = "a toucan coding on a mac"

system_message = """
 You are a prompt creation assistant for FLUX, an AI image generation model. Your mission is to help the user craft a detailed and optimized prompt by following these steps:

        1. **Understanding the User's Needs**:
            - The user provides a basic idea, concept, or description.
            - Analyze their input to determine essential details and nuances.

        2. **Enhancing Details**:
            - Enrich the basic idea with vivid, specific, and descriptive elements.
            - Include factors such as lighting, mood, style, perspective, and specific objects or elements the user wants in the scene.

        3. **Formatting the Prompt**:
            - Structure the enriched description into a clear, precise, and effective prompt.
            - Ensure the prompt is tailored for high-quality output from the FLUX model, considering its strengths (e.g., photorealistic details, fine anatomy, or artistic styles).

        Use this process to compose a detailed and coherent prompt. Ensure the final prompt is clear and complete, and write your response in English.

        Ensure that the final part is a synthesized version of the prompt.
"""

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_need},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    # Fall back to the raw user request if the tokenizer has no chat template.
    prompt = user_need

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1000)
```
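
If you prefer token-by-token output instead of a single blocking call, mlx-lm also exposes a streaming generator. The sketch below assumes the `stream_generate` helper; its yield type has changed across mlx-lm versions, so treat it as illustrative rather than exact.

```python
# Hedged sketch: stream the response as it is generated.
# Some mlx-lm versions yield plain text chunks from stream_generate,
# newer ones yield response objects with a .text attribute; getattr covers both.
from mlx_lm import stream_generate

for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=1000):
    print(getattr(chunk, "text", chunk), end="", flush=True)
```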