
RCNA MINI

RCNA MINI is a compact LoRA (Low-Rank Adaptation) model for fast, 4-step text-to-video generation. It produces clips between 4 and 16 seconds long, making it well suited to short animations with rich detail and smooth transitions.

Key Features:

  • 4-step Text-to-Video: Generates videos from a text prompt in just 4 steps.
  • Video Length: Can generate videos from 4 seconds to 16 seconds long.
  • High Quality: Supports high-resolution and detailed outputs (up to 8K).
  • Fast Sampling: Uses decoupled consistency learning to sample quickly while maintaining output quality.

Example Outputs:

  • Prompt: "Astronaut in a jungle, cold color palette, muted colors, detailed, 8K"
    • Generates a high-quality video with rich details and smooth motion.

How it Works:

RCNA MINI is based on the LoRA architecture, which fine-tunes diffusion models by injecting small trainable low-rank matrices into existing weight layers while the base weights stay frozen. This makes adaptation much faster and far less memory-intensive than retraining the full model.
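To make the idea concrete, here is a minimal, self-contained sketch of a LoRA-adapted linear layer (this is an illustration of the general technique, not RCNA MINI's actual implementation; the class name `LoRALinear` and the rank/alpha values are hypothetical):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights are frozen; only A and B train
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(64, 64), r=4)
x = torch.randn(2, 64)
print(layer(x).shape)  # torch.Size([2, 64])
```

Because only `A` and `B` (a few thousand parameters here, versus the base layer's 64×64 weight matrix) receive gradients, the adapter trains quickly and ships as a small `.safetensors` file, which is exactly what `load_lora_weights` consumes below.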

Applications:

  • Short-form animations for social media content
  • Video generation for creative projects
  • Artistic video generation based on textual descriptions

Model Details:

  • Architecture: LoRA applied to diffusion models
  • Inference Steps: 4-step generation
  • Output Length: 4 to 16 seconds

Using RCNA MINI with Diffusers (AnimateLCM-style pipeline)

import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the RCNA MINI motion adapter for video generation
adapter = MotionAdapter.from_pretrained("Binarybardakshat/RCNA_MINI")
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights("Binarybardakshat/RCNA_MINI", weight_name="RCNA_LORA_MINI_1.safetensors", adapter_name="lcm-lora")
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Generate video using RCNA MINI
output = pipe(
    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=6,  # the model supports as few as 4 steps; 6 trades a little speed for quality
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm.gif")
print("Video generation complete!")
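Under the hood, `export_to_gif` simply writes the list of PIL frames as an animated GIF. If you want more control (frame duration, looping), you can do the same with Pillow directly; in this sketch, synthetic frames stand in for the pipeline's `output.frames[0]`:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for `output.frames[0]`: 16 RGB frames of increasing brightness
frames = [Image.fromarray(np.full((64, 64, 3), i * 15, dtype=np.uint8)) for i in range(16)]

# Save as an animated GIF at ~8 fps (`duration` is milliseconds per frame)
frames[0].save(
    "rcna_mini.gif",
    save_all=True,
    append_images=frames[1:],
    duration=125,
    loop=0,  # 0 = loop forever
)
```

Swap the synthetic `frames` for the pipeline output above to export real generations the same way.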

License:

This model is licensed under the MIT License.
