
Converted to bfloat16 from rain1011/pyramid-flow-sd3. Use the text encoders and tokenizers from that repo (or from SD3); there is no point in re-uploading them unchanged.
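The conversion itself is just a dtype cast that halves parameter memory. A minimal sketch with PyTorch; the `nn.Linear` here is a placeholder standing in for the real transformer, not the actual Pyramid-Flow loading code:

```python
import torch
import torch.nn as nn

# Placeholder module; the real model is loaded from rain1011/pyramid-flow-sd3
# with the inference code linked below.
model = nn.Linear(1024, 1024)

fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
model = model.to(torch.bfloat16)  # cast every parameter to bfloat16
bf16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(fp32_bytes // bf16_bytes)  # bfloat16 uses half the bytes of float32
```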

Inference code is available at https://github.com/jy0205/Pyramid-Flow.

Both 384p and 768p work on 24 GB VRAM. For 16 frames (a 5-second video), 384p takes a little over a minute on a 3090, and 768p takes about 7 minutes. For 31 frames (a 10-second video), 384p takes about 10 minutes.

I highly recommend using `cpu_offloading=True` when generating, unless you have more than 24 GB VRAM.
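CPU offloading keeps only the stage that is currently running in VRAM and parks the rest in system RAM. A generic sketch of that pattern, assuming nothing about Pyramid-Flow's own implementation (which lives in the repo linked above); the `nn.Linear` stages stand in for the text encoder, DiT, and VAE:

```python
import torch
import torch.nn as nn

def forward_offloaded(module: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Move a module to the GPU only for its forward pass, then back to CPU,
    so only one stage's weights occupy VRAM at a time."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    module.to(device)
    with torch.no_grad():
        out = module(x.to(device)).cpu()
    module.to("cpu")  # free VRAM for the next stage
    return out

# Toy stages standing in for the pipeline's submodels; run one at a time.
stages = [nn.Linear(64, 64), nn.Linear(64, 64)]
x = torch.randn(1, 64)
for stage in stages:
    x = forward_offloaded(stage, x)
```

The trade-off is extra host-to-device transfer time per stage, which is why offloading is only recommended when the full pipeline does not fit in VRAM.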

