gguf-quantized t5xxl encoder with mochi (revised test pack)
setup (once)
- drag mochi_fp8_e4m3fn.safetensors (10GB) to > ./ComfyUI/models/diffusion_models
- drag t5xxl_fp16-q4_0.gguf (2.9GB) to > ./ComfyUI/models/text_encoders
- drag mochi_vae_fp8_e4m3fn.safetensors (460MB) to > ./ComfyUI/models/vae
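once the three files are dragged into place, a quick check like the sketch below can confirm the layout matches the steps above (the filenames come from this card; the ComfyUI root path is whatever your install uses):

```python
from pathlib import Path

# expected layout from the setup steps above
EXPECTED = {
    "models/diffusion_models": "mochi_fp8_e4m3fn.safetensors",
    "models/text_encoders": "t5xxl_fp16-q4_0.gguf",
    "models/vae": "mochi_vae_fp8_e4m3fn.safetensors",
}

def missing_files(comfy_root):
    """Return the expected model files not yet present under comfy_root."""
    root = Path(comfy_root)
    return [name for sub, name in EXPECTED.items()
            if not (root / sub / name).exists()]
```

calling `missing_files("./ComfyUI")` returns an empty list when setup is complete.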
run it straight (no installation needed)
- run the .bat file in the main directory (assuming you are using the gguf-node pack below)
- drag the workflow json file (below) to > your browser
workflow
- example workflow (with gguf encoder)
- example workflow (safetensors)
review
- revised workflow bypasses the oom issue and runs around 50% faster with the new fp8_e4m3fn file
- t5xxl gguf works fine as text encoder
- the model gguf file might not work yet; if so, please wait for the code update
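if a gguf file fails to load, a quick header check can rule out a corrupt or truncated download; this sketch reads only the magic and version fields defined by the gguf format:

```python
import struct

def gguf_version(path):
    """Return the gguf format version, or raise if the magic is wrong."""
    with open(path, "rb") as f:
        magic = f.read(4)            # gguf files start with b"GGUF"
        if magic != b"GGUF":
            raise ValueError(f"not a gguf file: {magic!r}")
        version, = struct.unpack("<I", f.read(4))  # little-endian uint32
    return version
```

a bad magic usually means the download was interrupted or the file was renamed from another format.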
reference
- base model from genmo
- comfyui from comfyanonymous
- comfyui-gguf from city96
- gguf-comfy pack
- gguf-node (pypi|repo|pack)
prompt test
prompt: "a fox moving quickly in a beautiful winter scenery nature trees sunset tracking camera"
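as a sketch, the same test prompt can also be queued through comfyui's http api instead of dragging the json into the browser; the node id "6" and the host below are assumptions you should check against your own workflow json:

```python
import json
import urllib.request

# the test prompt from this card
PROMPT = ("a fox moving quickly in a beautiful winter scenery "
          "nature trees sunset tracking camera")

def build_request(workflow, text=PROMPT, node_id="6"):
    """Return a /prompt payload with `text` set on the given text-encode node.

    node_id "6" is a hypothetical placeholder; look up the actual
    CLIPTextEncode node id in your workflow json.
    """
    workflow = dict(workflow)  # shallow copy so the caller's dict is untouched
    node = workflow[node_id]
    workflow[node_id] = {**node, "inputs": {**node["inputs"], "text": text}}
    return {"prompt": workflow}

def queue_prompt(workflow, host="127.0.0.1:8188"):
    """POST the payload to a locally running comfyui instance."""
    data = json.dumps(build_request(workflow)).encode("utf-8")
    req = urllib.request.Request(f"http://{host}/prompt", data=data)
    return urllib.request.urlopen(req).read()
```

`queue_prompt` assumes comfyui is already running on the default port 8188.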