## Quantization
```python
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file, save_file

# Download the original checkpoint from the Hub
# (hf_hub_download replaces the deprecated cached_download).
ckpt_path = hf_hub_download(
    repo_id="cagliostrolab/animagine-xl-3.1",
    filename="animagine-xl-3.1.safetensors",
)

# Cast every tensor to float8_e4m3fn to roughly halve the file size.
state_dict = load_file(ckpt_path)
for key, value in state_dict.items():
    state_dict[key] = value.to(torch.float8_e4m3fn)

save_file(state_dict, "./animagine-xl-3.1.float8_e4m3fn.safetensors")
```
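The resulting file stores the weights in fp8 mainly to cut download and disk size; most inference kernels still expect fp16/bf16. Below is a minimal sketch of loading the quantized checkpoint back, assuming you upcast to fp16 before handing the weights to a pipeline:

```python
import torch
from safetensors.torch import load_file

# Load the fp8 checkpoint and upcast each tensor back to fp16 before
# building a pipeline from it (fp8 is used here as a storage dtype,
# not a compute dtype).
state_dict = load_file("./animagine-xl-3.1.float8_e4m3fn.safetensors")
state_dict = {key: value.to(torch.float16) for key, value in state_dict.items()}
```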
## Model tree for p1atdev/animagine-xl-3.1-fp8

- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Finetuned: Linaqruf/animagine-xl-2.0
- Finetuned: cagliostrolab/animagine-xl-3.0
- Finetuned: cagliostrolab/animagine-xl-3.1