---
license: apache-2.0
base_model: fal/AuraFlow-v0.3
base_model_relation: quantized
---
FP8 quantized version of [AuraFlow v0.3](https://huggingface.co/fal/AuraFlow-v0.3).
## Quantization
```py
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file, save_file

# Download the original checkpoint from the Hub
ckpt_path = hf_hub_download(
    repo_id="fal/AuraFlow-v0.3",
    filename="aura_flow_0.3.safetensors",
)
state_dict = load_file(ckpt_path)

# Cast every tensor to FP8 (E4M3)
for key, value in state_dict.items():
    state_dict[key] = value.to(torch.float8_e4m3fn)

save_file(state_dict, "./aura_flow_0.3.float8_e4m3fn.safetensors")
```