ZeroGPU AoTI
AI & ML interests
AoT compilation, ZeroGPU inference optimization
Recent Activity
Optimized demos for Wan 2.2 14B models, using FP8 quantization, AoT compilation, and community LoRAs for fast, high-quality inference on ZeroGPU 💨
Compare AoTI vs. base version
Creative applications and accelerated demos
Optimized demo for Flux Kontext [dev], using FP8 quantization and AoT compilation