Stable Diffusion XL Refiner 1.0 for ONNX Runtime CUDA provider

Introduction

This repository hosts optimized versions of Stable Diffusion XL Refiner 1.0 that accelerate inference with the ONNX Runtime CUDA execution provider.

The models are generated by Olive with a command like the following:

python stable_diffusion_xl.py --provider cuda --optimize --use_fp16_fixed_vae --model_id stabilityai/stable-diffusion-xl-refiner-1.0

Model Description

The VAE decoder is converted from sdxl-vae-fp16-fix. Its output differs slightly from that of the original VAE, but the decoded images should be close enough for most purposes.

