# Juggernaut XL v7 for ONNX Runtime CUDA provider

## Introduction

This repository hosts optimized versions of Juggernaut XL v7 that accelerate inference with the ONNX Runtime CUDA execution provider on Nvidia GPUs. The models cannot run with other execution providers such as CPU or DirectML.
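
Because the graphs are tuned specifically for the CUDA execution provider, it is worth confirming that the provider is available before loading the models. A minimal sketch, assuming the `onnxruntime-gpu` package is installed; the file path below is illustrative, not taken from this repository:

```python
import onnxruntime as ort

# The optimized graphs require the CUDA execution provider;
# fail fast if only CPU (or another provider) is available.
available = ort.get_available_providers()
assert "CUDAExecutionProvider" in available, f"CUDA EP not found: {available}"

# Illustrative path; point this at one of the ONNX files in the repository.
session = ort.InferenceSession(
    "unet/model.onnx",
    providers=["CUDAExecutionProvider"],
)
```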

The models are generated by Olive with a command like the following:

```
python stable_diffusion_xl.py --provider cuda --optimize --model_id stablediffusionapi/juggernaut-xl-v7
```
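
Once downloaded, the models can be loaded with a Stable Diffusion XL pipeline pinned to the CUDA execution provider. A usage sketch, assuming the `optimum[onnxruntime-gpu]` package; the prompt is only an example:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Load the ONNX models from this repository with the CUDA execution provider.
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "tlwu/juggernaut-xl-v7-onnxruntime",
    provider="CUDAExecutionProvider",
)

# Example prompt; any SDXL-style text prompt works here.
image = pipeline(prompt="an astronaut riding a horse on the moon").images[0]
image.save("output.png")
```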