---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/raw/main/LICENSE.md
---

I rented an H100 and converted a couple of models to TensorRT; I hope hosting them here doesn't break any rules!

Details are in the file names. The static engines are fixed at 1024x1024 with batch size 1, while the dynamic engines support heights and widths from 512 to 2048 and batch sizes up to 4, with an optimal profile of 1024x1024 at batch size 1.

I've got some credit left, so I'll consider taking requests if there's enough demand for specific other configurations; make a post in the Community tab. Unfortunately, I ran into a bug while trying to convert FP8 models, so I'm not sure about the quantization potential here.

And follow me on [TikTok](https://tiktok.com/@allhailthealgo) and [Instagram](https://Instagram.com/allhailthealgo) if this helped you :)
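
For reference, a dynamic-shape profile like the one described above is typically expressed through TensorRT's `trtexec` min/opt/max shape flags. This is only a sketch of that idea, not the exact command used for these engines: the ONNX path and the tensor name `images` are placeholders, and the real FLUX graph operates on latent-space dimensions rather than raw pixel sizes.

```shell
# Hypothetical trtexec invocation for a dynamic-shape build.
# "model.onnx" and the input tensor name "images" are placeholders;
# the actual FLUX inputs and shapes differ (latents, not pixels).
trtexec --onnx=model.onnx \
  --minShapes=images:1x3x512x512 \
  --optShapes=images:1x3x1024x1024 \
  --maxShapes=images:4x3x2048x2048 \
  --saveEngine=model_dynamic.engine
```

The static engines would instead be built with a single fixed shape, which lets TensorRT specialize kernels for exactly that size at the cost of flexibility.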