Running the model locally produces very poor image-editing results, with inconsistent backgrounds.
I have tried the demo at https://huggingface.co/spaces/Qwen/Qwen-Image-Edit, where the result is good and background consistency is preserved well. However, given the same prompt, the results on my local machine are very bad.
I'm having a similar issue. The background keeps being cropped or outpainted even if I change one simple thing in the image.
same here, the subject is zoomed in or out, and sometimes cropped as well
Can anyone share how you got this to run locally?
+1
Use the default Qwen Image Edit ComfyUI template; that seems to work OK. You have to update to the latest ComfyUI, of course, to see the template under "Browse Templates" in the menu.
When you first open the workflow, it will tell you which models are missing, with links to download them and where to put them.
After you download them and put them in the correct locations, just refresh the browser and it's good to go.
https://github.com/comfyanonymous/ComfyUI/issues/9481 discusses this problem. For diffusers users, a simple but effective fix is resizing the input image to one of the following sizes: 1024x1024, 1184x880, 1392x752, or 1568x672.
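As a minimal sketch of that resize step: the helper below picks whichever of those reported sizes best matches the input image's aspect ratio (including the swapped pairs for portrait inputs). The `pick_size` name and the nearest-aspect-ratio heuristic are my own; the linked issue only lists the resolutions themselves.

```python
# Resolutions reported to work well for Qwen-Image-Edit in the linked issue.
SUPPORTED_SIZES = [(1024, 1024), (1184, 880), (1392, 752), (1568, 672)]

def pick_size(width: int, height: int) -> tuple[int, int]:
    """Return the supported (width, height) whose aspect ratio is closest
    to the input's; portrait inputs match against the swapped pairs."""
    ratio = width / height
    candidates = SUPPORTED_SIZES + [(h, w) for (w, h) in SUPPORTED_SIZES]
    return min(candidates, key=lambda wh: abs(wh[0] / wh[1] - ratio))

# Usage with Pillow (assumed installed), before passing the image to the pipeline:
#   image = image.resize(pick_size(*image.size), Image.LANCZOS)
```

Snapping to the nearest supported aspect ratio, rather than forcing everything to 1024x1024, should minimize the stretching applied to the input.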
FYI, I have no quality issues on my end when running the DF11 implementation locally: https://huggingface.co/DFloat11/Qwen-Image-Edit-DF11
That said, I have "only" 24 GB of VRAM, and inference is really slow since I need to offload a few blocks to the CPU (16 blocks is the minimum I need to offload). There may be tweaks to reduce the idle time caused by the many large memory allocations and transfers that happen before the actual compute, but for now, if you have less than 32 GB of VRAM, it may take a while to generate (10 to 20 minutes per image).