Feature: SD-XL inpainting
Disclaimer: This is a very tall order and I currently have no idea how it would be implemented.
Adding the ability to inpaint areas of a Comic Factory-generated page would be ideal for modifying, refining, and fixing generated images.
Demo of SD-XL inpainting:
https://huggingface.co/spaces/diffusers/stable-diffusion-xl-inpainting
Workaround
The output from Comic Factory can currently be opened in an SD-XL inpainting space, where that functionality is already available.
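For anyone who wants to try the workaround locally rather than in a Space, a minimal sketch using the diffusers library is below. The model ID matches the demo space linked above; the file names and prompt are placeholders.

```python
import torch
from diffusers import AutoPipelineForInpainting
from PIL import Image

# Load the SD-XL inpainting checkpoint used by the demo space.
pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("comic_panel.png").convert("RGB")  # a Comic Factory panel (placeholder path)
mask_image = Image.open("mask.png").convert("RGB")         # white = area to repaint (placeholder path)

result = pipe(
    prompt="a speech bubble, comic book style",  # placeholder prompt
    image=init_image,
    mask_image=mask_image,
    strength=0.85,  # how strongly the masked area is repainted
).images[0]
result.save("inpainted_panel.png")
```

This requires a CUDA GPU and downloads the model weights on first run, so it is a sketch of the workflow rather than something Comic Factory itself would ship.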
Additional thoughts
It would perhaps be ideal to go beyond standard inpainting and support both a source and a target paint area, allowing weighted, intentional additions to designated areas of an image.
A painted (or drawn) area on an influencing reference image would convey the user's graphic intention.
A painted (or drawn) area on the original source image would mark the target region to which that influence is constrained.
A new output image would then be generated based on these criteria plus any text prompt (if allowed), projecting the change into the targeted location(s).
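To make the target-area idea concrete, here is a minimal sketch of just the compositing step such a feature implies: changes are confined to the painted target mask while the rest of the page is untouched. The function name is hypothetical, and the "generated" image stands in for whatever the diffusion model would produce.

```python
import numpy as np

def composite_into_target(original, generated, target_mask):
    """Constrain a generated change to the painted target area:
    pixels outside the mask keep the original image, pixels inside
    take the generated content (mask values in [0, 1] blend)."""
    m = np.clip(target_mask, 0.0, 1.0)[..., None]  # broadcast over RGB channels
    out = original * (1.0 - m) + generated * m
    return out.astype(original.dtype)

# Tiny worked example: a 2x2 black RGB image where only the
# top-left pixel is inside the painted target area.
original = np.zeros((2, 2, 3), dtype=np.uint8)
generated = np.full((2, 2, 3), 255, dtype=np.uint8)
target_mask = np.array([[1.0, 0.0],
                        [0.0, 0.0]])

result = composite_into_target(original, generated, target_mask)
# Only the masked pixel takes the generated content; the rest stays original.
```

A real implementation would also need the source paint area on the reference image to condition generation, which is beyond plain inpainting and would require model-side support.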
It may be useful to refer to IwaWarper, a C++ program developed by Shun Iwasawa (Note: "Iwa" = Iwasawa).
Shun has worked for Studio Ghibli and other Japanese studios, and has a solid understanding of graphics, programming, and machine learning.
Link: https://opentoonz.github.io/download/iwawarper.html
The difference between the suggested approach and how source and target shapes work in IwaWarper is that IwaWarper's shapes are drawn vector shapes, not painted areas. Once drawn, two shapes are created (a source shape and a target shape), each of which can be modified and keyframed separately.
In this way both the source and target can be changed.
Note: Shun is the main developer of the open source program Opentoonz.
(Disclaimer: I am a facilitator involved in the development of Opentoonz)