[Textual Inversion] Trouble converting and curiosities
Hi @GuiyeC and friends !
I've been using a boosted TI training colab (stable-textual-inversion-cafe Colab - Lazy Edition) but I can't seem to get it to convert for Guernika.
Is there anything that can be fixed in the converter, or something I can do myself?
Thank you
- Have you been able to get some compatibility with Shortcuts or AppleScript?
I wish we could make our own super upscalers that way, and what about the latest ControlNets (especially Tile)?
Looking forward to seeing what's coming up in the next release, and the next OS stuff.
Hey @ZProphete! Sorry for the late response, I have been working on a new update that will, hopefully, improve the creation UI in Guernika and add support for Stable Diffusion XL :D and multiple ControlNets. Here are a couple of images in case you want a sneak peek:
I would love your feedback on this and any other suggestions you have for this screen; I want to focus on this one at the moment and improve the collection manager in a different update.
As for this problem, I haven't tested it, but if it is working in the latest diffusers, an upcoming update to the Converter could fix it.
Finally, I do have Shortcuts support on my list. I want to add AT LEAST some action that takes a model, prompt and inputs, runs the generation and returns the output; if it's not in this release I will try to do it next.
@ZProphete This actually seems to work! Let me know what you would actually need in this flow. At the moment this just generates an image and saves it as normal in Guernika; maybe it makes sense to return the image as the shortcut output and not save it at all. Do you think being able to select the TextEncoder in these shortcuts would be important?
Please give me any feedback and I'll try to get this in the app in the next update too.
@GuiyeC
Yesssssssss !
This is so exciting! It looks sleek and friendly, nothing to add!
Here I see multicontrolnet, that's amazing.
Hope it can fit in the M1 ANE, probably in the next OS updates with the quantized models?
(I'm a huge ANE fan, it's so crazy efficient; somehow I was complaining in other posts, but I got most ControlNet+ANE working, my Mac was really busy, thanks for that).
I'm really enjoying the current update and I'm glad you didn't "burn out".
Yes, I wish it could output as the shortcut image; my idea was to emulate the famous webui upscalers:
I would slice an image outside of Guernika into 512x512 tiles, process each one as Shortcut input, and stitch them back together (how ControlNet Tile works here, I honestly don't know).
So we could handle the fades, overlaps and collages outside of Guernika so you won't have to. But I won't be bothered if you want to keep your precious app in the spotlight. The text encoder in Shortcuts is important to me ^^
(edited: shower thoughts removed, hope you read and considered them in private)
Cheers.
Here is how I see it :
{} = Shortcuts, we do
[] = Guernika, you do
An image, and a group of images / folder in Guernika, should have a "Share" extension
- It's a Guernika folder / image selection :
1.a) Select Folder > Share : {Global prompts & settings, new folder} > {Each image} > [[Process into Guernika using settings, send to specified new folder]]
1.b) Select Folder > Share : {Each image} > {Stitch the slices}
- It's a single Guernika image :
2.a) Select image > Share : {Upscale} > {Slice into multiple 512s} > [[Create a new folder in Guernika and import those slices]]
- It's a selection from an image editor :
Square selection > Copy to clipboard > {Fetch clipboard image, sometimes it's base64} > {Prompts and settings} > [[Process into Guernika]] > [[Send to clipboard or save]]
You'd have to make imports possible then; this way seems fair to me, Guernika stays upfront.
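The "slice into multiple 512s" step of workflow 2.a could be sketched like this (a hypothetical pure-Python sketch, not anything Guernika ships; the names `axis_origins`/`tile_origins`, the 512px tile size and the 64px overlap between neighbours are all my assumptions):

```python
def axis_origins(length, tile, step):
    """Tile origins along one axis; the last tile is clamped to the image edge."""
    if length <= tile:
        return [0]
    origins = list(range(0, length - tile, step))
    origins.append(length - tile)  # clamp so the final tile ends exactly at the edge
    return origins

def tile_origins(width, height, tile=512, overlap=64):
    """(x, y) top-left corners for overlapping tile x tile crops covering the image."""
    step = tile - overlap
    return [(x, y)
            for y in axis_origins(height, tile, step)
            for x in axis_origins(width, tile, step)]

# A 1024x1024 upscale splits into a 3x3 grid of overlapping 512px tiles:
print(len(tile_origins(1024, 1024)))  # 9
```

Each origin would then be cropped out, sent through the Shortcut, and the results stitched back in the same order.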
@ZProphete this is what I have so far, basically:
- Text to image
- Image to image
- Create collection
- Save in collection
- Set as base image (this sets the input as the base image for image variation in Guernika)
How does this look? This could of course be improved in the future, but maybe good enough for a first release? Not sure if this would be enough for the flow you described, let me know if you think we would need more actions.
So far I have not added support for ControlNet; I tried, but preprocessing for example might be tricky, and having so many inputs in an action can be too much. Maybe a new action with ControlNet? I don't know. I have also not added an inpainting action; this could actually be one of the most useful, but it would require having the mask in the right format. Maybe it's worth improving this in the app itself first.
I ended up adding an inpainting action, it will help if people want to try outpainting.
Hello
@GuiyeC
Amazing !
I don't see anything wrong there :)
Yes, I understand the challenges. Shortcuts probably has more limits but is more user-friendly than my initial AppleScript approach;
glad you still took a look at it, I apologize if I assumed Shortcuts works a certain way.
- Maybe add preprocessors in a separate action? Shortcuts will decide whether to crash itself or not, so the user can decide whether to chain [Preprocessing + ControlNet] or batch the preprocessing beforehand. So yes, separate preprocessor and ControlNet actions are wanted :)
We'd have to match image input + ControlNet input afterwards in a batch, which is tricky. Also, where to save? Inside a collection? We'll figure it out.
Sounds good to me, because I'd split my workflow into a series of little scripts 1, 2, 3, 4; with the idea of [Upscale > Tiling > Img2img > Collage] I'd put in a pause to fix little things manually before step 3.
The simpler the better, I guess.
- What do you mean by inpainting action, like setting a mask from external sources?
It's important, but the "prepared masks" certainly don't feel fun to make. Can you pop up the inpaint-brush menu on demand and export the mask separately? Same splitting energy, same problem matching them up afterwards.
Also, with exported masks we could approach the "inpaint masked-area-only" option if we blend it ourselves, but I guess people want it built in ;)
Outpainting, how? I can't wrap my head around that today.
You should release it as experimental and we'd gladly provide feedback :)
Off Subject :
- Will Textual Inversion come to iPad? Not a big user of this device, but I like to show off ;)
- Can you take a look at Reference_only and ControlNet Tile?
- If ControlNet Inpaint replaces inpainting models, we'd need a mask on the ControlNet area or something?
- How is the SDXL implementation going? 1024x1024 feels hard to generate.
- Did you get your hands on the next OS beta? Will the quantized models leave more space for the ANE to work, or maybe allow images larger than 512x512?
Wish you the best !
@ZProphete I hope this works for you. I agree that AppleScript might have been more advanced, but Shortcuts seemed much simpler to implement. I may look at AppleScript in the future, but I think Apple is going to keep pushing Shortcuts, so AppleScript may even disappear entirely at some point.
I did not add ControlNet for now, I may do that with preprocessing but it will have to be in a later update.
Yes, the action will be similar to Image to image, but you also add an alpha-based mask. I don't think I can show the brush on command for now. This could be useful for outpainting, for example: you have a pre-prepared mask that is a black square with a transparent square in the middle, then as the base image you give it the image you want to outpaint, sized to the square in the middle, and the inpaint action will do its magic.
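That pre-prepared outpainting mask could be built like this (a minimal sketch, assuming the mask is a plain alpha channel where the opaque frame marks the area to generate and the transparent centre is kept; `outpaint_mask` is a hypothetical name, not a Guernika API):

```python
def outpaint_mask(canvas_w, canvas_h, inner_w, inner_h):
    """Alpha values (0-255), row-major: an opaque frame around a
    transparent, centered window the size of the base image."""
    left = (canvas_w - inner_w) // 2
    top = (canvas_h - inner_h) // 2
    return [[0 if left <= x < left + inner_w and top <= y < top + inner_h
             else 255
             for x in range(canvas_w)]
            for y in range(canvas_h)]

# e.g. outpaint a 512x512 image onto a 768x768 canvas:
mask = outpaint_mask(768, 768, 512, 512)
```

The base image would then be placed in the transparent window, and the inpaint action fills in the opaque border.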
I will bring it to the iPad yes, I only need to update the UI and then everything should just work.
I'll take a look at those, is conversion failing? I also don't really understand what ControlNet tile does, could you explain it?
This new update uses the new way of doing inpainting; basically, inpainting now works with every model :D It is still recommended to use an inpainting model, as it was trained to do so, but SDXL for example will work out of the box, no ControlNet or anything needed. I think this will be the way to go; no idea how ControlNet inpainting is working at the moment.
It is done, as far as I can tell it's working, the model is huge compared to old models but I think that will get better, I might have to manage those models differently.
I am on the beta at the moment, yes. The way I understand it, quantized models are just compressed models: they will take less space on disk, but when loading them they will take the same memory, so no memory improvements there. It's still nice, as I have seen very little precision loss compressing to 8-bit, which takes a fourth of the space.
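To illustrate what that 8-bit compression amounts to, here is a toy sketch of simple linear quantization (my assumption; actual Core ML quantization is more involved, and the names `quantize_8bit`/`dequantize` are made up):

```python
def quantize_8bit(weights):
    """Map float weights in [min, max] linearly onto the integers 0..255."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant weight list
    q = [round((w - lo) / scale) for w in weights]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate floats; the error per weight is at most scale/2."""
    return [lo + v * scale for v in q]
```

Each weight becomes one byte instead of four, which is where the "fourth of the space" comes from, and the worst-case rounding error is half a quantization step.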
Thanks for your response :))
ControlNet Tile is weird; it is supposed to help generate new details while maintaining the general composition, like lots of new detail at high denoising.
It somehow helps fix blurry images, very gently, aka img2img with guard rails.
Here in this doc, the input image was corrupted by an upscaler and reconstructed at high denoising.
When we need to make a collage, it should help with the seams between the tiles and generate a coherent huge image.
Instead of progressively upscaling 512 > 768 > 1024 > 2k at 0.2 denoise to add details (the best technique like 2 months ago, lol), it could make a bigger leap and rely less on luck (noise) and hardware.
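The seam handling between tiles could be sketched like this (a 1-D toy version under my own assumptions: a linear cross-fade over the shared overlap; a real implementation would feather 2-D pixel tiles on both axes):

```python
def feather(a, b):
    """Linearly cross-fade two equal-length overlap strips of samples."""
    n = len(a)
    if n == 1:
        return [(a[0] + b[0]) / 2]
    return [((n - 1 - i) * a[i] + i * b[i]) / (n - 1) for i in range(n)]

def stitch(left, right, overlap):
    """Join two 1-D tiles whose last/first `overlap` samples cover the
    same image region, blending the seam instead of hard-cutting it."""
    return left[:-overlap] + feather(left[-overlap:], right[:overlap]) + right[overlap:]
```

With a hard cut the seam would be a visible step; the cross-fade ramps smoothly from one tile's values to the other's across the overlap.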
Conversion worked, but it adds weird patterns all over the image; I only tried briefly. Excited about the new inpainting :) Forget about ControlNet inpaint then, because it was supposed to do exactly that: inpainting on any model. Not sure if it's better or not.
Interesting, I wasn't interested in SDXL; I was concerned about the load on my machine and how it would affect multitasking, but I bet you'll do wonders.
I read somewhere the SDXL refiner could work on 1.5 models; maybe that's a fun feature to add, it could act as a face restorer, idk ^^ I see, that's great: we will have [square/portrait/landscape] for the storage of a single old model. You should consider tidying the models under a single dropdown and making separate size/ratio buttons someday.
I'm happy about the upcoming updates, Thank you :)
ControlNet Tile is even more weird because, from what I understood,
if you put a tile in the img2img input and the original image in the ControlNet input -- without changing the prompt -- it is supposed to recognize things and not generate a doggy on every tile.
Good luck with that :D
Edit: WELL, I tried it longer, it actually works (?!) I don't know what I was doing wrong there hehe; super tile upscaler, here you are.
@ZProphete I was about to say, I've been testing this against the Python implementation and this is what I got with the exact same inputs:
I may have changed something since the latest update, but in any case, at least in the future update, it will be pretty close to what you would get in Python.
This is huuuuuuge :)
Thank you!
@GuiyeC
I'm trying various stuff in Shortcuts right now (things without any 3rd-party app, in case you release it on iPad also); I'll keep you in touch.
I like that you took more screen space; it feels more robust and mature, and navigating large collections looks faster too.
I noticed the converter broke a little: it looks for SDXL stuff in priority, so I can't convert older models and check whether my TI bug is fixed. No biggie.
Can you add SDXL to the model list? I feel a little lazy :)
Very minor: CMD+A (select all) doesn't match the French layout :P
Thank you, Thank you !
I'm back, I love it.
I think I got a good grasp with this test:
Manually upscaled 512x512 to 1024x1024, then:
https://www.icloud.com/shortcuts/38eca437f6f84f16ae24054da31f46cf
Selected resulting tiles then :
https://www.icloud.com/shortcuts/dfea860f62d147b980cb9932130c0074
I failed/postponed:
- naming the files correctly through a nested loop, lol (when I chain "execute another shortcut", the order gets messed up; it works in 2 separate steps)
- other ratios, it stretches for now
--
- The "save to collection" action accepts only Shortcuts' own entry; the generated image can't be added, or you must "execute another shortcut". Not a real problem.
- Shortcuts is weird, but it does the job.
- Love that it automatically chooses the ANE first on 512x512s and the GPU on others.
- Hoping for ControlNet (at least a single one) to complete my shortcut :)
Thanks, amazing job !
@ZProphete thanks for the response :) I'm glad you like it!
Shortcuts should be out on iPad in the next update; I have to work on the iPad and iOS UIs to add some of the new features first. I'm not sure how reliable they will be on lower-memory devices though.
I thought the converter could have problems; I tried to keep it compatible with SDXL and older models and it seemed to be working. Do you have a specific one that's failing so I can test with it?
I don't want to add the SDXL model for now; I got early access and I don't think I'm allowed to share the model yet. In any case, the official release date was supposed to be today, so maybe I can post SDXL 1.0 shortly.
I'll take a look at the things you mention, and feel free to share any other feedback with me :)
@GuiyeC
The error occurs on every model I tried; here is one from a simple DreamBooth:
I can't try my original TI conversion problem yet :)
There is a little problem with the Text Encoder: it's in the [Text Encoder] tab, but only "default" is displayed in the dropdown.
It IS visible in the Shortcuts "Search Text Encoder" action.
Happy hunting :)