Using Colab trainings locally
Is there a way to use concepts trained on Colab locally? I cloned the concept from the library, but it is missing the config files, etc.
Yes!
Take this example from the concept library https://huggingface.co/sd-concepts-library/kheiron
Download learned_embeds.bin from "Files and versions" and rename it to match the placeholder (in this case, kheiron.bin).
In the root of your stable diffusion installation, create a folder named embeddings (if it doesn't already exist) and put the file there.
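The steps above can be sketched in Python, assuming the huggingface_hub package; install_embedding is a hypothetical helper name, not part of any repo:

```python
import shutil
from pathlib import Path

def install_embedding(src_path, sd_root, concept_name):
    """Copy a downloaded learned_embeds.bin into <sd_root>/embeddings/,
    renamed to the concept's placeholder name (e.g. kheiron.bin)."""
    emb_dir = Path(sd_root) / "embeddings"
    emb_dir.mkdir(exist_ok=True)  # create the folder if it doesn't exist
    dest = emb_dir / f"{concept_name}.bin"
    shutil.copy(src_path, dest)
    return dest

# Fetching the file first, e.g. with huggingface_hub (assumed installed):
#   from huggingface_hub import hf_hub_download
#   src = hf_hub_download("sd-concepts-library/kheiron", "learned_embeds.bin")
#   install_embedding(src, "/path/to/stable-diffusion", "kheiron")
```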
Works like a charm using the AUTOMATIC1111 repo. No need to close and reopen the bat file; launching a new generation automatically loads new embeddings.
Thanks for the response Sprunk!
Do I have to use the AUTOMATIC1111 repo or does it work with mainline too?
Interesting, can I use multiple embeddings at a time?
I don't know whether it works on the mainline too, but I can confirm that you can use multiple embeddings at a time with the AUTOMATIC1111 repo.
Thanks! One last question, do you know if it is possible to use AUTOMATIC1111 from the command line? I'm not great with web-uis and I have my own workflow that sits on top of sd.
Oh, I see an issue on GitHub saying that AUTOMATIC1111 doesn't work on Linux. It would be great to use multiple concepts, but if it doesn't work on Linux, then I have to find something else. My main goal is still to download a learned_embeds.bin file and use it for inference as described here: https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion
I think the webui was created for not having to use the commandline.
It works great on Linux. Take the Colab ipynb notebook as a starting point. Remember to download the sd-concept repos with "git lfs clone", not just "git clone"; then it will work without problems. Load the webui with --embeddings-dir pointing to the folder where your .bin files lie.
Thank you bartman! That makes sense. I thought I could do it with the code in the README at diffusers/examples/textual_inversion, but I should just try to get the Colab notebook working locally.
It's unfortunate, because using multiple embeds would be great, but that functionality is sort of wedded to the UI part, I guess.
@skomra
You're welcome. You have more than one option:
You can clone the AUTOMATIC1111 repo and run
"python launch.py --embeddings-dir /whatever-folder/"
/whatever-folder/ being the folder where your .bin files from your text embeddings reside.
This option does not use the HF Diffusers pipeline.
This will probably change in time, as the Diffusers devs have offered to embed the Diffusers pipeline in the WebUI repo.
The launcher will install the dependencies.
But I think you have to download the model.
Here is an easy way to download the v1.4 model:
"wget https://archive.org/download/sd-v1-4/sd-v1-4.ckpt"You can make a big Python File from the Text-Inversion Inference Colab and run the Python File.
The Notebook uses the HF Diffusers pipeline.
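For loading a learned_embeds.bin into the Diffusers pipeline directly, here is a minimal sketch following the loading recipe in the textual_inversion example; load_learned_embed is a hypothetical helper name, and the pipeline's tokenizer and text_encoder are assumed to come from a StableDiffusionPipeline:

```python
import torch

def load_learned_embed(learned_embeds_path, text_encoder, tokenizer):
    """Register a textual-inversion embedding with the pipeline's
    tokenizer and text encoder, and return the placeholder token."""
    # The .bin file is a dict mapping the placeholder token
    # (e.g. "<kheiron>") to its trained embedding tensor.
    loaded = torch.load(learned_embeds_path, map_location="cpu")
    token, embed = next(iter(loaded.items()))
    embed = embed.to(text_encoder.get_input_embeddings().weight.dtype)

    # Add the placeholder token and grow the embedding matrix to fit it.
    added = tokenizer.add_tokens(token)
    if added == 0:
        raise ValueError(f"Token {token} already exists in the tokenizer")
    text_encoder.resize_token_embeddings(len(tokenizer))

    # Copy the trained vector into the new token's slot.
    token_id = tokenizer.convert_tokens_to_ids(token)
    text_encoder.get_input_embeddings().weight.data[token_id] = embed
    return token
```

After this, using the returned token (e.g. "<kheiron>") in a prompt makes the pipeline apply the learned concept.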