No CPU?
SDXL runs just fine on a free-tier CPU Space here. See my SDXL-1.0-CPU Space (https://huggingface.co/spaces/Manjushri/SDXL-1.0-CPU). Why not add the code to make that work, for people who wish to duplicate this Space to a CPU Space?
@Manjushri
It's really slow on the CPU, so there's basically no point in implementing it. People can just use the free Google Colab to run this Space if they don't want to pay.
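For reference, here is a rough sketch of what that could look like in a Colab notebook; the model ID is the public SDXL 1.0 base checkpoint, and the prompt and settings are just illustrative:

```python
# Rough sketch: SDXL base on a free Colab GPU (e.g. a T4), in half precision
# so the model fits in the available VRAM.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant='fp16')
pipe.to('cuda')

image = pipe(prompt='An astronaut riding a green horse').images[0]
image.save('out.png')
```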
Whether it's worth it is a matter of opinion. If it isn't going to be made to run on a CPU, remove the if/else; it's just sloppy. Four more lines of code and it works... Not doing it sounds lazy to me. Saying it isn't worth it is an incredibly one-sided opinion. My SDXL-CPU Space has just as many likes as your A10G Space (three more at this time, actually), so clearly others value it.
Sorry if I came across differently than I intended and hurt your feelings.
I'm not saying that your Space is worthless. I was just trying to explain why I'm not going to implement loading the models on the CPU for my Space.
If it isn't going to be made to run on a CPU, remove the if/else; it's just sloppy. Four more lines of code and it works...
Are you talking about this part? Actually, this is useful to quickly check the UI without loading the model.
Yes. You have set your pipelines to None, which is pointless and leads to build errors, so the notice about not being able to run on CPU is never actually seen. Simply not including that if/else would be the simpler code that doesn't lead to errors. Again, look at my code to see how this is done correctly. Also, you mean checking the environment, not the UI...
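To illustrate the problem (a hypothetical reduction of the pattern, not your actual code):

```python
import torch

def load_pipeline():
    """Hypothetical stand-in for the real DiffusionPipeline setup."""

if torch.cuda.is_available():
    pipe = load_pipeline()
else:
    pipe = None

# On CPU-only hardware, any generation attempt now crashes before the
# "does not work on CPU" notice ever matters:
pipe(prompt='...')  # TypeError: 'NoneType' object is not callable
```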
Trust me, my feelings were not hurt. I merely pointed out the obvious flaws in your logic. People honestly don't appear to mind waiting 25-50 minutes per image. It would be hard to hurt my feelings on this particular subject. My Space was the first one to use SDXL correctly (base + refiner), and the first one to use SDXL 1.0, period. Mine runs faster on a T4 than yours does on an A10G, and I've squeezed the SD 2.0 x2 upscaler into my community-granted T4. I have 110 more likes than yours at last count... plus better control. You arbitrarily list second prompts without realizing those are for embedding... Want me to go on?
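For context on that last point: SDXL has two text encoders, and diffusers exposes a separate prompt_2 argument that is embedded by the second one. A minimal sketch (the checkpoint is the standard SDXL 1.0 base; the prompts are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant='fp16').to('cuda')

# The two prompts are routed to SDXL's two text encoders, so the second
# prompt is a separate embedding input, not just an "extra" prompt.
image = pipe(
    prompt='a photo of a cat',        # first text encoder (CLIP ViT-L)
    prompt_2='soft studio lighting',  # second text encoder (OpenCLIP ViT-bigG)
).images[0]
```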
I've explained why I'd like to keep this Space the way it is, so let me close this discussion.
Feel free to duplicate this Space and apply the following patch if you'd like to make it available on CPU as well:
```diff
diff --git a/app.py b/app.py
index 74c48a9..4de53c8 100755
--- a/app.py
+++ b/app.py
@@ -12,8 +12,6 @@ import torch
 from diffusers import DiffusionPipeline
 
 DESCRIPTION = '# SD-XL'
-if not torch.cuda.is_available():
-    DESCRIPTION += '\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>'
 
 MAX_SEED = np.iinfo(np.int32).max
 CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv(
@@ -47,8 +45,13 @@ if torch.cuda.is_available():
                                  mode='reduce-overhead',
                                  fullgraph=True)
 else:
-    pipe = None
-    refiner = None
+    pipe = DiffusionPipeline.from_pretrained(
+        'stabilityai/stable-diffusion-xl-base-1.0',
+        use_safetensors=True)
+    refiner = DiffusionPipeline.from_pretrained(
+        'stabilityai/stable-diffusion-xl-refiner-1.0',
+        use_safetensors=True)
+
 def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
```
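And for anyone who does apply it, a rough sketch of what base-plus-refiner inference could look like with the patched pipelines (the prompt and step count are illustrative, and generation on CPU will still be very slow):

```python
# Usage sketch for the CPU-loaded `pipe` and `refiner` from the patch above.
prompt = 'a lighthouse at dawn'

latents = pipe(
    prompt=prompt,
    num_inference_steps=25,  # fewer steps trims the already long CPU runtime
    output_type='latent',    # hand latents to the refiner instead of decoding
).images

image = refiner(prompt=prompt, image=latents).images[0]
image.save('out.png')
```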
No thanks, I already run a better one. But you see my point about how simple it was: one version leads to errors and code bloat (only three lines, but they still cause errors), and the other is functional.