Space not loading if not logged in to Hugging Face
Funny, I didn't set not-for-all-audiences or anything...😇
I'll check things out.
There is no such indication on the related Space, and nothing in the Python source code should be able to touch HF access restrictions in the first place, so I brought the most suspicious file, README.md, as close as possible to the original source.
https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer/blob/main/README.md
https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer
If this fixes it, that in itself won't make much sense, but I wonder what happened?
The page is loading, but on generate it says error without login. I was doing that because my GPU limit got exceeded after I generated 3 images; what I do is log out, put a VPN on, and then generate. It used to work a few days ago, but not now.
Ah, that's how it is!
I know about that old spec too (change your IP and the quota gets reset).
HF's Zero GPU infrastructure is currently under heavy construction, so its behavior keeps changing day by day.
In the process, new bugs must have crept in, or they tried to give preferential treatment to logged-in HF users, failed, and created bugs instead.
Either way, there is no doubt that it is an HF-wide spec change, and there is nothing I can do about it.
It would be easy for me to go and report the bug, but I have a feeling that poking at it might just stir up something worse.🤢
What should I do?
Oh, understood, let's just wait it out then; wouldn't want them to make it worse than it is. Thanks for the information <3
it's okay.
can it run locally?
I don't know, because I've never tried running Spaces locally; my PC's specs are barely enough even for SDXL.
If you have a PC with the performance to run it locally, wouldn't ComfyUI or WebUI Forge be faster with less memory (and support for fp8, NF4 and GGUF)?
However, HF_TOKEN and CIVITAI_API_KEY in this Space are only used to access the tagger's LLM and to download LoRAs, respectively; the rest of the processing should run locally just fine.
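If you do run it locally, a rough sketch of providing those two secrets before launching app.py (the variable names are the ones from this Space; the values are placeholders) might look like this:

```python
import os
from huggingface_hub import login

# Placeholder values; only these two secrets are needed outside of HF.
os.environ["CIVITAI_API_KEY"] = "your-civitai-api-key"   # used only for LoRA downloads from CivitAI
os.environ["HF_TOKEN"] = "hf_your_token_here"            # used only for the tagger's LLM / gated models

login(token=os.environ["HF_TOKEN"])  # authenticate huggingface_hub before running app.py
```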
P.S.
Diffusers will soon support NF4 loading.
https://huggingface.co/sayakpaul/flux.1-dev-nf4-with-bnb-integration
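For reference, a rough sketch of what NF4 loading looks like with the bitsandbytes integration (class and argument names follow the Diffusers quantization docs and could still change before release):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# NF4 quantization config for the Flux transformer (the heaviest component).
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # gated repo: needs an HF token with access
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep VRAM usage manageable on consumer GPUs

image = pipe("a cozy cabin in the woods", num_inference_steps=28).images[0]
image.save("flux_nf4.png")
```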
Incidentally, Flux models are usually mirrored here.
It might be useful if you use WebUI or something.
If you don't see a model here, it must be somewhere else on HF.
https://huggingface.co/datasets/John6666/flux1-backup-202408
https://huggingface.co/datasets/John6666/flux1-backup-202409
If you want to use an HF LoRA in the WebUI, you may need to convert it, but this should do it.
It says SDXL, but it doesn't actually check whether the contents are SDXL, so it should work. (Maybe!)
https://huggingface.co/spaces/John6666/convert_repo_to_safetensors_sdxl_lora
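If you'd rather do it by hand, the core step is just pulling the weights file out of the repo; a rough sketch (the repo id and filename below are placeholders, and some LoRAs may still need their keys remapped for the WebUI):

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename; check the LoRA repo for the actual weights file name.
path = hf_hub_download(
    repo_id="some-user/some-flux-lora",
    filename="pytorch_lora_weights.safetensors",
)
print("Downloaded to:", path)  # copy this file into the WebUI's Lora folder
```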
ok, thx for the answer
LoRA not working, showing error
Thank you for the report.
Actually, I changed the source code yesterday, but it didn't work because of a bug in HF, so I put it back.
In other words, the code for the generation part hasn't changed at all since a few days ago.
And this bug is the same as the HF-derived bug that was in other spaces.
The conditions under which the bug occurs have changed!😭
Seriously...
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 288, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1931, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1516, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 201, in run_lora
    image = generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, cn_on, progress)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 211, in gradio_handler
    raise gr.Error("GPU task aborted")
gradio.exceptions.Error: 'GPU task aborted'
```
I restarted the space and the symptoms went away, but there's still some bug lurking in there...
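For what it's worth, since "GPU task aborted" is raised by the ZeroGPU wrapper rather than by the generation code itself, a purely hypothetical band-aid (not this Space's actual code; generate_image is just the name taken from the traceback) would be to retry the call once:

```python
import gradio as gr

def run_with_retry(generate_image, *args, retries=1):
    """Call the ZeroGPU-wrapped generate_image, retrying if the scheduler aborts it."""
    for attempt in range(retries + 1):
        try:
            return generate_image(*args)
        except gr.Error as e:
            if "GPU task aborted" in str(e) and attempt < retries:
                continue  # the wrapper killed the task, not the model code; try once more
            raise
```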
Could not parse server response: SyntaxError: Unexpected token '<', "
That bug is one that I see from time to time.
To be precise, I think it is one form of this series of bugs in HF.
I've rebooted for now.
still gives an error...
The big HF bug that has been a problem for a while is now pretty much fixed, but the fix only applies to Spaces that have been restarted or updated since then, so it hadn't been reflected here yet.
So I rebooted.
I can't do anything about the quota restrictions... I've already relaxed every restriction I can on my end.
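The one knob I do have is how much GPU time each call requests from ZeroGPU via the spaces decorator; a minimal sketch (the function body is illustrative, not this Space's actual code):

```python
import spaces

@spaces.GPU(duration=59)  # request just under a minute of GPU time per call
def generate_image(prompt: str, steps: int, seed: int):
    # The real pipeline call would go here. As I understand it, a visitor needs
    # at least the requested duration left in their ZeroGPU quota for the task
    # to start, so requesting less is friendlier to anonymous users.
    return None
```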