runtime error
Exit code: 1. Reason: ll fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.

sdxl_lightning_4step_lora.safetensors:   0%|          | 0.00/394M [00:00<?, ?B/s]
sdxl_lightning_4step_lora.safetensors:  47%|████▋     | 184M/394M [00:01<00:01, 180MB/s]
sdxl_lightning_4step_lora.safetensors: 100%|██████████| 394M/394M [00:01<00:00, 319MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 110, in <module>
    models = {
  File "/home/user/app/app.py", line 111, in <dictcomp>
    k: StableMultiDiffusionSDXLPipeline(device, hf_key=v, has_i2t=False).cuda()
  File "/home/user/app/model.py", line 174, in __init__
    self.pipe.load_lora_weights(hf_hub_download(lightning_repo, lora_ckpt), adapter_name='lightning')
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 636, in load_lora_weights
    self.load_lora_into_unet(
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 826, in load_lora_into_unet
    unet.load_attn_procs(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/unet.py", line 215, in load_attn_procs
    is_model_cpu_offload, is_sequential_cpu_offload = self._process_lora(
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/unet.py", line 355, in _process_lora
    incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
  File "/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 445, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
TypeError: Module.load_state_dict() got an unexpected keyword argument 'assign'
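The final TypeError suggests a version mismatch: the installed peft passes assign=True to torch.nn.Module.load_state_dict(), a keyword that only exists in PyTorch 2.1 and later, so the container's older torch rejects it. Below is a minimal sketch of a startup guard that fails fast with a clearer message; it assumes the fix is to require PyTorch >= 2.1 (pinning an older peft release in requirements.txt would be the alternative), and it is not part of the app's own code.

# Minimal sketch, assuming the failure is peft calling
# Module.load_state_dict(..., assign=True), a keyword added in PyTorch 2.1.
from packaging import version
import torch

# Drop any local build suffix such as "+cu118" before comparing versions.
torch_version = version.parse(torch.__version__.split("+")[0])
if torch_version < version.parse("2.1.0"):
    raise RuntimeError(
        "The installed peft passes assign=True to load_state_dict(), which "
        "requires PyTorch >= 2.1. Upgrade torch or pin an older peft release."
    )

Running this check before constructing StableMultiDiffusionSDXLPipeline would surface the incompatibility at startup instead of deep inside load_lora_weights().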