Running on 32 GB V100

#1
by Dong142857 - opened

Hi, thank you for sharing this model. I wanted to try it on my 32 GB V100. When I run the example code, I hit an out-of-memory (OOM) error, so I added 'pipeline.enable_model_cpu_offload()' to the example code:

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16)
pipeline.enable_model_cpu_offload() # to save the VRAM
pipeline.load_lora_weights('Purz/uv-unwrapped-head', weight_name='purz-uv_unwrapp3d_h34d.safetensors') # load the UV-unwrap LoRA
image = pipeline('uv_unwrapp3d_h34d, a man with a beard').images[0] # prompt starts with the LoRA trigger word
image.save('./my_uv_images.png')

This code runs successfully, but the synthesized image is not a UV map. I'm not familiar with the diffusers library. Can you give me some help?
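For reference, one way to rule out the LoRA silently not taking effect is to name the adapter and set its weight explicitly, and to pass generation settings rather than relying on defaults. This is a sketch built on standard diffusers LoRA APIs (load_lora_weights with adapter_name, set_adapters); the step count and guidance scale are assumed typical FLUX.1-dev values, not verified on this checkpoint:

```python
# The trigger word from the LoRA card must appear in the prompt,
# otherwise the base model's behavior dominates.
prompt = "uv_unwrapp3d_h34d, a man with a beard"

def generate():
    # Heavy imports and model download kept inside the function so the
    # sketch can be read (and the prompt reused) without a GPU present.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipeline = AutoPipelineForText2Image.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipeline.enable_model_cpu_offload()  # stream submodules to the GPU on demand
    pipeline.load_lora_weights(
        "Purz/uv-unwrapped-head",
        weight_name="purz-uv_unwrapp3d_h34d.safetensors",
        adapter_name="uv",  # name the adapter so its strength can be set explicitly
    )
    pipeline.set_adapters(["uv"], adapter_weights=[1.0])  # full LoRA strength

    image = pipeline(
        prompt,
        num_inference_steps=28,  # assumption: commonly used FLUX.1-dev setting
        guidance_scale=3.5,      # assumption: commonly used FLUX.1-dev setting
    ).images[0]
    image.save("./my_uv_images.png")

if __name__ == "__main__":
    generate()
```

If the output still isn't a UV map at weight 1.0, the LoRA itself (or a mismatch between the LoRA and this base checkpoint) is the more likely cause than the offloading change.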

[Attached image: my_uv_images.png]
