How can I run the model on GPU?
#6 opened by GaoQiQiang
I want to run this model on GPU. I tried `pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, max_new_tokens=50).to("cuda")`, but got `AttributeError: 'TextGenerationPipeline' object has no attribute 'to'`. I also tried `model = model.to("cuda")`, which gave `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)`. Please help.
You can make it work with the following:
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# 0 selects the first GPU (cuda:0); use -1 to run on CPU instead
CUDA_DEVICE = 0

tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
gpt2_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=CUDA_DEVICE)
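A note on the `device` argument above: `transformers.pipeline` accepts an integer index, where `0` means the first GPU (`cuda:0`) and `-1` means CPU. A minimal sketch of that selection logic (the helper name `pick_device` is my own, not from this thread; in practice you would feed it `torch.cuda.is_available()`):

```python
def pick_device(cuda_available: bool) -> int:
    """Map CUDA availability to the integer `device` argument
    used by transformers.pipeline: 0 = first GPU, -1 = CPU."""
    return 0 if cuda_available else -1

print(pick_device(True))   # GPU present -> 0 (cuda:0)
print(pick_device(False))  # no GPU     -> -1 (CPU)
```

Passing the device to `pipeline` this way avoids both errors in the question: the pipeline moves the model and its input tensors to the same device for you, so there is no need to call `.to("cuda")` yourself.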
Thanks for your code, it works. Followed your reply: https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion/discussions/6#63e6306d63037c7d960bbb5a