Problem using the model on GPU

#2
by galochka - opened

I have a problem when I use the model on a GPU.
I use this simple code:

import torch
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_coco_swin_large')
model = OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_coco_swin_large').to('cuda')

# `image` is a PIL image loaded earlier
instance_inputs = processor(images=image, task_inputs=['instance'], return_tensors='pt').to('cuda')
with torch.no_grad():
    outputs = model(**instance_inputs)
instance_segmentation = processor.post_process_instance_segmentation(outputs)[0]

In this case the model returns a OneFormerModelOutput object whose tensors are on the GPU, and `model(**instance_inputs).to('cpu')` doesn't work here. How can I move this output object to the CPU?
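Since the output class is dict-like, one possible approach is to move its tensor-valued fields to the CPU one by one. The sketch below is a minimal illustration of that pattern; `output_to_cpu` and `FakeTensor` are hypothetical names, with `FakeTensor` standing in for a real `torch.Tensor` so the example is self-contained:

```python
# Minimal sketch: move each tensor-valued field of a dict-like model
# output to the CPU. FakeTensor is a stand-in for torch.Tensor;
# output_to_cpu is a hypothetical helper, not a library function.
from collections import OrderedDict

class FakeTensor:
    def __init__(self, device="cuda"):
        self.device = device
    def cpu(self):
        # Mimics torch.Tensor.cpu(): returns a copy on the CPU.
        return FakeTensor(device="cpu")

def output_to_cpu(outputs):
    # Move every field that supports .cpu(), leave other fields as-is.
    return {
        k: v.cpu() if hasattr(v, "cpu") else v
        for k, v in outputs.items()
    }

# Stand-in for the model output returned on the GPU.
outputs = OrderedDict(
    class_queries_logits=FakeTensor(),
    masks_queries_logits=FakeTensor(),
)
moved = output_to_cpu(outputs)
print(moved["class_queries_logits"].device)  # cpu
```

With a real model output you would apply the same comprehension over `outputs.items()`, since the transformers output classes behave like ordered dicts of tensors.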
