OOM at inference

#1 opened by abalogh

Hi all,

Using the sample code from the non-AWQ version of the model, I get an OOM error:

from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda"
MODEL = "HuggingFaceM4/idefics2-8b-AWQ"

# Note that passing the image URLs (instead of the actual PIL images) to the processor is also possible
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")
processor = AutoProcessor.from_pretrained(MODEL)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL,
    torch_dtype="auto",
    device_map="auto",
)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.52 GiB. GPU 0 has a total capacity of 15.71 GiB of which 4.31 GiB is free. Process 9715 has 15.40 MiB memory in use. Including non-PyTorch memory, this process has 11.32 GiB memory in use. Of the allocated memory 10.70 GiB is allocated by PyTorch, and 474.47 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

Memory usage is around 6 GB after loading the model; it then crashes when I call model.generate().

Is this expected? How much GPU memory should I have to be able to use this model?
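For reference, the allocator hint quoted in the traceback above, as code; a minimal sketch, and it only mitigates fragmentation rather than shrinking the model's footprint:

import os

# Must be set before torch initializes CUDA (safest: before importing torch).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch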

I am getting an "awq_ext" not defined error while running the code below:
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b-AWQ",
    quantization_config=quantization_config,
).to(DEVICE)

(screenshot of the error attached)

Can someone please help me with this?
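(For context, the quantization_config referenced in the snippet above is typically built with transformers' AwqConfig; a minimal sketch, where bits=4 matches the AWQ checkpoint and fuse_max_seq_len is an assumed, illustrative value:)

from transformers import AwqConfig

# Minimal AWQ config sketch; fuse_max_seq_len is illustrative, not taken from this thread.
quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=4096,
)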

Hi @abalogh,
Can you say more about where in the model you are OOM-ing? I suspect it's in the vision self-attention.
https://huggingface.co/HuggingFaceM4/idefics2-8b#model-optimizations gives a few tips on how to reduce that memory requirement.
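For example, a rough sketch of two of the options that page describes, half precision with flash attention plus disabled image splitting (the values here are illustrative, and flash-attn has to be installed separately):

import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

# do_image_splitting=False shrinks the vision tower's attention memory,
# and flash_attention_2 avoids materializing full attention matrices.
processor = AutoProcessor.from_pretrained(
    "HuggingFaceM4/idefics2-8b-AWQ",
    do_image_splitting=False,
)
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b-AWQ",
    torch_dtype=torch.float16,
    _attn_implementation="flash_attention_2",
    device_map="auto",
)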

Hi @VictorSanh,
I tried to fine-tune the idefics2-8b-AWQ model, but unfortunately I encountered this error: "ImportError: /usr/local/lib/python3.10/dist-packages/awq_inference_engine.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi". I tried installing the AWQ inference engine, but PyPI doesn't have an awq-inference-engine package, so I downloaded it from a third party and ran its setup, yet I still hit the same error. Can you help me with this?
(screenshot of the error attached)
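(Note: _ZN3c104cuda9SetDeviceEi demangles to c10::cuda::SetDevice(int), which usually means the compiled AWQ kernels were built against a different torch than the one installed. A minimal sketch to compare the versions involved, assuming the kernels come from the autoawq / autoawq-kernels packages:)

import torch
from importlib.metadata import version, PackageNotFoundError

print("torch:", torch.__version__, "| cuda:", torch.version.cuda)
# Assumed package names; the transformers AWQ integration expects autoawq,
# not a standalone awq_inference_engine built from a third-party repo.
for pkg in ("autoawq", "autoawq-kernels"):
    try:
        print(pkg + ":", version(pkg))
    except PackageNotFoundError:
        print(pkg + ": not installed")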
