Model doesn't seem to support device_map="auto" for multi-GPU
#11 opened by pulkitmehtametacube
The model doesn't seem to support device_map="auto" for multi-GPU. Please suggest a workaround.

Adding more details on the error raised when device_map="auto" is chosen:
UNAVAILABLE: Internal: ValueError: NewModel does not support device_map='auto'. To implement support, the model class needs to implement the _no_split_modules attribute.
This model runs efficiently on a single 8 GB GPU, so multi-GPU scenarios were not considered in its design.
You can enable xformers and unpadding to reduce GPU memory usage:
https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers
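For reference, a sketch of loading the model with those options enabled. The `use_memory_efficient_attention` and `unpad_inputs` flag names are taken from the linked README and should be verified against the current model card; xformers must be installed for the first flag to take effect:

```python
# Sketch, assuming the config flags documented in the linked README.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Alibaba-NLP/new-impl",               # the model this thread discusses
    trust_remote_code=True,               # model code lives in the repo
    use_memory_efficient_attention=True,  # xformers acceleration (assumed flag)
    unpad_inputs=True,                    # skip padding tokens (assumed flag)
)
```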