What kind of machine would be suitable for this model (in Amazon SageMaker)?
I simply want to try deployment and inference with inputs of up to 500 words. Any help?
Hi! The model has approximately 20 billion parameters, so loading it in FP16 would require approximately 40 GB of VRAM, with additional memory needed during inference.
However, if you can tolerate some loss of precision, you can also try 8-bit quantization, which should roughly halve the memory footprint.
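As a rough back-of-the-envelope check (weights only; activations and the KV cache add more on top):

params = 20e9                      # ~20 billion parameters
fp16_gib = params * 2 / 1024**3    # 2 bytes per parameter -> ~37 GiB
int8_gib = params * 1 / 1024**3    # 1 byte per parameter  -> ~19 GiB
print(f"FP16 weights: ~{fp16_gib:.0f} GiB, 8-bit weights: ~{int8_gib:.0f} GiB")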
After installing accelerate and bitsandbytes (pip install accelerate bitsandbytes), just try to load the model as follows:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('togethercomputer/GPT-NeoXT-Chat-Base-20B', device_map="auto", load_in_8bit=True)
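For completeness, a minimal end-to-end sketch (the <human>:/<bot>: prompt format follows the model card; the prompt text and generation settings here are just placeholders):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('togethercomputer/GPT-NeoXT-Chat-Base-20B')
# Same 8-bit load as above; weights are spread across the available GPUs.
model = AutoModelForCausalLM.from_pretrained('togethercomputer/GPT-NeoXT-Chat-Base-20B', device_map="auto", load_in_8bit=True)

prompt = "<human>: What is Amazon SageMaker?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))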
Thanks for the help. I managed to deploy OK, but unfortunately I'm getting an error during inference. I deployed on a very big instance (ml.g4dn.12xlarge, which has 48 vCPUs and 192 GiB of memory). I wonder whether the Hugging Face version is the problem (SageMaker supports up to transformers_version="4.17.0") or whether something else is going on.
I could deploy and run inference just fine with the GPT-Neo 125M model, but this big 20B model causes an error during inference:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "\u0027gpt_neox\u0027"
}
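For what it's worth, "\u0027gpt_neox\u0027" decodes to 'gpt_neox', which looks like the string form of a Python KeyError raised when the installed transformers release predates GPT-NeoX support. A quick local check (a sketch, assuming your environment can reach the Hugging Face Hub):

import transformers
from transformers import AutoConfig

print(transformers.__version__)
# On releases that don't know the gpt_neox architecture, the next line
# raises KeyError: 'gpt_neox' -- the same string the endpoint returns above.
config = AutoConfig.from_pretrained('togethercomputer/GPT-NeoXT-Chat-Base-20B')
print(config.model_type)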
It seems most likely that the problem is with the version. (Cross-posting this same link) ...here's a potential solution:
Hi, were you able to solve this with the above solution?
It appears that 8-bit quantization works well on transformers 4.22.1 but not on 4.21.1, so it is likely a version issue.
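If it helps, you can fail fast on an incompatible environment with a small guard like this (a sketch; 4.22.1 is just the version reported to work above, not a documented minimum):

import transformers
from packaging import version

# 4.22.1 comes from the report above, not from official documentation.
if version.parse(transformers.__version__) < version.parse("4.22.1"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is older than 4.22.1; "
        "try: pip install -U 'transformers>=4.22.1'"
    )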