AWQ 4-bit 128g version of mpt-7b-instruct!

#62 by abhinavkulkarni - opened

Hi,

I would like to draw everyone's attention to the AWQ-quantized version of the mpt-7b-instruct model, available at https://huggingface.co/abhinavkulkarni/mpt-7b-instruct-w4-g128-awq.

For more on AWQ, see the paper *AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration* (https://arxiv.org/abs/2306.00978).

The quantized model takes 3.76 GB on disk versus 13.3 GB for the original model, with similar savings in VRAM usage. Perplexity is only about 5% worse.

Please take a look and give it a try.
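For a quick start, here is a minimal sketch of loading the checkpoint with the AutoAWQ library (`pip install autoawq`). The repo name comes from the link above, but the library choice and loading path are my assumptions, so please defer to whatever instructions are on the model card.

```python
# Minimal sketch: load the AWQ-quantized MPT checkpoint with AutoAWQ.
# Assumes a CUDA GPU and the autoawq + transformers packages are installed.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "abhinavkulkarni/mpt-7b-instruct-w4-g128-awq"

# MPT ships custom modeling code, so trust_remote_code is required.
model = AutoAWQForCausalLM.from_quantized(
    model_path,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "Explain activation-aware weight quantization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Greedy decoding keeps the example deterministic.
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```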

Thanks!

abhinavkulkarni changed discussion status to closed