Gemma-2-27b-it?

#2
by VlSav - opened

Hi! Thanks for quantization!
Is there any chance you will do the same for the 27B Gemma-2?

ModelCloud.AI org
edited Jul 22

@VlSav we uploaded https://huggingface.co/ModelCloud/gemma-2-27b-it-gptq-4bit, please check inference examples.

@lrl-modelcloud Thanks a lot! Working fine so far.

VlSav changed discussion status to closed
