# LiberatedHermes-2-Pro-Mistral-7B-HQQ

This is a 4-bit quantization of LiberatedHermes-2-Pro-Mistral-7B using HQQ (Half-Quadratic Quantization).

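For reference, here is a minimal sketch of how a quantization like this can be produced with the `hqq` library. The `nbits=4` setting matches this card; the `group_size=64` value and the base repo name are assumptions, since the card does not state them:

```python
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

# Assumed base (unquantized) repo; not confirmed by this card
base_model_id = "macadeliccc/LiberatedHermes-2-Pro-Mistral-7B"

# 4-bit weights; group_size=64 is a common HQQ default, assumed here
quant_config = BaseQuantizeConfig(nbits=4, group_size=64)

model = HQQModelForCausalLM.from_pretrained(base_model_id)
model.quantize_model(quant_config=quant_config)
model.save_quantized("LiberatedHermes-2-Pro-Mistral-7B-HQQ")
```
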
## Load Script

```python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/LiberatedHermes-2-Pro-Mistral-7B-HQQ"

# The tokenizer loads from the same repo as the quantized weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Restores the HQQ-quantized model directly; no dequantization step needed
model = HQQModelForCausalLM.from_quantized(model_id)
```
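
Once loaded, the model can be used like any `transformers` causal LM. A minimal generation sketch, assuming the tokenizer ships the ChatML chat template that Hermes-2-Pro models use and that the model sits on a CUDA device:

```python
messages = [{"role": "user", "content": "Explain HQQ quantization in one sentence."}]

# apply_chat_template formats the conversation with the model's chat template
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```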