Update README.md
README.md (changed)
@@ -12,6 +12,7 @@ This is a version of the Llama-2-7B-chat-hf model quantized to 4-bit via Half-Quadratic Quantization (HQQ).

To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:

```Python
model_id = 'mobiuslabsgmbh/Llama-2-7b-chat-hf-4bit_g64-HQQ'

from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)
```
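
Once the model and tokenizer are loaded, text can be generated the usual way. The sketch below is illustrative rather than taken from the model card: it assumes the HQQ-wrapped model exposes the standard transformers `generate()` API, that a CUDA GPU is available, and it uses the Llama-2-chat `[INST]` prompt convention with an example question.

```Python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = 'mobiuslabsgmbh/Llama-2-7b-chat-hf-4bit_g64-HQQ'

# Load the tokenizer and the 4-bit quantized model (as in the snippet above).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)

# Llama-2-chat instruction format; the question itself is just an example.
prompt = "[INST] Explain what 4-bit quantization does to a language model. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')  # assumes a CUDA device

# Assumes the quantized model supports the regular transformers generation API.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```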