# 4-bit GEMM AWQ Quantizations of OpenBioLLM-Llama3-8B
Using AutoAWQ release v0.2.4 for quantization.
Original model: https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B
## Prompt format

No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
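Because the default template may not match what the quantized tokenizer actually ships with, it can be worth rendering the template yourself before generating. A minimal sketch, assuming the model has been downloaded to the same local `quant_path` used in the example further down:

```python
from transformers import AutoTokenizer

quant_path = "models/OpenBioLLM-Llama3-8B-AWQ"  # illustrative local path
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)

# Render without tokenizing to see exactly which prompt format the tokenizer applies
rendered = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(rendered)
```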
## AWQ Parameters

- q_group_size: 128
- w_bit: 4
- zero_point: True
- version: GEMM
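For reference, here is a minimal sketch of how a quantization with these parameters could be produced using the AutoAWQ v0.2.x API. The paths are illustrative, and this is not necessarily the exact script used to produce this model:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "aaditya/OpenBioLLM-Llama3-8B"   # original model
quant_path = "OpenBioLLM-Llama3-8B-AWQ"       # output directory (illustrative)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the unquantized model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize with the parameters listed above, then save the result
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```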
## How to run

The example below is adapted from the AutoAWQ repository: https://github.com/casper-hansen/AutoAWQ

First, install the autoawq PyPI package:

```
pip install autoawq
```
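Since this model was quantized with AutoAWQ release v0.2.4, pinning that release can help if you run into compatibility issues; this is a suggestion rather than a requirement from the original card:

```
pip install autoawq==0.2.4
```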
Then run the following:
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quant_path = "models/OpenBioLLM-Llama3-8B-AWQ"

# Load quantized model and tokenizer
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)

# Stream generated text to stdout as it is produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

chat = [
    {"role": "system", "content": "You are a concise assistant that helps answer questions."},
    {"role": "user", "content": prompt},
]

# <|eot_id|> is used as an end-of-turn token by Llama 3 models
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

tokens = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
    return_tensors="pt",
).cuda()

# Generate output
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=64,
    eos_token_id=terminators,
)
```
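The streamer already prints the reply as it is generated; if you also want the text as a string, the generated ids can be decoded afterwards. A small follow-up sketch using the variables from the example above:

```python
# Drop the prompt tokens, then decode only the newly generated part
reply = tokenizer.decode(
    generation_output[0][tokens.shape[-1]:],
    skip_special_tokens=True,
)
print(reply)
```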
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
Model tree for bartowski/OpenBioLLM-Llama3-8B-AWQ

- Base model: meta-llama/Meta-Llama-3-8B