---
base_model: Locutusque/NeuralHyperion-2.0-Mistral-7B
library_name: transformers
tags:
- code
- chemistry
- medical
- quantized
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
license: apache-2.0
datasets:
- Locutusque/hyperion-v2.0
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
model_creator: Locutusque
model_name: NeuralHyperion-2.0-Mistral-7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Suparious
---
# Locutusque/NeuralHyperion-2.0-Mistral-7B AWQ

- Model creator: Locutusque
- Original model: NeuralHyperion-2.0-Mistral-7B
## Model Summary
Locutusque/NeuralHyperion-2.0-Mistral-7B is a state-of-the-art language model fine-tuned on the Hyperion-v2.0 and distilabel-capybara datasets for advanced reasoning across scientific domains. The model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include, but are not limited to, complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/NeuralHyperion-2.0-Mistral-7B-AWQ"
system_message = "You are Hyperion, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streamed to stdout as it is produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
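The streamer prints tokens to stdout as they are produced; if you also want the finished reply as a string, you can decode the returned ids. A small follow-up sketch using the variables from the example above (not part of the original snippet):

```python
# generation_output contains the prompt tokens followed by the reply;
# slice off the prompt before decoding to keep only the model's answer.
reply_ids = generation_output[0][tokens.shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```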
## About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with output quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:

- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, which supports all model types
- Hugging Face Text Generation Inference (TGI)
- Transformers - version 4.35.0 or later, from any code or client that supports Transformers (see the sketch after this list)
- AutoAWQ - for use from Python code
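As an illustration of the Transformers route above, the sketch below loads the quantized weights with plain `AutoModelForCausalLM`; Transformers 4.35.0 and later read the AWQ quantization config from the repository and dispatch to the AutoAWQ kernels automatically. This is a minimal sketch rather than part of the original card, and it assumes `autoawq` and `accelerate` are installed and a CUDA GPU is available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/NeuralHyperion-2.0-Mistral-7B-AWQ"

# Transformers >= 4.35.0 detects the AWQ quantization_config stored in
# the repository, so no AWQ-specific loader class is needed here.
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hello, Hyperion.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```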
## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
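Instead of filling in the template by hand, the same ChatML string can usually be produced with the tokenizer's built-in chat template; a minimal sketch, reusing the `tokenizer` from the example above and assuming the repository ships a ChatML `chat_template` (worth verifying before relying on it):

```python
messages = [
    {"role": "system", "content": "You are Hyperion, incarnated as a powerful AI."},
    {"role": "user", "content": "Where are you?"},
]

# Renders the conversation as a single prompt string; with
# add_generation_prompt=True it ends with the assistant header so the
# model continues from there (requires transformers >= 4.34).
text = tokenizer.apply_chat_template(messages,
                                     tokenize=False,
                                     add_generation_prompt=True)
print(text)
```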