---
base_model: cognitivecomputations/dolphin-2.8-experiment26-7b
language:
  - en
license: apache-2.0
datasets:
  - ehartford/dolphin
  - jondurbin/airoboros-2.2.1
  - ehartford/dolphin-coder
  - teknium/openhermes
  - m-a-p/Code-Feedback
tags:
  - quantized
  - 4-bit
  - AWQ
  - transformers
  - pytorch
  - mistral
  - text-generation
  - conversational
  - autotrain_compatible
  - endpoints_compatible
  - text-gen
library_name: transformers
model_creator: cognitivecomputations
model_name: dolphin-2.8-experiment26-7b
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Suparious
---

# cognitivecomputations/dolphin-2.8-experiment26-7b AWQ

## Model Summary

Sponsored by MassedCompute.

Discord: https://discord.gg/cognitivecomputations

This model is based on Experiment-26 by Yam Peleg.

The base model has 16k context.
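To check what the shipped config advertises, you can read `max_position_embeddings` with plain Transformers (a minimal sketch; note the value reported in the config may differ from the effective trained context):

```python
from transformers import AutoConfig

# Inspect the context length advertised by the model config
config = AutoConfig.from_pretrained("solidrust/dolphin-2.8-experiment26-7b-AWQ")
print(config.max_position_embeddings)
```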

This Dolphin is really good at coding; @ehartford trained it with a lot of coding data.

It took 3 days to train 3 epochs on 7x A6000s using QLoRA on Axolotl.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```
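To sanity-check the install before loading a model, you can query the installed package versions with the standard library (a quick sketch; the `autoawq` distribution imports as the `awq` module):

```python
from importlib.metadata import version

# Confirm both packages are installed and visible to Python
print(version("autoawq"), version("autoawq-kernels"))
```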

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/dolphin-2.8-experiment26-7b-AWQ"
system_message = "You are Dolphin, a helpful AI assistant."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
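The `TextStreamer` prints tokens to stdout as they arrive. If you also want the completion as a string, you can decode the returned tensor yourself, slicing off the prompt tokens first (a standard Transformers pattern, sketched here):

```python
# generate() returns prompt + completion; keep only the new tokens
new_tokens = generation_output[0][tokens.shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```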

## About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui), using the AutoAWQ loader
- [vLLM](https://github.com/vllm-project/vllm)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), for use from Python code
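For reference, producing an AWQ checkpoint like this one follows the standard AutoAWQ flow (a sketch using AutoAWQ's `quantize`/`save_quantized` API; the output path is illustrative):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "cognitivecomputations/dolphin-2.8-experiment26-7b"  # FP16 source
quant_path = "dolphin-2.8-experiment26-7b-AWQ"                    # illustrative output dir
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model, quantize the weights to 4-bit AWQ, and save
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```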

## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
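Rather than formatting the ChatML string by hand, you can have the tokenizer render it, assuming the repository ships a ChatML chat template (a sketch using the standard `apply_chat_template` API):

```python
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]

# Render the ChatML prompt and append the assistant header for generation
prompt_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt_text)
```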