---
base_model: Qwen/Qwen2-57B-A14B-Instruct
inference: false
language:
  - en
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
  - chat
  - quantized
  - GGUF
  - quantization
  - static
  - 16bit
  - 8bit
  - 6bit
  - 5bit
  - 4bit
  - 3bit
  - 2bit
---

# Qwen2-57B-A14B-Instruct-GGUF

Llama.cpp static quantization of Qwen/Qwen2-57B-A14B-Instruct

- Original Model: Qwen/Qwen2-57B-A14B-Instruct
- Original dtype: BF16 (bfloat16)
- Quantized by: llama.cpp (https://github.com/ggerganov/llama.cpp/tree/master)
- IMatrix dataset: here


## Files

### Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Qwen2-57B-A14B-Instruct.Q8_0/* | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q6_K/* | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q4_K.gguf | Q4_K | 34.85GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q3_K.gguf | Q3_K | 27.51GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q2_K.gguf | Q2_K | 21.06GB | ✅ Available | ⚪ Static | 📦 No |

### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Qwen2-57B-A14B-Instruct.BF16/* | BF16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.FP16/* | F16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q8_0/* | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q6_K/* | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q5_K.gguf | Q5_K | 40.80GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q5_K_S.gguf | Q5_K_S | 39.57GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q4_K.gguf | Q4_K | 34.85GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q4_K_S.gguf | Q4_K_S | 32.71GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.IQ4_NL.gguf | IQ4_NL | 32.72GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.IQ4_XS.gguf | IQ4_XS | 31.00GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q3_K.gguf | Q3_K | 27.51GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q3_K_L.gguf | Q3_K_L | 29.79GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q3_K_S.gguf | Q3_K_S | 24.91GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | - |
| Qwen2-57B-A14B-Instruct.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | - |
| Qwen2-57B-A14B-Instruct.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | - |
| Qwen2-57B-A14B-Instruct.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | ⚪ Static | - |
| Qwen2-57B-A14B-Instruct.Q2_K.gguf | Q2_K | 21.06GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.IQ2_M | IQ2_M | - | ⏳ Processing | ⚪ Static | - |

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want (for example, one of the non-split quants):

```bash
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q4_K.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```bash
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
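
If you prefer Python over the CLI, the same downloads can be done with the huggingface_hub library. A minimal sketch, assuming huggingface_hub is installed; the file names are taken from the tables above:

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file (non-split) quant, e.g. Q4_K:
path = hf_hub_download(
    repo_id="legraphista/Qwen2-57B-A14B-Instruct-GGUF",
    filename="Qwen2-57B-A14B-Instruct.Q4_K.gguf",
    local_dir="./",
)
print(path)

# Split quant, e.g. Q8_0: download every chunk in the folder.
snapshot_download(
    repo_id="legraphista/Qwen2-57B-A14B-Instruct-GGUF",
    allow_patterns=["Qwen2-57B-A14B-Instruct.Q8_0/*"],
    local_dir="./",
)
```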

## Inference

### Simple chat template

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```

### Chat template with system prompt

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
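
When driving the model programmatically, the turns above have to be assembled into a single prompt string. A minimal sketch of such a helper; the function name and the example messages are placeholders, not part of this repository:

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} messages into the ChatML-style
    prompt shown above, ending with an open assistant turn."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain GGUF quantization in one sentence."},
])
print(prompt)
```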

### Llama.cpp

```bash
llama.cpp/main -m Qwen2-57B-A14B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
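
The same GGUF can also be loaded from Python via the llama-cpp-python bindings, which apply the chat template for you. A minimal sketch, assuming `pip install llama-cpp-python` and a merged (non-split) GGUF on disk:

```python
from llama_cpp import Llama

# Point model_path at a local GGUF file (merge split quants first, see the FAQ).
llm = Llama(model_path="Qwen2-57B-A14B-Instruct.Q8_0.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```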

## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
2. Locate your GGUF chunks folder (ex: `Qwen2-57B-A14B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-57B-A14B-Instruct.Q8_0/Qwen2-57B-A14B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-57B-A14B-Instruct.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split (a scripted version is sketched below).
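
For convenience, the merge step can be scripted. A hypothetical sketch that locates the first chunk and shells out to `gguf-split`; it assumes `gguf-split` is on your PATH and the chunk folder follows the layout above:

```python
import glob
import subprocess

folder = "Qwen2-57B-A14B-Instruct.Q8_0"

# gguf-split must be pointed at the *first* chunk of the split.
chunks = sorted(glob.glob(f"{folder}/*-00001-of-*.gguf"))
if not chunks:
    raise FileNotFoundError(f"No first chunk found in {folder}")

subprocess.run(
    ["gguf-split", "--merge", chunks[0], f"{folder}.gguf"],
    check=True,
)
```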

Got a suggestion? Ping me @legraphista!