---
base_model: mattshumer/Reflection-Llama-3.1-70B
inference: false
library_name: gguf
license: llama3.1
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Reflection-Llama-3.1-70B-IMat-GGUF

Llama.cpp imatrix quantization of mattshumer/Reflection-Llama-3.1-70B

- Original Model: mattshumer/Reflection-Llama-3.1-70B
- Original dtype: `FP32` (`float32`)
- Quantized by: llama.cpp b3671
- IMatrix dataset: here

## Files

### IMatrix

- Status: ✅ Available
- Link: here

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| Reflection-Llama-3.1-70B.Q8_0/* | Q8_0 | 74.98GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q6_K/* | Q6_K | 57.89GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q4_K.gguf | Q4_K | 42.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.Q3_K.gguf | Q3_K | 34.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.Q2_K.gguf | Q2_K | 26.38GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| Reflection-Llama-3.1-70B.BF16/* | BF16 | 141.12GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.FP16/* | F16 | 141.12GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q8_0/* | Q8_0 | 74.98GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q6_K/* | Q6_K | 57.89GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q5_K/* | Q5_K | 49.95GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q5_K_S/* | Q5_K_S | 48.66GB | ✅ Available | ⚪ Static | ✅ Yes |
| Reflection-Llama-3.1-70B.Q4_K.gguf | Q4_K | 42.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.Q4_K_S.gguf | Q4_K_S | 40.35GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K.gguf | Q3_K | 34.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q2_K.gguf | Q2_K | 26.38GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Reflection-Llama-3.1-70B.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |
## Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
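If you would rather script the download, the sketch below uses the `huggingface_hub` Python API instead of the CLI. The repo id and file patterns mirror the commands above; the `local_dir` value and the choice of quants are only examples.

```python
# Minimal sketch: download quant files with the huggingface_hub Python API.
# Repo id and patterns mirror the CLI commands above; local_dir is an example path.
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant (e.g. Q4_K):
hf_hub_download(
    repo_id="legraphista/Reflection-Llama-3.1-70B-IMat-GGUF",
    filename="Reflection-Llama-3.1-70B.Q4_K.gguf",
    local_dir="./",
)

# Split quant (e.g. Q8_0): fetch every chunk in the folder.
snapshot_download(
    repo_id="legraphista/Reflection-Llama-3.1-70B-IMat-GGUF",
    allow_patterns=["Reflection-Llama-3.1-70B.Q8_0/*"],
    local_dir="./",
)
```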
## Inference

### Simple chat template

```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>
{next_user_prompt}<|eot_id|>
```

### Chat template with system prompt

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|>
{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>
{next_user_prompt}<|eot_id|>
```
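For programmatic use, a minimal sketch that assembles a single-turn prompt string following the system-prompt template above is shown below. The helper name `build_reflection_prompt` and the example question are illustrative and not part of the original card; the string ends by opening the assistant header so the model can generate its reply.

```python
# Sketch: build a prompt string following the system-prompt chat template above.
# The function name and example strings are illustrative only.
SYSTEM_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and reflection. "
    "Reason through the query inside <thinking> tags, and then provide your final "
    "response inside <output> tags. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself inside <reflection> tags."
)

def build_reflection_prompt(user_prompt: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Return a single-turn prompt, leaving the assistant header open for generation."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{user_prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )

print(build_reflection_prompt("How many r's are in 'strawberry'?"))
```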
### Llama.cpp

```
llama.cpp/main -m Reflection-Llama-3.1-70B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
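As an alternative to the interactive CLI, the quantized GGUF can also be loaded from Python via the third-party `llama-cpp-python` bindings. This is a hedged sketch, not part of the original card; the model path, context size, and sampling settings are assumptions you should adjust.

```python
# Sketch: run the quantized GGUF with llama-cpp-python (pip install llama-cpp-python).
# Model path, context size, and generation settings are example values only.
from llama_cpp import Llama

llm = Llama(model_path="Reflection-Llama-3.1-70B.Q8_0.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a world-class AI system, capable of complex reasoning and reflection."},
        {"role": "user", "content": "What is 17 * 23?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```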
## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that only the lower quantizations benefit from the imatrix input (as per the hellaswag results).
### How do I merge a split GGUF?

- Make sure you have `gguf-split` available
  - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
  - Download the appropriate zip for your system from the latest release
  - Unzip the archive and you should be able to find `gguf-split`
- Locate your GGUF chunks folder (ex: `Reflection-Llama-3.1-70B.Q8_0`)
- Run `gguf-split --merge Reflection-Llama-3.1-70B.Q8_0/Reflection-Llama-3.1-70B.Q8_0-00001-of-XXXXX.gguf Reflection-Llama-3.1-70B.Q8_0.gguf`
  - Make sure to point `gguf-split` to the first chunk of the split (a scripted sketch follows below).
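If you prefer to script the merge, the following Python sketch wraps the `gguf-split --merge` invocation above. It only locates the first chunk and shells out; it assumes the `gguf-split` binary from a llama.cpp release is on your PATH, and the folder and output names are examples.

```python
# Sketch: locate the first chunk of a split GGUF and merge it with gguf-split.
# Assumes the `gguf-split` binary from a llama.cpp release is on your PATH.
import glob
import subprocess

chunks_dir = "Reflection-Llama-3.1-70B.Q8_0"        # folder containing the split files
output_file = "Reflection-Llama-3.1-70B.Q8_0.gguf"  # merged output file

# gguf-split must be pointed at the first chunk (*-00001-of-*.gguf).
first_chunk = sorted(glob.glob(f"{chunks_dir}/*-00001-of-*.gguf"))[0]
subprocess.run(["gguf-split", "--merge", first_chunk, output_file], check=True)
```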
Got a suggestion? Ping me @legraphista!