---
base_model: openchat/openchat-3.6-8b-20240522
inference: false
library_name: gguf
license: llama3
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---
# openchat-3.6-8b-20240522-IMat-GGUF
_Llama.cpp imatrix quantization of openchat/openchat-3.6-8b-20240522_
Original Model: [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3006](https://github.com/ggerganov/llama.cpp/releases/tag/b3006)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
- [openchat-3.6-8b-20240522-IMat-GGUF](#openchat-3-6-8b-20240522-imat-gguf)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [openchat-3.6-8b-20240522.Q8_0.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.Q6_K.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.Q4_K.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.Q3_K.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.Q2_K.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [openchat-3.6-8b-20240522.FP16.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.BF16.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.BF16.gguf) | BF16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.Q5_K.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q5_K.gguf) | Q5_K | 5.73GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.Q5_K_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q5_K_S.gguf) | Q5_K_S | 5.60GB | ✅ Available | ⚪ Static | 📦 No |
| [openchat-3.6-8b-20240522.Q4_K_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q4_K_S.gguf) | Q4_K_S | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.Q3_K_L.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q3_K_L.gguf) | Q3_K_L | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.Q3_K_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q3_K_S.gguf) | Q3_K_S | 3.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.Q2_K_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ4_NL.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ4_NL.gguf) | IQ4_NL | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ4_XS.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ4_XS.gguf) | IQ4_XS | 4.45GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ3_M.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ3_M.gguf) | IQ3_M | 3.78GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ3_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ3_S.gguf) | IQ3_S | 3.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ3_XS.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ3_XS.gguf) | IQ3_XS | 3.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ3_XXS.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ2_M.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ2_M.gguf) | IQ2_M | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ2_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ2_S.gguf) | IQ2_S | 2.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ2_XS.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ2_XS.gguf) | IQ2_XS | 2.61GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ2_XXS.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ2_XXS.gguf) | IQ2_XXS | 2.40GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ1_M.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ1_M.gguf) | IQ1_M | 2.16GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [openchat-3.6-8b-20240522.IQ1_S.gguf](https://huggingface.co/legraphista/openchat-3.6-8b-20240522-IMat-GGUF/blob/main/openchat-3.6-8b-20240522.IQ1_S.gguf) | IQ1_S | 2.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
## Downloading using huggingface-cli
If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/openchat-3.6-8b-20240522-IMat-GGUF --include "openchat-3.6-8b-20240522.Q8_0.gguf" --local-dir ./
```
If the model file is large, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/openchat-3.6-8b-20240522-IMat-GGUF --include "openchat-3.6-8b-20240522.Q8_0/*" --local-dir openchat-3.6-8b-20240522.Q8_0
# see the FAQ for merging GGUFs
```
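If you prefer a programmatic download, here is a minimal Python sketch using `huggingface_hub` (the same package installed above); the repo and file names mirror the CLI examples:
```
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory.
path = hf_hub_download(
    repo_id="legraphista/openchat-3.6-8b-20240522-IMat-GGUF",
    filename="openchat-3.6-8b-20240522.Q8_0.gguf",
    local_dir=".",
)
print(path)
```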
---
## Inference
### Simple chat template
```
<|begin_of_text|><|start_header_id|>GPT4 Correct User<|end_header_id|>
Can you provide ways to eat combinations of bananas and dragonfruits?<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>
What about solving a 2x + 3 = 7 equation?<|eot_id|>
```
### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>System<|end_header_id|>
You are a helpful AI.<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>
Can you provide ways to eat combinations of bananas and dragonfruits?<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>
What about solving a 2x + 3 = 7 equation?<|eot_id|>
```
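Rather than hand-assembling the special tokens above, you can let the tokenizer render the template for you. A minimal sketch, assuming `transformers` is installed and that the original repo's tokenizer config carries the chat template shown above:
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.6-8b-20240522")

messages = [
    {"role": "system", "content": "You are a helpful AI."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
]

# tokenize=False returns the rendered prompt string, ready to pass to llama.cpp;
# add_generation_prompt=True appends the assistant header so the model replies.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```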
### Llama.cpp
```
llama.cpp/main -m openchat-3.6-8b-20240522.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
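As an alternative to the CLI, here is a minimal sketch using the `llama-cpp-python` bindings (`pip install llama-cpp-python`), which load GGUF files directly; the model path matches the Q8_0 file downloaded above:
```
from llama_cpp import Llama

# n_ctx sets the context window; adjust to your available RAM.
llm = Llama(model_path="openchat-3.6-8b-20240522.Q8_0.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(out["choices"][0]["message"]["content"])
```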
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `openchat-3.6-8b-20240522.Q8_0`)
3. Run `gguf-split --merge openchat-3.6-8b-20240522.Q8_0/openchat-3.6-8b-20240522.Q8_0-00001-of-XXXXX.gguf openchat-3.6-8b-20240522.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split (a scripted version of this step is sketched below).
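For a scripted version of step 3, here is a hedged Python sketch; it assumes `gguf-split` is on your PATH and that the chunk folder follows the naming shown above:
```
import glob
import subprocess

# Find the first chunk (the -00001-of-XXXXX file) inside the chunk folder.
chunks = sorted(glob.glob("openchat-3.6-8b-20240522.Q8_0/*-00001-of-*.gguf"))

# Merge into a single GGUF; gguf-split discovers the remaining chunks itself.
subprocess.run(
    ["gguf-split", "--merge", chunks[0], "openchat-3.6-8b-20240522.Q8_0.gguf"],
    check=True,
)
```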
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |