Update README.md

README.md CHANGED
@@ -116,8 +116,16 @@ This repo contains GGUF format model files for [ibm-granite/granite-34b-code-bas
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+        Run them on the TensorBlock client using your local machine ↗
+    </a>
+</div>
+
 ## Prompt template
 
+
 ```
 
 ```
@@ -126,18 +134,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [granite-34b-code-base-8k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
-| [granite-34b-code-base-8k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/
+| [granite-34b-code-base-8k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q2_K.gguf) | Q2_K | 12.207 GB | smallest, significant quality loss - not recommended for most purposes |
+| [granite-34b-code-base-8k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q3_K_S.gguf) | Q3_K_S | 13.791 GB | very small, high quality loss |
+| [granite-34b-code-base-8k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q3_K_M.gguf) | Q3_K_M | 16.361 GB | very small, high quality loss |
+| [granite-34b-code-base-8k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q3_K_L.gguf) | Q3_K_L | 18.207 GB | small, substantial quality loss |
+| [granite-34b-code-base-8k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q4_0.gguf) | Q4_0 | 17.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [granite-34b-code-base-8k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q4_K_S.gguf) | Q4_K_S | 18.110 GB | small, greater quality loss |
+| [granite-34b-code-base-8k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q4_K_M.gguf) | Q4_K_M | 19.915 GB | medium, balanced quality - recommended |
+| [granite-34b-code-base-8k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q5_0.gguf) | Q5_0 | 21.800 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [granite-34b-code-base-8k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q5_K_S.gguf) | Q5_K_S | 21.800 GB | large, low quality loss - recommended |
+| [granite-34b-code-base-8k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q5_K_M.gguf) | Q5_K_M | 23.050 GB | large, very low quality loss - recommended |
+| [granite-34b-code-base-8k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q6_K.gguf) | Q6_K | 25.926 GB | very large, extremely low quality loss |
+| [granite-34b-code-base-8k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-34b-code-base-8k-GGUF/blob/main/granite-34b-code-base-8k-Q8_0.gguf) | Q8_0 | 33.518 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
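
The file sizes in the table are a practical first guide to hardware fit: the recommended Q4_K_M build is about 19.9 GB on disk, so plan for roughly that much RAM or VRAM plus some context overhead when running it. To fetch a single quant instead of cloning the whole repo, a minimal sketch using the `huggingface_hub` Python client follows; the choice of Q4_K_M and the `./models` directory are illustrative assumptions, not part of the card.

```python
# Minimal sketch: fetch one quant file with huggingface_hub
# (pip install huggingface_hub). Q4_K_M and ./models are example
# choices, not requirements of the repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/granite-34b-code-base-8k-GGUF",
    filename="granite-34b-code-base-8k-Q4_K_M.gguf",
    local_dir="./models",
)
print(path)  # local path of the downloaded GGUF file
```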
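
The empty prompt template above reflects that granite-34b-code-base-8k is a base (completion) model, so it takes raw text rather than a chat format. As one way to run the downloaded file locally, here is a minimal sketch using the third-party `llama-cpp-python` bindings rather than the llama.cpp CLI itself; the model path, context size, prompt, and token budget are all assumptions.

```python
# Minimal sketch: local inference through the third-party
# llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

# Assumed path from the download step; n_ctx=8192 matches the
# 8k context window implied by the model name.
llm = Llama(
    model_path="./models/granite-34b-code-base-8k-Q4_K_M.gguf",
    n_ctx=8192,
)

# Base model: pass a raw completion-style prompt, no chat template.
out = llm("def fibonacci(n):", max_tokens=64)
print(out["choices"][0]["text"])
```

The same files should also load directly in a llama.cpp build at or after the commit noted above.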