legraphista committed
Commit ab36d21 • 1 Parent(s): 8df1c2f
Upload README.md with huggingface_hub
README.md CHANGED
@@ -261,8 +261,8 @@ Link: [here](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/
| [Llama-3.2-3B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.82GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Llama-3.2-3B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Llama-3.2-3B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.IQ3_M.gguf) | IQ3_M | 1.60GB | ✅ Available | 🟢 IMatrix | 📦 No |
- | Llama-3.2-3B-Instruct.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
- | Llama-3.2-3B-Instruct.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
+ | [Llama-3.2-3B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.IQ3_S.gguf) | IQ3_S | 1.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
+ | [Llama-3.2-3B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.IQ3_XS.gguf) | IQ3_XS | 1.48GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Llama-3.2-3B-Instruct.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| [Llama-3.2-3B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.Q2_K.gguf) | Q2_K | 1.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Llama-3.2-3B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-3B-Instruct-IMat-GGUF/blob/main/Llama-3.2-3B-Instruct.Q2_K_S.gguf) | Q2_K_S | 1.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
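For reference, a minimal sketch of pulling one of the quants listed above with `huggingface_hub` (this is an illustrative example, not part of the commit; it assumes the `huggingface_hub` package is installed, and the repo id and filename come straight from the table — any ✅ Available entry works):

```python
# Sketch: download a single GGUF quant from this repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; repo_id and filename are taken from
# the table above (pick any file marked ✅ Available).
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="legraphista/Llama-3.2-3B-Instruct-IMat-GGUF",
    filename="Llama-3.2-3B-Instruct.IQ3_S.gguf",
)
print(local_path)  # local path to the downloaded GGUF file
```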