Supa-AI committed 8a8361a (parent: c6b6253): Upload README.md with huggingface_hub

Files changed (1): README.md (+34 -22)
  - gguf
---

# Supa-AI/Ministral-8B-Instruct-2410-gguf
This model was converted to GGUF format from [`mistralai/Ministral-8B-Instruct-2410`](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) for more details on the model.

## Available Versions
- `Ministral-8B-Instruct-2410.q4_0.gguf` (q4_0)
- `Ministral-8B-Instruct-2410.q4_1.gguf` (q4_1)
- `Ministral-8B-Instruct-2410.q5_0.gguf` (q5_0)
- `Ministral-8B-Instruct-2410.q5_1.gguf` (q5_1)
- `Ministral-8B-Instruct-2410.q8_0.gguf` (q8_0)
- `Ministral-8B-Instruct-2410.q3_k_s.gguf` (q3_K_S)
- `Ministral-8B-Instruct-2410.q3_k_m.gguf` (q3_K_M)
- `Ministral-8B-Instruct-2410.q3_k_l.gguf` (q3_K_L)
- `Ministral-8B-Instruct-2410.q4_k_s.gguf` (q4_K_S)
- `Ministral-8B-Instruct-2410.q4_k_m.gguf` (q4_K_M)
- `Ministral-8B-Instruct-2410.q5_k_s.gguf` (q5_K_S)
- `Ministral-8B-Instruct-2410.q5_k_m.gguf` (q5_K_M)
- `Ministral-8B-Instruct-2410.q6_k.gguf` (q6_K)
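Every filename in the list above follows one pattern: the base model name, a dot, the lowercased quantization tag, and the `.gguf` extension. A small Python sketch of that mapping (the helper name is illustrative, not part of this repo):

```python
# Sketch: derive the repo filename for a quantization tag, following the
# naming pattern of the "Available Versions" list above.
BASE = "Ministral-8B-Instruct-2410"

# Quantization tags as listed in this repo.
KNOWN_QUANTS = [
    "q4_0", "q4_1", "q5_0", "q5_1", "q8_0",
    "q3_K_S", "q3_K_M", "q3_K_L", "q4_K_S", "q4_K_M",
    "q5_K_S", "q5_K_M", "q6_K",
]

def gguf_filename(quant: str) -> str:
    """Return the repo filename for a quant tag, e.g. 'q4_K_M'."""
    if quant not in KNOWN_QUANTS:
        raise ValueError(f"unknown quant tag: {quant}")
    # Filenames use the lowercased tag between the base name and extension.
    return f"{BASE}.{quant.lower()}.gguf"
```

For example, `gguf_filename("q4_K_M")` yields `Ministral-8B-Instruct-2410.q4_k_m.gguf`, which can be used as the `FILENAME` value in the llama.cpp commands.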
## Use with llama.cpp
Replace `FILENAME` with one of the above filenames.

### CLI:
```bash
llama-cli --hf-repo Supa-AI/Ministral-8B-Instruct-2410-gguf --hf-file FILENAME -p "Your prompt here"
```

### Server:
```bash
llama-server --hf-repo Supa-AI/Ministral-8B-Instruct-2410-gguf --hf-file FILENAME -c 2048
```
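Once `llama-server` is up, it serves an HTTP API (on port 8080 by default). A minimal stdlib-only Python sketch of a request against llama.cpp's native `/completion` endpoint; the host and port are llama.cpp defaults, so adjust them if you launched the server differently:

```python
import json
import urllib.request

def build_request(prompt: str, n_predict: int = 128) -> dict:
    """Payload for llama.cpp's /completion endpoint: the prompt text and
    the maximum number of tokens to generate."""
    return {"prompt": prompt, "n_predict": n_predict}

def complete(prompt: str, host: str = "http://localhost:8080") -> str:
    # POST the JSON payload to the running llama-server; the generated
    # text comes back under the "content" key of the response JSON.
    req = urllib.request.Request(
        f"{host}/completion",
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

if __name__ == "__main__":
    print(complete("Write a haiku about quantization."))
```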
## Model Details
- **Original Model:** [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410)
- **Format:** GGUF
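GGUF is a self-describing format: every file opens with a fixed header (a `GGUF` magic, a version, then tensor and metadata key-value counts, all little-endian). A stdlib-only sketch that reads just this prefix to sanity-check a downloaded file; the field widths assume GGUF v2+, where the counts are 64-bit:

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the fixed 24-byte GGUF header prefix: magic (4 bytes),
    version (uint32), tensor count and metadata KV count (uint64 each),
    all little-endian."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, = struct.unpack("<I", f.read(4))
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}
```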