qnixsynapse committed Commit 1578945
Parent(s): 72c1bc8
Update README.md

README.md CHANGED
@@ -203,6 +203,7 @@ inference:
 ---
 
 # qnixsynapse/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
+## FIXED WITH UP TO DATE LLAMACPP UPDATES AND HF CONFIG
 This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
 ## Use with llama.cpp
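For reference, the "Use with llama.cpp" section this commit touches corresponds to invoking the converted model with llama.cpp's CLI or server binaries. A minimal sketch, assuming llama.cpp is installed locally and that the quantized file in this repo is named `meta-llama-3-8b-instruct-q4_k_m.gguf` (the exact filename is an assumption; check the repo's file listing):

```shell
# Fetch the GGUF directly from the Hugging Face repo and run a one-shot prompt.
# --hf-repo / --hf-file tell llama-cli to download (and cache) the file on first use.
llama-cli \
  --hf-repo qnixsynapse/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF \
  --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf \
  -p "Why is the sky blue?"

# Alternatively, serve the model over an OpenAI-compatible HTTP endpoint:
llama-server \
  --hf-repo qnixsynapse/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF \
  --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf \
  -c 8192
```

Running either command requires a llama.cpp build recent enough to include the fixes this commit refers to, plus enough RAM/VRAM for the ~4.9 GB Q4_K_M weights.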