fuzzy-mittenz committed cc07cb4
Parent(s): bf7877b
Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ language:
 base_model: suayptalha/FastLlama-3.2-1B-Instruct
 ---
 
-# fuzzy-mittenz/FastLlama-3.2-1B-Instruct-Q8_0-GGUF
+# fuzzy-mittenz/FastLlama-3.2-1B-Instruct-Q8_0-GGUF Suayptalha's Multi lingual model should be perfect for swarm use
 This model was converted to GGUF format from [`suayptalha/FastLlama-3.2-1B-Instruct`](https://huggingface.co/suayptalha/FastLlama-3.2-1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/suayptalha/FastLlama-3.2-1B-Instruct) for more details on the model.
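For reference, a minimal sketch of loading a Q8_0 GGUF like this one through the llama-cpp-python bindings. The .gguf filename and the generation settings below are assumptions for illustration, not taken from this repository; check the repo's file list for the actual filename.

```python
# Minimal sketch: run the converted Q8_0 GGUF locally with llama-cpp-python.
# The model_path filename is an assumed name -- substitute the real file from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="FastLlama-3.2-1B-Instruct-Q8_0.gguf",  # downloaded GGUF file (assumed name)
    n_ctx=2048,  # context window; adjust as needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in three languages."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```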