Update README.md
README.md CHANGED
@@ -29,6 +29,16 @@ datasets:
 
 This is SynthIQ, rated 92.23/100 by GPT-4 across varied complex prompts. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
 
+
+GGUF Files
+
+[Q4_K_M](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q4_K_M.gguf) - medium, balanced quality - recommended
+
+[Q_6_K](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q6_K.gguf) - very large, extremely low quality loss
+
+[Q8_0](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q8.gguf) - very large, extremely low quality loss - not recommended
+
 **Update**: Available on Ollama! ```ollama run stuehieyr/synthiq```
 
 # Yaml Config
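A note for readers of this commit: the hunk above moves the GGUF file list to the top of the card. As a minimal sketch of how the recommended Q4_K_M file can be used, assuming the `huggingface_hub` and `llama-cpp-python` packages (neither is prescribed by the card itself):

```python
# Minimal sketch, not part of the model card: fetch the recommended
# Q4_K_M quant and run one prompt locally with llama-cpp-python.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename are taken from the links in the hunk above.
model_path = hf_hub_download(
    repo_id="sethuiyer/SynthIQ_GGUF",
    filename="synthiq.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is a guess
out = llm("Explain model merging in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The Q6_K and Q8_0 files work the same way; only the `filename` argument changes.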
@@ -98,13 +108,5 @@ In summary, based on the evidence from our conversation, SynthIQ can be consider
 
 
 
-GGUF Files
-
-[Q4_K_M](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q4_K_M.gguf) - medium, balanced quality - recommended
-
-[Q_6_K](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q6_K.gguf) - very large, extremely low quality loss
-
-[Q8_0](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q8.gguf) - very large, extremely low quality loss - not recommended
-
 
 License is LLama2 license as uukuguy/speechless-mistral-six-in-one-7b is llama2 license.
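The Ollama one-liner added in the first hunk can also be driven programmatically. A minimal sketch against Ollama's local REST API, assuming the default port 11434 and that `ollama run stuehieyr/synthiq` (or `ollama pull`) has already fetched the model:

```python
# Minimal sketch, not part of the model card: send one prompt to the
# locally running Ollama server and print the reply.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "stuehieyr/synthiq",
        "prompt": "Summarize what a merged language model is.",
        "stream": False,  # ask for a single JSON object, not a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

With `stream` left at its default the endpoint returns newline-delimited JSON chunks instead; `stream: False` keeps the example short.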