Update README.md
# NGalrion/Margnum-12B-v1-Q4_K_S-GGUF
This model was converted to GGUF format from [`GalrionSoftworks/MagnusIntellectus-12B-v1`](https://huggingface.co/GalrionSoftworks/MagnusIntellectus-12B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/GalrionSoftworks/Margnum-12B-v1) for more details on the model.
## Use with llama.cpp
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF --hf-file magnusintellectus-12b-v1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
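The `-p` flag gives a one-shot completion. For an interactive chat instead, a minimal sketch with a few common generation flags, assuming a recent llama.cpp build (flag spellings occasionally change between releases):

```bash
# Conversation mode (-cnv) with a 4096-token context, moderate sampling
# temperature, and at most 256 generated tokens per response.
llama-cli --hf-repo NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF \
  --hf-file magnusintellectus-12b-v1-q4_k_s.gguf \
  -cnv -c 4096 --temp 0.7 -n 256
```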
### Server:
```bash
llama-server --hf-repo NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF --hf-file magnusintellectus-12b-v1-q4_k_s.gguf -c 2048
```
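Once the server is up, it can be queried over HTTP. A minimal sketch against the OpenAI-compatible chat endpoint that llama-server exposes, assuming the default listen address of 127.0.0.1:8080 (override with `--host`/`--port`):

```bash
# POST a chat request to the local llama-server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 128
      }'
```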
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub with `git clone https://github.com/ggerganov/llama.cpp`.

Step 2: Move into the llama.cpp folder and build it with `cd llama.cpp && LLAMA_CURL=1 make`.
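Note that newer llama.cpp revisions have moved from the Makefile to CMake, so if `make` no longer accepts that flag, a sketch of the equivalent CMake invocation (binaries end up under `build/bin/`):

```bash
# CMake replacement for the LLAMA_CURL=1 make build on current llama.cpp
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```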
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF --hf-file magnusintellectus-12b-v1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF --hf-file magnusintellectus-12b-v1-q4_k_s.gguf -c 2048
```
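If you would rather fetch the quantized file once and run it from local disk, one option is the `huggingface-cli` tool from the `huggingface_hub` Python package; the `./models` directory below is an arbitrary choice:

```bash
# Download the single GGUF file from the Hub (pip install huggingface_hub)
huggingface-cli download NGalrion/MagnusIntellectus-12B-v1-Q4_K_S-GGUF \
  magnusintellectus-12b-v1-q4_k_s.gguf --local-dir ./models

# Run against the local file with -m instead of --hf-repo/--hf-file
./llama-cli -m ./models/magnusintellectus-12b-v1-q4_k_s.gguf -p "The meaning to life and the universe is"
```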