Update README.md
README.md CHANGED
@@ -11,14 +11,14 @@ tags:
 - gguf-my-repo
 language:
 - en
-base_model: ZeroXClem/L3
+base_model: ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF
 pipeline_tag: text-generation
 library_name: transformers
 ---
 
 # ZeroXClem/L3.1-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF
-This model was converted to GGUF format from [`ZeroXClem/L3
-Refer to the [original model card](https://huggingface.co/ZeroXClem/L3
+This model was converted to GGUF format from [`ZeroXClem/L3-Aspire-Heart-Matrix-8B`](https://huggingface.co/ZeroXClem/L3.1-Aspire-Heart-Matrix-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/ZeroXClem/L3-Aspire-Heart-Matrix-8B) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -31,12 +31,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo ZeroXClem/L3
+llama-cli --hf-repo ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF --hf-file l3-aspire-heart-matrix-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo ZeroXClem/L3
+llama-server --hf-repo ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF --hf-file l3.1-aspire-heart-matrix-8b-q4_k_m.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -53,9 +53,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo ZeroXClem/L3
+./llama-cli --hf-repo ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF --hf-file l3-aspire-heart-matrix-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo ZeroXClem/L3
+./llama-server --hf-repo ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF --hf-file l3-aspire-heart-matrix-8b-q4_k_m.gguf -c 2048
 ```
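Beyond the llama-cli and llama-server commands added above, the same quantized file can be loaded programmatically. Below is a minimal sketch using llama-cpp-python, which is not mentioned in the README itself and is purely an assumption here; the repo id and the l3-aspire-heart-matrix-8b-q4_k_m.gguf filename are taken from the commands in the diff.

```python
# Minimal sketch (assumption: llama-cpp-python and huggingface_hub are installed,
# e.g. `pip install llama-cpp-python huggingface_hub`).
# Llama.from_pretrained downloads the GGUF file from the Hugging Face repo and loads it,
# mirroring what `llama-cli --hf-repo ... --hf-file ...` does.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF",  # repo id from the commands above
    filename="l3-aspire-heart-matrix-8b-q4_k_m.gguf",           # Q4_K_M file name from the commands above
    n_ctx=2048,                                                  # same context size as the server example
)

# Same prompt as the CLI example in the README.
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```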