jxtngx committed on
Commit 20b7765
1 Parent(s): c6f470f

Update README.md

Files changed (1)
  1. README.md +2 -41
README.md CHANGED
@@ -17,6 +17,7 @@ tags:
  - distillation
  - function calling
  - json mode
+ - llama-cpp-python
  - llama-cpp
  - gguf-my-repo
  widget:
@@ -35,44 +36,4 @@ model-index:

  # jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF
  This model was converted to GGUF format from [`NousResearch/Hermes-2-Pro-Mistral-7B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux).
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF --hf-file hermes-2-pro-mistral-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF --hf-file hermes-2-pro-mistral-7b-q4_k_m.gguf -c 2048
- ```
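-
- Once the server is running you can send it HTTP requests. The snippet below is a minimal sketch rather than part of the original card: it assumes the server's default address of `http://localhost:8080`, its OpenAI-compatible `/v1/chat/completions` route, and an arbitrary `max_tokens` value.
- ```python
- # Hedged sketch: assumes llama-server is listening on http://localhost:8080
- # (its default) and exposes the OpenAI-compatible chat completions endpoint.
- import json
- import urllib.request
-
- payload = {
-     "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
-     "max_tokens": 64,
- }
- req = urllib.request.Request(
-     "http://localhost:8080/v1/chat/completions",
-     data=json.dumps(payload).encode("utf-8"),
-     headers={"Content-Type": "application/json"},
- )
- with urllib.request.urlopen(req) as resp:
-     print(json.load(resp)["choices"][0]["message"]["content"])
- ```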
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```bash
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```bash
- cd llama.cpp && LLAMA_CURL=1 make
- # e.g. for NVIDIA GPUs on Linux: LLAMA_CURL=1 LLAMA_CUDA=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```bash
- ./llama-cli --hf-repo jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF --hf-file hermes-2-pro-mistral-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```bash
- ./llama-server --hf-repo jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF --hf-file hermes-2-pro-mistral-7b-q4_k_m.gguf -c 2048
- ```
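-
- Given the repo's `llama-cpp-python` tag, the same quantized file can also be loaded from Python. The following is a minimal sketch, not part of the original card: it uses llama-cpp-python's `Llama.from_pretrained` helper (available in recent releases; requires `huggingface-hub`), and `n_ctx=2048` and `max_tokens=64` are assumed example values.
- ```python
- # Hedged sketch using llama-cpp-python (pip install llama-cpp-python huggingface-hub);
- # Llama.from_pretrained downloads the GGUF from this repo via huggingface-hub.
- from llama_cpp import Llama
-
- llm = Llama.from_pretrained(
-     repo_id="jxtngx/Hermes-2-Pro-Mistral-7B-Q4_K_M-GGUF",
-     filename="hermes-2-pro-mistral-7b-q4_k_m.gguf",
-     n_ctx=2048,  # context size, mirroring the -c 2048 used for llama-server
- )
-
- completion = llm("The meaning to life and the universe is", max_tokens=64)
- print(completion["choices"][0]["text"])
- ```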
 
+ Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) for more details on the model.