jxtngx committed
Commit
7bd9f17
1 Parent(s): 0c34d23

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +53 -0
README.md ADDED
---
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
tags:
- llama-cpp
- gguf-my-repo
---

# jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Minitron-4B-Width-Base`](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
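To sanity-check the install, the binaries report their build info with `--version` (this check is an addition to the original card):

```bash
# should print llama.cpp's version and build details if the install succeeded
llama-cli --version
```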
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF --hf-file llama-3.1-minitron-4b-width-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
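By default the CLI keeps generating until the model decides to stop; if you want a bounded response, `llama-cli` accepts `-n` (maximum tokens to predict). A minimal sketch of the same command with a cap added:

```bash
# identical invocation, but generation stops after at most 128 new tokens
llama-cli --hf-repo jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF --hf-file llama-3.1-minitron-4b-width-base-q4_k_m.gguf -p "The meaning to life and the universe is" -n 128
```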

### Server:
```bash
llama-server --hf-repo jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF --hf-file llama-3.1-minitron-4b-width-base-q4_k_m.gguf -c 2048
```
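Once running, `llama-server` exposes an OpenAI-compatible HTTP API (host 127.0.0.1, port 8080 by default). A minimal request sketch, assuming those defaults:

```bash
# POST a chat completion to the local server (default address assumed)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 128
      }'
```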

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
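As a concrete sketch of the hardware-specific case mentioned above, a CUDA-enabled build would simply combine the two flags (this assumes a Linux machine with the CUDA toolkit installed):

```bash
# build with the downloader enabled and NVIDIA GPU offload compiled in
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```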

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF --hf-file llama-3.1-minitron-4b-width-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF --hf-file llama-3.1-minitron-4b-width-base-q4_k_m.gguf -c 2048
```
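If you prefer to manage the weights yourself rather than relying on `--hf-repo`/`--hf-file`, one alternative sketch (assuming the `huggingface_hub` package is installed) is to download the GGUF file first and point the binary at the local path with `-m`:

```bash
# fetch the quantized weights into the current directory
huggingface-cli download jxtngx/Llama-3.1-Minitron-4B-Width-Base-Q4_K_M-GGUF llama-3.1-minitron-4b-width-base-q4_k_m.gguf --local-dir .
# run against the local file
./llama-cli -m llama-3.1-minitron-4b-width-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```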