---
license: other
language:
- en
tags:
- causal-lm
- code
- llama-cpp
- gguf-my-repo
metrics:
- code_eval
library_name: transformers
base_model: stabilityai/stable-code-instruct-3b
model-index:
- name: stabilityai/stable-code-instruct-3b
  results:
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (Python)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 32.4
      name: pass@1
      verified: false
    - type: pass@1
      value: 30.9
      name: pass@1
      verified: false
    - type: pass@1
      value: 32.1
      name: pass@1
      verified: false
    - type: pass@1
      value: 32.1
      name: pass@1
      verified: false
    - type: pass@1
      value: 24.2
      name: pass@1
      verified: false
    - type: pass@1
      value: 23.0
      name: pass@1
      verified: false
---

# AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF
This model was converted to GGUF format from [`stabilityai/stable-code-instruct-3b`](https://huggingface.co/stabilityai/stable-code-instruct-3b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/stabilityai/stable-code-instruct-3b) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
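
To verify the installation, you can print the build info; this is just a sanity check, not part of the original instructions:

```bash
# Confirm the llama.cpp binaries are installed and on your PATH
llama-cli --version
```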
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF --hf-file stable-code-instruct-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
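
Since this is an instruct-tuned model, conversation mode with conservative sampling tends to work well for code. A hedged variant of the command above; the flag values are illustrative choices, not settings from the original card:

```bash
# -cnv enables interactive chat mode, -c sets the context size,
# --temp lowers sampling randomness, -n caps the number of generated tokens
llama-cli --hf-repo AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF \
  --hf-file stable-code-instruct-3b-q4_k_m.gguf \
  -cnv -c 4096 --temp 0.2 -n 256
```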

### Server:
```bash
llama-server --hf-repo AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF --hf-file stable-code-instruct-3b-q4_k_m.gguf -c 2048
```
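
Once the server is running (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint. A minimal sketch, assuming the default host/port and the model's built-in chat template; the prompt is just an example:

```bash
# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.2,
    "max_tokens": 256
  }'
```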

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
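
Note that newer llama.cpp checkouts have replaced the Makefile build with CMake, so `make` may fail on a recent clone. A rough CMake equivalent, assuming a current llama.cpp revision (flag names can differ across versions):

```bash
# Configure with CURL support (needed for --hf-repo downloads) and build;
# add -DGGML_CUDA=ON for NVIDIA GPUs on Linux.
# Binaries end up under build/bin/ rather than the repository root.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```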

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF --hf-file stable-code-instruct-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF --hf-file stable-code-instruct-3b-q4_k_m.gguf -c 2048
```
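
If you would rather not rely on the built-in downloader, you can fetch the GGUF file yourself and point the binary at it with `-m`. A sketch using `huggingface-cli` (assumes `pip install huggingface_hub` has been run; this is not part of the original card):

```bash
# Download only the quantized weights from this repo, then run them locally
huggingface-cli download AIronMind/stable-code-instruct-3b-Q4_K_M-GGUF \
  stable-code-instruct-3b-q4_k_m.gguf --local-dir .
./llama-cli -m stable-code-instruct-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```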