---
base_model: bigcode/starcoder
datasets:
- bigcode/the-stack-dedup
library_name: transformers
license: bigcode-openrail-m
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- gguf-my-repo
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
extra_gated_prompt: "## Model License Agreement\nPlease read the BigCode [OpenRAIL-M\
  \ license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)\
  \ agreement before accepting it.\n "
extra_gated_fields:
  ? I accept the above license agreement, and will use the Model complying with the
    set of use restrictions and sharing requirements
  : checkbox
model-index:
- name: StarCoder
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval (Prompted)
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.408
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.336
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@1
      value: 0.527
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: DS-1000 (Overall Completion)
      type: ds1000
    metrics:
    - type: pass@1
      value: 0.26
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (C++)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 0.3155
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2101
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.1357
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.1761
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3022
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2302
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3079
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2389
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2608
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.1734
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3357
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.155
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.0124
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.0007
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2184
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2761
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.1046
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2274
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3229
      name: pass@1
      verified: false
---

# osukhoroslov-hw/starcoder-Q5_K_M-GGUF
This model was converted to GGUF format from [`bigcode/starcoder`](https://huggingface.co/bigcode/starcoder) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bigcode/starcoder) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo osukhoroslov-hw/starcoder-Q5_K_M-GGUF --hf-file starcoder-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo osukhoroslov-hw/starcoder-Q5_K_M-GGUF --hf-file starcoder-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo osukhoroslov-hw/starcoder-Q5_K_M-GGUF --hf-file starcoder-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo osukhoroslov-hw/starcoder-Q5_K_M-GGUF --hf-file starcoder-q5_k_m.gguf -c 2048
```
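
Since StarCoder is a code model, a code-completion prompt (like the `def print_hello_world():` example on the original model card) is usually more representative than free-form text. StarCoder was also trained with fill-in-the-middle (FIM) support; assuming the original tokenizer's FIM special tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`) carry over to this GGUF conversion, a FIM prompt can be assembled like this sketch:

```python
# Assemble a StarCoder fill-in-the-middle prompt. Token names follow the
# original bigcode/starcoder tokenizer; assumed to survive the GGUF conversion.
def fim_prompt(prefix: str, suffix: str) -> str:
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


# The result can then be passed to llama-cli, e.g.:
#   ./llama-cli --hf-repo osukhoroslov-hw/starcoder-Q5_K_M-GGUF \
#     --hf-file starcoder-q5_k_m.gguf -p "<assembled prompt>" -n 32
print(fim_prompt("def fib(n):\n    ", "\n    return result"))
```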