---
pipeline_tag: text-generation
base_model: bigcode/starcoder2-15b
datasets:
- bigcode/self-oss-instruct-sc2-exec-filter-50k
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-15b-instruct-v0.1
results:
- task:
type: text-generation
dataset:
name: LiveCodeBench (code generation)
type: livecodebench-codegeneration
metrics:
- type: pass@1
value: 20.4
- task:
type: text-generation
dataset:
name: LiveCodeBench (self repair)
type: livecodebench-selfrepair
metrics:
- type: pass@1
value: 20.9
- task:
type: text-generation
dataset:
name: LiveCodeBench (test output prediction)
type: livecodebench-testoutputprediction
metrics:
- type: pass@1
value: 29.8
- task:
type: text-generation
dataset:
name: LiveCodeBench (code execution)
type: livecodebench-codeexecution
metrics:
- type: pass@1
value: 28.1
- task:
type: text-generation
dataset:
name: HumanEval
type: humaneval
metrics:
- type: pass@1
value: 72.6
- task:
type: text-generation
dataset:
name: HumanEval+
type: humanevalplus
metrics:
- type: pass@1
value: 63.4
- task:
type: text-generation
dataset:
name: MBPP
type: mbpp
metrics:
- type: pass@1
value: 75.2
- task:
type: text-generation
dataset:
name: MBPP+
type: mbppplus
metrics:
- type: pass@1
value: 61.2
- task:
type: text-generation
dataset:
name: DS-1000
type: ds-1000
metrics:
- type: pass@1
value: 40.6
quantized_by: bartowski
---

# Exllama v2 Quantizations of starcoder2-15b-instruct-v0.1
Using [turboderp's ExLlamaV2](https://github.com/turboderp/exllamav2) v0.0.20 for quantization.

Each branch contains a quantization at a different bits per weight; the `main` branch holds only the `measurement.json` used for further conversions, so download one of the other branches for the model itself (see below).
Original model: https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1
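
Since each quantization lives in its own branch, you can list the available branches programmatically before picking one. A minimal sketch using huggingface_hub (not part of the original instructions; the names should echo the table below):

```python
# List the quantization branches of this repo with huggingface_hub.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("bartowski/starcoder2-15b-instruct-v0.1-exl2")
print([branch.name for branch in refs.branches])
# expected to include: main, 8_0, 6_5, 5_0, 4_25, 3_5
```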
## Prompt format

```
<|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

### Instruction
{prompt}

### Response
<|endoftext|>
```
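
For a quick smoke test after downloading, the template above can be applied with the ExLlamaV2 Python API. This is a hedged sketch against the v0.0.20-era API, assuming the 6_5 branch was downloaded to the directory used below; the sampler values and instruction are placeholders:

```python
# Sketch: load an exl2 quant and generate with the prompt template above.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "starcoder2-15b-instruct-v0.1-exl2-6_5"  # downloaded branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.2  # placeholder sampling value

instruction = "Write a Python function that reverses a string."
prompt = (
    "<|endoftext|>You are an exceptionally intelligent coding assistant "
    "that consistently delivers accurate and reliable responses to user instructions.\n\n"
    f"### Instruction\n{instruction}\n\n### Response\n"
)
print(generator.generate_simple(prompt, settings, 256))
```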
## Available sizes

Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
---|---|---|---|---|---|---|
8_0 | 8.0 | 8.0 | 15.8 GB | 16.8 GB | 18.1 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
6_5 | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, recommended. |
5_0 | 5.0 | 6.0 | 11.0 GB | 12.0 GB | 13.2 GB | Slightly lower quality vs 6.5. |
4_25 | 4.25 | 6.0 | 9.5 GB | 10.5 GB | 11.8 GB | GPTQ equivalent bits per weight. |
3_5 | 3.5 | 6.0 | 8.1 GB | 9.1 GB | 10.4 GB | Lower quality, not recommended. |
## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2 starcoder2-15b-instruct-v0.1-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like `_` in folder names sometimes?):

```shell
huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6.5 --local-dir-use-symlinks False
```
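
As an alternative to the CLI, the same download can be scripted with huggingface_hub's `snapshot_download` (a sketch; the target directory name is a placeholder):

```python
# Sketch: download the 6_5 branch from Python instead of the CLI.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/starcoder2-15b-instruct-v0.1-exl2",
    revision="6_5",  # pick any branch from the table above
    local_dir="starcoder2-15b-instruct-v0.1-exl2-6_5",
)
```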
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski