Update README.md
README.md

---
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
---

<p align="center">
  <img width="300px" alt="starcoder2-instruct" src="https://huggingface.co/TechxGenus/starcoder2-15b-instruct/resolve/main/starcoder2-instruct.jpg">
</p>

### starcoder2-instruct (not my model, I just quantized it)

We've fine-tuned starcoder2-15b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs. We used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the training process. It achieves **77.4 pass@1** on HumanEval-Python. The model uses the Alpaca instruction format (without the system prompt).

### Usage

Here are some examples of how to use our model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-15b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-15b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
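
The `generate` call returns the full token sequence, so the decoded output above includes the prompt as well as the completion. A minimal sketch for printing only the model's response, reusing `tokenizer`, `inputs`, and `outputs` from the example above:

```python
# inputs has shape (1, prompt_length); everything after that is newly generated,
# so slicing it off before decoding leaves only the model's response.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```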

With the `text-generation` pipeline:

```python
from transformers import pipeline
import torch

PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

generator = pipeline(
    model="TechxGenus/starcoder2-15b-instruct",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
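
Note that `max_length` counts the prompt tokens as well as the generated ones, and the pipeline echoes the prompt in `generated_text`. If you prefer to bound only the completion and print just the response, a small variation (reusing `generator` and `prompt` from above):

```python
# max_new_tokens limits only the continuation, and return_full_text=False
# drops the echoed prompt from the returned text.
result = generator(prompt, max_new_tokens=2048, return_full_text=False)
print(result[0]["generated_text"])
```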

### Note

The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployment.