sahil2801 committed on
Commit 46af0c9
1 Parent(s): 60b0815

Update README.md

Files changed (1)
  1. README.md +43 -1
README.md CHANGED
language:
- en
tags:
- code
---

# Glaive-coder-7b

Glaive-coder-7b is a 7B parameter code model trained on a dataset of ~140k programming-related problems and solutions generated from Glaive's synthetic data generation platform.

The model is fine-tuned from the CodeLlama-7b model.

## Usage:

The model is trained to act as a code assistant and can handle both single-instruction following and multi-turn conversations.
It follows the same prompt format as CodeLlama-7b-Instruct:
```
<s>[INST]
<<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg }} [/INST] {{ model_answer }} </s>
<s>[INST] {{ user_msg }} [/INST]
```
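
For multi-turn use, the conversation can be flattened into this format by concatenating completed turns and leaving the latest user message open. Below is a minimal sketch of one way to do that; the `build_prompt` helper and the example messages are illustrative assumptions, not part of the original card.

```python
# Sketch (assumption): assemble a multi-turn conversation into the
# CodeLlama-Instruct prompt format shown above.

def build_prompt(system_prompt, turns, next_user_msg):
    """turns: list of (user_msg, model_answer) pairs from earlier exchanges."""
    prompt = ""
    for i, (user_msg, model_answer) in enumerate(turns):
        # The system prompt is wrapped into the first user message.
        if i == 0 and system_prompt:
            user_msg = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_msg}"
        prompt += f"<s>[INST] {user_msg} [/INST] {model_answer} </s>"
    if not turns and system_prompt:
        next_user_msg = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{next_user_msg}"
    # The final user turn is left open for the model to complete.
    prompt += f"<s>[INST] {next_user_msg} [/INST]"
    return prompt

print(build_prompt(
    "You are a helpful coding assistant.",
    [("Reverse a string in Python.", "def reverse(s):\n    return s[::-1]")],
    "Now add type hints.",
))
```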

You can run the model as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/glaive-coder-7b")
model = AutoModelForCausalLM.from_pretrained("glaiveai/glaive-coder-7b").half().cuda()

def fmt_prompt(prompt):
    return f"<s> [INST] {prompt} [/INST]"

# Example instruction; replace with your own prompt.
prompt = "Write a Python function that checks whether a number is prime."

inputs = tokenizer(fmt_prompt(prompt), return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```
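
If you prefer to let `transformers` handle dtype and device placement instead of calling `.half().cuda()`, a loading variant like the one below should be equivalent. This is an assumption, not part of the original card; `device_map="auto"` requires the `accelerate` package.

```python
# Alternative loading sketch (assumption, not from the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/glaive-coder-7b")
model = AutoModelForCausalLM.from_pretrained(
    "glaiveai/glaive-coder-7b",
    torch_dtype=torch.float16,  # load weights in fp16, matching .half()
    device_map="auto",          # place layers on the available GPU(s)
)
```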

## Benchmarks

The model achieves a 63.1% pass@1 on HumanEval and a 45.2% pass@1 on MBPP. However, these benchmarks are not representative of real-world usage of code models, so we are launching the [Code Models Arena](https://arena.glaive.ai/) to let users vote on model outputs. This will give us a better understanding of user preferences for code models and help us develop new and better benchmarks. We plan to release the Arena results as soon as we have a sufficient amount of data.
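
For reference, pass@1 is commonly computed with the standard unbiased pass@k estimator (k=1). The short sketch below illustrates that estimator only; the function name and the sample counts are hypothetical and not taken from the original card.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples per problem, c of them pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# For k=1 this reduces to c / n, e.g. 120 passing out of 200 samples:
print(pass_at_k(200, 120, 1))  # 0.6
```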