Update README.md
README.md (changed)
@@ -46,6 +46,8 @@ outputs = model.generate(**inputs,do_sample=True,temperature=0.1,top_p=0.95,max_
 print(tokenizer.decode(outputs[0],skip_special_tokens=True,clean_up_tokenization_spaces=False))
 ```
 
-## Benchmarks
+## Benchmarks:
 
 The model achieves a 63.1% pass@1 on HumanEval and a 45.2% pass@1 on MBPP. These benchmarks are not representative of real-world usage of code models, however, so we are launching the [Code Models Arena](https://arena.glaive.ai/) to let users vote on model outputs, giving us a better understanding of user preferences for code models and a basis for new and better benchmarks. We plan to release the Arena results as soon as we have a sufficient amount of data.
+
+Join the Glaive [discord](https://discord.gg/fjQ4uf3yWD) for improvement suggestions, bug reports, and collaboration on more open-source projects.
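For context, the hunk header truncates the generation call (it cuts off at `max_`) and the diff shows only the tail of the usage snippet. Below is a minimal, self-contained sketch of what the full snippet plausibly looks like; the checkpoint name `glaiveai/glaive-coder-7b`, the prompt, and `max_new_tokens` are assumptions, not taken from the diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; the diff does not show it.
model_name = "glaiveai/glaive-coder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Illustrative prompt.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters copied from the hunk header; max_new_tokens is a guess
# for the argument the header truncates at "max_".
outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=512)

# This line appears verbatim as context in the diff.
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```

Since the updated paragraph quotes pass@1 scores, the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) is sketched below for reference; the sample counts in the example call are illustrative only, not Glaive's evaluation setup.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of which pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# For k=1 this reduces to c/n, e.g. 631 passing samples out of 1000:
print(pass_at_k(n=1000, c=631, k=1))  # -> 0.631, i.e. 63.1% pass@1
```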