```
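Evaluation on HumanEval-style datasets works by appending a model's completion to a task's `prompt` and executing the task's `test` code against it — this is why harnesses warn that generated code gets executed. The sketch below illustrates the idea with a toy task: the field names (`prompt`, `test`, `entry_point`, `canonical_solution`) follow the original HumanEval schema, and the task contents here are invented for illustration only.

```python
# Minimal illustration of functional-correctness checking for
# HumanEval-style tasks: a model completion is appended to the task
# prompt, the task's unit tests are executed against the result, and
# the task counts as solved only if they all pass.
# This toy task is a stand-in; real tasks come from the dataset.

task = {
    "prompt": "def add(a, b):\n",                 # function signature the model must complete
    "canonical_solution": "    return a + b\n",   # reference solution
    "test": (
        "def check(candidate):\n"
        "    assert candidate(2, 3) == 5\n"
        "    assert candidate(-1, 1) == 0\n"
    ),
    "entry_point": "add",
}

def passes(task, completion):
    """Run the task's tests against prompt + completion.

    WARNING: exec() runs arbitrary code. Real harnesses add timeouts
    and process isolation before doing this.
    """
    program = task["prompt"] + completion + "\n" + task["test"]
    namespace = {}
    try:
        exec(program, namespace)
        namespace["check"](namespace[task["entry_point"]])
        return True
    except Exception:
        return False

print(passes(task, "    return a + b"))   # correct completion -> True
print(passes(task, "    return a - b"))   # wrong completion   -> False
```

Real harnesses wrap this execution in a separate process with timeouts and resource limits; never run untrusted completions directly in your main process.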
## How to evaluate your models

To evaluate the code generation capabilities of your models on HumanEval_ru, follow these steps (the example uses [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):

1. Clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
2. Run the evaluation (WARNING: generated code is executed and may be unsafe) with the following command:

```console
# mkdir -p ./outs/humaneval_ru
# mkdir -p ./results/humaneval_ru