deniskokosss committed "Update README.md" (commit 268e39b, parent a76237b)

Files changed (1): README.md (+2 −3)
README.md CHANGED
@@ -46,9 +46,8 @@ DatasetDict({
 ```
 ## How to evaluate your models
 To evaluate code generation capabilities of your models on HumanEval_ru please follow these steps (example is for [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):
-1. Clone and setup [Code Generation LM Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness)
-2. Copy our files lm_eval/tasks/humaneval_ru.py and lm_eval/tasks/__init__.py to lm_eval/tasks of the cloned repo
-3. Run evaluation (WARNING: generated code is executed, it may be unsafe) with the following command
+1. Clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
+2. Run evaluation (WARNING: generated code is executed, it may be unsafe) with the following command
 ```console
 # mkdir -p ./outs/humaneval_ru
 # mkdir -p ./results/humaneval_ru
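The simplified steps from the updated README can be sketched as a shell session. This is a sketch under assumptions: the harness entry point (`main.py`) and the flags shown (`--model`, `--tasks`, `--allow_code_execution`, `--save_generations_path`, `--metric_output_path`) follow the upstream bigcode-evaluation-harness conventions and may differ between versions of the NLP-Core-Team fork; check its README before running.

```shell
# Step 1 (assumed layout): clone the fork and install it in editable mode.
git clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
cd bigcode-evaluation-harness
pip install -e .

# Output directories used by the README's example commands.
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru

# Step 2: run the evaluation.
# WARNING: generated code is executed -- run inside a sandbox/container.
# Flag names below are assumptions based on the upstream harness CLI.
accelerate launch main.py \
  --model codellama/CodeLlama-7b-Python-hf \
  --tasks humaneval_ru \
  --allow_code_execution \
  --save_generations \
  --save_generations_path ./outs/humaneval_ru/generations.json \
  --metric_output_path ./results/humaneval_ru/metrics.json
```

Keeping generations and metrics in separate directories, as the README's `mkdir` lines suggest, makes it easy to rescore saved generations later without rerunning the model.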