---
license: mit
task_categories:
- text-generation
language:
- ru
- en
tags:
- code
size_categories:
- n<1K
---
# HumanEval_ru Dataset
## Dataset Summary
This is a version of the Code Generation [HumanEval dataset](https://huggingface.co/datasets/openai_humaneval) translated into Russian.
## Supported tasks
The task is to generate the body of a function from its signature and docstring. The programming problems are written in Python and contain Russian natural-language text in comments and docstrings.
## Task example
```python
from typing import List

def string_xor(a: str, b: str) -> str:
    """
    Входными данными являются две строки a и b, состоящие только из 1 и 0.
    Выполните двоичное XOR для этих входных данных и верните результат также в виде строки.
    >>> string_xor('010', '110')
    '100'
    """
    # Your code here
```
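In English, the docstring reads: "The inputs are two strings a and b, consisting only of 1s and 0s. Perform a binary XOR on these inputs and return the result, also as a string." For illustration, one body that satisfies the doctest above is sketched below; the dataset's actual reference implementation lives in the `canonical_solution` field and may differ.

```python
def string_xor(a: str, b: str) -> str:
    # Emit '1' where the aligned bits differ, '0' where they match.
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))
```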
## Dataset structure
Please refer to the structure of the [original HumanEval dataset](https://huggingface.co/datasets/openai_humaneval).
## Translation
Textual task descriptions were translated automatically via the Yandex.Translate API and then manually edited. Feel free to report errors in the translations.
# Usage
## Load dataset
```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru')
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point', 'signature', 'docstring', 'context', 'instruction', 'instruction_noexamples'],
#         num_rows: 164
#     })
# })
```
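Each of the 164 rows is a single programming problem. A minimal sketch of inspecting one row, assuming the fields mirror the original HumanEval layout (with `prompt` holding the context, signature, and Russian docstring, ready to feed to a model):

```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru')
row = ds['train'][0]

print(row['task_id'])    # problem identifier
print(row['signature'])  # the function signature to complete
print(row['docstring'])  # the Russian task description
print(row['prompt'])     # model input: context, signature, and docstring
```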
## How to evaluate your models
To evaluate the code generation capabilities of your models on HumanEval_ru, please follow these steps (the example uses [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):
1. Clone and set up the [Code Generation LM Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness).
2. Copy our files lm_eval/tasks/humaneval_ru.py and lm_eval/tasks/__init__.py into lm_eval/tasks of the cloned repo.
3. Run the evaluation (WARNING: generated code is executed, which may be unsafe) with the following command:
```console
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
accelerate launch main.py \
    --model codellama/CodeLlama-7b-Python-hf \
    --max_length_generation 512 \
    --tasks humaneval_ru \
    --use_auth_token \
    --temperature 0.2 \
    --n_samples 20 \
    --precision fp16 \
    --batch_size 1 \
    --allow_code_execution \
    --save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
    --metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
4. The resulting metrics for Codellama-7b-Python should be:
```json
"humaneval_ru": {
    "pass@1": 0.35,
    "pass@10": 0.5122803695209872
},
```
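pass@k here is presumably the unbiased estimator from the original HumanEval paper: for each problem, given n generated samples (here `--n_samples 20`) of which c pass the tests, the probability that at least one of k drawn samples is correct is 1 - C(n-c, k)/C(n, k), averaged over all problems. A minimal sketch of that estimator:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples generated, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k draw must contain at least one correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Averaging pass_at_k over all 164 tasks yields the pass@1 / pass@10 numbers above.
```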