---
license: mit
task_categories:
- text-generation
language:
- ru
- en
tags:
- code
size_categories:
- n<1K
---
# HumanEval_ru Dataset

## Dataset Summary
This is a version of the Code Generation HumanEval dataset translated into Russian.
## Supported tasks

The task is to generate the body of a function from its signature and docstring. The programming problems are written in Python and contain Russian natural-language text in comments and docstrings.
## Task example

```python
from typing import List


def string_xor(a: str, b: str) -> str:
    """
    Входными данными являются две строки a и b, состоящие только из 1 и 0.
    Выполните двоичное XOR для этих входных данных и верните результат также в виде строки.
    >>> string_xor('010', '110')
    '100'
    """
    # Your code here
```
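For reference, here is one way the body could be completed. This is a minimal sketch, not necessarily the dataset's `canonical_solution`:

```python
def string_xor(a: str, b: str) -> str:
    # XOR aligned characters: '1' where the bits differ, '0' where they match.
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

assert string_xor('010', '110') == '100'
```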
## Dataset structure

Please refer to the structure of the original HumanEval dataset.
## Translation

Textual descriptions of the tasks were translated automatically via the Yandex.Translate API and then manually edited. Feel free to report errors in the translations.
## Usage

### Load dataset

```python
from datasets import load_dataset

load_dataset('NLPCoreTeam/humaneval_ru')
```

```
DatasetDict({
    train: Dataset({
        features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point', 'signature', 'docstring', 'context', 'instruction', 'instruction_noexamples'],
        num_rows: 164
    })
})
```
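Individual problems can then be inspected by indexing into the train split. The comments below describe the fields; the example values are illustrative, not verbatim dataset contents:

```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru')
sample = ds['train'][0]

print(sample['task_id'])      # problem identifier, e.g. "HumanEval/0"
print(sample['prompt'])       # signature plus Russian docstring to complete
print(sample['entry_point'])  # name of the function exercised by the tests
```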
## How to evaluate your models

To evaluate the code generation capabilities of your models on HumanEval_ru, follow these steps (the example below is for CodeLlama-7b-Python):

- Clone and set up the Code Generation LM Evaluation Harness
- Copy our files `lm_eval/tasks/humaneval_ru.py` and `lm_eval/tasks/__init__.py` into `lm_eval/tasks` of the cloned repo
- Run the evaluation (WARNING: generated code is executed, which may be unsafe) with the following command:
```bash
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
accelerate launch main.py \
        --model codellama/CodeLlama-7b-Python-hf \
        --max_length_generation 512 \
        --tasks humaneval_ru \
        --use_auth_token \
        --temperature 0.2 \
        --n_samples 20 \
        --precision fp16 \
        --batch_size 1 \
        --allow_code_execution \
        --save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
        --metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
- The resulting metrics for CodeLlama-7b-Python should be:

```json
"humaneval_ru": {
    "pass@1": 0.35,
    "pass@10": 0.5122803695209872
},
```
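As a sanity check on reported numbers, per-task pass@k can be recomputed with the unbiased estimator from the original HumanEval paper (Chen et al., 2021). A minimal sketch, where `n` is the number of generations per task and `c` the number that passed:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimator: 1 - C(n - c, k) / C(n, k).
    if n - c < k:
        return 1.0  # fewer than k failing samples: any k-draw contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n_samples=20 and, say, 8 passing generations for one task:
print(pass_at_k(20, 8, 1))   # 0.4 (equals c / n when k=1)
print(pass_at_k(20, 8, 10))  # ~0.9996
```

The reported metric is this quantity averaged over all 164 tasks.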