---
license: mit
task_categories:
- text-generation
language:
- ru
- en
tags:
- code
size_categories:
- n<1K
---
# HumanEval_ru Dataset
## Dataset Summary
This is a version of the code generation [HumanEval dataset](https://huggingface.co/datasets/openai_humaneval) translated into Russian.
## Supported tasks
The task is to generate the body of a function from its signature and docstring. The programming problems are written in Python and contain Russian natural-language text in comments and docstrings.
## Task example
```python
from typing import List

def string_xor(a: str, b: str) -> str:
    """
    Входными данными являются две строки a и b, состоящие только из 1 и 0.
    Выполните двоичное XOR для этих входных данных и верните результат также в виде строки.
    >>> string_xor('010', '110')
    '100'
    """
    # Your code here
```
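For reference, the docstring above asks for the bitwise XOR of two binary strings. One possible completion (a valid function body a model might produce, not part of the dataset itself) is:
```python
    # '1' where the bits differ, '0' where they match
    return ''.join('0' if x == y else '1' for x, y in zip(a, b))
```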
## Dataset structure
Please refer to the structure of the [original HumanEval dataset](https://huggingface.co/datasets/openai_humaneval).
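On top of the original fields, each example also carries `signature`, `docstring`, `context`, `instruction`, and `instruction_noexamples` columns (see the `DatasetDict` output below). A minimal sketch for inspecting one example (the printed values are illustrative):
```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru', split='train')
example = ds[0]
for field in ('task_id', 'entry_point', 'signature'):
    print(field, '->', example[field])
print(example['prompt'])  # signature + Russian docstring, as in the task example above
```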
## Translation
Textual descriptions of tasks were translated automatically via Yandex.Translate API and then manually edited. Feel free to report errors in translations.
# Usage
## Load dataset
```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru')
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point', 'signature', 'docstring', 'context', 'instruction', 'instruction_noexamples'],
#         num_rows: 164
#     })
# })
```
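As a quick sanity check, a task's reference solution can be verified locally by executing `prompt + canonical_solution` together with its `test`, which (as in the original HumanEval) defines a `check(candidate)` function. A minimal sketch; note that it executes dataset code in the current process:
```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru', split='train')
ex = ds[0]

# Assemble a self-contained program: prompt (signature + Russian docstring),
# reference body, test suite, and a call to check() on the entry point.
program = ex['prompt'] + ex['canonical_solution'] + '\n' + ex['test'] + '\n'
program += f"check({ex['entry_point']})\n"

exec(program, {})  # raises AssertionError if any test fails
print(ex['task_id'], 'passed')
```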
## How to evaluate your models
To evaluate the code generation capabilities of your model on HumanEval_ru, follow these steps (the example uses [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):
1. Clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
2. Run the evaluation (WARNING: the generated code is executed, which may be unsafe) with the following command:
```console
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
accelerate launch main.py \
--model codellama/CodeLlama-7b-Python-hf \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size 1 \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
--metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
3. The resulting metrics for CodeLlama-7b-Python should be:
```json
"humaneval_ru": {
    "pass@1": 0.35,
    "pass@10": 0.5122803695209872
}
```
# Benchmark
Evaluations of [Starcoder](https://huggingface.co/bigcode/starcoder) and [Codellama](https://huggingface.co/codellama/CodeLlama-7b-hf) models on HumanEval_ru and HumanEval are presented in the table below. For further information on pass@1 and pass@10, please refer to the [original paper](https://arxiv.org/abs/2107.03374).
| model | RU Pass@1 | RU Pass@10 | EN Pass@1 | EN Pass@10 |
|:------------------------|--------------------------:|---------------------------:|--------------------------:|---------------------------:|
| starcoderbase-1b | 0.1420 | 0.1801 | 0.1509 | 0.2045 |
| starcoderbase-3b | 0.1924 | 0.2606 | 0.2137 | 0.3289 |
| starcoderbase-7b | 0.2515 | 0.3359 | 0.2868 | 0.3852 |
| starcoderbase-15b | 0.2676 | 0.3872 | 0.3036 | 0.4611 |
| starcoder-15b-Python | 0.3103 | 0.4132 | 0.3353 | 0.4931 |
| CodeLlama-7b-hf | 0.2673 | 0.3688 | 0.2975 | 0.4351 |
| CodeLlama-7b-Python-hf | 0.3500 | 0.5122 | 0.3960 | 0.5761 |
| CodeLlama-13b-hf | 0.3380 | 0.4884 | 0.3557 | 0.5489 |
| CodeLlama-13b-Python-hf | 0.4380 | 0.5796 | 0.4301 | 0.6226 |
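The pass@k values above are computed with the unbiased estimator from the paper linked above, using n = 20 samples per task (matching `--n_samples 20` in the commands below). A minimal sketch of the per-task estimate:
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al., 2021 (arXiv:2107.03374).
    n: samples generated per task, c: samples that pass the tests.
    The benchmark score is the mean of this estimate over all 164 tasks."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 7 of 20 correct samples on every task gives pass@1 = 0.35
print(pass_at_k(20, 7, 1))  # 0.35
```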
<details>
<summary>Script to reproduce the results in the table</summary>

```bash
#!/bin/bash
# use with https://github.com/NLP-Core-Team/bigcode-evaluation-harness
# RU
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/$model_name.json \
--metric_output_path ./results/humaneval_ru/$model_name.metrics
done
MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/$model_name.json \
--metric_output_path ./results/humaneval_ru/$model_name.metrics
done
# EN
mkdir -p ./outs/humaneval
mkdir -p ./results/humaneval
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval/$model_name.json \
--metric_output_path ./results/humaneval/$model_name.metrics
done
MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval/$model_name.json \
--metric_output_path ./results/humaneval/$model_name.metrics
done
```
</details> |