---
license: mit
task_categories:
  - text-generation
language:
  - ru
  - en
tags:
  - code
size_categories:
  - n<1K
---

# HumanEval_ru Dataset

## Dataset Summary

This is a version of the HumanEval code generation dataset translated into Russian.

## Supported tasks

The task is to generate the body of a function given its signature and docstring. The programming problems are written in Python and contain Russian natural-language text in comments and docstrings.

### Task example

```python
from typing import List

def string_xor(a: str, b: str) -> str:
    """
    Входными данными являются две строки a и b, состоящие только из 1 и 0.
    Выполните двоичное XOR для этих входных данных и верните результат также в виде строки.
    >>> string_xor('010', '110')
    '100'
    """
    # Your code here
```
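
For reference, a completion that satisfies this task's doctest could look like the sketch below. This is one possible solution, written here for illustration; it is not necessarily the dataset's `canonical_solution`.

```python
from typing import List

def string_xor(a: str, b: str) -> str:
    # XOR each pair of characters: '1' where the bits differ, '0' where they match
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))
```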

## Dataset structure

Please refer to the structure of the original HumanEval dataset.

## Translation

Textual descriptions of the tasks were translated automatically via the Yandex.Translate API and then edited manually. Feel free to report errors in the translations.

## Usage

### Load dataset

```python
from datasets import load_dataset
load_dataset('NLPCoreTeam/humaneval_ru')
```

```
DatasetDict({
    train: Dataset({
        features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point', 'signature', 'docstring', 'context', 'instruction', 'instruction_noexamples'],
        num_rows: 164
    })
})
```
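
A minimal sketch of inspecting a single record, assuming the field names listed above:

```python
from datasets import load_dataset

ds = load_dataset('NLPCoreTeam/humaneval_ru', split='train')
sample = ds[0]
print(sample['task_id'])      # task identifier
print(sample['prompt'])       # function signature plus Russian docstring
print(sample['entry_point'])  # name of the function exercised by the tests
```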

## How to evaluate your models

To evaluate the code generation capabilities of your model on HumanEval_ru, follow these steps (the example uses Codellama-7b-Python):

1. Clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
2. Run the evaluation (WARNING: generated code is executed, which may be unsafe) with the following command:

```bash
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
accelerate launch main.py \
  --model codellama/CodeLlama-7b-Python-hf \
  --max_length_generation 512 \
  --tasks humaneval_ru \
  --use_auth_token \
  --temperature 0.2 \
  --n_samples 20 \
  --precision fp16 \
  --batch_size 1 \
  --allow_code_execution \
  --save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
  --metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
3. The resulting metrics for Codellama-7b-Python should be:

```json
"humaneval_ru": {
    "pass@1": 0.35,
    "pass@10": 0.5122803695209872
},
```
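
To read the metrics back programmatically, something like the sketch below should work, assuming the file written to `--metric_output_path` is plain JSON with the structure shown above:

```python
import json

# path matches the --metric_output_path argument used above
with open('./results/humaneval_ru/codellama-7b-py.metrics') as f:
    results = json.load(f)

print(results['humaneval_ru']['pass@1'])
print(results['humaneval_ru']['pass@10'])
```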

## Benchmark

Evaluations of the StarCoder and CodeLlama models on HumanEval_ru and HumanEval are presented in the table below. For details on pass@1 and pass@10, please refer to the original HumanEval paper (Chen et al., 2021).

| model | RU pass@1 | RU pass@10 | EN pass@1 | EN pass@10 |
|------------------------|--------|--------|--------|--------|
| starcoderbase-1b       | 0.1420 | 0.1801 | 0.1509 | 0.2045 |
| starcoderbase-3b       | 0.1924 | 0.2606 | 0.2137 | 0.3289 |
| starcoderbase-7b       | 0.2515 | 0.3359 | 0.2868 | 0.3852 |
| starcoderbase-15b      | 0.2676 | 0.3872 | 0.3036 | 0.4611 |
| starcoder-15b-Python   | 0.3103 | 0.4132 | 0.3353 | 0.4931 |
| CodeLlama-7b-hf        | 0.2673 | 0.3688 | 0.2975 | 0.4351 |
| CodeLlama-7b-Python-hf | 0.3500 | 0.5122 | 0.3960 | 0.5761 |
| CodeLlama-13b-hf       | 0.3380 | 0.4884 | 0.3557 | 0.5489 |
| CodeLlama-13b-Python-hf| 0.4380 | 0.5796 | 0.4301 | 0.6226 |
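
For reference, pass@k in the HumanEval paper is estimated with an unbiased estimator: given n generated samples per task, of which c pass the tests, the per-task estimate is 1 - C(n-c, k) / C(n, k), averaged over tasks. A minimal sketch of the per-task estimator:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased per-task pass@k: 1 - C(n-c, k) / C(n, k),
    computed as a numerically stable running product."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```
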
### Script to reproduce the results in the table
```bash
#!/bin/bash
# use with https://github.com/NLP-Core-Team/bigcode-evaluation-harness

# RU
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder
do
  echo $MODELS_PATH/$model_name
  accelerate launch --mixed_precision="fp16" main.py \
    --model $MODELS_PATH/$model_name \
    --max_length_generation 512 \
    --tasks humaneval_ru \
    --use_auth_token \
    --temperature 0.2 \
    --n_samples 20 \
    --precision fp16 \
    --batch_size ${bs[$model_name]} \
    --allow_code_execution \
    --save_generations_path ./outs/humaneval_ru/$model_name.json \
    --metric_output_path ./results/humaneval_ru/$model_name.metrics
done

MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
  echo $MODELS_PATH/$model_name
  accelerate launch --mixed_precision="fp16" main.py \
    --model $MODELS_PATH/$model_name \
    --max_length_generation 512 \
    --tasks humaneval_ru \
    --use_auth_token \
    --temperature 0.2 \
    --n_samples 20 \
    --precision fp16 \
    --batch_size ${bs[$model_name]} \
    --allow_code_execution \
    --save_generations_path ./outs/humaneval_ru/$model_name.json \
    --metric_output_path ./results/humaneval_ru/$model_name.metrics
done

# EN

mkdir -p ./outs/humaneval
mkdir -p ./results/humaneval
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder 
do
  echo $MODELS_PATH/$model_name
  accelerate launch --mixed_precision="fp16" main.py \
    --model $MODELS_PATH/$model_name \
    --max_length_generation 512 \
    --tasks humaneval \
    --use_auth_token \
    --temperature 0.2 \
    --n_samples 20 \
    --precision fp16 \
    --batch_size ${bs[$model_name]} \
    --allow_code_execution \
    --save_generations_path ./outs/humaneval/$model_name.json \
    --metric_output_path ./results/humaneval/$model_name.metrics
done

MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
  echo $MODELS_PATH/$model_name
  accelerate launch --mixed_precision="fp16" main.py \
    --model $MODELS_PATH/$model_name \
    --max_length_generation 512 \
    --tasks humaneval \
    --use_auth_token \
    --temperature 0.2 \
    --n_samples 20 \
    --precision fp16 \
    --batch_size ${bs[$model_name]} \
    --allow_code_execution \
    --save_generations_path ./outs/humaneval/$model_name.json \
    --metric_output_path ./results/humaneval/$model_name.metrics
done
```
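
Once the runs finish, the per-model `.metrics` files can be collected into a single table. The helper below is a hypothetical sketch, assuming each file is the JSON written by `--metric_output_path` and keyed by task name as shown earlier:

```python
import glob
import json
import os

# hypothetical aggregator for the .metrics files produced by the script above
for task in ('humaneval_ru', 'humaneval'):
    for path in sorted(glob.glob(f'./results/{task}/*.metrics')):
        with open(path) as f:
            metrics = json.load(f)[task]
        model = os.path.basename(path).removesuffix('.metrics')
        print(f"{task}\t{model}\tpass@1={metrics['pass@1']:.4f}\tpass@10={metrics['pass@10']:.4f}")
```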