---
language:
  - en
license: other
task_categories:
  - text-generation
pretty_name: LCA (Bug Localization)
tags:
  - code
dataset_info:
  config_name: bug_localization_data
  features:
    - name: repo_owner
      dtype: string
    - name: repo_name
      dtype: string
    - name: issue_url
      dtype: string
    - name: pull_url
      dtype: string
    - name: comment_url
      dtype: string
    - name: issue_title
      dtype: string
    - name: issue_body
      dtype: string
    - name: base_sha
      dtype: string
    - name: head_sha
      dtype: string
    - name: diff_url
      dtype: string
    - name: changed_files
      dtype: string
    - name: changed_files_exts
      dtype: string
    - name: java_changed_files_count
      dtype: int64
    - name: kt_changed_files_count
      dtype: int64
    - name: py_changed_files_count
      dtype: int64
    - name: code_changed_files_count
      dtype: int64
    - name: pull_create_at
      dtype: string
    - name: stars
      dtype: int64
  splits:
    - name: train
      num_bytes: 3179729
      num_examples: 1000
  download_size: 1275339
  dataset_size: 3179729
configs:
  - config_name: bug_localization_data
    data_files:
      - split: train
        path: bug_localization_data/train-*
---

# LCA (Bug Localization)

This is the data for the Bug Localization benchmark as part of LCA.

## How-to

1. Since the dataset is private, if you haven't used HF Hub before, add your token via `huggingface-cli` first:

   ```shell
   huggingface-cli login
   ```
    
2. List all the available configs via `datasets.get_dataset_config_names` and choose an appropriate one.

3. Load the data via `load_dataset`:

   ```python
   from datasets import load_dataset

   configuration = "TODO"  # select a configuration
   dataset = load_dataset("JetBrains-Research/lca-bug-localization", configuration, split="test")
   ```
    

   Some notes:

   - All the data we have is considered to be in the test split.

## Dataset Structure

TODO: describe the overall structure of the repo

### Bug localization data

This section describes the configuration with full data about each commit (the one without the `-labels` suffix).

Each example has the following fields:

| Field | Description |
|---|---|
| `repo_owner` | Bug issue repository owner. |
| `repo_name` | Bug issue repository name. |
| `issue_url` | GitHub link to the issue: `https://github.com/{repo_owner}/{repo_name}/issues/{issue_id}`. |
| `pull_url` | GitHub link to the pull request: `https://github.com/{repo_owner}/{repo_name}/pull/{pull_id}`. |
| `comment_url` | GitHub link to the comment referencing the pull request from the issue: `https://github.com/{repo_owner}/{repo_name}/pull/{pull_id}#issuecomment-{comment_id}`. |
| `issue_title` | Issue title. |
| `issue_body` | Issue body. |
| `base_sha` | Pull request base SHA. |
| `head_sha` | Pull request head SHA. |
| `diff_url` | Pull request diff URL between base and head SHA: `https://github.com/{repo_owner}/{repo_name}/compare/{base_sha}...{head_sha}`. |
| `diff` | Pull request diff content. |
| `changed_files` | List of changed files parsed from the diff. |
| `changed_files_exts` | Dict from changed file extension to count. |
| `java_changed_files_count` | Number of changed `.java` files. |
| `kt_changed_files_count` | Number of changed `.kt` files. |
| `py_changed_files_count` | Number of changed `.py` files. |
| `code_changed_files_count` | Number of changed `.java`, `.kt`, or `.py` files. |
| `pull_create_at` | Date of pull request creation in the format `yyyy-mm-ddThh:mm:ssZ`. |
| `stars` | Number of repo stars. |
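As an illustration of working with a row, the sketch below rebuilds `diff_url` from the template above and parses `pull_create_at`. The row values are invented, and the guess that `changed_files` / `changed_files_exts` are serialized as Python literals is an assumption, not something stated in this card:

```python
import ast
from datetime import datetime, timezone

# Hypothetical example row (all values invented for illustration)
row = {
    "repo_owner": "octocat",
    "repo_name": "hello-world",
    "base_sha": "abc123",
    "head_sha": "def456",
    "changed_files": "['src/Main.java', 'src/Util.kt']",
    "changed_files_exts": "{'.java': 1, '.kt': 1}",
    "pull_create_at": "2023-05-01T12:30:00Z",
}

# diff_url follows the compare template from the table above
diff_url = (f"https://github.com/{row['repo_owner']}/{row['repo_name']}"
            f"/compare/{row['base_sha']}...{row['head_sha']}")

# Parse the string-serialized fields (assumed to be Python literals)
changed_files = ast.literal_eval(row["changed_files"])
exts = ast.literal_eval(row["changed_files_exts"])

# pull_create_at uses the yyyy-mm-ddThh:mm:ssZ format
created = datetime.strptime(row["pull_create_at"],
                            "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

print(diff_url)
print(changed_files, exts, created.isoformat())
```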

### Repos data

TODO: describe repos data as .tar.gz archives with list of repos metadata
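Until the TODO above is filled in, here is a hedged sketch of unpacking one such `.tar.gz` archive with Python's standard `tarfile`; the archive name and internal layout are invented for the demo:

```python
import io
import os
import tarfile
import tempfile

def unpack_repo(archive_path: str, dest: str) -> list:
    """Extract a repo archive and return the member names."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest)
        return tar.getnames()

# Self-contained demo: build a tiny stand-in archive, then unpack it.
with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "repo.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        data = b"print('hello')\n"
        info = tarfile.TarInfo("repo/main.py")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    members = unpack_repo(archive, tmp)

print(members)
```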