---
extra_gated_prompt: '## Terms of Use for The Stack
  The Stack dataset is a collection of source code in over 300 programming languages.
  We ask that you read and acknowledge the following points before using the dataset:
  1. The Stack is a collection of source code from repositories with various licenses.
  Any use of all or part of the code gathered in The Stack must abide by the terms
  of the original licenses, including attribution clauses when relevant. We facilitate
  this by providing provenance information for each data point.
  2. The Stack is regularly updated to enact validated data removal requests. By clicking
  on "Access repository", you agree to update your own version of The Stack to the
  most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7).
  If you have questions about dataset versions and allowed uses, please also ask them
  in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new).
  We will also notify users via email when the latest usable version changes.
  3. To host, share, or otherwise provide access to The Stack dataset, you must include
  [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack)
  and require users to agree to it.
  By clicking on "Access repository" below, you accept that your contact information
  (email address and username) can be shared with the dataset maintainers as well.'
extra_gated_fields:
  Email: text
  I have read the License and agree with its terms: checkbox
dataset_info:
  features:
  - name: hexsha
    dtype: string
  - name: size
    dtype: int64
  - name: ext
    dtype: string
  - name: lang
    dtype: string
  - name: max_stars_repo_path
    dtype: string
  - name: max_stars_repo_name
    dtype: string
  - name: max_stars_repo_head_hexsha
    dtype: string
  - name: max_stars_repo_licenses
    sequence: string
  - name: max_stars_count
    dtype: float64
  - name: max_stars_repo_stars_event_min_datetime
    dtype: string
  - name: max_stars_repo_stars_event_max_datetime
    dtype: string
  - name: max_issues_repo_path
    dtype: string
  - name: max_issues_repo_name
    dtype: string
  - name: max_issues_repo_head_hexsha
    dtype: string
  - name: max_issues_repo_licenses
    sequence: string
  - name: max_issues_count
    dtype: float64
  - name: max_issues_repo_issues_event_min_datetime
    dtype: string
  - name: max_issues_repo_issues_event_max_datetime
    dtype: string
  - name: max_forks_repo_path
    dtype: string
  - name: max_forks_repo_name
    dtype: string
  - name: max_forks_repo_head_hexsha
    dtype: string
  - name: max_forks_repo_licenses
    sequence: string
  - name: max_forks_count
    dtype: float64
  - name: max_forks_repo_forks_event_min_datetime
    dtype: string
  - name: max_forks_repo_forks_event_max_datetime
    dtype: string
  - name: content
    dtype: string
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: alphanum_fraction
    dtype: float64
  - name: annotation_sites
    dtype: int64
  - name: type_definitions
    dtype: int64
  - name: loc
    dtype: int64
  - name: functions
    dtype: int64
  - name: loc_per_function
    dtype: float64
  - name: estimated_tokens
    dtype: int64
  splits:
  - name: test
    num_bytes: 258988
    num_examples: 50
  download_size: 132862
  dataset_size: 258988
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Dataset Card for "stenotype-eval-ts"

This is one of the datasets used to evaluate StenoType ([model](https://huggingface.co/nuprl/stenotype), [GitHub](https://github.com/nuprl/StenoType/)).
This dataset is called `stenotype-eval-dataset-subset` in the code and `TS-Sourced` in the dissertation.

The dataset is derived from the TypeScript portion of [The Stack (dedup)](https://huggingface.co/datasets/bigcode/the-stack-dedup), version 1.1, and then filtered.
For more details, see the StenoType GitHub repository and the dissertation.
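
The dataset is gated, so the Terms of Use above must be accepted with an authenticated Hugging Face account before downloading. As a minimal sketch (assuming the repository ID is `nuprl/stenotype-eval-ts`, which is not stated on this card), the test split can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Assumes the repository ID is "nuprl/stenotype-eval-ts" and that the gating
# terms have been accepted while logged in (e.g. via `huggingface-cli login`).
ds = load_dataset("nuprl/stenotype-eval-ts", split="test")

print(ds.num_rows)  # 50 TypeScript files
print(ds[0]["max_stars_repo_name"], ds[0]["loc"], ds[0]["estimated_tokens"])
```

Each row carries the file `content` plus provenance fields inherited from The Stack and the StenoType-specific metrics (`annotation_sites`, `type_definitions`, `loc`, `functions`, `loc_per_function`, `estimated_tokens`).
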
## Versions

The evaluation dataset is a sample of 50 files from the "full" dataset, which can be found in the branch `full`.
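
As a sketch, the `full` branch can be loaded by passing `revision` to `load_dataset` (again assuming the `nuprl/stenotype-eval-ts` repository ID; split names on that branch may differ from the default config):

```python
from datasets import load_dataset

# "full" is the branch mentioned above; the repository ID is an assumption.
full = load_dataset("nuprl/stenotype-eval-ts", revision="full")
print(full)  # DatasetDict with the unsampled data
```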