# Dataset Card for Starcoder Data with Python Education and Language Scores

## Dataset Summary
The starcoderdata-python-edu-lang-score dataset contains the Python subset of the starcoderdata dataset, augmented with features that assess the educational quality of the code and classify the language of its comments. It was created to support high-quality, education-focused and language-aware training, in particular for models that leverage educational scores or target specific comment languages (e.g., English or Portuguese). The dataset is suitable for applications such as educational content evaluation and multilingual code understanding.
## Uses

### Direct Use
This dataset can be directly used to:
- Train models on code that has high educational value.
- Train language models that focus on specific languages in code comments (see the filtering sketch below).
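As an illustration, a minimal sketch of this kind of filtering with the `datasets` library. The repository ID below is a placeholder, and the exact label format of the `language` column (e.g., "en") is an assumption; the fields used are described in the Dataset Structure section.

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual namespace hosting the dataset.
ds = load_dataset("your-namespace/starcoderdata-python-edu-lang-score", split="train")

# Keep samples whose comments are classified as English with reasonable confidence
# and whose educational score is at least 3 out of 5. The "en" label format is
# an assumption and may differ in the released data.
filtered = ds.filter(
    lambda row: row["language"] == "en"
    and row["language_score"] >= 0.8
    and row["edu_int_score"] >= 3
)

print(f"Kept {len(filtered)} of {len(ds)} samples")
```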
### Out-of-Scope Use
The dataset is not intended for tasks unrelated to code content analysis, such as general NLP classification or non-educational content filtering.
## Dataset Structure
Each record in the dataset includes:
- `max_stars_repo_path`: The file path within the source repository.
- `max_stars_repo_name`: The name of the source repository.
- `content`: The original code content.
- `content_cleaned`: The content with specific metadata (e.g., reponame) removed for cleaner processing.
- `language`: The detected language of the code comments.
- `language_score`: The confidence score for the language classification.
- `comments`: The comments extracted from the code content.
- `edu_score`: The educational score representing the quality of the content (ranging from 0 to 5).
- `edu_int_score`: The integer representation of `edu_score`, rounded for simplified use cases.
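A short sketch of loading the dataset and inspecting these fields; the repository ID is a placeholder and should be replaced with the dataset's actual location.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual dataset location.
ds = load_dataset(
    "your-namespace/starcoderdata-python-edu-lang-score",
    split="train",
    streaming=True,
)

# Inspect the fields of the first record without downloading the full dataset.
record = next(iter(ds))
for key in ("max_stars_repo_name", "max_stars_repo_path", "language",
            "language_score", "edu_score", "edu_int_score"):
    print(key, "->", record[key])
```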
## Dataset Creation

### Curation Rationale
The starcoderdata-python-edu-lang-score dataset was created for two purposes:
- To identify high-quality code via an educational quality classifier.
- To filter samples by the natural language used to write the code comments.
### Data Collection and Processing
Data was collected from the Python subset of the starcoderdata dataset and processed with the following steps:
Content Cleaning:
- The preprocessing step removes metadata tags (e.g., `<reponame>`) from the raw content. This produces a standardized input for the subsequent classification and scoring steps.
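As an illustration, a minimal sketch of this kind of cleaning, assuming the metadata appears as inline tags such as `<reponame>` at the start of each sample; the exact tag names and layout are assumptions, not a description of the released pipeline.

```python
import re

# Tags of the form <reponame>owner/repo, <filename>path, etc. are assumed to
# prefix the raw content; the exact tag set may differ in practice.
METADATA_TAG = re.compile(r"<(reponame|filename|gh_stars)>[^\n<]*")

def clean_content(content: str) -> str:
    """Strip the assumed metadata tags and leading whitespace from raw content."""
    return METADATA_TAG.sub("", content).lstrip()

print(clean_content("<reponame>octocat/hello-world<filename>main.py\nprint('hi')\n"))
```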
Language Classification:
- Model Used: The FastText language identification model (lid.176.bin) was used to detect the language of comments within the code. The model supports 176 languages, which makes the detection robust across multilingual comments.
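A minimal sketch of how comment language might be detected with this model, assuming the comments for a sample have already been extracted and concatenated; the `fasttext` package and a local copy of `lid.176.bin` are required.

```python
import fasttext

# lid.176.bin is the pretrained FastText language identification model;
# download it from the FastText website before running this sketch.
model = fasttext.load_model("lid.176.bin")

comments = "Calcula a média dos valores de entrada."  # concatenated code comments
labels, probabilities = model.predict(comments.replace("\n", " "), k=1)

language = labels[0].replace("__label__", "")   # e.g., "pt"
language_score = float(probabilities[0])        # classifier confidence
print(language, language_score)
```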
Educational Scoring:
- Model Used: Hugging Face's HuggingFaceTB/python-edu-scorer, a sequence classification model trained to judge the educational value of Python code, was used to assign the educational scores.
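A minimal sketch of how a score could be produced with this model, following the usage pattern of similar educational-value regressors; the assumption that the model returns a single regression logit, and the rounding/clipping to the 0-5 range, are illustrative rather than a description of the exact pipeline used to build the dataset.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/python-edu-scorer")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceTB/python-edu-scorer")

code = 'def add(a, b):\n    """Return the sum of two numbers."""\n    return a + b\n'
inputs = tokenizer(code, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumes a single regression logit; round and clip to obtain the integer score.
edu_score = float(logits.squeeze().item())
edu_int_score = int(min(max(round(edu_score), 0), 5))
print(edu_score, edu_int_score)
```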
## Glossary
- Educational Score: A measure of the quality of code content based on its potential educational value, ranging from 0 (low quality) to 5 (high quality).
- Language Code: A code representing the detected language in code comments, based on FastText classification.