---
license: other
task_categories:
- text-generation
language:
- ru
size_categories:
- 100K<n<1M
dataset_info:
  features:
  - name: question_id
    dtype: uint32
  - name: url
    dtype: string
  - name: answer_count
    dtype: uint32
  - name: text_html
    dtype: string
  - name: text_markdown
    dtype: string
  - name: score
    dtype: int32
  - name: title
    dtype: string
  - name: tags
    sequence: string
  - name: views
    dtype: uint64
  - name: author
    dtype: string
  - name: timestamp
    dtype: uint64
  - name: comments
    sequence:
    - name: text
      dtype: string
    - name: author
      dtype: string
    - name: comment_id
      dtype: uint32
    - name: score
      dtype: int32
    - name: timestamp
      dtype: uint64
  - name: answers
    sequence:
    - name: answer_id
      dtype: uint32
    - name: is_accepted
      dtype: uint8
    - name: text_html
      dtype: string
    - name: text_markdown
      dtype: string
    - name: score
      dtype: int32
    - name: author
      dtype: string
    - name: timestamp
      dtype: uint64
    - name: comments
      sequence:
      - name: text
        dtype: string
      - name: author
        dtype: string
      - name: comment_id
        dtype: uint32
      - name: score
        dtype: int32
      - name: timestamp
        dtype: uint64
  splits:
  - name: train
    num_bytes: 3013377174
    num_examples: 437604
  download_size: 670468664
  dataset_size: 3013377174
---
Russian StackOverflow dataset
Table of Contents
- Table of Contents
- Description
- Usage
- Data Instances
- Source Data
- Personal and Sensitive Information
- Licensing Information
Description
Summary: Dataset of questions, answers, and comments from ru.stackoverflow.com.
Script: create_stackoverflow.py
Point of Contact: Ilya Gusev
Languages: The dataset is in Russian with some programming code.
Usage
Prerequisites:
pip install datasets zstandard jsonlines pysimdjson
Loading:
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
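The train split is about 3 GB on disk (roughly 670 MB to download), so it can be convenient to stream records instead of materializing the whole split. A minimal sketch using the standard streaming=True option of load_dataset:

from datasets import load_dataset

# Stream records lazily instead of downloading and caching the full ~3 GB split.
dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train", streaming=True)

for example in dataset:
    print(example["title"])
    break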
Data Instances
{
    "question_id": 11235,
    "answer_count": 1,
    "url": "https://ru.stackoverflow.com/questions/11235",
    "score": 2,
    "tags": ["c++", "сериализация"],
    "title": "Извлечение из файла, запись в файл",
    "views": 1309,
    "author": "...",
    "timestamp": 1303205289,
    "text_html": "...",
    "text_markdown": "...",
    "comments": [
        {
            "text": "...",
            "author": "...",
            "comment_id": 11236,
            "score": 0,
            "timestamp": 1303205411
        }
    ],
    "answers": [
        {
            "answer_id": 11243,
            "timestamp": 1303207791,
            "is_accepted": 1,
            "text_html": "...",
            "text_markdown": "...",
            "score": 3,
            "author": "...",
            "comments": [
                {
                    "text": "...",
                    "author": "...",
                    "comment_id": 11246,
                    "score": 0,
                    "timestamp": 1303207961
                }
            ]
        }
    ]
}
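For text-generation use cases it is often convenient to flatten each record into (question, accepted answer) text pairs. A minimal sketch over the fields shown above; the pairing logic is illustrative rather than part of the dataset, and it assumes the library's usual columnar representation of nested sequences (a dict of parallel lists):

from datasets import load_dataset

dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train", streaming=True)

pairs = []
for example in dataset:
    # Under the schema above, example["answers"] is a dict of parallel lists,
    # e.g. answers["text_markdown"] and answers["is_accepted"].
    answers = example["answers"]
    for answer_text, accepted in zip(answers["text_markdown"], answers["is_accepted"]):
        if accepted:  # is_accepted is stored as uint8 (0 or 1)
            pairs.append((example["text_markdown"], answer_text))
    if len(pairs) >= 10:  # stop early for this sketch
        break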
Source Data
- The data source is the Russian StackOverflow website.
- Original XMLs: ru.stackoverflow.com.7z.
- Processing script: create_stackoverflow.py.
Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is included where possible.
Licensing Information
In accordance with the license of the original data, this dataset is distributed under CC BY-SA 2.5.