---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: title
    dtype: string
  - name: text_markdown
    dtype: string
  - name: timestamp
    dtype: uint64
  - name: author_id
    dtype: int64
  - name: username
    dtype: string
  - name: rating
    dtype: int64
  - name: pluses
    dtype: int64
  - name: minuses
    dtype: int64
  - name: url
    dtype: string
  - name: tags
    sequence: string
  - name: blocks
    sequence:
    - name: data
      dtype: string
    - name: type
      dtype: string
  - name: comments
    sequence:
    - name: id
      dtype: int64
    - name: timestamp
      dtype: uint64
    - name: parent_id
      dtype: int64
    - name: text_markdown
      dtype: string
    - name: text_html
      dtype: string
    - name: images
      sequence: string
    - name: rating
      dtype: int64
    - name: pluses
      dtype: int64
    - name: minuses
      dtype: int64
    - name: author_id
      dtype: int64
    - name: username
      dtype: string
  splits:
  - name: train
    num_bytes: 96105803658
    num_examples: 6907622
  download_size: 20196853689
  dataset_size: 96105803658
task_categories:
- text-generation
language:
- ru
size_categories:
- 1M<n<10M
---


# Pikabu dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)

## Description

**Summary:** Dataset of posts and comments from [pikabu.ru](https://pikabu.ru/), a Russian website similar to Reddit and 9gag.

**Script:** [convert_pikabu.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py)

**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)

**Languages:** Mostly Russian.


## Usage

Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```

Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/pikabu', split="train", streaming=True)
for example in dataset:
    print(example["text_markdown"])
```
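
Streaming avoids downloading the full dump (about 20 GB compressed) up front. For a quick look at the data, you can, for instance, take a few posts with `itertools.islice`:

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset("IlyaGusev/pikabu", split="train", streaming=True)
# Inspect the first three posts without materializing the whole split
for example in islice(dataset, 3):
    print(example["id"], example["title"], example["rating"])
```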

## Data Instances

```json
{
  "id": 69911642,
  "title": "Что можно купить в Китае за цену нового iPhone 11 Pro",
  "text_markdown": "...",
  "timestamp": 1571221527,
  "author_id": 2900955,
  "username": "chinatoday.ru",
  "rating": -4,
  "pluses": 9,
  "minuses": 13,
  "url": "...",
  "tags": ["Китай", "AliExpress", "Бизнес"],
  "blocks": {"data": ["...", "..."], "type": ["text", "text"]},
  "comments": {
    "id": [152116588, 152116426],
    "text_markdown": ["...", "..."],
    "text_html": ["...", "..."],
    "images": [[], []],
    "rating": [2, 0],
    "pluses": [2, 0],
    "minuses": [0, 0],
    "author_id": [2104711, 2900955],
    "username": ["FlyZombieFly", "chinatoday.ru"]
  }
}
```

You can use this small helper to restore the flattened sequences (such as `blocks` and `comments`) to lists of dicts:

```python
def revert_flattening(records):
    """Convert a dict of parallel lists back into a list of dicts."""
    fixed_records = []
    for key, values in records.items():
        # Allocate one empty dict per element on the first key
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```
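
For example, to restore the comments of a single post (reusing the streaming `dataset` from the Usage section) as a list of dicts:

```python
# Take the first post from the streaming dataset defined above
example = next(iter(dataset))
for comment in revert_flattening(example["comments"]):
    print(comment["id"], comment["username"], comment["rating"])
```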


## Source Data

* The data source is the [Pikabu](https://pikabu.ru/) website.
* The original dump can be found here: [pikastat](https://pikastat.d3d.info/)
* The processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py).

## Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.