IlyaGusev committed on
Commit 628bcc6
1 Parent(s): 80b4d9f

Update README.md

Files changed (1)
  1. README.md +97 -0
README.md CHANGED
@@ -59,4 +59,101 @@ dataset_info:
  num_examples: 6907622
  download_size: 20197306953
  dataset_size: 96105803658
+ task_categories:
+ - text-generation
+ language:
+ - ru
+ size_categories:
+ - 1M<n<10M
  ---
+
+
+ # Pikabu dataset
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Description](#description)
+ - [Usage](#usage)
+ - [Data Instances](#data-instances)
+ - [Source Data](#source-data)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+
+ ## Description
+
+ **Summary:** A dataset of posts and comments from [pikabu.ru](https://pikabu.ru/), a Russian website similar to Reddit/9gag.
+
+ **Script:** [convert_pikabu.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py)
+
+ **Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
+
+ **Languages:** Mostly Russian.
+
+
+ ## Usage
+
+ Prerequisites:
+ ```bash
+ pip install datasets zstandard jsonlines pysimdjson
+ ```
+
+ Dataset iteration:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("IlyaGusev/pikabu", split="train", streaming=True)
+ for example in dataset:
+     print(example["text_markdown"])
+ ```
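+
+ Since the split is streamed, you can also peek at just a few records; a minimal sketch using `itertools.islice` (field names taken from the data instance below):
+
+ ```python
+ from itertools import islice
+ from datasets import load_dataset
+
+ dataset = load_dataset("IlyaGusev/pikabu", split="train", streaming=True)
+ # Take the first 100 posts without materializing the full dataset.
+ sample = list(islice(dataset, 100))
+ print(len(sample), sample[0]["title"])
+ ```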
+
+ ## Data Instances
+
+ ```
+ {
+     "id": 69911642,
+     "title": "Что можно купить в Китае за цену нового iPhone 11 Pro",
+     "text_markdown": "...",
+     "timestamp": 1571221527,
+     "author_id": 2900955,
+     "username": "chinatoday.ru",
+     "rating": -4,
+     "pluses": 9,
+     "minuses": 13,
+     "url": "...",
+     "tags": ["Китай", "AliExpress", "Бизнес"],
+     "blocks": {"data": ["...", "..."], "type": ["text", "text"]},
+     "comments": {
+         "id": [152116588, 152116426],
+         "text_markdown": ["...", "..."],
+         "text_html": ["...", "..."],
+         "images": [[], []],
+         "rating": [2, 0],
+         "pluses": [2, 0],
+         "minuses": [0, 0],
+         "author_id": [2104711, 2900955],
+         "username": ["FlyZombieFly", "chinatoday.ru"]
+     }
+ }
+ ```
+
+ Nested sequences such as `comments` are stored column-wise ("id", "rating", and so on are parallel lists). You can use this little helper to unflatten them back into a list of dicts:
+
+ ```python
+ def revert_flattening(records):
+     fixed_records = []
+     for key, values in records.items():
+         if not fixed_records:
+             fixed_records = [{} for _ in range(len(values))]
+         for i, value in enumerate(values):
+             fixed_records[i][key] = value
+     return fixed_records
+ ```
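+
+ For example, a minimal sketch that applies it to the `comments` field of a streamed example (reusing `revert_flattening` from above):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("IlyaGusev/pikabu", split="train", streaming=True)
+ example = next(iter(dataset))
+
+ # Turn the column-wise comments back into a list of per-comment dicts.
+ for comment in revert_flattening(example["comments"]):
+     print(comment["username"], comment["rating"])
+ ```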
+
+
+ ## Source Data
+
+ * The data source is the [Pikabu](https://pikabu.ru/) website.
+ * The original dump can be found at [pikastat](https://pikastat.d3d.info/).
+ * The processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py).
+
+ ## Personal and Sensitive Information
+
+ The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is included where possible.