---
language:
  - ru
  - en
license: apache-2.0
tags:
  - social-networks
  - not-for-all-audiences
annotation_creators:
  - crowdsourced
language_creators:
  - crowdsourced
pretty_name: batch
size_categories:
  - 100K<n<1M
task_categories:
  - text-generation
  - text-classification
  - question-answering
dataset_info:
  - config_name: written
    features:
      - name: title
        dtype: string
      - name: topics
        sequence:
          - name: posts
            sequence:
              - name: text
                dtype: string
  - config_name: spoken
    features:
      - name: title
        dtype: string
      - name: speech
        dtype: audio
      - name: topics
        sequence:
          - name: posts
            sequence:
              - name: text
                dtype: string
---

<p align="center">
    <img src="https://i.ibb.co/WVkDGyW/image.png"/>
</p>

# Dataset card for batch

## Table of contents

- [Dataset description](#dataset-description)
  - [Dataset summary](#dataset-summary)
- [Dataset structure](#dataset-structure)
  - [Dataset instance](#dataset-instance)
  - [Dataset fields](#dataset-fields)

## Dataset description

- **Homepage**: [batch homepage](https://huggingface.co/datasets/zeio/batch)
- **Repository**: [batch repository](https://huggingface.co/datasets/zeio/batch)
- **Point of contact**: [Zeio Nara](mailto:zeionara@gmail.com)
- **Dataset version**: `31.10.2023`

### Dataset summary

This dataset contains threads parsed from the `/b/` board of the [2ch archive][archive]. A dataset viewer is available at the [derivative repo](/datasets/zeio/auto-batch). **Examples of reading and using the dataset are provided in [this colab notebook](https://colab.research.google.com/drive/1YOfxiTq6DXIVEaKwyA7TpcTjonaP_A8S?usp=sharing)**.

## Dataset structure

The dataset is available in three formats - **compressed**, **uncompressed** and **spoken**:

1. `uncompressed` is the default and simplest representation - the content of the dataset is organised as `txt` files grouped into clusters inside the [`threads` folder](/datasets/zeio/batch/tree/main/threads). The grouping is required by `git's` constraints - it is not possible to keep more than 10000 files in a single directory - so each cluster contains 10000 items (except the last one, which may contain fewer). Each cluster name has the format `${START_PAGE}-${END_PAGE}`, where `${START_PAGE}` is the index of the first page in the [archive][archive] from which posts were put into the cluster, and `${END_PAGE}` is the last such page respectively;
1. `compressed` is slightly more sophisticated than `uncompressed` - it consists of a set of `tar.xz` files, which are nothing more than **the compressed clusters** of `txt` files described above. This representation corresponds to the [`threads-compressed` folder](/datasets/zeio/batch/tree/main/threads-compressed);
1. `spoken` consists of `mp3` files with speech generated for **some threads using an alternating speaker voice pattern**: the 1st post is read by the first speaker, the 2nd post by the second speaker, the 3rd post by the first speaker again, and so on. The speech is generated automatically using a `TTS` engine. The `mp3` files are located in the [`threads-spoken-compressed` folder](/datasets/zeio/batch/tree/main/threads-spoken-compressed) and are grouped into `tar.xz` archives in the same way as the `txt` files in the [`compressed` representation](/datasets/zeio/batch/tree/main/threads-compressed).
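As an illustration, a single compressed cluster can be unpacked with Python's standard library alone (a minimal sketch; the archive name used in the docstring is hypothetical and merely follows the `${START_PAGE}-${END_PAGE}` cluster naming convention described above):

```python
import tarfile
from pathlib import Path


def extract_cluster(archive: Path, destination: Path) -> list[Path]:
    """Unpack one cluster archive (e.g. 0000-0019.tar.xz) into `destination`
    and return the paths of the extracted thread files."""
    destination.mkdir(parents = True, exist_ok = True)
    with tarfile.open(archive, mode = 'r:xz') as tar:
        tar.extractall(destination)
    return sorted(destination.rglob('*.txt'))
```

Each returned `txt` file then corresponds to one thread, as described below.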

Concerning the particular `txt` files under the `threads/*/` folders, each file corresponds to **one thread** and is organised as follows:

1. Each non-empty line corresponds to a single post from a user;
1. If a non-empty line follows another non-empty line, it should be treated as a **comment** on one of the posts above it, a **response** to a request above, or an **answer** to a question;
1. If a non-empty line follows an empty line, it should be treated as the beginning of a new discussion or topic.

Therefore, the dataset consists of **threads**, which can be separated into **topics**, which in turn consist of **posts**. Posts are the lowermost units in the dataset and are not divided further - they should be interpreted as plain text.
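The rules above can be sketched as a small parser that turns one raw thread file into the title/topics/posts structure described in the following sections (a minimal illustration; `parse_thread` is a hypothetical helper, not part of the dataset tooling):

```python
def parse_thread(text: str) -> dict:
    """Split a raw thread file into topics and posts:
    a non-empty line is a post, an empty line closes the current topic,
    and the title is the text of the thread's very first post."""
    topics, posts = [], []
    for line in text.splitlines():
        line = line.strip()
        if line:
            posts.append({'text': line})
        elif posts:
            topics.append({'posts': posts})
            posts = []
    if posts:  # close the last topic if the file does not end with a blank line
        topics.append({'posts': posts})
    title = topics[0]['posts'][0]['text'] if topics else ''
    return {'title': title, 'topics': topics}
```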

### Dataset instance

The following code snippet contains text for the thread `0000-0019/119540414`:

```text
Всем привет. Нужна помощь богов фотошопа, на картинке надо изменить дату на 09/03/2016 и значения тесто на 86.500++
черес код елемента ебаш
Опять ты, сука ебаная? Хули тебе опять надо?

СПАСИБО
Размер шрифта не совпадает, але.
```

This thread consists of two topics: the first includes 3 posts, and the second - 2 posts.

Therefore, this dataset entry can be represented in json in the following format:

```json
{
  "title": "Всем привет. Нужна помощь богов фотошопа, на картинке надо изменить дату на 09/03/2016 и значения тесто на 86.500++",
  "topics": [
    {
      "posts": [
        {
          "text": "Всем привет. Нужна помощь богов фотошопа, на картинке надо изменить дату на 09/03/2016 и значения тесто на 86.500++"
        },
        {
          "text": "черес код елемента ебаш"
        },
        {
          "text": "Опять ты, сука ебаная? Хули тебе опять надо?"
        }
      ]
    },
    {
      "posts": [
        {
          "text": "СПАСИБО"
        },
        {
          "text": "Размер шрифта не совпадает, але."
        }
      ]
    }
  ]
}
```

### Dataset fields

In the `written` configuration the dataset is represented as a list of `Thread` objects. Each `Thread` has a property `topics`, which contains a list of `Topic` objects; each `Topic` has a single property `posts`, which points to the list of `Post` objects making up the topic; each `Post` contains a single property `text` with the text representation of the post (essentially, `text` is `html` code without tags and without explicit links to other posts; there may still be implicit links to other posts in the form of quotes, prefixed with the `>` symbol). In addition, each `Thread` has a property `title`, which is equivalent to the content of the thread's main post.  
In the `spoken` configuration the structure is basically the same, but some `Thread` objects have an additional property `speech` with a spoken representation of the thread.

[archive]: https://2ch.hk/b/arch/