---
license: apache-2.0
language:
- en
tags:
- novel
- training
- story
task_categories:
- text-classification
- text-generation
pretty_name: ScribbleHub17K
size_categories:
- 100K<n<1M
duplicated_from: RyokoAI/ScribbleHub17K
---
# Dataset Card for ScribbleHub17K
*The BigKnow2022 dataset and its subsets are not yet complete. Some of the information here may be inaccurate or inaccessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on
[Scribble Hub](https://scribblehub.com), a site for sharing original stories.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* English
## Dataset Structure
### Data Instances
```json
{
  "text": " \n2082 Planet Earth the Fracture War, after a sudden fracture in our dimension unidentified beings with advance technology and u...",
  "meta": {
    "subset": "scribblehub",
    "series": "3811",
    "id": "3812",
    "q": 0.91,
    "title": "The First - Prologue- The Fracture War",
    "author": "RobotLove",
    "chapters": 1,
    "rating": 5,
    "rating_ct": 1,
    "genre": [
      "Action",
      "Martial Arts",
      "Romance"
    ],
    "tags": [
      "Kingdom Building",
      "Loyal Subordinates",
      "Male Protagonist",
      "Organized Crime",
      "Scheming"
    ]
  }
}
{
  "text": " For anyone that may see this, thanks for reading. I'm just here to see if a story can spill out of my mind if just start writin...",
  "meta": {
    "subset": "scribblehub",
    "series": "586090",
    "id": "586099",
    "q": 0.82,
    "title": "Just writing to write…i guess? - I’m here now",
    "author": "BigOofStudios",
    "chapters": 1,
    "rating": 4.5,
    "rating_ct": 2,
    "genre": [
      "Action",
      "Comedy"
    ],
    "tags": []
  }
}
```
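Each entry pairs the chapter text with nested metadata. As a minimal sketch of reading the records (assuming the data is distributed as newline-delimited JSON; `scribblehub17k.jsonl` is a hypothetical local filename, not part of this release):

```python
import json

# Hypothetical local filename; adjust to match however you obtained the data.
DATA_PATH = "scribblehub17k.jsonl"

with open(DATA_PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one chapter per line
        meta = record["meta"]
        # record["text"] holds the raw chapter text; meta holds series/chapter info.
        print(meta["series"], meta["id"], meta["title"], meta.get("q"))
```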
### Data Fields
* `text`: the actual chapter text
* `meta`: metadata for the chapter and series
  * `subset`: data source tag; always `scribblehub`
  * `series`: series ID
  * `id`: chapter ID
  * `lang`: always `en` (English)
  * `q`: quality score (q-score) between 0.0 (terrible) and 1.0 (perfect); anything with a score `> 0.5` is generally good enough (see the filtering sketch after the q-score distribution below)
  * `title`: chapter and series title in the format `<chapter title> - <series title>`
  * `chapters`: total number of chapters in the series
  * `rating`: Scribble Hub rating between 0 and 5 stars
  * `rating_ct`: number of ratings
  * `author`: author name
  * `genre`: array of Scribble Hub genres for the series
  * `tags`: array of tags for the series
#### Q-Score Distribution
```
0.00: 0
0.10: 0
0.20: 0
0.30: 84
0.40: 718
0.50: 3775
0.60: 22300
0.70: 72581
0.80: 137982
0.90: 135800
1.00: 59
```
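As a rough illustration of the `> 0.5` rule of thumb from the field list above, the sketch below filters chapters by q-score. It reuses the hypothetical `scribblehub17k.jsonl` filename from the loading example; the threshold is only the suggested default, not a property of the dataset.

```python
import json

def keep_chapter(record: dict, threshold: float = 0.5) -> bool:
    """Return True if the chapter's q-score clears the quality threshold."""
    return record["meta"].get("q", 0.0) > threshold

# Hypothetical local filename; adjust to your copy of the data.
with open("scribblehub17k.jsonl", encoding="utf-8") as f:
    good_chapters = [r for r in map(json.loads, f) if keep_chapter(r)]

print(f"kept {len(good_chapters)} chapters above the quality threshold")
```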
### Data Splits
No splitting of the data was performed.
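If you need a held-out split, one possible approach (purely a sketch, not part of the released dataset) is to assign whole series to splits deterministically, so that chapters from the same series never land on both sides of the boundary:

```python
import hashlib
import json

def split_of(series_id: str, valid_fraction: float = 0.05) -> str:
    """Deterministically map a series ID to 'train' or 'validation'."""
    digest = hashlib.sha256(series_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform value in [0, 1]
    return "validation" if bucket < valid_fraction else "train"

# Hypothetical local filename; adjust to your copy of the data.
splits = {"train": [], "validation": []}
with open("scribblehub17k.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        splits[split_of(record["meta"]["series"])].append(record)
```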
## Dataset Creation
### Curation Rationale
Scribble Hub is a home for original web stories, effectively a smaller, English-language counterpart to Japan's Shōsetsuka ni Narō. As a
result, it is a good source of reasonably well-written creative content.
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Titles, ratings, and other metadata were parsed out using scripts that will be provided in the BigKnow2022 GitHub repository.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
Depending on your language model, it may also prove useful for languages other than English.
### Discussion of Biases
This dataset is composed of fictional works by various authors, and its contents therefore reflect the biases of those
authors. **Additionally, this dataset has not been filtered and contains NSFW material. Beware of stereotypes.**
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset.