---
license: cc-by-sa-3.0
language:
- en
task_categories:
- text-generation
- fill-mask
tags:
- language-modeling
- masked-language-modeling
pretty_name: SuperWIKI Cleaned
configs:
- config_name: default
default: true
data_files:
- split: lang50NightShade
path:
- "*-lang50NightShade-*.json.gz"
- split: lang50
path:
- "*-lang50-*.json.gz"
- split: lang25
path:
- "*-lang25-*.json.gz"
---
# Dataset Card for SuperWIKI Cleaned
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
> If you show most of those to people and ask them to form an opinion,
> the answer isn't just going to be "I don't know": it'll be "I don't care."
> - [Tom Scott](https://www.youtube.com/watch?v=ALy6e7GbDRQ&t=90s)
>
SuperWIKI Cleaned is a curated dataset of Wikipedia articles, derived from the raw files provided in [SuperWIKI](https://huggingface.co/datasets/RyokoExtra/SuperWIKI).
### Supported Tasks and Leaderboards
The dataset is primarily intended for language modeling.
### Languages
- English
## Dataset Structure
All data is stored in gzip-compressed JSON Lines (`.json.gz`) files.
### Data Instances
Refer to this sample to see all the fields:
```json
{
"id": 35507,
"text": "In computer network communications, the **HTTP 404**, **404 not found**, **404**, **404 error**, **page not found** or **file not found** error message is a hypertext transfer protocol (HTTP) standard response code, to indicate that the browser was able to communicate with a given server, but the server could not find what was requested. The error may also be used when a server does not wish to disclose whether it has the requested information.<TRUNCATED>",
"title": "HTTP 404",
"url": "https://en.wikipedia.org/wiki/HTTP_404",
"filters": {
"issues": [],
"selectors": [],
"templates": [
"template:http",
"template:redirect",
"template:use dmy dates",
"template:cite book",
"template:portal",
"template:anchor",
"template:pp-move-indef",
"template:cite news",
"template:reflist",
"template:short description",
"template:citation",
"template:error messages",
"template:pp-semi-indef",
"template:cite journal",
"template:cite web"
],
"rituals": []
},
"infobox_html": [],
"figures_dict": [
{
"file_url": "./File:Wikipedia_404_Page.png",
"caption": "English Wikipedia's 404 Page"
},
{
"file_url": "./File:Wikimedia_error_404.png",
"caption": "The Wikimedia 404 message"
}
]
}
```
### Data Fields
- `id`: The Wikipedia article ID.
- `text`: The post-processed HTML text from SuperWIKI, converted to Markdown. Links are removed; formatting (bold, italics) is kept.
- `title`: The title of the Wikipedia article.
- `url`: The URL of the article.
- `filters`: Metadata about the filters found/used for the article.
  - `issues`: A custom list of "issue" templates that were removed from the article's HTML during pre-processing.
  - `selectors`: Deduplicated CSS class selectors used for the article. `issues` are template-based, and multiple templates can mean the same thing (`Template:Few sources` is equivalent to `Template:More citations needed`, for example), so the selectors provide a deduplicated view.
  - `rituals`: A list of "rituals" used to remove additional "issue" templates. Empty if none were applied.
  - `templates`: All templates found in the article; kept for debugging.
- `infobox_html`: A list of side infoboxes extracted out of the text.
- `figures_dict`: A list of figures used in the article, also extracted out of the text.
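Since each shard is plain gzip-compressed JSON Lines, the records can be streamed without any special tooling. Below is a minimal sketch of such a reader; the helper name `iter_records` and the in-memory shard are illustrative, not part of the dataset's own tooling, and the sample record mirrors the fields shown above.

```python
import gzip
import io
import json

def iter_records(path_or_buf):
    """Yield one article dict per line from a gzip-compressed JSON Lines shard."""
    with gzip.open(path_or_buf, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Build a tiny in-memory shard mirroring the sample record above.
sample = {
    "id": 35507,
    "title": "HTTP 404",
    "url": "https://en.wikipedia.org/wiki/HTTP_404",
    "text": "In computer network communications, ...",
    "filters": {"issues": [], "selectors": [], "templates": [], "rituals": []},
    "infobox_html": [],
    "figures_dict": [],
}
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as fh:
    fh.write(json.dumps(sample) + "\n")
buf.seek(0)

records = list(iter_records(buf))
print(records[0]["title"])  # HTTP 404
```

The same `iter_records` helper works on an on-disk shard path such as one matching `*-lang50-*.json.gz`.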
#### Q-Score Distribution
Not Applicable
### Data Splits
No train/validation/test splits were made. The dataset is published as three splits (`lang50NightShade`, `lang50`, `lang25`), as declared in the configuration above.
## Dataset Creation
### Curation Rationale
"Wikipedia is a wonderful resource; however, it could be considered too sparse, as there are many articles that are not important to the common user..."
> The abundance of less significant or obscure topics can also contribute to the perceived sparsity. While Wikipedia's commitment to covering even niche subjects is commendable, it might be overwhelming for casual users seeking concise and essential information. For instance, niche historical events, minor fictional characters, or obscure scientific theories might exist as standalone articles, but their relevance to the everyday reader could be questioned. - *ChatGPT*
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
This dataset removes all "notice" templates from articles to provide a cleaner version of Wikipedia.
Consider adding those flags back into the dataset if you want to surface potential article issues to users.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@misc{superwiki,
title = {SuperWIKI Cleaned: Wikipedia for commoners.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/SuperWIKI}},
}
```
### Name Etymology
N/A
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
- [@sirneggles (Twitter)](https://twitter.com/sirneggles) for providing compute.