---
language:
- en
pretty_name: 'TokenMonster Datasets: English, Code, Fiction, Non-fiction'
size_categories:
- 1B<n<10B
tags:
- text
- english
- fiction
- nonfiction
- non-fiction
- modern fiction
- contemporary fiction
- fiction dataset
- code dataset
- english dataset
- code
- code samples
- tokenization
- tokenization datasets
- datasets
task_categories:
- text-generation
---

## TokenMonster Datasets: English, Code, Fiction, Non-fiction

Included are the datasets that were used to generate the TokenMonster pre-built vocabularies. All are raw text files.
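
To work with the files locally, each one can be fetched with `huggingface_hub` (a minimal sketch; the `repo_id` below is a placeholder, substitute this dataset repository's actual id):

```python
from huggingface_hub import hf_hub_download

# NOTE: placeholder repo_id; replace with this dataset repository's id.
path = hf_hub_download(
    repo_id="<user>/<this-dataset>",
    repo_type="dataset",
    filename="fiction_100mb.txt",
)

with open(path, encoding="utf-8") as f:
    sample = f.read(500)   # peek at the first 500 characters
print(sample)
```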

The training data mostly came from the RedPajama [1B Token Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample). However, to reduce the amount of formal English and give more weight to other languages, informal writing, and code, `c4_sample` and `cc_sample` were cropped to 100MB each, and [Reddit conversations](https://huggingface.co/datasets/SophieTr/reddit_clean) data were added (also cropped to 100MB).
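
To illustrate the cropping step, here is a minimal Python sketch (not the original script; the filenames are hypothetical). Cutting at the last newline keeps the crop from splitting a line or a multi-byte UTF-8 character in half:

```python
def crop_text_file(src_path: str, dst_path: str, max_bytes: int = 100_000_000) -> None:
    """Crop a raw text file to at most max_bytes, ending on a line boundary."""
    with open(src_path, "rb") as src:
        data = src.read(max_bytes)
    cut = data.rfind(b"\n")        # drop the possibly-truncated final line
    if cut != -1:
        data = data[:cut + 1]
    with open(dst_path, "wb") as dst:
        dst.write(data)

# Hypothetical filenames, for illustration:
crop_text_file("c4_sample_full.txt", "c4_sample.txt")
```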

Additionally, equally weighted `code` samples of 2MB per language (code_2mb) and 10MB per language (code_10mb) were added for 30 different programming languages, to ensure that every programming language has representation. The source of the `code` samples was [codeparrot/github-code](https://huggingface.co/datasets/codeparrot/github-code). To ensure a range of coding styles, I allowed only one file per GitHub repository, and at most 200 lines per file, taken from the middle of the file.
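
A minimal sketch of that sampling procedure, assuming the documented streaming interface and `languages` filter of `codeparrot/github-code` (the helper names and the exact stopping rule are mine, not the original tooling):

```python
from datasets import load_dataset

MAX_LINES = 200                  # at most 200 lines per file
TARGET_BYTES = 2 * 1024 * 1024   # 2MB per language, as in code_2mb

def middle_lines(code: str, max_lines: int = MAX_LINES) -> str:
    """Return up to max_lines lines, taken from the middle of the file."""
    lines = code.splitlines()
    if len(lines) <= max_lines:
        return code
    start = (len(lines) - max_lines) // 2
    return "\n".join(lines[start:start + max_lines])

def sample_language(language: str, target_bytes: int = TARGET_BYTES) -> str:
    ds = load_dataset("codeparrot/github-code", split="train", streaming=True,
                      languages=[language], trust_remote_code=True)
    seen_repos, chunks, total = set(), [], 0
    for row in ds:
        if row["repo_name"] in seen_repos:   # only one file per repository
            continue
        seen_repos.add(row["repo_name"])
        snippet = middle_lines(row["code"])
        chunks.append(snippet)
        total += len(snippet.encode("utf-8"))
        if total >= target_bytes:            # stop once the per-language quota is met
            break
    return "\n\n".join(chunks)
```

Running this once per language and concatenating the 30 outputs would produce something shaped like `code_2mb.txt`.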

Given the evolving nature of writing styles, I felt that `book_sample.txt`, which consists of out-of-copyright books, was not a good representation of contemporary fiction. To better represent a more modern style, I curated `fiction.txt` and `fiction_100mb.txt` by combining a few other datasets and cleaning them up.

| Filename                  | Filesize (bytes) |
|---------------------------|------------------|
| arxiv_sample.txt          | 88,925,569       |
| book_sample.txt           | 108,069,616      |
| c4_sample.txt             | 100,560,318      |
| cc_2023-06_sample.txt     | 100,852,231      |
| code_2mb.txt              | 62,895,904       |
| code_10mb.txt             | 314,006,799      |
| fiction.txt               | 357,119,086      |
| fiction_100mb.txt         | 94,235,489       |
| github_sample.txt         | 191,123,094      |
| stackexchange_sample.txt  | 71,940,138       |
| wikipedia_sample.txt      | 79,181,873       |
| reddit.txt                | 100,027,565      |

Note: `fiction_100mb.txt` is a subset of `fiction.txt`, and `code_2mb.txt` is a subset of `code_10mb.txt`.

### License

* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)