---
language:
- en
pretty_name: 'TokenMonster Datasets: English, Code, Fiction, Non-fiction'
size_categories:
- 1B<n<10B
tags:
- text
- english
- fiction
- nonfiction
- non-fiction
- modern fiction
- contemporary fiction
- fiction dataset
- code dataset
- english dataset
- code
- code samples
- tokenization
- tokenization datasets
- datasets
task_categories:
- text-generation
---
## TokenMonster Datasets: English, Code, Fiction, Non-fiction
Included are datasets that were used to generate the TokenMonster pre-built vocabularies. All are raw text files.
Most of the training data came from the RedPajama [1B Token Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample). However, to reduce the weight of formal English and give more emphasis to other languages, informal writing, and code, `c4_sample` and `cc_sample` were cropped to 100MB each, and [Reddit conversations](https://huggingface.co/datasets/SophieTr/reddit_clean) data were added (also cropped to 100MB).
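The card doesn't specify how the cropping was done; a minimal sketch of truncating a raw text file to roughly 100MB at a line boundary (the helper name and filenames are illustrative, not the actual script used) could look like this:

```python
def crop_to_size(src: str, dst: str, max_bytes: int = 100_000_000) -> None:
    """Copy src to dst, stopping at the last whole line that fits within max_bytes."""
    written = 0
    with open(src, "r", encoding="utf-8", errors="ignore") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        for line in fin:
            n = len(line.encode("utf-8"))
            if written + n > max_bytes:
                break
            fout.write(line)
            written += n

# Example (hypothetical filenames):
# crop_to_size("c4_sample_full.txt", "c4_sample.txt")
```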
Additionally, equally weighted `code` samples of 2MB per language (`code_2mb`) and 10MB per language (`code_10mb`) were added for 30 different programming languages, so that every language is represented. The source of the `code` samples was [codeparrot/github-code](https://huggingface.co/datasets/codeparrot/github-code). To ensure a range of coding styles, I allowed only one file per GitHub repository, and at most 200 lines per file, selected from the middle of the file.
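For illustration only, a minimal sketch of that per-language sampling, assuming streaming access to `codeparrot/github-code` via the `datasets` library and its `repo_name`/`code` fields (the actual script is not published here), might look like:

```python
from datasets import load_dataset

MAX_LINES = 200              # lines kept per file, taken from the middle
PER_LANG_BYTES = 2_000_000   # ~2MB per language, as in code_2mb

def middle_lines(code: str, max_lines: int = MAX_LINES) -> str:
    """Return at most max_lines lines, taken from the middle of the file."""
    lines = code.splitlines()
    if len(lines) <= max_lines:
        return code
    start = (len(lines) - max_lines) // 2
    return "\n".join(lines[start:start + max_lines])

def sample_language(language: str, out_path: str) -> None:
    # Stream the dataset so the full corpus is never downloaded.
    ds = load_dataset("codeparrot/github-code", split="train",
                      streaming=True, languages=[language])
    seen_repos = set()
    written = 0
    with open(out_path, "a", encoding="utf-8") as f:
        for example in ds:
            if example["repo_name"] in seen_repos:  # one file per repository
                continue
            seen_repos.add(example["repo_name"])
            snippet = middle_lines(example["code"])
            f.write(snippet + "\n")
            written += len(snippet.encode("utf-8"))
            if written >= PER_LANG_BYTES:
                break
```

At ~2MB for each of the 30 languages, this lands close to the ~63MB size of `code_2mb.txt` listed below.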
Given the evolving nature of writing styles, I felt that `book_sample.txt`, which consists of out-of-copyright books, was not a good representation of contemporary fiction. To better represent a more modern style, I curated `fiction.txt` and `fiction_100mb.txt` by combining several other datasets and cleaning them up.
| Filename                  | Filesize (bytes) |
|---------------------------|------------------|
| arxiv_sample.txt | 88,925,569 |
| book_sample.txt | 108,069,616 |
| c4_sample.txt | 100,560,318 |
| cc_2023-06_sample.txt | 100,852,231 |
| code_2mb.txt | 62,895,904 |
| code_10mb.txt | 314,006,799 |
| fiction.txt | 357,119,086 |
| fiction_100mb.txt | 94,235,489 |
| github_sample.txt | 191,123,094 |
| stackexchange_sample.txt | 71,940,138 |
| wikipedia_sample.txt | 79,181,873 |
| reddit.txt | 100,027,565 |
Note: `fiction_100mb.txt` is a subset of `fiction.txt`, and `code_2mb.txt` is a subset of `code_10mb.txt`.
### License
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)