alasdairforsythe committed
Commit 0abaf8e
Parent(s): e4812b6

Upload 13 files

Browse files:
- .gitattributes +12 -0
- README.md +36 -0
- arxiv_sample.txt +3 -0
- book_sample.txt +3 -0
- c4_sample.txt +3 -0
- cc_2023-06_sample.txt +3 -0
- code_10mb.txt +3 -0
- code_2mb.txt +3 -0
- fiction.txt +3 -0
- fiction_100mb.txt +3 -0
- github_sample.txt +3 -0
- reddit.txt +3 -0
- stackexchange_sample.txt +3 -0
- wikipedia_sample.txt +3 -0
.gitattributes CHANGED
@@ -53,3 +53,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+arxiv_sample.txt filter=lfs diff=lfs merge=lfs -text
+book_sample.txt filter=lfs diff=lfs merge=lfs -text
+c4_sample.txt filter=lfs diff=lfs merge=lfs -text
+cc_2023-06_sample.txt filter=lfs diff=lfs merge=lfs -text
+code_10mb.txt filter=lfs diff=lfs merge=lfs -text
+code_2mb.txt filter=lfs diff=lfs merge=lfs -text
+fiction_100mb.txt filter=lfs diff=lfs merge=lfs -text
+fiction.txt filter=lfs diff=lfs merge=lfs -text
+github_sample.txt filter=lfs diff=lfs merge=lfs -text
+reddit.txt filter=lfs diff=lfs merge=lfs -text
+stackexchange_sample.txt filter=lfs diff=lfs merge=lfs -text
+wikipedia_sample.txt filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,36 @@
## TokenMonster Datasets: English, Code, Fiction, Non-fiction

Included are the datasets that were used to generate the TokenMonster pre-built vocabularies.

The training data mostly came from Red Pajama's [1B Token Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample). However, to reduce the weight of formal English and give more representation to other languages, informal writing, and code, `c4_sample` and `cc_2023-06_sample` were cropped to 100MB, and [Reddit conversations](https://huggingface.co/datasets/SophieTr/reddit_clean) data were added (also cropped to 100MB).
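
The exact cropping method is not documented in this commit; a minimal sketch of one reasonable approach, truncating at the last complete line under the 100MB limit (the source filename is hypothetical):

```python
# Minimal sketch: crop a text sample to ~100MB at a line boundary.
# The commit does not document the exact method; this is one plausible approach.
LIMIT = 100 * 1024 * 1024  # 100MB

def crop_to_limit(src: str, dst: str, limit: int = LIMIT) -> None:
    with open(src, "rb") as f:
        data = f.read(limit)           # read at most `limit` bytes
    cut = data.rfind(b"\n")            # back up to the last complete line
    with open(dst, "wb") as f:
        f.write(data[: cut + 1] if cut != -1 else data)

# "c4_full.txt" is a hypothetical input; only the output name appears here.
crop_to_limit("c4_full.txt", "c4_sample.txt")
```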

Additionally, equally weighted `code` samples of 2MB per language (`code_2mb`) and 10MB per language (`code_10mb`) were added for 30 different programming languages, so that every programming language has representation. The source of the `code` samples was [codeparrot/github-code](https://huggingface.co/datasets/codeparrot/github-code). To ensure a range of coding styles, I allowed only one file per GitHub repository, and at most 200 lines per file, selected from the middle of the file.
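
A minimal sketch of that sampling scheme, streaming `codeparrot/github-code` with the `datasets` library. The one-file-per-repo rule, the middle-200-lines crop, and the 2MB/10MB budgets come from the description above; the function names and output handling are illustrative, not the author's actual script:

```python
# Minimal sketch of the sampling described above: stream codeparrot/github-code,
# keep at most one file per repository, crop each file to <=200 lines taken
# from the middle, and stop adding to a language once its byte budget is full.
from datasets import load_dataset

BUDGET = 2 * 1024 * 1024   # 2MB per language (10MB per language for code_10mb.txt)
MAX_LINES = 200

def middle_lines(code: str, max_lines: int = MAX_LINES) -> str:
    lines = code.splitlines()
    if len(lines) <= max_lines:
        return code
    start = (len(lines) - max_lines) // 2          # crop from the file's middle
    return "\n".join(lines[start : start + max_lines])

# github-code uses a loading script; recent `datasets` versions require
# trust_remote_code=True to run it.
ds = load_dataset("codeparrot/github-code", split="train",
                  streaming=True, trust_remote_code=True)

seen_repos: set[str] = set()
taken: dict[str, int] = {}                         # bytes collected per language
with open("code_2mb.txt", "w", encoding="utf-8") as out:
    for row in ds:
        lang, repo = row["language"], row["repo_name"]
        if repo in seen_repos or taken.get(lang, 0) >= BUDGET:
            continue                               # one file per repo; capped budget
        seen_repos.add(repo)
        snippet = middle_lines(row["code"])
        out.write(snippet + "\n")
        taken[lang] = taken.get(lang, 0) + len(snippet.encode("utf-8"))
```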

Given the evolving nature of writing styles, I felt that `book_sample.txt`, which consists of out-of-copyright books, was not a good representation of contemporary fiction. To better represent a more modern style, I curated `fiction.txt` and `fiction_100mb.txt` by combining a few other datasets and cleaning up the result.

| Filename                 | Filesize (bytes) |
|--------------------------|-----------------:|
| arxiv_sample.txt         |       88,925,569 |
| book_sample.txt          |      108,069,616 |
| c4_sample.txt            |      100,560,318 |
| cc_2023-06_sample.txt    |      100,852,231 |
| code_2mb.txt             |       62,895,904 |
| code_10mb.txt            |      314,006,799 |
| fiction.txt              |      357,119,086 |
| fiction_100mb.txt        |       94,235,489 |
| github_sample.txt        |      191,123,094 |
| stackexchange_sample.txt |       71,940,138 |
| wikipedia_sample.txt     |       79,181,873 |
| reddit.txt               |      100,027,565 |

> **Note:** `fiction_100mb.txt` is a subset of `fiction.txt`, and `code_2mb.txt` is a subset of `code_10mb.txt`.

### License

* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia license](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
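
All of the `.txt` files in this commit are stored via Git LFS, so the repository itself holds only small pointer files (listed below). A minimal sketch of fetching the resolved content with `huggingface_hub`; the `repo_id` is an assumption, since this page shows only the owner, not the dataset's full path:

```python
# Minimal sketch: fetch one LFS-backed sample file from the Hub.
# NOTE: the repo_id below is hypothetical -- this commit page shows only
# the owner (alasdairforsythe), not the dataset repository's full name.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="alasdairforsythe/text-samples",  # hypothetical repo id
    filename="arxiv_sample.txt",
    repo_type="dataset",
)
print(path)  # local cache path of the resolved (non-pointer) file
```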
arxiv_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b0d86bcb307809455ad118900167528115d66a174db684a31f5ddddb080b7cb
+size 88925569
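
The three lines above are a standard Git LFS pointer: the spec version, the SHA-256 digest of the actual content, and its size in bytes. A minimal sketch, with illustrative file paths, that parses such a pointer and verifies a downloaded copy against it:

```python
# Minimal sketch: parse a Git LFS pointer (version / oid sha256:... / size ...)
# and verify a local file against the recorded digest and byte size.
import hashlib

def parse_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify(local_path: str, pointer_text: str) -> bool:
    ptr = parse_pointer(pointer_text)
    h, n = hashlib.sha256(), 0
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1MB chunks
            h.update(chunk)
            n += len(chunk)
    return n == ptr["size"] and h.hexdigest() == ptr["oid"]

# The pointer content is taken verbatim from arxiv_sample.txt above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2b0d86bcb307809455ad118900167528115d66a174db684a31f5ddddb080b7cb
size 88925569"""
print(verify("arxiv_sample.txt", pointer))  # True if the local copy matches
```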
book_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a01e10e507f74f7e244aac200841d7ce4aa5f42a66fe30e14ec6e03f99a5dc50
+size 108069616
c4_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:316811d2cac435473f7d03cdc8578dbdd398d1bd3d7b941c6929f4d63030d22a
+size 100560318
cc_2023-06_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e26dd755ca898c3aaec2ee0ffd3bea318980e14364b6dac1a8f14e4e9e238a01
+size 100852231
code_10mb.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:201753d29341123d57fccde1dcfa3439d4646480085ba461b6188c7e221f6a2f
+size 314006799
code_2mb.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c3b920b28c42cf5e22cefa3141489f3d6f9bfcab44a550c8da8c08db3078585
+size 62895904
fiction.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37b09f88a678ab0839d0629628e6362a25ff46b96318655df18d8bf3a6bea5ef
+size 357119086
fiction_100mb.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:476319805d4c91d76655aac59d812ec9839c72fb30e57c8a33e24a7cc02fcf33
+size 94235489
github_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13ec625c84c7c47210ac69f4ea330099b3fbf5fcc8db3b45a603e9b315937372
+size 191123094
reddit.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98db3adf58719bbea17fb5ae0dd13100286eed46eb915db732d6fa4720b99485
+size 100027565
stackexchange_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f47b37817197f067c561a5b485db6a2e1a1fa2813c0ea62bcc76049208e5072
+size 71940138
wikipedia_sample.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61e2200397adfcf77593b950644a0e95b11ffc08d4e715c008eebfc08f7bd68a
+size 79181873