Update README.md
README.md CHANGED
@@ -77,7 +77,9 @@ source_datasets: graelo/wikipedia
 
 This is really more of a "high quality diverse sample" rather than _"we are trying to remove literal duplicate documents"_. Source dataset: [graelo/wikipedia](https://huggingface.co/datasets/graelo/wikipedia).
 
-##
+## configs
+
+### `default`
 
 command:
 
@@ -137,4 +139,23 @@ config_name = "text-only"
 dataset = load_dataset("BEE-spoke-data/wikipedia-deduped", config_name)
 ```
 
+## token counts
+
+### train
+
+Using `tiktoken` GPT-4 tokenizer, `train` split, and `text` column:
+
+|       |   num_tokens |
+|:------|-------------:|
+| count |  5.67337e+06 |
+| mean  |      612.413 |
+| std   |      739.331 |
+| min   |            3 |
+| 25%   |          163 |
+| 50%   |          359 |
+| 75%   |          761 |
+| max   |        34298 |
+
+total: 3,474,446,396
+
 ---
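For reference, statistics like those in the added "token counts" table can be reproduced with a short script. This is a minimal sketch, not the exact code behind the commit: it assumes `cl100k_base` is the "GPT-4 tokenizer" referred to, uses the `text-only` config and `train` split (the config actually used for the published numbers is not stated in the diff), and requires `datasets`, `tiktoken`, and `pandas`.

```python
# Sketch: per-document GPT-4 token counts over the train split's `text` column,
# summarized as count/mean/std/min/quartiles/max plus a grand total.
# Assumptions: cl100k_base encoding, "text-only" config.
import pandas as pd
import tiktoken
from datasets import load_dataset

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 / GPT-3.5-turbo

dataset = load_dataset("BEE-spoke-data/wikipedia-deduped", "text-only", split="train")

def count_tokens(batch):
    # encode_ordinary_batch ignores special tokens and works on a list of strings
    return {"num_tokens": [len(ids) for ids in enc.encode_ordinary_batch(batch["text"])]}

dataset = dataset.map(count_tokens, batched=True, batch_size=1000)

num_tokens = pd.Series(dataset["num_tokens"], name="num_tokens")
print(num_tokens.describe().to_markdown())          # table like the one above
print("total:", f"{num_tokens.sum():,}")
```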