Update files from the datasets library (from 1.12.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0

- README.md +6 -6
- dataset_infos.json +1 -1
README.md
CHANGED

````diff
@@ -36,9 +36,9 @@ paperswithcode_id: openwebtext
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 12880.03 MB
+- **Size of the generated dataset:** 39769.50 MB
+- **Total amount of disk used:** 52649.52 MB
 
 ### Dataset Summary
 
@@ -60,9 +60,9 @@ We show detailed information for up to 5 configurations of the dataset.
 
 #### plain_text
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 12880.03 MB
+- **Size of the generated dataset:** 39769.50 MB
+- **Total amount of disk used:** 52649.52 MB
 
 An example of 'train' looks as follows.
 ```
````
|
dataset_infos.json
CHANGED

```diff
@@ -1 +1 @@
-{"plain_text": {"description": "An open-source replication of the WebText dataset from OpenAI.\n", "citation": "@misc{Gokaslan2019OpenWeb
+{"plain_text": {"description": "An open-source replication of the WebText dataset from OpenAI.\n", "citation": "@misc{Gokaslan2019OpenWeb,\n  title={OpenWebText Corpus},\n  author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},\n  howpublished{\\url{http://Skylion007.github.io/OpenWebTextCorpus}},\n  year={2019}\n}\n", "homepage": "https://skylion007.github.io/OpenWebTextCorpus/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "openwebtext", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 39769494896, "num_examples": 8013769, "dataset_name": "openwebtext"}}, "download_checksums": {"https://zenodo.org/record/3834942/files/openwebtext.tar.xz": {"num_bytes": 12880027468, "checksum": "9fe39d154c5bc67da8c359415372b79510eb1e2edb0d035fe4f7fc3a732b9336"}}, "download_size": 12880027468, "post_processing_size": null, "dataset_size": 39769494896, "size_in_bytes": 52649522364}}
```
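For reference, the MB figures added to the README follow from the byte counts in dataset_infos.json. A minimal sketch of the arithmetic (the decimal-MB rounding shown here is an assumption about the card generator's convention, not its exact formatting code, and the generated-dataset figure can differ by a final rounding step):

```python
# Byte counts copied from the updated dataset_infos.json.
info = {
    "download_size": 12880027468,   # bytes in openwebtext.tar.xz (download_checksums)
    "dataset_size": 39769494896,    # bytes of the generated dataset (train split)
    "size_in_bytes": 52649522364,   # total disk usage
}

def to_mb(num_bytes: int) -> float:
    """Convert bytes to decimal megabytes (10**6 bytes), rounded to 2 places."""
    return round(num_bytes / 10**6, 2)

# Total disk usage is the downloaded archive plus the generated dataset.
assert info["download_size"] + info["dataset_size"] == info["size_in_bytes"]

print(to_mb(info["download_size"]))   # 12880.03, as shown in the card
print(to_mb(info["size_in_bytes"]))   # 52649.52, as shown in the card
```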