yury-zyphra committed
Commit 61a8b4b
1 Parent(s): 90f3de6

Update README.md

Files changed (1)
  1. README.md +55 -11

README.md CHANGED
@@ -1,8 +1,36 @@
 ---
 license: odc-by
 ---

- # Zynemo-5T

 <!-- Provide a quick summary of the dataset. -->

@@ -24,12 +52,31 @@ According to our evaluations, Zynemo is the most performant per-token open datas
 For more information, please see our technical blog (-/TODO LINK)

 ## How to download

- // TODO YURY

 ## Breakdown by component

- // TODO YURY

 ### Dataset Description

@@ -44,18 +91,15 @@ For more information, please see our technical blog (-/TODO LINK)

 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- // TODO IS THIS CORRECT YURY?

- Dataset fields:
- - `text`: contains actual text for training
- - `source`: component the text is coming from
- - `filtering_features`: precomputed values of different features that were used for filtering (converted to json string)
- - `source_other`: metadata from the source dataset (converted to json string)

 ### Source Data

- Zynemo is comprised of four high quality open-source datasets:

 Zyda1: https://huggingface.co/datasets/Zyphra/Zyda

@@ -63,7 +107,7 @@ Dolma-1.7-cc https://huggingface.co/datasets/allenai/dolma

 DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0

- FineWeb-Edu-2 https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu

 <center>
 <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="ZyNeMo dataset composition">
 
 ---
 license: odc-by
+ pretty_name: Zyda2-5T
+ task_categories:
+ - text-generation
+ language:
+ - en
+ size_categories:
+ - n>1T
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/*/*/*
+ - config_name: dclm-crossdeduped
+   data_files:
+   - split: train
+     path: data/dclm-baseline/*/*
+ - config_name: zyda-crossdeduped-filtered
+   data_files:
+   - split: train
+     path: data/zyda/*/*
+ - config_name: dolma_cc-crossdeduped-filtered
+   data_files:
+   - split: train
+     path: data/dolma-cc-v1_7-deduped-filtered/*
+ - config_name: fwe3
+   data_files:
+   - split: train
+     path: data/fwe3/*/*
 ---

+ # Zyda2-5T

 <!-- Provide a quick summary of the dataset. -->

 For more information, please see our technical blog (-/TODO LINK)

 ## How to download
+ Since we preserved the schemas of the original component datasets, attempting to download the whole dataset with `datasets.load_dataset()` might fail at the split-generation stage.
+
+ To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.
+
+ Example command to clone the repository with huggingface-cli: `huggingface-cli download Zyphra/Zyda2-5T --repo-type dataset`
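
If you prefer to stay in Python, the same clone can be done through the `huggingface_hub` API; a minimal sketch (the `local_dir` value is just an illustrative path):

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (all shards) into a local directory.
snapshot_download(
    repo_id="Zyphra/Zyda2-5T",
    repo_type="dataset",
    local_dir="zyda2-5t",  # illustrative destination path
)
```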
+
+ Commands to download individual components (a combined loop is sketched after this list):
+ - DCLM: `ds = datasets.load_dataset("Zyphra/Zyda2-5T", name="dclm-crossdeduped", split="train")`
+ - Zyda: `ds = datasets.load_dataset("Zyphra/Zyda2-5T", name="zyda-crossdeduped-filtered", split="train")`
+ - Dolma-CC: `ds = datasets.load_dataset("Zyphra/Zyda2-5T", name="dolma_cc-crossdeduped-filtered", split="train")`
+ - Fineweb-Edu: `ds = datasets.load_dataset("Zyphra/Zyda2-5T", name="fwe3", split="train")`
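
For convenience, a minimal sketch that fetches all four components in one loop (the `components` list and `parts` dict are our own illustrative names, not part of the dataset):

```python
import datasets

# Config names as declared in this repository's dataset card.
components = [
    "dclm-crossdeduped",
    "zyda-crossdeduped-filtered",
    "dolma_cc-crossdeduped-filtered",
    "fwe3",
]

# Load each component separately, keyed by its config name.
parts = {
    name: datasets.load_dataset("Zyphra/Zyda2-5T", name=name, split="train")
    for name in components
}
```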
+
+ In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, one will need to apply appropriate weights to the components during training.
+ We found the following optimal weights (in the sense of relative weights in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
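
One way to realize such a mixture on the fly is sketched below. This is only an illustration, and it assumes the weights can be read as relative sampling proportions; if they are instead meant as per-component repetition factors, the probabilities would have to be rescaled by each component's size:

```python
import datasets

# Reported component weights; normalizing them gives sampling probabilities
# (assumption: weights are relative shares of the resultant dataset).
weights = {
    "dclm-crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda-crossdeduped-filtered": 0.16,
    "dolma_cc-crossdeduped-filtered": 0.24,
}
total = sum(weights.values())
probabilities = [w / total for w in weights.values()]

# Stream the components so nothing has to be fully materialized on disk.
parts = [
    datasets.load_dataset("Zyphra/Zyda2-5T", name=name, split="train", streaming=True)
    for name in weights
]

# Draw documents from the components in proportion to the weights above.
mixed = datasets.interleave_datasets(parts, probabilities=probabilities, seed=42)
```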

 ## Breakdown by component

+ | Component | Download size (parquet, GB) | Documents (millions) | gpt-neox tokens (billions) |
+ | --- | --- | --- | --- |
+ | dclm-crossdeduped | 8,469.4 | 2,590.5 | 3,348.942 |
+ | zyda-crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
+ | dolma_cc-crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
+ | fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
+ | Total | 13,080.5 | 4,562.8 | 5,070.2 |

 ### Dataset Description

 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

+ Each component has its own individual schema. Please consult the respective source datasets for exact details.
+
+ However, in all components the document text is in the `text` column, and the unique document id is in the `nemo_id` column.
+
+ Our Zyda1 and Dolma-CC versions also have two additional columns with the predictions of Nvidia's quality classifier (https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.
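
To inspect these columns without downloading a full component, one can stream a few documents; a minimal sketch (the column names follow the description above, but the exact value types are best checked against the data itself):

```python
import datasets

# Stream the Dolma-CC component and peek at the shared and quality columns.
ds = datasets.load_dataset(
    "Zyphra/Zyda2-5T",
    name="dolma_cc-crossdeduped-filtered",
    split="train",
    streaming=True,
)

for i, doc in enumerate(ds):
    print(doc["nemo_id"], doc["quality_pred"], doc["quality_prob"], doc["text"][:80])
    if i >= 2:  # just the first three documents
        break
```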
 
 
 
 
 ### Source Data

+ Zyda2 comprises four high-quality open-source datasets:

 Zyda1: https://huggingface.co/datasets/Zyphra/Zyda

 Dolma-1.7-cc https://huggingface.co/datasets/allenai/dolma

 DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0

+ FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2

 <center>
 <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="ZyNeMo dataset composition">