---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: gpt_token_count
    dtype: int64
  - name: llama_token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 193124707426
    num_examples: 51700035
  - name: valid
    num_bytes: 193389444
    num_examples: 51752
  download_size: 113927709248
  dataset_size: 193318096870
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
---
# Pretraining dataset
This dataset is composed of three sources: 1. dolma (22B tokens), 2. fineweb-edu (21B tokens), and 3. starcoder (2B tokens), filtered to common programming languages (C, C++, Java, Python, JSON).
## Token count
| split | GPT2 token count | Llama3 token count |
|---|---|---|
| train | 45.25B | 41.80B |
| valid | 45.04M | 41.64M |
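Dividing the split totals by the example counts in the metadata gives the average document length; a quick sanity check (treating the table's 45.25B as exact to two decimals):

```python
# Average GPT2 tokens per training document, from the figures above.
train_gpt_tokens = 45.25e9   # train split, GPT2 token count
train_examples = 51_700_035  # train split, num_examples
print(round(train_gpt_tokens / train_examples))  # ~875 tokens per document
```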