---
license: odc-by
dataset_info:
  features:
  - name: url
    dtype: string
  - name: tag
    dtype: string
  - name: text
    dtype: string
  - name: file_path
    dtype: string
  - name: dump
    dtype: string
  - name: file_size_in_byte
    dtype: int64
  - name: line_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 18159796472
    num_examples: 5241900
  download_size: 9949701917
  dataset_size: 18159796472
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# OpenCoder Dataset
The OpenCoder dataset is composed of the following datasets:

- **opc-sft-stage1**: the SFT data used for OpenCoder sft-stage1
- **opc-sft-stage2**: the SFT data used for OpenCoder sft-stage2
- **opc-annealing-corpus**: the synthetic data & algorithmic corpus used for OpenCoder annealing
- **opc-fineweb-code-corpus**: the code-related pages recalled from FineWeb
- **opc-fineweb-math-corpus**: the math-related pages recalled from FineWeb <-- you are here
- **refineCode-code-corpus-meta**: the metadata of RefineCode
Detailed information about the data can be found in our paper.
## opc-fineweb-math-corpus summary
This math-related data, recalled from FineWeb, was used specifically in OpenCoder pre-training. We employed fastText over three iterative recall rounds to arrive at a final dataset of 55B code- and math-related data. The code-related counterpart is available at OpenCoder-LLM/fineweb-code-corpus.
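As a rough illustration of this kind of fastText-based recall (not the exact pipeline or configuration used for this corpus), the sketch below trains a binary classifier on labelled seed pages and keeps web pages whose predicted math probability clears a threshold. The file name, label scheme, and threshold are assumptions made for the example.

```python
# Illustrative sketch only: a binary fastText recall pass over web pages.
# The seed file, labels, and threshold are assumptions for this example,
# not the exact setup used to build opc-fineweb-math-corpus.
import fasttext

# Seed file with one labelled example per line, e.g.:
#   __label__math   Integration by parts states that ...
#   __label__other  Top ten travel destinations for ...
model = fasttext.train_supervised(input="seed_pages.txt", epoch=5, wordNgrams=2)

def is_math_related(page_text: str, threshold: float = 0.9) -> bool:
    """Keep a page if the classifier assigns a high math probability."""
    # fastText's predict() rejects newlines, so flatten the page first.
    labels, probs = model.predict(page_text.replace("\n", " "), k=1)
    return labels[0] == "__label__math" and probs[0] >= threshold

# In an iterative setup like the one described above, pages recalled in one
# round would be added to the positive seed set before retraining for the next.
```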
This work belongs to INF.
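A quick way to inspect the records and the fields declared in the metadata above is to stream the dataset with the Hugging Face `datasets` library. The repository id below is inferred from this card's naming and should be checked against the hosting page.

```python
# Minimal usage sketch; the repo id "OpenCoder-LLM/opc-fineweb-math-corpus"
# is assumed from this card's naming.
from datasets import load_dataset

ds = load_dataset("OpenCoder-LLM/opc-fineweb-math-corpus", split="train", streaming=True)

for record in ds.take(3):
    # Fields follow the schema above: url, tag, text, file_path, dump,
    # file_size_in_byte, line_count.
    print(record["url"], record["dump"], record["line_count"])
    print(record["text"][:200])
```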
## Citation Information
Please consider citing our paper if you find this dataset useful:
```bibtex
@inproceedings{Huang2024OpenCoderTO,
  title  = {OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models},
  author = {Siming Huang and Tianhao Cheng and Jason Klein Liu and Jiaran Hao and Liuyihan Song and Yang Xu and J. Yang and J. H. Liu and Chenchen Zhang and Linzheng Chai and Ruifeng Yuan and Zhaoxiang Zhang and Jie Fu and Qian Liu and Ge Zhang and Zili Wang and Yuan Qi and Yinghui Xu and Wei Chu},
  year   = {2024},
  url    = {https://arxiv.org/pdf/2411.04905}
}
```