---
license: mit
dataset_info:
  features:
  - name: url
    dtype: string
  - name: tag
    dtype: string
  - name: text
    dtype: string
  - name: file_path
    dtype: string
  - name: dump
    dtype: string
  - name: file_size_in_byte
    dtype: int64
  - name: line_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 254927419643
    num_examples: 100920235
  download_size: 147948949488
  dataset_size: 254927419643
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This code-related dataset, filtered from [Fineweb](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1), was used in [OpenCoder](https://huggingface.co/papers/2411.04905) pre-training.
We applied fastText classifiers over three iterative rounds to recall a final corpus of 55B tokens of code- and math-related data.
The math-related portion is available at [OpenCoder-LLM/fineweb-math-corpus](https://huggingface.co/datasets/OpenCoder-LLM/fineweb-math-corpus).

## Citation
```
@inproceedings{Huang2024OpenCoderTO,
  title={OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models},
  author={Siming Huang and Tianhao Cheng and Jason Klein Liu and Jiaran Hao and Liuyihan Song and Yang Xu and J. Yang and J. H. Liu and Chenchen Zhang and Linzheng Chai and Ruifeng Yuan and Zhaoxiang Zhang and Jie Fu and Qian Liu and Ge Zhang and Zili Wang and Yuan Qi and Yinghui Xu and Wei Chu},
  year={2024},
  url={https://arxiv.org/pdf/2411.04905}
}
```