---
license: other
license_name: other
license_link: LICENSE
language:
- zh
- vi
- id
- ms
- tl
- my
- th
- lo
- km
- ta
---

# SEA-LION-Pile

SEA-LION-Pile is the pretraining dataset for SEA-LION, a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

This repository contains the cleaned mC4 portion of SEA-LION-Pile. The remainder of the SEA-LION-Pile dataset may be downloaded from the links provided below.

## Dataset Details

SEA-LION was trained on 980B tokens of the following data:

| Data Source               | Unique Tokens | Multiplier | Total Tokens | Percentage |
|---------------------------|:-------------:|:----------:|:------------:|:----------:|
| RefinedWeb - English      | 571.3B        | 1          | 571.3B       | 58.20%     |
| mC4 - Chinese             | 91.2B         | 1          | 91.2B        | 9.29%      |
| mC4 - Indonesian          | 3.68B         | 4          | 14.7B        | 1.50%      |
| mC4 - Malay               | 0.72B         | 4          | 2.9B         | 0.29%      |
| mC4 - Filipino            | 1.32B         | 4          | 5.3B         | 0.54%      |
| mC4 - Burmese             | 1.2B          | 4          | 4.9B         | 0.49%      |
| mC4 - Vietnamese          | 63.4B         | 1          | 63.4B        | 6.46%      |
| mC4 - Thai                | 5.8B          | 2          | 11.6B        | 1.18%      |
| WangChanBERTa - Thai      | 5B            | 2          | 10B          | 1.02%      |
| mC4 - Lao                 | 0.27B         | 4          | 1.1B         | 0.12%      |
| mC4 - Khmer               | 0.97B         | 4          | 3.9B         | 0.40%      |
| mC4 - Tamil               | 2.55B         | 4          | 10.2B        | 1.04%      |
| the Stack - Python        | 20.9B         | 2          | 41.8B        | 4.26%      |
| the Stack - Javascript    | 55.6B         | 1          | 55.6B        | 5.66%      |
| the Stack - Shell         | 1.25B         | 2          | 2.5B         | 0.26%      |
| the Stack - SQL           | 6.4B          | 2          | 12.8B        | 1.31%      |
| the Stack - Markdown      | 26.6B         | 1          | 26.6B        | 2.71%      |
| RedPajama - StackExchange | 21.2B         | 1          | 21.2B        | 2.16%      |
| RedPajama - ArXiv         | 30.6B         | 1          | 30.6B        | 3.12%      |

Total Tokens is the product of Unique Tokens and the Multiplier, i.e., the number of epochs over that source; for example, the Indonesian mC4 split contributes 3.68B × 4 ≈ 14.7B tokens.

### Additional SEA-LION-Pile (non-mC4) Data Sources

This section contains the links to the additional datasets that form the SEA-LION-Pile.

- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [the Stack (Python, Javascript, Shell, SQL, Markdown)](https://huggingface.co/datasets/bigcode/the-stack-dedup)
- [RedPajama (StackExchange, ArXiv)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- WangChanBERTa
  - [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020)
  - [prachathai67k](https://huggingface.co/datasets/prachathai67k)
  - [thaisum](https://huggingface.co/datasets/thaisum)
  - [Opus - bible-uedin](https://opus.nlpl.eu/bible-uedin.php)
  - [Opus - Tanzil](https://opus.nlpl.eu/Tanzil.php)
  - [Opus - Opensubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)
  - [Opus - QED](https://opus.nlpl.eu/QED.php)
  - [Opus - Ted2020](https://opus.nlpl.eu/TED2020.php)
  - [Opus - Oscar](https://oscar-project.org/post/news-23-01)

### Limitations

- As toxic or biased content is prevalent on the internet, it is likely that our dataset contains such content.
- Despite our best efforts to filter out content that does not qualify as natural language and to deduplicate documents, our pipeline may let through documents that are erroneous or redundant.

### License

This public extract of mC4 is made available under the [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl ToU](https://commoncrawl.org/terms-of-use/). For all other licenses, please refer to the individual dataset pages linked above.
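### Usage

The mC4 portion hosted here can be read with the Hugging Face `datasets` library. The snippet below is a minimal sketch, not an official loader: the repository id `aisingapore/sea-lion-pile` and the `train` split name are assumptions, so adjust them to match this repository's actual id and file layout.

```python
# Minimal sketch: stream the cleaned mC4 portion of SEA-LION-Pile.
# The repository id below is an assumption; replace it with this repo's actual id.
from datasets import load_dataset

# streaming=True iterates over the data without downloading the full corpus
# up front, which matters for a multi-billion-token dataset.
dataset = load_dataset("aisingapore/sea-lion-pile", split="train", streaming=True)

# Inspect a few records without materialising the whole split.
for i, record in enumerate(dataset):
    print(record)
    if i >= 2:
        break
```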
## References

```bibtex
@misc{lowphansirikul2021wangchanberta,
  title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
  author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
  year={2021},
  eprint={2101.09635},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}

@article{Kocetkov2022TheStack,
  title={The Stack: 3 TB of permissively licensed source code},
  author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou, Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm},
  journal={Preprint},
  year={2022}
}

@software{together2023redpajama,
  author={Together Computer},
  title={RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month=apr,
  year={2023},
  url={https://github.com/togethercomputer/RedPajama-Data}
}
```