---
title: README
emoji: πŸ‘
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---
# HuggingFaceTB
This is the home of the SmolLM family of small LLMs and of high-quality pre-training datasets such as [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) and [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).
We released:
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
- [Cosmo-1B](https://huggingface.co/HuggingFaceTB/cosmo-1b): a 1B-parameter model trained on Cosmopedia.
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a version of the FineWeb dataset filtered for educational content.
- [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of the SmolLM models, consisting of **Cosmopedia v0.2**, **FineWeb-Edu**, and **Python-Edu**.
- [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) and [SmolLM2](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): two series of strong small models, each in three sizes: 135M, 360M, and 1.7B parameters (see the loading and generation sketches below).
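Everything above can be pulled straight from the Hub. As a minimal sketch of loading one of the datasets, assuming the standard `datasets` API (`web_samples_v2` is one of Cosmopedia's published subsets):

```python
from datasets import load_dataset

# Stream a single Cosmopedia subset so the 25B-token dataset
# is not downloaded up front.
ds = load_dataset(
    "HuggingFaceTB/cosmopedia",
    "web_samples_v2",
    split="train",
    streaming=True,
)
sample = next(iter(ds))
print(sample["text"][:500])  # first 500 characters of one synthetic document
```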
For more details, check our blog posts on [Cosmopedia](https://huggingface.co/blog/cosmopedia) and [SmolLM](https://huggingface.co/blog/smollm).
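The SmolLM checkpoints drop into the usual `transformers` workflow. A minimal generation sketch, assuming the 135M SmolLM2 base model from the collection linked above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-135M"  # any of the three sizes works the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Plain completion with the base (non-instruct) model.
inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```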