---
title: README
emoji: 👁
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---

# HuggingFaceTB

This is the home for smol models (SmolLM) and high-quality pre-training datasets. We released:

- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a filtered version of the FineWeb dataset for educational content, paper available [here](https://huggingface.co/papers/2406.17557).
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1. Blog post available [here](https://huggingface.co/blog/cosmopedia).
- [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of the SmolLM models, including **Cosmopedia v0.2**, **FineWeb-Edu dedup** and **Python-Edu**. Blog post available [here](https://huggingface.co/blog/smollm).
- [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) and [SmolLM2 models](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M and 1.7B.

**News 🗞️**

- SmolLM2: you can find our most capable model, SmolLM2-1.7B, here: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct (a quick-start sketch follows below).
- We released our SFT mix SmolTalk, a 1M-sample synthetic dataset for improving instruction following, chat and reasoning: https://hf.co/datasets/HuggingFaceTB/smoltalk (see the loading sketch at the end of this page).
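As a quick start, SmolLM2-1.7B-Instruct can be run with `transformers`. This is a minimal sketch: the prompt and generation settings are illustrative, not prescriptive.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Build a chat prompt with the model's chat template
messages = [{"role": "user", "content": "What is gravity?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```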

*Comparison of models fine-tuned on SmolTalk and Orca AgentInstruct 1M. For more details, refer to the dataset card.*
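For reference, a minimal sketch of loading SmolTalk with the `datasets` library. The `"all"` config name is an assumption; check the dataset card for the available subsets, which can be loaded the same way.

```python
from datasets import load_dataset

# "all" is assumed to be the aggregate config; see the dataset card
# for the list of individual subsets.
ds = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
print(ds[0]["messages"])  # each sample is a list of chat messages
```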