---
title: README
emoji: π
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---
HuggingFaceTB
This is the home for smol models (SmolLM) and high-quality pre-training datasets.
We released:
- FineWeb-Edu: a version of the FineWeb dataset filtered for educational content; paper available here (a loading sketch follows this list).
- Cosmopedia: the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1. Blog post available here.
- SmolLM-Corpus: the pre-training corpus of the SmolLM models, including Cosmopedia v0.2, a deduplicated FineWeb-Edu, and Python-Edu. Blog post available here.
- SmolLM models and SmolLM2 models: a series of strong small models in three sizes: 135M, 360M, and 1.7B parameters.
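As a quick illustration, these datasets can be pulled with the 🤗 `datasets` library. Here is a minimal sketch that streams FineWeb-Edu; the `sample-10BT` config name is an assumption, so check the dataset card for the exact repo id and configs:

```python
from datasets import load_dataset

# Stream the dataset so nothing has to be downloaded up front.
# The "sample-10BT" config is an assumption; see the dataset card
# on the Hub for the authoritative config names.
ds = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)

# Peek at a few records; each one carries the document text.
for example in ds.take(3):
    print(example["text"][:200])
```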
News
- SmolLM2: our most capable model, SmolLM2-1.7B-Instruct, is available here: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct (a short generation sketch follows below).
- We released our SFT mix SmolTalk, a diverse dataset of 1M synthetic instruction-answer pairs for improving instruction following and reasoning: https://huggingface.co/datasets/HuggingFaceTB/smoltalk
*Comparison of models fine-tuned on SmolTalk and Orca AgentInstruct 1M. For more details, refer to the dataset card.*
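For reference, SmolLM2-1.7B-Instruct can be run through the standard `transformers` chat API. A minimal sketch, with generation settings that are illustrative placeholders rather than recommended defaults:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Build the prompt with the model's own chat template.
messages = [{"role": "user", "content": "What is gravity?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Sampling parameters below are placeholders, not tuned defaults.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```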