Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

This repository provides unsloth/SmolLM2-360M-Instruct, pre-quantized to 4-bit with bitsandbytes.
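
Because the checkpoint is already stored in bitsandbytes 4-bit format, it can be loaded directly with transformers. A minimal sketch, assuming the unsloth/SmolLM2-360M-bnb-4bit repo id this card is published under, a CUDA GPU, and the bitsandbytes and accelerate packages installed:

```python
# Minimal sketch: load the pre-quantized 4-bit checkpoint with transformers.
# The repo id is taken from this card; bitsandbytes and accelerate are required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/SmolLM2-360M-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the quantized weights on the available GPU
)

inputs = tokenizer("Gravity is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```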

For more details on the model, please see Hugging Face's original model card.

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face. A minimal sketch of that workflow follows the table below.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
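
As a rough illustration of what those notebooks do under the hood, here is a minimal LoRA finetuning sketch using Unsloth's FastLanguageModel together with trl's SFTTrainer; the LoRA settings, trainer arguments, and placeholder dataset are assumptions for illustration, not the notebooks' actual configuration:

```python
# Rough sketch of the notebook workflow; the LoRA settings, trainer
# arguments, and the IMDB placeholder dataset are all assumptions,
# not the notebooks' actual configuration.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/SmolLM2-360M-bnb-4bit",  # repo id from this card
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

dataset = load_dataset("imdb", split="train[:1%]")  # placeholder; has a "text" column

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```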

Special Thanks

A huge thank you to the Hugging Face team for creating and releasing these models.

Model Summary

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
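
As a hedged illustration of that final stage (not HuggingFaceTB's actual recipe), a DPO pass over UltraFeedback preference pairs might look roughly like this with trl's DPOTrainer; the checkpoint, dataset variant, and hyperparameters below are all placeholder assumptions:

```python
# Illustrative DPO sketch, not the actual SmolLM2 training recipe.
# Assumes a recent trl release; checkpoint, dataset variant, and
# hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"  # stand-in for the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs (prompt / chosen / rejected) derived from UltraFeedback.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, max_steps=100),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```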

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by Argilla such as Synth-APIGen-v0.1.
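
The instruct model is meant to be prompted through its chat template. A minimal inference sketch, assuming the upstream HuggingFaceTB/SmolLM2-360M-Instruct checkpoint (swap in a quantized variant as needed):

```python
# Minimal chat-style inference sketch for the instruct model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"  # upstream checkpoint (assumed here)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Rewrite this politely: give me the report now."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```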

