---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Tulu3
- Smollm
- SLMs
- Small
- Huggingface
- Allenai
- SFT
- DPO
- GGUF
base_model:
- HuggingFaceTB/SmolLM2-1.7B
datasets:
- allenai/tulu-3-sft-mixture
- allenai/llama-3.1-tulu-3-8b-preference-mixture
pipeline_tag: text-generation
model-index:
- name: SmolTulu-1.7b-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 65.41
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 12.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.64
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.57
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 7.89
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
---
# SmolLM2 1.7b Instruction Tuned & DPO Aligned through Tulu 3!
![SmolTulu Banner](smoltulubannerv0.png)
SmolTulu-v0.1 is the first in a series of models meant to leverage [AllenAI's Tulu 3 post-training pipeline](https://allenai.org/blog/tulu-3-technical) to tune the [base version of Hugging Face's SmolLM2-1.7b](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B)! AllenAI's post-training pipeline seemed like a perfect fit to apply here.
Among the comparably sized models evaluated below, this model achieves the highest scores on both IFEval and GSM8K, while maintaining the extremely low contamination levels of Tulu 3 and SmolLM2! The datasets used for both the SFT (supervised finetuning) and DPO (direct preference optimization) stages are listed in the metadata above.
## Why v0.1?
There are a few reasons why I like calling this model v0.1:
1. The model still lags behind the instruction tuned version of SmolLM2 in some other metrics.
2. This model has only undergone SFT and DPO; the RLVR (reinforcement learning with verifiable rewards) stage was too computationally expensive to run on a model that still has clear room to improve.
3. The initial hyperparameter choices during training were naive; through some napkin math I've found a much better learning rate that scales the one from the Tulu 3 paper to my computational resources (see the sketch after this list).
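As a rough illustration of that napkin math, here is a minimal sketch assuming the learning rate is scaled linearly with the ratio of effective batch sizes; all of the numbers below are illustrative assumptions, not the actual training configuration:
```python
# Hypothetical napkin math: scale the Tulu 3 learning rate by the
# ratio of effective batch sizes. Every number here is an assumption
# for illustration, not the real training config.
tulu3_lr = 5e-6          # assumed SFT learning rate from the Tulu 3 recipe
tulu3_batch_size = 128   # assumed Tulu 3 effective batch size
my_batch_size = 8        # assumed effective batch size on limited hardware

# Linear scaling rule: keep the LR-to-batch-size ratio constant
scaled_lr = tulu3_lr * (my_batch_size / tulu3_batch_size)
print(f"scaled learning rate: {scaled_lr:.3e}")  # -> 3.125e-07
```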
# Evaluation
I ran these evaluations using [SmolLM2's evaluation code](https://github.com/huggingface/smollm/tree/main/evaluation) for a fairer comparison.
| Metric | SmolTulu-1.7b-Instruct | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:----------------------------|:---------------------:|:---------------------:|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **67.7** | 56.7 | 53.5 | 47.4 | 23.1 |
| GSM8K (5-shot) | **51.6** | 48.2 | 26.8 | 42.8 | 4.6 |
| PIQA | 72.2 | **74.4** | 72.3 | 73.2 | 71.6 |
| BBH (3-shot) | 33.8 | 32.2 | 27.6 | **35.3** | 25.7 |
| ARC (Average) | 51.5 | **51.7** | 41.6 | 46.2 | 43.7 |
| HellaSwag | 61.1 | **66.1** | 56.1 | 60.9 | 55.5 |
| MMLU-Pro (MCF) | 17.4 | 19.3 | 12.7 | **24.2** | 11.7 |
# Usage
As with any Hugging Face model, you can run it using the transformers library:
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "SultanR/SmolTulu-1.7b-Instruct"
device = "cuda"  # for GPU usage, or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# For multiple GPUs, install accelerate and use
# `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
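Since this is an instruction-tuned model, you'll usually want to format prompts with the chat template rather than raw text. A minimal sketch (the example message is of course arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "SultanR/SmolTulu-1.7b-Instruct"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Build a chat-formatted prompt and append the generation prompt tokens
messages = [{"role": "user", "content": "Explain gravity in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```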
You can also use the model in llama.cpp through the [GGUF version](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct-GGUF)!
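If you'd rather stay in Python, the GGUF weights can also be loaded through llama-cpp-python. A minimal sketch, noting that the `*Q8_0.gguf` filename glob is an assumption; check the GGUF repo for the quantizations actually available:
```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Downloads a GGUF file from the repo; the "*Q8_0.gguf" glob is an
# assumption, pick a filename that actually exists in the GGUF repo.
llm = Llama.from_pretrained(
    repo_id="SultanR/SmolTulu-1.7b-Instruct-GGUF",
    filename="*Q8_0.gguf",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain gravity in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```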
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SultanR__SmolTulu-1.7b-Instruct).
To give a more holistic overview, I also added the Open LLM Leaderboard results, which use a different evaluation setup from the SmolLM2 script above and therefore differ a lot from the numbers in the table there.
| Metric |Value|
|-------------------|----:|
|Avg. |15.45|
|IFEval (0-Shot) |65.41|
|BBH (3-Shot) |12.26|
|MATH Lvl 5 (4-Shot)| 2.64|
|GPQA (0-shot) | 2.57|
|MuSR (0-shot) | 1.92|
|MMLU-PRO (5-shot) | 7.89|