---
license: apache-2.0
model-index:
- name: TinyWand-SFT
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 31.4
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 49.96
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.98
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 43.08
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 55.17
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.05
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT
      name: Open LLM Leaderboard
---
# TinyWand-SFT
**Korean model description**

*1.63B. How about an SLM at a sensible size?*
## Model Introduction

TinyWand-SFT is a 1.63B-parameter SLM. Its small 1.63B footprint lets it run on low-end devices or reach high tokens/s, while still delivering strong performance.
## Model License

apache-2.0
## Model Performance

TBD
## Limitations

Because of its small size, the model tends not to respond properly after instruct fine-tuning when a prompt does not follow the expected template. If you target a specific task, fine-tuning is recommended over prompting (see the sketch below).

For the same reason, it scores considerably low on general benchmarks.
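If you do fine-tune for a single task, a run could look roughly like the following. This is a minimal sketch assuming a recent `trl` release; `my_task.jsonl` and the hyperparameters are hypothetical placeholders, not this model's training recipe.

```python
# Minimal SFT sketch, assuming a recent `trl` release; the dataset file and the
# hyperparameters below are hypothetical placeholders, not this model's recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# SFTTrainer expects a "text" column by default; format each example with the
# same Alpaca template the model was tuned on.
dataset = load_dataset("json", data_files="my_task.jsonl", split="train")

trainer = SFTTrainer(
    model="maywell/TinyWand-SFT",  # a model id string is loaded automatically
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="tinywand-my-task",
        per_device_train_batch_size=4,
        num_train_epochs=3,
    ),
)
trainer.train()
```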
## Training Process

TBD
## Usage Guide

### VRAM Required for Inference
| Quantization | Input tokens | Output tokens | Memory usage |
|---|---|---|---|
| bf16 (base) | 64 | 256 | 3,888 MiB |
| q4_K_M | 64 | 256 | 1,788 MiB |
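The q4_K_M row corresponds to a llama.cpp-style 4-bit quantization. As a minimal sketch, assuming a GGUF conversion of the model is available locally (the file name below is hypothetical, not a published artifact), it can be run with `llama-cpp-python`:

```python
# Minimal sketch: assumes a GGUF conversion of the model exists locally.
# "tinywand-sft.Q4_K_M.gguf" is a hypothetical file name.
from llama_cpp import Llama

llm = Llama(model_path="tinywand-sft.Q4_K_M.gguf", n_ctx=2048)

# Alpaca-style prompt; the exact template the model expects can be rendered
# with apply_chat_template() (see the Prompt Template section below).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is an SLM?\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```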
### Prompt Template

This model uses the Alpaca prompt template. The template is defined in the Hugging Face tokenizer and can be rendered with `apply_chat_template()`, as in the example below.
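For example, the following prints the formatted prompt as a plain string instead of token ids (standard `transformers` API; the messages are just placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-SFT")

messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},
    {"role": "user", "content": "Hello!"},
]

# tokenize=False returns the formatted prompt string rather than token ids,
# making the Alpaca-style layout visible; add_generation_prompt=True appends
# the marker the model continues from.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```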
You can load and run the model with the Python code below; `transformers` and `torch` must be installed first.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # assumes an NVIDIA GPU

tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "maywell/TinyWand-SFT",
    device_map="auto",  # places the model on the GPU, so no extra model.to(device) is needed
    torch_dtype=torch.bfloat16,  # switch to torch.float16 if your hardware does not support bfloat16
)

messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},  # applied the same way even if left empty
    {"role": "user", "content": "What are the advantages of a language model having fewer parameters?"},
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
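For interactive use on small devices, the same generate call can stream tokens to stdout as they are produced, using transformers' built-in `TextStreamer` (a small variant reusing `tokenizer`, `model`, and `model_inputs` from the snippet above):

```python
from transformers import TextStreamer

# Prints tokens as they are generated; skip_prompt=True suppresses echoing the input.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(model_inputs, max_new_tokens=1000, do_sample=True, streamer=streamer)
```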
# Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/TinyWand-SFT).
| Metric | Value |
|---|---:|
| Avg. | 34.61 |
| AI2 Reasoning Challenge (25-Shot) | 31.40 |
| HellaSwag (10-Shot) | 49.96 |
| MMLU (5-Shot) | 25.98 |
| TruthfulQA (0-shot) | 43.08 |
| Winogrande (5-shot) | 55.17 |
| GSM8k (5-shot) | 2.05 |
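These scores use the few-shot settings listed above. To re-run a benchmark locally, here is a sketch with EleutherAI's lm-evaluation-harness (installable as `lm-eval`); local results may differ slightly from the leaderboard's pinned harness version:

```python
# Sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval);
# results may differ slightly from the leaderboard's pinned harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=maywell/TinyWand-SFT,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the leaderboard's ARC-Challenge setting
)
print(results["results"]["arc_challenge"])
```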