---
license: cc-by-nc-4.0
---

<p align="left">
  <img src="./TinyWand.png" width="150"/>
</p>

# Model Description

**1.63B. How about an SLM at a size that hits the sweet spot?**

## About the Model

**TinyWand-SFT** is a 1.63B-parameter SLM (small language model). Its small size lets it run on low-end devices and reach high tokens/s throughput while still delivering strong performance.

## License

The model is currently released under the cc-by-nc-4.0 license, which forbids commercial use. The same license applies to any model derived from these weights, including fine-tuned and continually pre-trained variants.

The license is expected to be relaxed to a free or conditionally free one within a few days.

## Performance

TBD

## Training

Currently undisclosed.

## Usage

**VRAM required for inference**

| Quantization | Input tokens | Output tokens | Memory usage |
|---|---|---|---|
| bf16 (base) | 64 | 256 | 3,888 MiB |
| q4_K_M | 64 | 256 | 1,788 MiB |
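As a rough sanity check on the table, the weights-only footprint can be estimated from the parameter count. The figures below are assumptions, not measurements: about 2 bytes per weight for bf16 and roughly 0.57 bytes (about 4.5 bits) per weight for q4_K_M. The table's totals are higher because they also include the KV cache, activations, and runtime overhead.

```python
# Rough lower bound on inference memory: parameter count x bytes per weight.
PARAMS = 1.63e9  # TinyWand's parameter count

def weight_mib(bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in MiB."""
    return PARAMS * bytes_per_param / 2**20

bf16 = weight_mib(2.0)   # bf16: 2 bytes per parameter
q4 = weight_mib(0.57)    # q4_K_M: ~4.5 bits per parameter on average (assumed)

print(f"bf16 weights:   {bf16:,.0f} MiB")  # ~3,109 MiB (table total: 3,888 MiB)
print(f"q4_K_M weights: {q4:,.0f} MiB")    # ~886 MiB (table total: 1,788 MiB)
```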

**Prompt template**

This model uses the Alpaca prompt template.

The template is applied with `apply_chat_template()`; see the [Hugging Face chat templating docs](https://huggingface.co/docs/transformers/main/chat_templating) for details.
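For orientation, the widely used Alpaca layout looks roughly like the sketch below. This helper is hypothetical and only illustrative; the authoritative template is the one shipped in the tokenizer config and rendered by `apply_chat_template()`.

```python
def alpaca_prompt(system: str, instruction: str) -> str:
    # Hypothetical sketch of an Alpaca-style prompt; the tokenizer's
    # chat template is the authoritative source of the exact format.
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.",
    "What are the advantages of a small language model?",
))
```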

**You can load and run the model with the Python code below.**

*Requires `transformers` and `torch` to be installed.*

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # assumes an NVIDIA GPU; use "cpu" if no GPU is available

tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "maywell/TinyWand-SFT",
    torch_dtype=torch.bfloat16,  # switch to torch.float16 if your hardware does not support bfloat16
)
model.to(device)

messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},  # the same system prompt is applied even if left empty
    {"role": "user", "content": "What are the advantages of a language model with fewer parameters?"},
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
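Note that `generate()` returns the prompt ids followed by the newly generated ids, so `decoded[0]` repeats the prompt text. A common follow-up is to slice the prompt off before decoding; a minimal sketch with made-up stand-in token ids:

```python
prompt_ids = [101, 2023, 2003]              # stand-in for model_inputs[0]
generated = prompt_ids + [3437, 999, 102]   # stand-in for generated_ids[0]

# Keep only the newly generated part by skipping the prompt's length.
new_ids = generated[len(prompt_ids):]
print(new_ids)  # → [3437, 999, 102]
```

In the real pipeline the same indexing is `tokenizer.decode(generated_ids[0][model_inputs.shape[-1]:], skip_special_tokens=True)`.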