---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
---

### How to use GPTQ model

The prompt template and `Prompter` helper come from https://github.com/jongmin-oh/korean-LLM-quantize:

```
mkdir ./templates && mkdir ./utils &&
wget -P ./templates https://raw.githubusercontent.com/jongmin-oh/korean-LLM-quantize/main/templates/kullm.json &&
wget -P ./utils https://raw.githubusercontent.com/jongmin-oh/korean-LLM-quantize/main/utils/prompter.py
```
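
The example further down assumes `Prompter` exposes two methods: `generate_prompt` fills an instruction/input pair into the template, and `get_response` cuts the model's answer out of the generated text. A minimal sketch of that contract, with the template keys assumed from the common Alpaca-LoRA convention rather than read from `templates/kullm.json`:

```python
import json

class SketchPrompter:
    """Hypothetical stand-in for utils/prompter.py, for illustration only."""

    def __init__(self, template_name: str):
        # Templates are the JSON files downloaded into ./templates above.
        with open(f"./templates/{template_name}.json", encoding="utf-8") as f:
            self.template = json.load(f)

    def generate_prompt(self, instruction: str, input_text: str = "") -> str:
        # Fill the instruction (and optional input) into the template string.
        if input_text:
            return self.template["prompt_input"].format(
                instruction=instruction, input=input_text
            )
        return self.template["prompt_no_input"].format(instruction=instruction)

    def get_response(self, generated_text: str) -> str:
        # Everything after the response marker is the model's answer.
        return generated_text.split(self.template["response_split"])[-1].strip()
```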

### Install packages

```
pip install torch==2.0.1 auto-gptq==0.4.2
```

- If you are in a hurry, just run the example code below to try the model right away (it uses roughly 19GB of GPU memory).
- As of 2023-08-23, GPTQ is officially supported by Hugging Face; a loading sketch follows below.
+
```python
|
23 |
+
import torch
|
24 |
+
from transformers import pipeline
|
25 |
+
from auto_gptq import AutoGPTQForCausalLM
|
26 |
+
|
27 |
+
from utils.prompter import Prompter
|
28 |
+
|
29 |
+
MODEL = "j5ng/kullm-5.8b-GPTQ-8bit"
|
30 |
+
model = AutoGPTQForCausalLM.from_quantized(MODEL, device="cuda:0", use_triton=False)
|
31 |
+
|
32 |
+
pipe = pipeline('text-generation', model=model,tokenizer=MODEL)
|
33 |
+
|
34 |
+
prompter = Prompter("kullm")
|
35 |
+
|
36 |
+
def infer(instruction="", input_text=""):
|
37 |
+
prompt = prompter.generate_prompt(instruction, input_text)
|
38 |
+
output = pipe(
|
39 |
+
prompt, max_length=512,
|
40 |
+
temperature=0.2,
|
41 |
+
repetition_penalty=3.0,
|
42 |
+
num_beams=5,
|
43 |
+
eos_token_id=2
|
44 |
+
)
|
45 |
+
s = output[0]["generated_text"]
|
46 |
+
result = prompter.get_response(s)
|
47 |
+
|
48 |
+
return result
|
49 |
+
|
50 |
+
instruction = """
|
51 |
+
์ํฅ๋ฏผ(ํ๊ตญ ํ์: ๅญซ่ๆ
, 1992๋
7์ 8์ผ ~ )์ ๋ํ๋ฏผ๊ตญ์ ์ถ๊ตฌ ์ ์๋ก ํ์ฌ ์๊ธ๋๋ ํ๋ฆฌ๋ฏธ์ด๋ฆฌ๊ทธ ํ ํธ๋ ํ์คํผ์์ ์์ด๋ก ํ์ฝํ๊ณ ์๋ค.
|
52 |
+
๋ํ ๋ํ๋ฏผ๊ตญ ์ถ๊ตฌ ๊ตญ๊ฐ๋ํํ์ ์ฃผ์ฅ์ด์ 2018๋
์์์ ๊ฒ์ ๊ธ๋ฉ๋ฌ๋ฆฌ์คํธ์ด๋ฉฐ ์๊ตญ์์๋ ์ ์นญ์ธ "์๋"(Sonny)๋ก ๋ถ๋ฆฐ๋ค.
|
53 |
+
์์์ ์ ์๋ก์๋ ์ญ๋ ์ต์ด๋ก ํ๋ฆฌ๋ฏธ์ด๋ฆฌ๊ทธ ๊ณต์ ๋ฒ ์คํธ ์ผ๋ ๋ธ๊ณผ ์์์ ์ ์ ์ต์ด์ ํ๋ฆฌ๋ฏธ์ด๋ฆฌ๊ทธ ๋์ ์์ ๋ฌผ๋ก FIFA ํธ์ค์นด์ค์๊น์ง ํฉ์ธ์๊ณ 2022๋
์๋ ์ถ๊ตฌ ์ ์๋ก๋ ์ต์ด๋ก ์ฒด์กํ์ฅ ์ฒญ๋ฃก์ฅ ์ํ์๊ฐ ๋์๋ค.
|
54 |
+
์ํฅ๋ฏผ์ ํ์ฌ ๋ฆฌ๊ทธ 100ํธ๋ฅผ ๋ฃ์ด์ ํ์ ๊ฐ ๋๊ณ ์๋ค.
|
55 |
+
"""
|
56 |
+
result = infer(instruction=instruction, input_text="์ํฅ๋ฏผ์ ์ ์นญ์ ๋ญ์ผ?")
|
57 |
+
print(result) # ์ํฅ๋ฏผ์ ์ ์นญ์ "์๋"์
๋๋ค.
|
58 |
+
```

### Reference

- [EleutherAI/polyglot](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
- [Korea University/kullm](https://huggingface.co/nlpai-lab/kullm-polyglot-12.8b-v2)
- [GPTQ](https://github.com/IST-DASLab/gptq)