Update README.md
README.md CHANGED

@@ -13,8 +13,8 @@ model-index:
 
 This model was created by fine-tuning lcw99 / t5-large-korean-text-summary on klue-ynat.<br>
 Input = ['IT과학','경제','사회','생활문화','세계','스포츠','정치']<br>
-OUTPUT = news article titles matching each label
-
+OUTPUT = generates a news article title matching each label.<br>
+If you want to run inference in batches, use batch_encode_plus.
 ## Usage
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
@@ -39,7 +39,8 @@ with torch.no_grad():
 top_p=0.95, # sample only from the candidate set whose cumulative probability is 95%
 )
 decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
-print(decoded_output)
+print(decoded_output) # SK텔레콤 스마트 모바일 요금제 시즌1 출시
+
 
 ```
 
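The diff only shows fragments of the Usage block, and the new tip points readers at batch_encode_plus for batched inference. Below is a minimal sketch of what batched title generation could look like under the settings visible in the diff (nucleus sampling with top_p=0.95). The checkpoint id, max_length, and the assumption that the raw topic label is fed directly to the encoder are placeholders and guesses, not taken from the README.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder: replace with this repository's Hub id.
model_id = "<this-model-hub-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()

# One prompt per topic label; the Input line above suggests the raw label is the model input.
labels = ['IT과학', '경제', '사회', '생활문화', '세계', '스포츠', '정치']

# batch_encode_plus tokenizes all labels at once and pads them to a common length.
inputs = tokenizer.batch_encode_plus(labels, padding=True, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        do_sample=True,
        top_p=0.95,     # sample only from the nucleus covering 95% cumulative probability
        max_length=64,  # guessed cap on title length; tune as needed
    )

# One generated headline per label.
for label, title in zip(labels, tokenizer.batch_decode(output, skip_special_tokens=True)):
    print(f"{label}: {title}")
```

Note that batch_encode_plus is an older tokenizer entry point; in recent transformers releases, calling the tokenizer directly, e.g. tokenizer(labels, padding=True, return_tensors="pt"), returns the same BatchEncoding.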