# pko-t5-small
pko-t5 is a T5 v1.1 model trained exclusively on Korean data.

To tokenize Korean it uses BBPE, which has no OOV problem, instead of sentencepiece. Pretraining used unsupervised learning only, applying T5's span corruption task to Korean corpora (Namuwiki, Wikipedia, the Modu Corpus, etc.).
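As a rough illustration, span corruption replaces random spans of the input with sentinel tokens and trains the decoder to reconstruct only the dropped spans. A minimal sketch, assuming the standard T5 sentinel-token format (`<extra_id_N>`); the example sentence is illustrative, not from the training data:

```python
# Minimal sketch of T5-style span corruption (assumes the standard T5
# sentinel-token convention). Random spans of the original text become
# sentinels in the encoder input; the decoder target holds only those spans.
original = "한국어 전용 데이터로 학습한 T5 모델입니다."
corrupted_input = "한국어 <extra_id_0> 학습한 T5 <extra_id_1>"
target = "<extra_id_0> 전용 데이터로 <extra_id_1> 모델입니다. <extra_id_2>"
print(corrupted_input, "->", target)
```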
When using pko-t5, please fine-tune it on your target task.
## Usage
The model is accessible through the transformers API. For the tokenizer, please use T5TokenizerFast rather than T5Tokenizer. The model can be used with T5ForConditionalGeneration as-is.
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-small')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-small')

# Encode the input and the target as PyTorch tensors.
input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids

# Pass labels by keyword: the second positional argument of forward()
# is attention_mask, not labels.
outputs = model(input_ids=input_ids, labels=labels)
print(f"loss={outputs.loss} logits={outputs.logits}")
```
## KLUE evaluation (dev)
| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
|---|---|---|---|---|---|---|---|---|
| | Baseline | 87.30 | 93.20/86.13 | 89.50 | 86.06 | 71.06 | 87.93 | 75.26/- |
| FT | pko-t5-small (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 66.46 | 93.15 | 43.81/46.58 |
| FT | pko-t5-base (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 67.23 | 97.28 | 61.53/64.74 |
| FT | pko-t5-large (800M) | 87.12 | 92.05/85.24 | 84.96 | 88.18 | 75.17 | 97.60 | 68.01/71.44 |
| MT | pko-t5-small | 84.54 | 68.50/72.02 | 51.16 | 74.69 | 66.11 | 80.40 | 43.60/46.28 |
| MT | pko-t5-base | 86.89 | 83.96/80.30 | 72.03 | 85.27 | 66.59 | 95.05 | 61.11/63.94 |
| MT | pko-t5-large | 87.57 | 91.93/86.29 | 83.63 | 87.41 | 71.34 | 96.99 | 70.70/73.72 |
- FT: single-task fine-tuning / MT: multi-task fine-tuning
- Baseline: SOTA scores on the dev set as reported in the KLUE paper
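Since the checkpoints are meant to be fine-tuned before use, a minimal single-task fine-tuning sketch follows. The toy in-memory dataset, the `ynat:` task prefix, and all hyperparameters here are illustrative assumptions, not the setup behind the scores above:

```python
import torch
from transformers import (
    T5TokenizerFast,
    T5ForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

tokenizer = T5TokenizerFast.from_pretrained("paust/pko-t5-small")
model = T5ForConditionalGeneration.from_pretrained("paust/pko-t5-small")

# Toy text-to-text pairs; the "ynat:" prefix is a hypothetical task prefix.
pairs = [
    ("ynat: 삼성전자, 새 반도체 공장 착공", "IT과학"),
    ("ynat: 프로야구 한국시리즈 1차전 결과", "스포츠"),
]

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(pairs)

    def __getitem__(self, idx):
        source, target = pairs[idx]
        features = tokenizer(source, truncation=True, max_length=128)
        features["labels"] = tokenizer(target, truncation=True, max_length=16).input_ids
        return features

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pko-t5-ynat", num_train_epochs=1),
    train_dataset=ToyDataset(),
    # Pads inputs and labels per batch (labels are padded with -100).
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```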
## License
pko-t5, created by PAUST, is released under the MIT license.