KoT5 Quote Generator (KoT5-quoter_v1)

์ธ๊ณต์ง€๋Šฅ์‚ฌ๊ด€ํ•™๊ต ์ˆ˜์—…์—์„œ ์ €ํฌ ํŒ€(์ข‹์€๋ง์”€์ „ํ•˜๋Ÿฌ์™”์Šต๋‹ˆ๋‹ค)์ด ์ˆ˜ํ–‰์ค‘์ธ ๋ฏธ๋‹ˆํ”„๋กœ์ ํŠธ์ž…๋‹ˆ๋‹ค.
ํ”„๋กœ์ ํŠธ ์ฃผ์ œ์ธ ai๋ช…์–ธ์ƒ์„ฑ๊ธฐ๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ํ•œ๊ตญ์–ด ํ‚ค์›Œ๋“œ๋ฅผ ์ž…๋ ฅํ•˜๋ฉด ๋ช…์–ธ์„ ๋งŒ๋“ค์–ด์ฃผ๋Š” ํ˜•์‹์œผ๋กœ KoT5๋ฅผ ํŒŒ์ธํŠœ๋‹ํ•ด๋ณด์•˜์Šต๋‹ˆ๋‹ค.

Quickstart

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Jongha611/KoT5-quoter_v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "๋ช…์–ธ ์ƒ์„ฑ: ์‚ฌ๋ž‘"

inputs = tokenizer(prompt, return_tensors = "pt")
# ํ…Œ
outputs = model.generate(
    **inputs,
    do_sample = True,
    top_p = 0.92,
    temperature = 0.8,
    no_repeat_ngram_size = 2,
    num_return_sequences = 1,
    max_new_tokens = 48,
    eos_token_id = tokenizer.eos_token_id,
    pad_token_id = tokenizer.pad_token_id,
)

text = tokenizer.decode(outputs[0], skip_special_tokens = True)

print(text)
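
If you want several candidate quotes per keyword, a small wrapper like the sketch below can help. The generate_quotes helper is not part of this repository; it is only an illustration that reuses the "๋ช…์–ธ ์ƒ์„ฑ: <keyword>" prompt format and the same sampling settings as the Quickstart.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Jongha611/KoT5-quoter_v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def generate_quotes(keyword: str, n: int = 3) -> list[str]:
    # Build the prompt in the same format used for fine-tuning
    inputs = tokenizer(f"๋ช…์–ธ ์ƒ์„ฑ: {keyword}", return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_p=0.92,
            temperature=0.8,
            no_repeat_ngram_size=2,
            num_return_sequences=n,  # sample n candidates in one call
            max_new_tokens=48,
        )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

for quote in generate_quotes("์‚ฌ๋ž‘"):
    print(quote)

Increasing num_return_sequences simply draws more samples per keyword, so you can pick the best candidate at the cost of a slower generate call.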
Model size: 0.2B params (F32, Safetensors)
ยท
Inference Providers NEW
This model isn't deployed by any Inference Provider. ๐Ÿ™‹ Ask for provider support

Model tree for Jongha611/KoT5-quoter_v1

Base model: psyche/KoT5 (this model is fine-tuned from it)