Model Name : 풋풋이 (futfut)
Model Concept
- To build a friendly assistant chatbot for the futsal domain, we combined LLM fine-tuning with RAG (see the retrieval sketch after this list).
- Base Model : zephyr-7b-beta
- futfut speaks in the polite Korean 'haeyo' style and closes every reply with '얼마든지 물어보세요! 풋풋~!' ("Ask me anything! Fut-fut~!").
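The card does not name a retriever, so the following is a minimal sketch of the retrieve-then-generate flow, assuming a generic embedding function. The `retrieve` helper and its `embed` argument are hypothetical, not part of the released code.

```python
import numpy as np

def retrieve(question, docs, embed, k=3):
    """Return the k passages most similar to the question.

    `embed` is a hypothetical callable mapping text to a 1-D numpy
    vector; any sentence-embedding model could fill this role.
    """
    q = embed(question)
    q = q / np.linalg.norm(q)
    scored = []
    for doc in docs:
        d = embed(doc)
        scored.append((float(q @ (d / np.linalg.norm(d))), doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

# The top-k passages are joined into the `context` block that the
# system prompt in the Query section instructs the model to stay within.
```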
Serving by FastAPI
- Git repo : Dongwooks
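A minimal serving sketch, assuming the model and tokenizer are loaded as in the Model Load section below; the `/chat` route and `ChatRequest` schema are assumptions for illustration, not taken from the Dongwooks repo.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

@app.post("/chat")
def chat(req: ChatRequest):
    # `model`, `tokenizer`, and PROMPT are assumed to be set up exactly
    # as in the Model Load and Query sections below.
    messages = [
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": req.question},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
    # Decode only the newly generated tokens, not the prompt.
    answer = tokenizer.decode(output[0][input_ids.shape[-1]:],
                              skip_special_tokens=True)
    return {"answer": answer}
```

Run with `uvicorn app:app` (assuming the file is saved as `app.py`).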
Summary:
- LoRA fine-tuning was performed with the Unsloth package.
- Training was run with the SFT Trainer (a hedged setup sketch follows below).
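The card does not publish hyperparameters, so the following is a hedged sketch of the Unsloth + SFTTrainer setup it describes. The LoRA rank, batch size, learning rate, dataset path, and the older trl-style `SFTTrainer` arguments are all assumptions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit via Unsloth (sequence length is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha/target modules are assumed values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical local copy of the q_a_korean_futsal data after the
# 'haeyo'-style conversion described below.
dataset = load_dataset("json", data_files="q_a_korean_futsal.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the formatted chat text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```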
Data used
- q_a_korean_futsal
- For tone training, the answers were converted to the polite 'haeyo' style and the signature catchphrase was inserted to keep the model concept consistent (a preprocessing sketch follows this list).
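The exact conversion rules are not published; this hypothetical helper only illustrates the catchphrase step, appending the signature sign-off so every training answer matches the model concept (the 'haeyo'-style conversion itself would happen upstream).

```python
CATCHPHRASE = "얼마든지 물어보세요! 풋풋~!"  # futfut's signature sign-off

def add_catchphrase(answer: str) -> str:
    """Append the catchphrase to a 'haeyo'-style answer if it is missing."""
    answer = answer.strip()
    if not answer.endswith(CATCHPHRASE):
        answer = f"{answer} {CATCHPHRASE}"
    return answer
```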
Environment : Training was run on Google Colab using an L4 GPU.
Model Load
```python
# pip install transformers==4.40.0 accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'Dongwookss/small_fut_final'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # load weights in bf16 to cut memory use
    device_map="auto",            # place layers on the available GPU(s)
)
model.eval()
```
Query
```python
from transformers import TextStreamer

PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
제시하는 context에서만 대답하고 context에 없는 내용은 모르겠다고 대답해'''
# The Korean system prompt says: "Answer only from the provided context;
# if something is not in the context, answer that you don't know."

instruction = "풋살 경기 규칙을 알려줘."  # example user question ("Tell me the rules of futsal.")

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop generation at either the tokenizer's EOS token or the
# chat-template end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Stream tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
```