# Model Card for gamzadole/llama3_instruct_preview_alpaca_finetuning
- Base model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
- Dataset: Bingsu/ko_alpaca_data
## Model inference
### Prompt template

The model is prompted with the following Alpaca-style template. The Korean preamble is translated in the code comment.
```python
# The Korean preamble reads (in English):
# "Below is an instruction (a question) and an input giving additional context.
#  Please write an appropriate response."
alpaca_prompt = """아래는 질문 instruction 과 추가 정보를 나타내는 input 입니다. 적절한 response를 생성해주세요.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{response}"""
```
### Inference code
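The helper below expects `tokenizer` and `model` to already be in scope. A minimal loading sketch, assuming a standard transformers setup (the dtype and device_map choices are assumptions, not from the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gamzadole/llama3_instruct_preview_alpaca_finetuning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 on a GPU that supports it
    device_map="auto",
)
```

With the model and tokenizer loaded, the generation helper is: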
```python
def generate_response(prompt, model):
    # Fill the Alpaca template; input and response stay empty at inference time.
    prompt = alpaca_prompt.format(instruction=prompt, input="", response="")
    messages = [
        {
            "role": "system",
            # "As a friendly chatbot, answer the other person's requests as
            #  thoroughly and kindly as possible. Reply to everything in Korean."
            "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.",
        },
        {"role": "user", "content": prompt},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    # Stop on either the tokenizer's EOS token or Llama 3's end-of-turn token.
    terminators = [
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    ]
    outputs = model.generate(
        input_ids,
        max_new_tokens=512,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.1,  # low temperature keeps answers close to deterministic
        top_p=0.9,
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    response = outputs[0][input_ids.shape[-1]:]
    return tokenizer.decode(response, skip_special_tokens=True)
```
Example:

```python
instruction = "금리가 오르면 물가가 어떻게 돼?"  # "If interest rates rise, what happens to prices?"
print(generate_response(instruction, model))
```