Merged model, codenamed "gx thinking Groove Feeling X-mas"
No system is flawless. The point is to use it appropriately and reasonably, without pushing it past its limits.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'asiansoul/llama-3.2-koen-merged-3b-instruct'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
instruction = "Chul-soo had 20 pencils. Young-hee took half of them, and Min-su took 5 of the rest. How many pencils does Chul-soo have left?"
messages = [
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop generation at either of the model's end-of-sequence tokens.
terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
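The sampling settings above (temperature=0.6, top_p=0.9) control how the next token is drawn from the model's distribution. As a rough illustration of what nucleus (top-p) sampling does internally — a simplified sketch over toy logits, not the actual transformers implementation — the logits are sharpened by the temperature, then only the smallest set of tokens whose cumulative probability reaches top_p is kept before sampling:

```python
import torch

def sample_top_p(logits, temperature=0.6, top_p=0.9):
    # Scale logits by temperature: values below 1.0 sharpen the distribution.
    probs = torch.softmax(logits / temperature, dim=-1)
    # Sort probabilities descending; keep the smallest prefix of tokens
    # whose cumulative mass reaches top_p (the "nucleus").
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Zero out tokens where the mass accumulated *before* them already
    # exceeds top_p, then renormalize the survivors.
    mask = cumulative - sorted_probs > top_p
    sorted_probs[mask] = 0.0
    sorted_probs /= sorted_probs.sum()
    # Draw one token id from the truncated, renormalized distribution.
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_idx[choice].item()

# Toy vocabulary of 4 tokens; with these logits, only the top two
# tokens survive the 0.9 nucleus cut-off.
logits = torch.tensor([4.0, 3.0, 1.0, -2.0])
token_id = sample_top_p(logits)
```

Lower temperature and lower top_p both make generation more deterministic; the 0.6 / 0.9 pair used above is a common middle ground for instruction-tuned models.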
Chul-soo had 20 pencils, and Young-hee took half (20/2 = 10). So Chul-soo had 20 - 10 = 10 pencils left.
Since Min-su then took 5 of the remaining pencils, Chul-soo had 10 - 5 = 5 left.
Therefore, Chul-soo has 5 pencils remaining.
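The model's arithmetic can be checked directly; the same steps in plain Python:

```python
pencils = 20
pencils -= 20 // 2  # Young-hee takes half of the original 20
pencils -= 5        # Min-su takes 5 of the remainder
print(pencils)  # 5
```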
@article{Llama3.2KoEnMerged3BInstruct,
  title={asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF Card},
  author={Asiansoul},
  merged={Asiansoul},
  year={2024},
  url={https://huggingface.co/asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF}
}
Model tree for asiansoul/llama-3.2-koen-merged-3b-instruct
Base model: meta-llama/Llama-3.2-3B-Instruct