# korean-gpt-neox-125M

## Model Details

### Model Description

## Uses

### Direct Use
```python
# Import the transformers library
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cateto/korean-gpt-neox-125M")
model = AutoModelForCausalLM.from_pretrained("cateto/korean-gpt-neox-125M")

# Get user input (Korean: "Going forward, we will ... a better future")
user_input = "์ฐ๋ฆฌ๋ ์์ผ๋ก ๋๋์ ๋ฏธ๋๋ฅผ"

# Encode the prompt using the tokenizer
input_ids = tokenizer.encode(user_input, return_tensors="pt")

# Generate a continuation with beam search
output_ids = model.generate(
    input_ids,
    num_beams=4,
    repetition_penalty=1.5,
    no_repeat_ngram_size=3,
)

# Decode the generated ids back to text
bot_output = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Print the generated text
print("์ถ๋ ฅ ##", bot_output)
# ์ถ๋ ฅ ## ์ฐ๋ฆฌ๋ ์์ผ๋ก ๋๋์ ๋ฏธ๋๋ฅผ ํฅํด ๋์๊ฐ ์ ์๋ค.
# (Korean: "We can move toward a better future.")
```
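The `no_repeat_ngram_size=3` argument above prevents any 3-gram from appearing twice in the generated text. As a rough illustration of the rule (a simplified pure-Python sketch with a hypothetical helper, not the actual transformers implementation):

```python
def banned_next_tokens(prev_tokens, n=3):
    """Return the token ids that would complete an n-gram already
    present in prev_tokens -- the idea behind no_repeat_ngram_size=n."""
    if len(prev_tokens) < n:
        return set()
    # Map each (n-1)-token prefix to the tokens that have followed it
    seen = {}
    for i in range(len(prev_tokens) - n + 1):
        prefix = tuple(prev_tokens[i:i + n - 1])
        seen.setdefault(prefix, set()).add(prev_tokens[i + n - 1])
    # Tokens following the current (n-1)-token suffix are banned
    current_prefix = tuple(prev_tokens[-(n - 1):])
    return seen.get(current_prefix, set())

# Sequence so far: ids [5, 6, 7, 5, 6]; the 3-gram (5, 6, 7) already
# occurred, so generating 7 next would repeat it.
print(banned_next_tokens([5, 6, 7, 5, 6]))  # {7}
```

During beam search, the scores of these banned tokens are set to negative infinity at each step, so a repeated 3-gram can never be produced.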