---
license: cc-by-3.0
language:
- ko
pipeline_tag: text-generation
---
# korean-gpt-neox-125M
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [cateto](http://github.com/cateto)
- **Model type:** [gpt-neox](https://github.com/EleutherAI/gpt-neox)
- **Language(s) (NLP):** Korean
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```python
# Import the transformers library
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("cateto/korean-gpt-neox-125M")
model = AutoModelForCausalLM.from_pretrained("cateto/korean-gpt-neox-125M")
# Define a Korean prompt
user_input = "우리는 앞으로 더 나은 미래를"
# Encode the prompt using the tokenizer
input_ids = tokenizer.encode(user_input, return_tensors="pt")
# Generate chatbot output using the model
output_ids = model.generate(
    input_ids,
    num_beams=4,
    repetition_penalty=1.5,
    no_repeat_ngram_size=3
)
# Decode chatbot output ids as text
bot_output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
# Print chatbot output
print(f"출력 ## {bot_output}")
# 출력 ## 우리는 앞으로 더 나은 미래를 향해 나아갈 수 있다.
# ("출력" = output; the generated sentence reads roughly "Going forward, we can move toward a better future.")
```
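
The checkpoint can also be loaded through the `transformers` text-generation `pipeline`. The snippet below is a minimal sketch rather than part of the original card: it reuses the beam-search settings from the example above, and `max_new_tokens=32` is an illustrative choice.

```python
from transformers import pipeline

# Load the model and tokenizer in one step via the text-generation pipeline
generator = pipeline("text-generation", model="cateto/korean-gpt-neox-125M")

# Generation kwargs are forwarded to model.generate(); values here are illustrative
result = generator(
    "우리는 앞으로 더 나은 미래를",
    max_new_tokens=32,
    num_beams=4,
    repetition_penalty=1.5,
    no_repeat_ngram_size=3,
)

print(result[0]["generated_text"])
```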