---
base_model:
- meta-llama/Meta-Llama-3-70B
language:
- en
- ko
library_name: transformers
license: llama3
---
<a href="https://github.com/MLP-Lab/Bllossom">
<img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%">
</a>
# Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) | [Colab-tutorial](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing) |
```bash
Our Bllossom project team has released Bllossom-70.8B, a Korean-English bilingual language model!
With the support of the Seoultech supercomputing center, the entire model was fully fine-tuned on more than 100GB of Korean text, making it a Korean-enhanced bilingual model!
Looking for a model that is good at Korean?
- A first for Korean: vocabulary expansion with more than 30,000 additional Korean tokens
- Handles roughly 25% longer Korean context than Llama 3
- Korean-English knowledge linking using a Korean-English parallel corpus (pre-training)
- Fine-tuning on data built by linguists with Korean culture and language in mind
- Reinforcement learning
All of this is applied at once, and commercial use is allowed, so build your own model with Bllossom!
If you are short on GPUs, try serving the quantized model right away (see the sketch after this block): [quantized model](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M)!!
1. Bllossom-70.8B is a pragmatically oriented language model built in collaboration with linguists from Seoultech, Teddysum, and the language resource lab at Yonsei University! We will keep maintaining it with continuous updates, so please make good use of it ๐Ÿ™‚
2. We also have the highly capable Advanced-Bllossom 8B and 70B models as well as vision-language models! (Contact us directly if you are interested!!)
3. Bllossom has been accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep releasing better language models!! Anyone who would like to collaborate on strengthening Korean (especially with papers) is always welcome!!
Teams that can lend even a small number of GPUs are especially welcome to reach out; we will help you build what you want.
```
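If you plan to serve the quantized GGUF build linked above, the following is a minimal, unofficial sketch using `llama-cpp-python` and `huggingface_hub`. The GGUF filename passed to `hf_hub_download` is an assumption; check the file list of the quantized repository before running.
```python
# Unofficial sketch: run the Q4_K_M GGUF build with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename below is a guess; check the actual file name in the
# Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M repository before running.
model_path = hf_hub_download(
    repo_id="Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M",
    filename="llama-3-Korean-Bllossom-70B-Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; lower this on smaller GPUs
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "์„œ์šธ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™๊ต MLP์—ฐ๊ตฌ์‹ค์— ๋Œ€ํ•ด ์†Œ๊ฐœํ•ด์ค˜"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```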
Bllossom is a Korean-English bilingual language model based on the open-source Llama 3. It strengthens the link between Korean and English knowledge and has the following features:
* **Knowledge Linking**: Linking Korean and English knowledge through additional training.
* **Vocabulary Expansion**: Expanding the Korean vocabulary to enhance Korean expressiveness (see the tokenizer sketch below).
* **Instruction Tuning**: Tuning with custom-made instruction-following data specialized for the Korean language and Korean culture.
* **Human Feedback**: DPO has been applied.
* **Vision-Language Alignment**: Aligning the vision transformer with this language model.

**This model was developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/), and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).**
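As a quick, unofficial way to see the vocabulary expansion in practice, the sketch below tokenizes the same Korean sentence with the base Llama 3 tokenizer and with the Bllossom tokenizer; the Bllossom tokenizer should need noticeably fewer tokens. It assumes access to the gated `meta-llama/Meta-Llama-3-70B` repository (any Llama 3 checkpoint works as the baseline).
```python
from transformers import AutoTokenizer

# Compare token counts for the same Korean sentence under both tokenizers.
base = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B")  # gated; requires access
bllossom = AutoTokenizer.from_pretrained("Bllossom/llama-3-Korean-Bllossom-70B")

text = "ํ•œ๊ตญ์–ด ์–ดํœ˜ ํ™•์žฅ์ด ์ ์šฉ๋œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค."
print("Llama 3 tokenizer :", len(base.tokenize(text)), "tokens")
print("Bllossom tokenizer:", len(bllossom.tokenize(text)), "tokens")  # expected to be fewer
```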
## Demo Video
<div style="display: flex; justify-content: space-between;">
<!-- first column -->
<div style="width: 49%;">
<a>
<img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;">
</a>
<p style="text-align: center;">Bllossom-V Demo</p>
</div>
<!-- second column (if needed) -->
<div style="width: 49%;">
<a>
<img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;">
</a>
  <p style="text-align: center;">Bllossom Demo (Kakao)</p>
</div>
</div>
## NEWS
* [2024.05.08] Vocab expansion model update.
* [2024.04.25] We released Bllossom v2.0, based on llama-3.
* [2023.12] We released Bllossom-Vision v1.0, based on Bllossom.
* [2023.08] We released Bllossom v1.0, based on llama-2.
* [2023.07] We released Bllossom v0.7, based on polyglot-ko.
## Example code
### Colab Tutorial
- [Inference-Code-Link](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing)
### Install Dependencies
```bash
pip install torch transformers==4.40.0 accelerate
```
### Python code with Pipeline
```python
import transformers
import torch
model_id = "Bllossom/llama-3-Korean-Bllossom-70B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},  # bf16 weights; the 70B model needs multiple high-memory GPUs
    device_map="auto",
)
pipeline.model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. ๋‹น์‹ ์€ ์œ ๋Šฅํ•œ AI ์–ด์‹œ์Šคํ„ดํŠธ ์ž…๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์นœ์ ˆํ•˜๊ฒŒ ๋‹ต๋ณ€ํ•ด์ฃผ์„ธ์š”.'''
instruction = "์„œ์šธ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™๊ต MLP์—ฐ๊ตฌ์‹ค์— ๋Œ€ํ•ด ์†Œ๊ฐœํ•ด์ค˜"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

# Render the chat template into a single prompt string.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Print only the newly generated text after the prompt.
print(outputs[0]["generated_text"][len(prompt):])
# ์„œ์šธ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™๊ต MLP์—ฐ๊ตฌ์‹ค์€ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์—ฐ๊ตฌ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์„ฑ์›์€ ์ž„๊ฒฝํƒœ ๊ต์ˆ˜์™€ ๊น€๋ฏผ์ค€, ๊น€์ƒ๋ฏผ, ์ตœ์ฐฝ์ˆ˜, ์›์ธํ˜ธ, ์œ ํ•œ๊ฒฐ, ์ž„ํ˜„์„, ์†ก์Šน์šฐ, ์œก์ •ํ›ˆ, ์‹ ๋™์žฌ ํ•™์ƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค.
```
### Python code with AutoModel
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'Bllossom/llama-3-Korean-Bllossom-70B'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. ๋‹น์‹ ์€ ์œ ๋Šฅํ•œ AI ์–ด์‹œ์Šคํ„ดํŠธ ์ž…๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์นœ์ ˆํ•˜๊ฒŒ ๋‹ต๋ณ€ํ•ด์ฃผ์„ธ์š”.'''
instruction = "์„œ์šธ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™๊ต MLP์—ฐ๊ตฌ์‹ค์— ๋Œ€ํ•ด ์†Œ๊ฐœํ•ด์ค˜"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

# Tokenize the chat template directly into input IDs on the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the regular EOS token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
# ์„œ์šธ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™๊ต MLP์—ฐ๊ตฌ์‹ค์€ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์—ฐ๊ตฌ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์„ฑ์›์€ ์ž„๊ฒฝํƒœ ๊ต์ˆ˜์™€ ๊น€๋ฏผ์ค€, ๊น€์ƒ๋ฏผ, ์ตœ์ฐฝ์ˆ˜, ์›์ธํ˜ธ, ์œ ํ•œ๊ฒฐ, ์ž„ํ˜„์„, ์†ก์Šน์šฐ, ์œก์ •ํ›ˆ, ์‹ ๋™์žฌ ํ•™์ƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค.
```
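If the full bfloat16 weights do not fit on your hardware, an alternative to the GGUF route shown earlier is on-the-fly 4-bit quantization with `bitsandbytes`. The configuration below is a common NF4 setup, not one validated by the Bllossom team; generation then proceeds exactly as in the AutoModel example above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = 'Bllossom/llama-3-Korean-Bllossom-70B'

# 4-bit NF4 quantization; requires `pip install bitsandbytes`.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
model.eval()

# From here, build `messages`, apply the chat template, and call `model.generate()`
# exactly as in the AutoModel example above.
```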
## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}},
}
```
**Vision-Language Model**
```text
@misc{bllossom-V,
  author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}},
}
```
## Contact
- ์ž„๊ฒฝํƒœ(KyungTae Lim), Professor at Seoultech. `ktlim@seoultech.ac.kr`
- ํ•จ์˜๊ท (Younggyun Hahm), CEO of Teddysum. `hahmyg@teddysum.ai`
- ๊น€ํ•œ์ƒ˜(Hansaem Kim), Professor at Yonsei. `khss@yonsei.ac.kr`
## Contributor
- ์ตœ์ฐฝ์ˆ˜(Chansu Choi), choics2623@seoultech.ac.kr
- ๊น€์ƒ๋ฏผ(Sangmin Kim), sangmin9708@naver.com
- ์›์ธํ˜ธ(Inho Won), wih1226@seoultech.ac.kr
- ๊น€๋ฏผ์ค€(Minjun Kim), mjkmain@seoultech.ac.kr
- ์†ก์Šน์šฐ(Seungwoo Song), sswoo@seoultech.ac.kr
- ์‹ ๋™์žฌ(Dongjae Shin), dylan1998@seoultech.ac.kr
- ์ž„ํ˜„์„(Hyeonseok Lim), gustjrantk@seoultech.ac.kr
- ์œก์ •ํ›ˆ(Jeonghun Yuk), usually670@gmail.com
- ์œ ํ•œ๊ฒฐ(Hangyeol Yoo), 21102372@seoultech.ac.kr
- ์†ก์„œํ˜„(Seohyun Song), alexalex225225@gmail.com