---
language:
- en
- ko
license: apache-2.0
library_name: transformers
base_model:
- meta-llama/Meta-Llama-3-8B
---

<a href="https://github.com/teddysum/bllossom/blob/main/">
  <img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%">
</a>

# Bllossom | [Demo](https://ee68c3f24513b01f81.gradio.live/) | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom)

Bllossom is a Korean-English bilingual language model based on the open-source Llama-3. It strengthens the link between Korean and English knowledge, and it has the following features:

* **Knowledge Linking**: Links Korean and English knowledge through additional training.
* **Vocabulary Expansion**: Expands the Korean vocabulary to improve Korean expressiveness (see the tokenizer sketch after this list).
* **Instruction Tuning**: Tuned on custom-built instruction-following data specialized for the Korean language and Korean culture.
* **Human Feedback**: DPO (Direct Preference Optimization) has been applied.
* **Vision-Language Alignment**: Aligns a vision transformer with this language model.
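
As a quick illustration of the vocabulary expansion, the snippet below compares how the base Llama-3 tokenizer and the Bllossom tokenizer segment the same Korean text; the expanded tokenizer should need noticeably fewer tokens. This is a minimal sketch, not part of the original examples, and it assumes both checkpoints are accessible on the Hugging Face Hub (meta-llama/Meta-Llama-3-8B is gated and requires approval).

```python
from transformers import AutoTokenizer

# "MLP Lab at Seoul National University of Science and Technology"
text = "서울과학기술대학교 MLP연구실"

for name in ["meta-llama/Meta-Llama-3-8B", "MLP-KTLim/Bllossom"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    tokens = tokenizer.tokenize(text)
    print(f"{name}: vocab size = {len(tokenizer)}, tokens for sample = {len(tokens)}")
```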
**This model was developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/) and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).**

## Demo Video
<div style="display: flex; justify-content: space-between;">
  <!-- First column -->
  <div style="width: 49%;">
    <a>
      <img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;">
    </a>
    <p style="text-align: center;">Bllossom-V Demo</p>
  </div>

  <!-- Second column (if needed) -->
  <div style="width: 49%;">
    <a>
      <img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;">
    </a>
    <p style="text-align: center;">Bllossom Demo (Kakao)</p>
  </div>
</div>

## NEWS
* [2024/04] We released Bllossom v2.0, based on Llama-3.
* [2023/12] We released Bllossom-Vision v1.0, based on Bllossom.
* [2023/08] We released Bllossom v1.0, based on Llama-2.
* [2023/07] We released Bllossom v0.7, based on Polyglot-Ko.
## Example code

### Install Dependencies
```bash
pip install torch transformers==4.40.0 accelerate
```
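The examples below assume the pinned `transformers` version and a GPU capable of bfloat16. A quick environment check, added here as an optional sketch rather than part of the original card:

```python
import transformers
import torch

print(transformers.__version__)    # expected: 4.40.0, per the pin above
print(torch.cuda.is_available())   # the bfloat16 examples below assume a GPU
```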
### Python code with Pipeline
```python
import transformers
import torch

model_id = "MLP-KTLim/Bllossom"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline.model.eval()

# System prompt: "You are a helpful AI assistant. Please answer the user's queries kindly and accurately."
PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.'''
# "Introduce the MLP Lab at Seoul National University of Science and Technology."
instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the default EOS token or Llama-3's end-of-turn token <|eot_id|>.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1
)

# Print only the newly generated text, without the prompt.
print(outputs[0]["generated_text"][len(prompt):])
# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
```
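If GPU memory is tight, the same pipeline can also be loaded in 4-bit. The sketch below is an optional variant, not part of the original card; it assumes a CUDA GPU and that `bitsandbytes` is installed (`pip install bitsandbytes`).

```python
import transformers
import torch

# Assumption: bitsandbytes is installed and a CUDA GPU is available.
quantization_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

pipeline = transformers.pipeline(
    "text-generation",
    model="MLP-KTLim/Bllossom",
    model_kwargs={"quantization_config": quantization_config},
    device_map="auto",
)
```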
### Python code with AutoModel
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'MLP-KTLim/Bllossom'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

model.eval()

# System prompt: "You are a helpful AI assistant. Please answer the user's queries kindly and accurately."
PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.'''
# "Introduce the MLP Lab at Seoul National University of Science and Technology."
instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the default EOS token or Llama-3's end-of-turn token <|eot_id|>.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
```
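For interactive use, tokens can be printed as they are generated instead of waiting for the full completion. The sketch below is an optional extension, not part of the original card; it reuses `model`, `tokenizer`, `input_ids`, and `terminators` from the example above and relies on transformers' built-in `TextStreamer`.

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout; skip_prompt hides the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)
```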
## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}},
}
```
**Vision-Language Model**
```text
@misc{xllava,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}},
}
```
## Contact
- 임경태(KyungTae Lim), Professor at Seoultech. `ktlim@seoultech.ac.kr`
- 함영균(Younggyun Hahm), CEO of Teddysum. `hahmyg@teddysum.ai`
## Contributor
- 최창수(Chansu Choi), choics2623@seoultech.ac.kr
- 김상민(Sangmin Kim), sangmin9708@naver.com
- 원인호(Inho Won), wih1226@seoultech.ac.kr
- 김민준(Minjun Kim), mjkmain@seoultech.ac.kr
- 송승우(Seungwoo Song), sswoo@seoultech.ac.kr
- 신동재(Dongjae Shin), dylan1998@seoultech.ac.kr
- 임현석(Hyeonseok Lim), gustjrantk@seoultech.ac.kr
- 육정훈(Jeonghun Yuk), usually670@gmail.com
- 유한결(Hangyeol Yoo), 21102372@seoultech.ac.kr
- 송서현(Seohyun Song), alexalex225225@gmail.com