|
--- |
|
language: |
|
- en |
|
- ko |
|
license: llama3 |
|
library_name: transformers |
|
base_model: |
|
- meta-llama/Meta-Llama-3-70B |
|
--- |
|
|
|
<a href="https://github.com/MLP-Lab/Bllossom"> |
|
<img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%"> |
|
</a> |
|
|
|
# Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) | [Colab-tutorial](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing)
|
|
|
|
|
```text
We at the MLP Lab at Seoultech have released Bllossom-70.8B, a Korean-English bilingual language model!
With support from the Seoultech supercomputing center, the entire model was fully fine-tuned on over 100GB of Korean text, making it a Korean-enhanced bilingual model!
Looking for a model that is good at Korean?
 - A first for Korean: vocabulary expansion with over 30,000 Korean entries
 - Handles Korean contexts roughly 25% longer than Llama 3
 - Korean-English knowledge linking using a Korean-English parallel corpus (pretraining)
 - Fine-tuned on data crafted by linguists with Korean culture and language in mind
 - Reinforcement learning
All of this is applied in one commercially usable model. Build your own model with Bllossom!
Short on GPUs? Try serving the quantized model right away: [Quantized model](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M)!!

1. Bllossom-70.8B is a practically oriented language model built in collaboration with linguists from Seoultech, Teddysum, and the Yonsei language resource lab! We will maintain it with continuous updates, so please make good use of it 🙂
2. We also have the even more powerful Advanced-Bllossom 8B and 70B models, as well as a vision-language model! (Contact us individually if you are curious!!)
3. Bllossom was accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep shipping better language models!! Anyone interested in joint research (especially papers) to strengthen Korean is always welcome!!
   In particular, teams that can lend even a small amount of GPU time are welcome to reach out anytime. We will help you build what you want.
```
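The announcement above credits vocabulary expansion for the roughly 25% longer effective Korean context. The snippet below is a back-of-the-envelope illustration of why that happens, not a measurement of Bllossom's actual tokenizer: it only counts UTF-8 bytes, the worst case for a byte-fallback tokenizer.

```python
# Each Hangul syllable occupies 3 bytes in UTF-8, so a tokenizer that falls
# back to raw bytes for out-of-vocabulary Korean text can spend up to 3
# tokens per syllable, while a dedicated Korean vocabulary entry can cover
# a whole word in a single token.
text = "안녕하세요"  # "hello", 5 syllables

print(len(text))                  # 5 characters
print(len(text.encode("utf-8")))  # 15 bytes -> up to 15 byte-level tokens
```

With a dedicated vocabulary entry the same greeting can be one token, which is how adding Korean tokens stretches the usable context window.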
|
|
|
The Bllossom language model is a Korean-English bilingual language model based on the open-source Llama 3. It enhances the connection of knowledge between Korean and English, and has the following features:
|
|
|
* **Knowledge Linking**: Linking Korean and English knowledge through additional training.

* **Vocabulary Expansion**: Expansion of the Korean vocabulary to enhance Korean expressiveness.

* **Instruction Tuning**: Tuning with custom-made instruction-following data specialized for the Korean language and Korean culture.

* **Human Feedback**: DPO has been applied.

* **Vision-Language Alignment**: Aligning the vision transformer with this language model.
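The Human Feedback bullet refers to Direct Preference Optimization (DPO). As a rough sketch of that objective for a single preference pair (the log-probabilities below are illustrative numbers, not values from Bllossom's training):

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one (chosen, rejected) pair: -log(sigmoid(beta * margin)),
    where the margin compares the policy's log-ratio on the chosen vs. the
    rejected response against a frozen reference model."""
    margin = beta * ((policy_chosen - ref_chosen) - (policy_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# No preference yet: both ratios are zero, so the loss is log(2) (~0.693).
print(dpo_loss(-10.0, -10.0, -10.0, -10.0))
# Policy now favors the chosen response relative to the reference: loss drops.
print(dpo_loss(-8.0, -12.0, -10.0, -10.0))
```

Minimizing this loss pushes the policy to rank human-preferred responses above rejected ones without training a separate reward model.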
|
|
|
**This model was developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/), and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).**
|
|
|
## Demo Video |
|
|
|
<div style="display: flex; justify-content: space-between;"> |
|
<!-- First column -->
|
<div style="width: 49%;"> |
|
<a> |
|
<img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;"> |
|
</a> |
|
<p style="text-align: center;">Bllossom-V Demo</p> |
|
</div> |
|
|
|
<!-- Second column (if needed) -->
|
<div style="width: 49%;"> |
|
<a> |
|
<img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;"> |
|
</a> |
|
<p style="text-align: center;">Bllossom Demo (Kakao)</p>
|
</div> |
|
</div> |
|
|
|
|
|
|
|
## NEWS |
|
* [2024.05.08] Vocab Expansion Model Update

* [2024.04.25] We released Bllossom v2.0, based on llama-3.

* [2023.12] We released Bllossom-Vision v1.0, based on Bllossom.

* [2023.08] We released Bllossom v1.0, based on llama-2.

* [2023.07] We released Bllossom v0.7, based on polyglot-ko.
|
|
|
|
|
## Example code |
|
|
|
### Colab Tutorial |
|
- [Inference-Code-Link](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing) |
|
|
|
### Install Dependencies |
|
```bash |
|
pip install torch transformers==4.40.0 accelerate |
|
``` |
|
|
|
### Python code with Pipeline |
|
```python |
|
import transformers |
|
import torch |
|
|
|
model_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B" |
|
|
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model=model_id, |
|
model_kwargs={"torch_dtype": torch.bfloat16}, |
|
device_map="auto", |
|
) |
|
|
|
pipeline.model.eval() |
|
|
|
PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''

instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"
|
|
|
messages = [ |
|
{"role": "system", "content": f"{PROMPT}"}, |
|
{"role": "user", "content": f"{instruction}"} |
|
] |
|
|
|
prompt = pipeline.tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True |
|
) |
|
|
|
terminators = [ |
|
pipeline.tokenizer.eos_token_id, |
|
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") |
|
] |
|
|
|
outputs = pipeline( |
|
prompt, |
|
max_new_tokens=2048, |
|
eos_token_id=terminators, |
|
do_sample=True, |
|
temperature=0.6, |
|
top_p=0.9, |
|
repetition_penalty=1.1,
|
) |
|
|
|
print(outputs[0]["generated_text"][len(prompt):]) |
|
|
|
# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
|
``` |
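The `apply_chat_template` call above renders the messages into Llama 3's chat format, which is also why `<|eot_id|>` appears among the terminators. For reference, here is a minimal reimplementation of that rendering, based on the standard Llama 3 prompt format; the template bundled with the released tokenizer remains authoritative:

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Render chat messages in the Llama 3 prompt format."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn: role header, blank line, content, end-of-turn marker.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open an assistant turn so generation continues as the assistant.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

demo = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
])
print(demo)
```

Because the model emits `<|eot_id|>` to close its own turn, passing it as an extra terminator stops generation cleanly at the end of the assistant's reply.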
|
|
|
### Python code with AutoModel |
|
```python |
|
|
|
|
import torch |
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
|
|
model_id = 'MLP-KTLim/llama-3-Korean-Bllossom-8B' |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_id, |
|
torch_dtype=torch.bfloat16, |
|
device_map="auto", |
|
) |
|
|
|
model.eval() |
|
|
|
PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''

instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"
|
|
|
messages = [ |
|
{"role": "system", "content": f"{PROMPT}"}, |
|
{"role": "user", "content": f"{instruction}"} |
|
] |
|
|
|
input_ids = tokenizer.apply_chat_template( |
|
messages, |
|
add_generation_prompt=True, |
|
return_tensors="pt" |
|
).to(model.device) |
|
|
|
terminators = [ |
|
tokenizer.eos_token_id, |
|
tokenizer.convert_tokens_to_ids("<|eot_id|>") |
|
] |
|
|
|
outputs = model.generate( |
|
input_ids, |
|
max_new_tokens=2048, |
|
eos_token_id=terminators, |
|
do_sample=True, |
|
temperature=0.6, |
|
top_p=0.9, |
|
repetition_penalty=1.1,
|
) |
|
|
|
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) |
|
# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
|
``` |
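For users short on GPU memory, the announcement points at the GGUF quantization. Within `transformers`, one common alternative is 4-bit loading through `bitsandbytes`. The configuration sketch below is not an official Bllossom recipe; it assumes a CUDA GPU and the `bitsandbytes` package are available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"

# NF4 4-bit quantization with bfloat16 compute: roughly 4x less weight
# memory than bfloat16, at some quality cost.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
model.eval()
```

From here, generation proceeds exactly as in the AutoModel example above, since quantized loading only changes how the weights are stored.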
|
|
|
|
|
|
|
## Citation |
|
**Language Model** |
|
```text |
|
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
|
``` |
|
|
|
**Vision-Language Model** |
|
```text |
|
@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
|
``` |
|
|
|
## Contact |
|
- 임경태 (KyungTae Lim), Professor at Seoultech. `ktlim@seoultech.ac.kr`

- 함영균 (Younggyun Hahm), CEO of Teddysum. `hahmyg@teddysum.ai`

- 김한샘 (Hansaem Kim), Professor at Yonsei. `khss@yonsei.ac.kr`
|
|
|
## Contributor |
|
- 최창수 (Chansu Choi), choics2623@seoultech.ac.kr

- 김상민 (Sangmin Kim), sangmin9708@naver.com

- 원인호 (Inho Won), wih1226@seoultech.ac.kr

- 김민준 (Minjun Kim), mjkmain@seoultech.ac.kr

- 송승우 (Seungwoo Song), sswoo@seoultech.ac.kr

- 신동재 (Dongjae Shin), dylan1998@seoultech.ac.kr

- 임현석 (Hyeonseok Lim), gustjrantk@seoultech.ac.kr

- 육정훈 (Jeonghun Yuk), usually670@gmail.com

- 유한결 (Hangyeol Yoo), 21102372@seoultech.ac.kr

- 송서현 (Seohyun Song), alexalex225225@gmail.com