
Bllossom | Demo | Homepage | Github | Colab-tutorial

Our Bllossom team has released Bllossom, a Korean-English bilingual language model!
With support from the Seoultech supercomputing center, the entire model was full-fine-tuned on more than 100GB of Korean data, producing a Korean-reinforced bilingual model!
Looking for a model that is good at Korean?
 - A first for Korean: vocabulary expansion with more than 30,000 Korean tokens
 - Handles Korean context roughly 25% longer than Llama3
 - Korean-English knowledge linking through a Korean-English parallel corpus (pre-training)
 - Fine-tuning on data crafted by linguists with Korean culture and language in mind
 - Reinforcement learning
All of this is combined in Bllossom, which is licensed for commercial use, so build your own model with it!
This is a quantized model that can run on a GPU with 42GB+ of memory, or on a CPU with 42GB+ of RAM! A minimal configuration sketch follows below.

1. Bllossom-8B is a practically oriented language model built in collaboration with linguists from Seoultech, Teddysum, and the Yonsei University language resources lab! We will maintain it with continuous updates, so please make good use of it 🙂
2. We also have the very strong Advanced-Bllossom 8B and 70B models, as well as a vision-language model! (Contact us individually if you are interested!!)
3. Bllossom was accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep releasing improved language models!! Anyone who would like to collaborate on strengthening Korean models (especially on papers) is always welcome!!
   In particular, teams that can lend even a small number of GPUs should get in touch at any time; we will help you build what you want.

The Bllossom language model is a Korean-English bilingual language model based on the open-source Llama3. It enhances the connection of knowledge between Korean and English. It has the following features:

  • Knowledge Linking: Linking Korean and English knowledge through additional training
  • Vocabulary Expansion: Expansion of the Korean vocabulary to enhance Korean expressiveness (see the tokenizer sketch after this list)
  • Instruction Tuning: Tuning with custom-made instruction-following data specialized for the Korean language and Korean culture
  • Human Feedback: DPO has been applied
  • Vision-Language Alignment: Aligning the vision transformer with this language model
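To get a feel for the vocabulary expansion, you can compare how many tokens the same Korean sentence costs under the base Llama-3 tokenizer and under Bllossom's expanded tokenizer. A minimal sketch, assuming you have access to the gated meta-llama/Meta-Llama-3-8B repo; the Bllossom tokenizer is loaded from this repo, as in the example code below:

from transformers import AutoTokenizer

# Base Llama-3 tokenizer (gated repo; requires accepting the license on Hugging Face).
base = AutoTokenizer.from_pretrained('meta-llama/Meta-Llama-3-8B')
# Bllossom tokenizer with the expanded Korean vocabulary.
bllossom = AutoTokenizer.from_pretrained('Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M')

text = 'ํ•œ๊ตญ์–ด ๋ฌธํ™”์™€ ์–ธ์–ด๋ฅผ ๊ณ ๋ คํ•ด ์–ธ์–ดํ•™์ž๊ฐ€ ์ œ์ž‘ํ•œ ๋ฐ์ดํ„ฐ๋กœ ๋ฏธ์„ธ์กฐ์ •ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.'
# Fewer tokens per sentence means more Korean text fits into the same context window.
print(len(base.tokenize(text)), 'tokens (base Llama-3)')
print(len(bllossom.tokenize(text)), 'tokens (Bllossom)')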

This model was developed by MLPLab at Seoultech, Teddysum, and Yonsei Univ. It was converted to GGUF format from Bllossom/llama-3-Korean-Bllossom-70B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Demo Video

Bllossom-V Demo

Bllossom Demo (Kakao)

NEWS

  • [2024.05.08] Vocab Expansion Model Update
  • [2024.04.25] We released Bllossom v2.0, based on llama-3.
  • [2023.12] We released Bllossom-Vision v1.0, based on Bllossom.
  • [2023.08] We released Bllossom v1.0, based on llama-2.
  • [2023.07] We released Bllossom v0.7, based on polyglot-ko.

Example code

!CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
!huggingface-cli download Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M --local-dir='YOUR-LOCAL-FOLDER-PATH'
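If you prefer to stay in Python for the download step, huggingface_hub offers an equivalent call; a minimal sketch (the folder path is a placeholder, as in the CLI command above):

from huggingface_hub import snapshot_download

# Download the GGUF repo contents to a local folder, same as the CLI example.
snapshot_download(
    repo_id='Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M',
    local_dir='YOUR-LOCAL-FOLDER-PATH',
)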

from llama_cpp import Llama
from transformers import AutoTokenizer

model_id = 'Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama(
    model_path='YOUR-LOCAL-FOLDER-PATH/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M.gguf',
    n_ctx=512,          # Context window size in tokens; increase for longer prompts
    n_gpu_layers=-1     # Number of layers to offload to GPU (-1 = all layers)
)

PROMPT = \
'''๋‹น์‹ ์€ ์œ ์šฉํ•œ AI ์–ด์‹œ์Šคํ„ดํŠธ์ž…๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์งˆ์˜์— ๋Œ€ํ•ด ์นœ์ ˆํ•˜๊ณ  ์ •ํ™•ํ•˜๊ฒŒ ๋‹ต๋ณ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''

instruction = 'Your Instruction'

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
    ]

prompt = tokenizer.apply_chat_template(
    messages, 
    tokenize = False,
    add_generation_prompt=True
)

generation_kwargs = {
    "max_tokens": 512,
    "stop": ["<|eot_id|>"],   # Llama-3 end-of-turn token
    "echo": True,             # Echo the prompt in the output
    "top_p": 0.9,
    "temperature": 0.6,
}

response_msg = model(prompt, **generation_kwargs)
print(response_msg['choices'][0]['text'][len(prompt):])
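Alternatively, llama-cpp-python can apply the chat template itself via create_chat_completion, so the transformers tokenizer is not needed; a minimal sketch, assuming the GGUF file carries the Llama-3 chat template (conversions made with recent llama.cpp embed it):

# Same request, but letting llama-cpp-python format the chat prompt
# from the template embedded in the GGUF file.
response = model.create_chat_completion(
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": instruction},
    ],
    max_tokens=512,
    temperature=0.6,
    top_p=0.9,
    stop=["<|eot_id|>"],
)
print(response['choices'][0]['message']['content'])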

Citation

Language Model

@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  url = {https://arxiv.org/pdf/2403.10882}
}

Vision-Language Model

@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  journal = {NAACL 2024 findings},
  url = {https://arxiv.org/pdf/2403.11399}
}

Contact

  • ์ž„๊ฒฝํƒœ(KyungTae Lim), Professor at Seoultech. ktlim@seoultech.ac.kr
  • ํ•จ์˜๊ท (Younggyun Hahm), CEO of Teddysum. hahmyg@teddysum.ai
  • ๊น€ํ•œ์ƒ˜(Hansaem Kim), Professor at Yonsei. khss@yonsei.ac.kr
