
Alpha-Instruct

We are thrilled to introduce Alpha-Instruct, our latest language model, which demonstrates exceptional capabilities in both Korean and English. Alpha-Instruct was developed using the Evolutionary Model Merging technique, enabling it to excel in complex language tasks and logical reasoning.

A key aspect of Alpha-Instruct's development is our community-based approach. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation.

Alpha-Instruct has achieved outstanding performance on the LogicKor benchmark, scoring an impressive 6.62. Remarkably, this rivals the performance of 70B models, showcasing the efficiency and power of our 8B model. This result highlights Alpha-Instruct's advanced reasoning skills, making it a strong choice for diverse and demanding language tasks.

For more information and technical details about Alpha-Instruct, stay tuned for our updates and visit our website (coming soon).


Overview

Alpha-Instruct is our latest language model, developed using the 'Evolutionary Model Merging' technique. This method employs a 1:1 ratio of task-specific datasets from KoBEST and Haerae, resulting in a model named 'Alpha-Ko-8B-Evo' (a minimal, hypothetical sketch of this candidate-scoring step appears at the end of this section). The following models were used for merging:

To refine and enhance Alpha-Instruct, we utilized carefully curated, high-quality datasets aimed at 'healing' the model's output, significantly boosting its human preference scores. We used ORPO specifically for this 'healing' (human-preference alignment) phase, as sketched below. The datasets* used include:

*Some of these datasets were partially used and translated for training, and we ensured there was no contamination during the evaluation process.
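
To make the 'healing' phase above concrete, the following is a minimal ORPO sketch using the TRL library. The checkpoint name, dataset name, column layout, and hyperparameters are illustrative assumptions, not the exact recipe behind Alpha-Instruct.

```python
# Minimal ORPO "healing" sketch using TRL (illustrative only; not the exact Alpha-Instruct recipe).
# Assumes a preference dataset with "prompt", "chosen", and "rejected" columns,
# which is the format expected by trl's ORPOTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "allganize/Llama-3-Alpha-Ko-8B-Evo"   # hypothetical merged checkpoint name
pref_data = "your-org/ko-preference-pairs"          # hypothetical curated preference dataset

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto")
dataset = load_dataset(pref_data, split="train")

config = ORPOConfig(
    output_dir="alpha-ko-instruct-orpo",
    beta=0.1,                  # odds-ratio loss weight; illustrative value
    learning_rate=5e-6,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,       # newer TRL releases use processing_class= instead
)
trainer.train()
```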

This approach effectively balances human preferences with the model's capabilities, making Alpha-Instruct well-suited for real-life scenarios where user satisfaction and performance are equally important.
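
As a rough illustration of the 'Evolutionary Model Merging' step described at the start of this section, the sketch below shows how merge candidates could be scored with a 1:1 weighting of KoBEST and Haerae accuracy. Every helper in it is a hypothetical placeholder, not the actual pipeline used to produce Alpha-Ko-8B-Evo.

```python
# Purely illustrative sketch of the candidate-scoring step in evolutionary model merging.
# All helpers below are hypothetical stand-ins, not real APIs from any merging library.
import random

def merge_models(weights):
    """Hypothetical stand-in: in practice this would blend source checkpoints per layer."""
    return weights

def evaluate_kobest(model):
    """Hypothetical stand-in for running KoBEST and returning accuracy in [0, 1]."""
    return random.random()

def evaluate_haerae(model):
    """Hypothetical stand-in for running Haerae and returning accuracy in [0, 1]."""
    return random.random()

def mutate(weights, scale=0.05):
    """Perturb per-source merge weights slightly to explore nearby candidates."""
    return [max(0.0, w + random.uniform(-scale, scale)) for w in weights]

def fitness(candidate_weights):
    """Score a merge candidate with a 1:1 ratio of KoBEST and Haerae accuracy."""
    merged = merge_models(candidate_weights)
    return 0.5 * evaluate_kobest(merged) + 0.5 * evaluate_haerae(merged)

def evolve(population, generations=20, keep=4):
    """Tiny (mu + lambda)-style loop: keep the best candidates, mutate them, repeat."""
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        children = [mutate(parent) for parent in survivors for _ in range(2)]
        population = survivors + children
    return max(population, key=fitness)

# Example: search over mixing weights for three source models.
best = evolve([[random.random() for _ in range(3)] for _ in range(8)])
```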

Benchmark Results

Results on LogicKor* are as follows:

| Model | Single turn* | Multi turn* | Overall* |
| --- | --- | --- | --- |
| MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 |
| Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 |
| Alpha-Ko-Instruct (alt) | 7.095 | 6.571 | 6.833 |
| Alpha-Ko-Instruct | 7.143 | 6.065 | 6.620 |
| Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 |

*Self-reported (default settings with the 'alpha' template; mean of 3).

Results on KoBEST (acc, num_shot=5) are as follows:

| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | Alpha-Ko-Evo | Alpha-Ko-Instruct |
| --- | --- | --- | --- | --- |
| kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 |
| kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 |
| kobest_copa | 0.7110 | 0.7650 | 0.7420 | 0.7420 |
| kobest_hellaswag | 0.3840 | 0.4440 | 0.4220 | 0.4240 |
| kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 |
| kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 |

*For reference, 'merged' models were chosen.
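
For context, the sketch below shows one way such KoBEST numbers can be reproduced with EleutherAI's lm-evaluation-harness. The harness version, task names, and arguments reflect lm-eval v0.4.x and are an assumption; the exact setup behind the table above is not specified in this card.

```python
# Hedged sketch: 5-shot KoBEST accuracy with lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=allganize/Llama-3-Alpha-Ko-8B-Instruct,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg", "kobest_wic"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])  # per-task metrics, including the acc values reported above
```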

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "allganize/Llama-3-Alpha-Ko-8B-Instruct"

# Load the tokenizer and model; "auto" picks a suitable dtype and device placement.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    # System prompt: "You are an AI assistant. Answer questions kindly and accurately."
    {"role": "system", "content": "당신은 인공지능 어시스턴트입니다. 묻는 말에 친절하고 정확하게 답변하세요."},
    # User prompt: "What is the Fibonacci sequence? And could you write Python code for it?"
    {"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]

# Build the prompt with the model's chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Llama 3 uses <|eot_id|> in addition to the regular EOS token to end a turn.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Greedy decoding with a light repetition penalty.
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=False,
    repetition_penalty=1.05,
)

# Strip the prompt tokens and decode only the newly generated response.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
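
As an optional variation on the snippet above (not part of the original card), the same setup can stream tokens to stdout as they are generated using transformers' TextStreamer:

```python
# Optional: stream the answer token-by-token instead of waiting for generate() to finish.
# Reuses tokenizer, model, input_ids, and terminators from the example above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=False,
    repetition_penalty=1.05,
    streamer=streamer,   # prints decoded text to stdout as tokens arrive
)
```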

Correspondence to

Special Thanks

  • @beomi for providing us with a great model!

License

The use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT

Citation

If you use this model in your research, please cite it as follows:

@misc{alpha-instruct,
  author    = {Ji soo Kim},
  title     = {Alpha-Instruct: Allganize Bilingual Model},
  year      = {2024},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository},
  url       = {https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct},
}