
Llama.cpp imatrix quantizations of LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct


Using llama.cpp commit d565bb2 for quantization.

Original model: https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct

All quants were made using the imatrix option and Bartowski's calibration file.


Perplexity table (lower PPL is better)

| Quant | Size (MB) | PPL | Size (%) | Accuracy (%) | PPL error rate |
|-------|-----------|---------|----------|--------------|----------------|
| IQ1_S | 1821 | 34.4982 | 12.21 | 25.63 | 0.24486 |
| IQ1_M | 1956 | 23.2086 | 13.11 | 38.10 | 0.16340 |
| IQ2_XXS | 2183 | 15.3455 | 14.63 | 57.62 | 0.11012 |
| IQ2_XS | 2380 | 12.6876 | 15.95 | 69.69 | 0.08873 |
| IQ2_S | 2515 | 11.7368 | 16.86 | 75.33 | 0.08181 |
| IQ2_M | 2696 | 10.6106 | 18.07 | 83.33 | 0.07359 |
| Q2_K_S | 2731 | 11.9604 | 18.31 | 73.92 | 0.08477 |
| Q2_K | 2913 | 11.1576 | 19.53 | 79.24 | 0.08044 |
| IQ3_XXS | 3007 | 9.8746 | 20.16 | 89.54 | 0.06889 |
| IQ3_XS | 3227 | 9.5005 | 21.63 | 93.06 | 0.06492 |
| Q3_K_S | 3366 | 10.0229 | 22.56 | 88.21 | 0.07139 |
| IQ3_S | 3383 | 9.3452 | 22.68 | 94.61 | 0.06387 |
| IQ3_M | 3480 | 9.3058 | 23.33 | 95.01 | 0.06328 |
| Q3_K_M | 3704 | 9.3607 | 24.83 | 94.45 | 0.06498 |
| Q3_K_L | 3993 | 9.2338 | 26.77 | 95.75 | 0.06436 |
| IQ4_XS | 4102 | 9.0189 | 27.50 | 98.03 | 0.06276 |
| Q4_0 | 4317 | 9.3239 | 28.94 | 94.83 | 0.06556 |
| IQ4_NL | 4319 | 9.0359 | 28.95 | 97.85 | 0.06290 |
| Q4_K_S | 4333 | 9.0954 | 29.05 | 97.21 | 0.06279 |
| Q4_K_M | 4550 | 9.0660 | 30.50 | 97.52 | 0.06271 |
| Q4_1 | 4744 | 9.0719 | 31.80 | 97.46 | 0.06270 |
| Q5_K_S | 5185 | 8.8731 | 34.76 | 99.64 | 0.06147 |
| Q5_0 | 5199 | 8.9472 | 34.85 | 98.82 | 0.06245 |
| Q5_K_M | 5312 | 8.8943 | 35.61 | 99.41 | 0.06161 |
| Q5_1 | 5626 | 8.8900 | 37.71 | 99.45 | 0.06160 |
| Q6_K | 6122 | 8.9112 | 41.04 | 99.22 | 0.06194 |
| Q8_0 | 7928 | 8.8278 | 53.14 | 100.15 | 0.06114 |
| F16 | 14918 | 8.8414 | 100.00 | 100.00 | 0.06124 |
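The Size (%) and Accuracy (%) columns appear to be derived from the F16 baseline: size as a fraction of the F16 file size, and accuracy as the ratio of the F16 perplexity to the quant's perplexity. A minimal sketch reproducing a few rows (values copied from the table above):

```python
# F16 is the full-precision baseline from the last row of the table.
F16_PPL = 8.8414
F16_SIZE_MB = 14918

quants = {
    # name: (size_mb, ppl), copied from the table
    "IQ1_S": (1821, 34.4982),
    "IQ4_XS": (4102, 9.0189),
    "Q8_0": (7928, 8.8278),
}

for name, (size_mb, ppl) in quants.items():
    size_pct = 100 * size_mb / F16_SIZE_MB  # fraction of the F16 file size
    accuracy_pct = 100 * F16_PPL / ppl      # F16 PPL divided by quant PPL
    print(f"{name}: size {size_pct:.2f}%, accuracy {accuracy_pct:.2f}%")
```

This matches the table, e.g. IQ1_S at 12.21% of the F16 size retains 25.63% accuracy, while Q8_0 at 53.14% of the size slightly exceeds the F16 baseline (100.15%).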


Original Model Card

👋👋 We have revised our license to revitalize the research ecosystem. 👋👋

Introduction

We introduce EXAONE-3.0-7.8B-Instruct, a pre-trained and instruction-tuned bilingual (English and Korean) generative model with 7.8 billion parameters. The model was pre-trained with 8T curated tokens and post-trained with supervised fine-tuning and direct preference optimization. It demonstrates highly competitive benchmark performance against other state-of-the-art open models of similar size.

For more details, please refer to our technical report, blog and GitHub.

Quickstart

We recommend using transformers v4.41 or later.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct")

# Choose your prompt
prompt = "Explain who you are"  # English example
prompt = "너의 소원을 말해봐"   # Korean example ("Tell me your wish")

messages = [
    {"role": "system",
     "content": "You are EXAONE model from LG AI Research, a helpful assistant."},
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),  # move inputs to wherever device_map placed the model
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128
)
print(tokenizer.decode(output[0]))
```
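Since this repository hosts GGUF quantizations, the model can also be run directly with llama.cpp instead of transformers. A hedged sketch (the filename below is hypothetical; substitute the quant you downloaded from this repo):

```shell
# Conversation mode with the system prompt the model was trained on;
# -n limits generation to 128 new tokens, matching the snippet above.
./llama-cli -m EXAONE-3.0-7.8B-Instruct-IQ4_XS.gguf \
    -p "You are EXAONE model from LG AI Research, a helpful assistant." \
    -cnv -n 128
```

Exact flag names can vary between llama.cpp versions; check `./llama-cli --help` for the build you are using.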

Note

The EXAONE 3.0 instruction-tuned language model was trained to utilize the system prompt, so we highly recommend using the system prompts provided in the code snippet above.

Evaluation

We compared EXAONE-3.0-7.8B-Instruct with similar-sized instruction-tuned LLMs. To verify the performance of real-world use cases, we measured benchmarks that have a high correlation with LMSYS Chatbot Arena. Some experimental results are shown below. The full evaluation results can be found in the technical report.

| Language | Benchmark | EXAONE 3.0 7.8B Inst. | Llama 3.1 8B Inst. | Gemma 2 9B Inst. | QWEN 2 7B Inst. | Phi 3 7B Inst. | Mistral 7B Inst. |
|----------|-----------|----------------------|--------------------|------------------|-----------------|----------------|------------------|
| English | MT-Bench | 9.01 | 7.95 | 8.52 | 8.41 | 8.52 | 7.72 |
| English | Arena-Hard-v0.1 | 46.8 | 28.0 | 42.1 | 21.7 | 29.1 | 16.2 |
| English | WildBench | 48.2 | 34.5 | 41.5 | 34.9 | 32.8 | 29.0 |
| English | AlpacaEval 2.0 LC | 45.0 | 31.5 | 47.5 | 24.5 | 37.1 | 31.0 |
| Korean | KoMT-Bench[1] | 8.92 | 6.06 | 7.92 | 7.69 | 4.87 | 5.20 |
| Korean | LogicKor | 8.62 | 5.40 | 8.07 | 6.12 | 3.76 | 3.42 |
  • [1] KoMT-Bench is a dataset created by translating MT-Bench into Korean; see README for more details.

Limitation

The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The model generates responses based on the output probability of tokens, which is determined during training. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.

  • Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
  • Biased responses may be generated, which are associated with age, gender, race, and so on.
  • The generated responses rely heavily on statistics from the training data, which can result in the generation of semantically or syntactically incorrect sentences.
  • Since the model does not reflect the latest information, the responses may be false or contradictory.

LG AI Research strives to reduce the potential risks that may arise from the EXAONE language model. Users may not engage in any malicious activity (e.g., entering illegal information) intended to induce outputs that violate LG AI's ethical principles when using the EXAONE language model.

License

The model is licensed under the EXAONE AI Model License Agreement 1.1 - NC.

Citation

```
@article{exaone-3.0-7.8B-instruct,
  title={EXAONE 3.0 7.8B Instruction Tuned Language Model},
  author={LG AI Research},
  journal={arXiv preprint arXiv:2408.03541},
  year={2024}
}
```

Contact

LG AI Research Technical Support: contact_us@lgresearch.ai
