---
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
  - ko
  - en
tags:
  - meta
  - llama
  - llama-3
  - akallama
---

# AKALLAMA

We introduce AkaLlama-70B, a Korean-focused, open-source 70B large language model. It demonstrates a considerable improvement in Korean fluency, especially compared to the base Llama 3 model. To our knowledge, this is one of the first open-source 70B Korean-speaking language models.

## Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub.

## How to use

This repo provides full model weight files for AkaLlama-70B-v0.1.

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

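# Set the system prompt used to steer the model (left empty in this example).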
system_prompt = """
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "네 이름은 뭐야?"},  # "What is your name?"
]

prompt = pipeline.tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
)

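# Stop generation at either the default EOS token or Llama 3's end-of-turn token.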
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
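
If you prefer not to use the pipeline helper, the same generation can be done with `AutoModelForCausalLM` directly. The snippet below is a minimal sketch that mirrors the settings above; details such as the empty system prompt are illustrative, not an official recipe.

```python
# Minimal sketch: load the full AkaLlama-70B weights and generate without the
# pipeline helper. Generation settings mirror the example above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "네 이름은 뭐야?"},  # "What is your name?"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```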

## Training Details

### Training Procedure

We trained AkaLlama using a preference-learning alignment algorithm called Odds Ratio Preference Optimization (ORPO). Our training pipeline is almost identical to that of HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1, aside from minor hyperparameter changes. Please check out Hugging Face's alignment handbook for further details, including the chat template.
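
For orientation, the snippet below is a minimal sketch of ORPO fine-tuning with the `trl` library. The base model id, dataset name, and hyperparameters are placeholders for illustration and do not reflect the actual AkaLlama training configuration; refer to the alignment handbook for the real recipe.

```python
# Minimal ORPO sketch with trl. All names and hyperparameters are placeholders,
# not the configuration used to train AkaLlama.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-70B"                         # placeholder base model
dataset = load_dataset("your-org/preference-pairs", split="train")  # needs prompt/chosen/rejected columns

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

args = ORPOConfig(
    output_dir="akallama-orpo",
    beta=0.1,                        # weight of the odds-ratio term in the ORPO loss
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,             # newer trl versions take processing_class instead
)
trainer.train()
```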

### Training Data

A detailed description of the training data will be announced later.

## Examples

WIP

## Special Thanks

- Data Center of the Department of Artificial Intelligence at Yonsei University, for providing the computation resources