|
--- |
|
pipeline_tag: text-generation |
|
license: other |
|
license_name: llama3 |
|
license_link: LICENSE |
|
language: |
|
- ko |
|
- en |
|
tags: |
|
- meta |
|
- llama |
|
- llama-3 |
|
- akallama |
|
library_name: transformers |
|
--- |
|
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> |
|
<img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/> |
|
</a> |
|
|
|
|
|
# AKALLAMA |
|
|
|
We introduce AkaLlama-70B, a Korean-focused, open-source 70B large language model.

It demonstrates considerable improvement in Korean fluency, especially compared to the base Llama 3 model.

To our knowledge, this is one of the first open-source 70B Korean-speaking language models.
|
|
|
### Model Description |
|
|
|
This is the model card of a 🤗 Transformers model that has been pushed to the Hub.
|
|
|
- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/) |
|
- **Language(s) (NLP):** Korean, English |
|
- **License:** llama3 |
|
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) |
|
|
|
## How to use |
|
|
|
This repo provides full model weight files for AkaLlama-70B-v0.1. |
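
If you prefer to fetch the weight files into a local directory before loading, the standard `huggingface_hub` downloader can be used. The snippet below is a small illustrative sketch and not part of the original instructions:

```python
from huggingface_hub import snapshot_download

# Download every file in the AkaLlama-70B-v0.1 repository to the local cache
# and return the path of the local directory.
local_dir = snapshot_download(repo_id="mirlab/AkaLlama-llama3-70b-v0.1")
print(local_dir)
```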
|
|
|
### Use with transformers
|
|
|
See the snippet below for usage with Transformers: |
|
|
|
```python
import transformers
import torch

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# System prompt for the assistant (left empty here; fill in as needed).
system_prompt = """
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "네 이름은 뭐야?"},  # "What's your name?"
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
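
Loading the full bfloat16 weights of a 70B model requires roughly 140 GB of GPU memory across your devices. When that is not available, a quantized load is a common workaround; the snippet below is an illustrative sketch using `bitsandbytes` 4-bit quantization, which is our assumption and not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

# 4-bit NF4 quantization shrinks the memory footprint to roughly a quarter
# of the bfloat16 size, at some cost in output quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```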
|
|
|
## Training Details |
|
### Training Procedure |
|
|
|
We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691). |
|
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes. |
|
Please check out Hugging Face's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
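
For readers unfamiliar with ORPO: it augments the standard supervised fine-tuning loss with an odds-ratio-based preference term over chosen/rejected responses, so no separate reference model is required. The sketch below shows how such a run is commonly set up with the `trl` library; the dataset name and all hyperparameters are placeholders, not the values used for AkaLlama:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# A preference dataset with "prompt", "chosen", and "rejected" columns (placeholder name).
dataset = load_dataset("your-org/your-preference-dataset", split="train")

# beta weights the odds-ratio preference term relative to the SFT loss.
config = ORPOConfig(
    output_dir="orpo-sketch",
    beta=0.1,
    max_length=2048,
    max_prompt_length=1024,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl releases call this argument processing_class
)
trainer.train()
```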
|
|
|
### Training Data |
|
|
|
A detailed description of the training data will be released later.
|
|
|
### Examples |
|
|
|
WIP |
|
|
|
## Special Thanks |
|
|
|
- The Data Center of the Department of Artificial Intelligence at Yonsei University, for providing computational resources