---
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- ko
- en
tags:
- meta
- llama
- llama-3
- akallama
---
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/>
</a>


# AKALLAMA

We introduce AkaLlama-70B, a Korean-focused, open-source 70B large language model.
It demonstrates a considerable improvement in Korean fluency, especially compared to the base Llama 3 model.
To our knowledge, this is one of the first open-source 70B Korean-speaking language models.

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub.

- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
- **Language(s) (NLP):** Korean, English
- **License:** llama3
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

## How to use

This repo provides full model weight files for AkaLlama-70B-v0.1.

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

system_prompt = """
"""

messages = [
    {"role": "system", "content": "system_prompt"},
    {"role": "user", "content": "네 이름은 뭐야?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## Training Details
### Training Procedure

We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691).
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes.
Please check out Hugging Face's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
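
For illustration only, the sketch below shows what an ORPO fine-tuning run can look like using TRL's `ORPOTrainer`. The dataset identifier and hyperparameters are placeholders, not the configuration actually used to train AkaLlama; refer to the alignment handbook recipes for the real setup.

```python
# Illustrative ORPO fine-tuning sketch with TRL; the dataset id and hyperparameters
# below are placeholders, not the settings used to train AkaLlama.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-70B-Instruct"  # base model listed above

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# ORPO expects a preference dataset with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")  # placeholder

args = ORPOConfig(
    output_dir="akallama-orpo",
    beta=0.1,                      # odds-ratio loss weight; illustrative value
    learning_rate=5e-6,            # illustrative value
    per_device_train_batch_size=1,
    num_train_epochs=1,
    max_length=2048,
    max_prompt_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer TRL releases rename this argument to `processing_class`
)
trainer.train()
```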

### Training Data

Detailed descriptions regarding training data will be announced later.

### Examples

WIP

## Special Thanks

- Data Center of the Department of Artificial Intelligence at Yonsei University for providing the computational resources