---
license: other
base_model:
- beomi/Llama-3-Open-Ko-8B-Instruct-preview
- beomi/Llama-3-Open-Ko-8B
library_name: transformers
tags:
- mergekit
- merge
- llama.cpp
---
# Llama-3-Open-Ko-Linear-8B-GGUF
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp).
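The quantized files can be regenerated and smoke-tested with llama.cpp's standard tooling. A minimal sketch, assuming a recent llama.cpp build; the file names below are illustrative and the exact commands are not taken from this repo:
```
# convert the merged HF checkpoint to GGUF (f16), then quantize to Q5_K_M
python convert_hf_to_gguf.py ./Llama-3-Open-Ko-Linear-8B --outtype f16 \
  --outfile llama-3-open-ko-linear-8b-f16.gguf
./llama-quantize llama-3-open-ko-linear-8b-f16.gguf \
  llama-3-open-ko-linear-8b-Q5_K_M.gguf Q5_K_M

# quick smoke test of the quantized file
./llama-cli -m llama-3-open-ko-linear-8b-Q5_K_M.gguf -c 4096 --temp 0.7 -n 128 \
  -p "대한민국의 수도는 어디인가요?"
```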
## Merge Details
"I thought about it yesterdayβmerging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview, using task arithmetic, is like composing a korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) as a base.
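In outline, task arithmetic treats each contributing model as a "task vector" (its weight delta from the base), scales the vectors by the configured weights, and adds the sum back onto the base:

θ_merged = θ_base + Σᵢ wᵢ · (θᵢ − θ_base)

With the weights in the configuration below, the Instruct-preview delta is applied at 0.8 on top of the base model; the base's own delta is zero, so its 0.2 entry should leave the base weights unchanged.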
### Models Merged
The following models were included in the merge:
* [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
### Ollama
```
ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M
```
Adjust the Modelfile below to suit your needs.
[Modelfile_Q5_K_M]
```
FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf
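
# Llama 3 chat format: header tokens mark each role and <|eot_id|> closes each turn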
TEMPLATE """
{{- if .System }}
<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>
{{- end }}
<|start_header_id|>user<|end_header_id|>
Human:
{{ .Prompt }}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Assistant:
"""
SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
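# stop on Llama 3 control tokens so generation ends cleanly at turn boundaries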
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER top_k 50
PARAMETER top_p 0.95
```
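Once created, the model can be run directly. The SYSTEM prompt above tells the model to act as a friendly chatbot, answer requests as thoroughly and kindly as possible, and reply in Korean, so a Korean question makes a natural smoke test:
```
ollama run Llama-3-Open-Ko-Linear-8B "서울에 대해 간단히 소개해줘."
```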
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- layer_range: [0, 31]
model: beomi/Llama-3-Open-Ko-8B
parameters:
weight: 0.2
- layer_range: [0, 31]
model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
parameters:
weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
```
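To reproduce the merge from this configuration, mergekit's YAML front end can be used. A minimal sketch, assuming the YAML above is saved as `config.yaml`; the output path and flags are illustrative:
```
pip install mergekit
mergekit-yaml config.yaml ./Llama-3-Open-Ko-Linear-8B --copy-tokenizer --cuda
```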