---
language:
- ko
datasets: DopeorNope/OpenOrca-near-dedup-v1
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커**  
**The license is `cc-by-nc-sa`.**
  
## Model Details

**Model Developers** SeungyooLee (DopeorNope)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**  
pub-llama-13b-v6 is an auto-regressive language model based on the LLaMA2 transformer architecture.


**Base Model** [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)

**Training Dataset**  
The DopeorNope/OpenOrca-near-dedup-v1 dataset was built with a [near-deduplication algorithm](https://arxiv.org/abs/2107.06499) to reduce near-duplicate examples. We will release it publicly soon.
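
The near-deduplication method cited above estimates document similarity with MinHash signatures. The following is a minimal, illustrative sketch of that idea in pure Python, not the actual pipeline used to build the dataset; the shingle size, number of hash permutations, and similarity threshold are assumptions chosen for the example:

```python
import hashlib
from itertools import combinations

def shingles(text, n=5):
    """Character n-gram shingles of a whitespace-normalized, lowercased text."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash_signature(shingle_set, num_perm=64):
    """MinHash signature: for each seeded hash function, keep the minimum
    hash value over all shingles in the set."""
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(8, "big")
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "big",
            )
            for s in shingle_set
        ))
    return sig

def near_duplicates(docs, threshold=0.7, num_perm=64):
    """Return index pairs whose estimated Jaccard similarity (fraction of
    matching signature slots) meets or exceeds the threshold."""
    sigs = [minhash_signature(shingles(d), num_perm) for d in docs]
    pairs = []
    for i, j in combinations(range(len(docs)), 2):
        est = sum(a == b for a, b in zip(sigs[i], sigs[j])) / num_perm
        if est >= threshold:
            pairs.append((i, j))
    return pairs
```

A production pipeline would additionally bucket signatures with locality-sensitive hashing so that only candidate pairs are compared, rather than all pairs as in this sketch.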