---
license: cc-by-4.0
---

# **KoQuality-Polyglot-5.8b**
KoQuality-Polyglot-5.8b is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b), trained on the [KoQuality dataset](https://huggingface.co/datasets/DILAB-HYU/KoQuality). Notably, among models of the same size, and excluding those that use CoT datasets, KoQuality-Polyglot-5.8b achieves exceptional performance despite being trained on a relatively small dataset.
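For reference, a minimal inference sketch with the 🤗 Transformers library is shown below; the repository id `DILAB-HYU/KoQuality-Polyglot-5.8b` and the generation settings are illustrative assumptions, not values taken from this card.

```python
# Minimal inference sketch (repo id and generation settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DILAB-HYU/KoQuality-Polyglot-5.8b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 5.8B model on a single GPU
    device_map="auto",
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```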
## Open Ko-LLM LeaderBoard
<img src="https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/iYzR_mdvkcjnVquho0Y9R.png" width="1000px">
Our approach centers on leveraging high-quality instruction datasets to deepen the model's understanding of instructions while preserving the performance of the pre-trained language model (PLM). Compared to alternative models, we achieve this with minimal training, **using only 1% of the dataset, which amounts to 4006 instructions**.
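The selected instructions can be inspected directly from the dataset hub. A minimal sketch follows, assuming the default configuration and a `train` split (neither is stated in this card):

```python
# Inspect the KoQuality instruction data (config and split names are assumptions).
from datasets import load_dataset

ds = load_dataset("DILAB-HYU/KoQuality", split="train")
print(len(ds))  # the card reports 4006 selected instructions; the hub dataset's size may differ
print(ds[0])    # one example; field names depend on the dataset schema
```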