---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---
|

<div align='center'>

<img src="https://cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/TQSWE0e3dAO_Ksbb8b5Xd.png" width='45%'/>

<h1>WelSSiSKo: Welfare Domain-Specific Model</h1>

</div>
|

---

|
# Github

> If you want to learn how to use this model, please check my GitHub repository :)

[Github Repo](https://github.com/ash-hun/WelSSISKo)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ash-hun/WelSSISKo/blob/main/WelSSiSKo_Inference.ipynb)
|

# What is the Base Model?

> [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

|
# Training procedure

The following `bitsandbytes` quantization config was used during training:

- **load_in_4bit**: True
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: False
- **bnb_4bit_compute_dtype**: float16
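
For clarity, the settings listed above map onto a `transformers.BitsAndBytesConfig` as sketched below. This is a minimal reconstruction of the quantization config, not the exact training script:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_use_double_quant=False,       # no nested (double) quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 on dequantized weights
)
```

This config would then be passed as `quantization_config=bnb_config` when calling `AutoModelForCausalLM.from_pretrained` on the base model before attaching the LoRA adapter.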
|

# Framework versions

- PEFT 0.8.2

|
# Evaluation Score

- Since no suitable domain benchmark set exists, we conducted a qualitative evaluation; the resulting **average score is 0.74**.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/HwIKWCJb3bT2pk_tP70e0.png)