---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---

# WelSSiSKo : Welfare Domain-Specific Model

---

# Github ▼
> If you want to learn how to use this model, please check my GitHub repository :) 👉 [Github Repo](https://github.com/ash-hun/WelSSISKo)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ash-hun/WelSSISKo/blob/main/WelSSiSKo_Inference.ipynb)

# What is the Base Model? ▼
> 👉 [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

# Training procedure ▼
The following `bitsandbytes` quantization config was used during training:
- **load_in_4bit**: True
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: False
- **bnb_4bit_compute_dtype**: float16

# Framework versions ▼
- PEFT 0.8.2

# Evaluate Score
- Since no suitable domain benchmark set exists, we ran a qualitative evaluation; the resulting **average score is 0.74**.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/HwIKWCJb3bT2pk_tP70e0.png)