---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---

"WelSSiSKo : Welfare Domain Specific Model"

---

# Github ▼

> To learn how to use this model, please check the GitHub repository :) 👉 [Github Repo](https://github.com/ash-hun/WelSSISKo)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ash-hun/WelSSISKo/blob/main/WelSSiSKo_Inference.ipynb)

# Base Model ▼

> 👉 [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

# Training procedure ▼

The following `bitsandbytes` quantization config was used during training:
- **load_in_4bit**: True
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: False
- **bnb_4bit_compute_dtype**: float16

# Framework versions ▼

- PEFT 0.8.2