---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---
# WelSSiSKo : Welfare Domain Specific Model
## Github
If you want to learn how to use this model, please check my GitHub repository :)
👉 Github Repo
## What is the Base Model?
👉 beomi/llama-2-ko-7b
## Training procedure
The following bitsandbytes quantization config was used during training:
- load_in_4bit: True
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
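As a rough sketch, the config above corresponds to loading the base model in 4-bit and attaching the PEFT adapter like this (the adapter repo id below is a placeholder; substitute this model's actual Hub id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Same 4-bit settings as listed in the training procedure above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the base model quantized to 4-bit (requires a CUDA GPU and bitsandbytes)
base = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-7b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the WelSSiSKo LoRA adapter; replace the placeholder id with the real one
model = PeftModel.from_pretrained(base, "path/to/WelSSiSKo-adapter")
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")
```

Note that `bnb_4bit_compute_dtype=torch.float16` keeps the matmul compute path in fp16 while weights stay in NF4, which is the usual QLoRA-style setup.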
## Framework versions
- PEFT 0.8.2