---
license: llama2
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---

<div align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/TQSWE0e3dAO_Ksbb8b5Xd.png" width='30%'/>
<h1>"WelSSiSKo : Welfare Domain Specific Model"</h1>
</div>

---

# What is BaseModel ▼
👉 [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

# Github ▼
👉 [Github Repo](https://github.com/ash-hun/WelSSISKo)

# Training procedure ▼
The following `bitsandbytes` quantization config was used during training (see the code sketch after this list):

- **load_in_4bit**: True
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: False
- **bnb_4bit_compute_dtype**: torch.bfloat16
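
A minimal sketch of this configuration as a `transformers` `BitsAndBytesConfig`, assuming the standard API (the variable name is illustrative):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```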

# Framework versions ▼
- PEFT 0.8.2
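
# How to load ▼
A usage sketch for loading the adapter with `peft`, assuming the adapter repo id is `Ash-Hun/WelSSiSKo` (replace it with this repository's actual id if it differs):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# NOTE: "Ash-Hun/WelSSiSKo" is an assumed adapter repo id; the tokenizer is
# taken from the base model beomi/llama-2-ko-7b linked above.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Ash-Hun/WelSSiSKo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")

prompt = "기초생활수급 신청 방법을 알려줘."  # "Tell me how to apply for basic livelihood benefits."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```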