|
--- |
|
license: llama2 |
|
base_model: beomi/llama-2-ko-7b |
|
inference: false |
|
datasets: |
|
- Ash-Hun/Welfare-QA |
|
library_name: peft |
|
pipeline_tag: text-generation |
|
tags: |
|
- torch |
|
- llama2 |
|
- domain-specific-lm |
|
--- |
|
|
|
<div align='center'> |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/TQSWE0e3dAO_Ksbb8b5Xd.png" width='45%'/> |
|
<h1>"WelSSiSKo: Welfare Domain-Specific Model"</h1>
|
</div> |
|
|
|
--- |
|
|
|
|
|
# Github ▼
|
> If you want to learn how to use this model, please check my GitHub repository :)
|
👉 [Github Repo](https://github.com/ash-hun/WelSSISKo) |
|
|
|
|
|
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ash-hun/WelSSISKo/blob/main/WelSSiSKo_Inference.ipynb) |
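
For quick reference, a minimal inference sketch is shown below. It assumes the adapter is published under a Hub repo id like `Ash-Hun/WelSSiSKo` (a hypothetical id — check the GitHub repository and notebook above for the exact one), and mirrors the 4-bit quantization config listed in the Training procedure section:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "beomi/llama-2-ko-7b"
ADAPTER = "Ash-Hun/WelSSiSKo"  # hypothetical adapter repo id -- see the GitHub repo

# Same 4-bit NF4 config as used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA adapter

prompt = "..."  # a welfare-domain question in Korean
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```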
|
|
|
|
|
# Base Model ▼
|
> 👉 [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) |
|
|
|
|
|
# Training procedure ▼
|
The following `bitsandbytes` quantization config was used during training: |
|
- **load_in_4bit**: True |
|
- **bnb_4bit_quant_type**: nf4 |
|
- **bnb_4bit_use_double_quant**: False |
|
- **bnb_4bit_compute_dtype**: float16 |
|
|
|
# Framework versions ▼
|
- PEFT 0.8.2