|
--- |
|
library_name: peft |
|
base_model: beomi/llama-2-ko-7b |
|
--- |
|
|
|
# Model Card
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
A multi-turn chatbot model that is still a work in progress, built as a PEFT adapter on `beomi/llama-2-ko-7b`.
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
**Prompt Template** (English: "Referring to the previous conversation and the current instruction, empathize with the situation and generate a kind response. At the end of the response, ask a question related to the conversation so far."):

이전 대화와 현재 대화의 명령어를 참고하여 상황에 공감하고 친절한 응답을 생성해주세요. 응답 마지막에는 지금까지의 내용과 관련된 질문을 해주세요.

[이전 대화]

{}

[현재 대화]

### 명령어:

{}

### 응답:
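
As a minimal sketch of how the two `{}` slots in the template above are filled, using Python's `str.format` (the helper name `build_prompt` and the sample Korean strings are illustrative assumptions, not part of the released code):

```python
# Hypothetical helper: fill the model's prompt template with a
# conversation history and the current user instruction.
PROMPT_TEMPLATE = """이전 대화와 현재 대화의 명령어를 참고하여 상황에 공감하고 친절한 응답을 생성해주세요. 응답 마지막에는 지금까지의 내용과 관련된 질문을 해주세요.

[이전 대화]
{}

[현재 대화]
### 명령어:
{}

### 응답:
"""

def build_prompt(history: str, instruction: str) -> str:
    # The first {} slot receives the previous turns, the second
    # receives the current instruction; generation continues
    # after the trailing "### 응답:" marker.
    return PROMPT_TEMPLATE.format(history, instruction)

print(build_prompt("사용자: 안녕하세요", "오늘 기분이 좀 우울해요."))
```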
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Training procedure |
|
|
|
|
|
The following `bitsandbytes` quantization config was used during training: |
|
- quant_method: bitsandbytes |
|
- load_in_8bit: False |
|
- load_in_4bit: True |
|
- llm_int8_threshold: 6.0 |
|
- llm_int8_skip_modules: None |
|
- llm_int8_enable_fp32_cpu_offload: False |
|
- llm_int8_has_fp16_weight: False |
|
- bnb_4bit_quant_type: nf4 |
|
- bnb_4bit_use_double_quant: False |
|
- bnb_4bit_compute_dtype: float16 |
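
The config above can be reproduced at inference time with `transformers` and `peft`. This is a sketch only: the adapter repository id (`"path/to/adapter"`) is a placeholder, and loading requires a CUDA GPU and network access:

```python
# Sketch: load the 4-bit quantized base model and attach the PEFT adapter,
# mirroring the bitsandbytes settings listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")
base_model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
# "path/to/adapter" is a placeholder for this adapter's repo id or local path.
model = PeftModel.from_pretrained(base_model, "path/to/adapter")
```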
|
|
|
### Framework versions |
|
|
|
|
|
- PEFT 0.6.2 |
|
|