---
license: llama2
inference: false
datasets:
  - Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
  - torch
  - llama2
  - domain-specific-lm
---

"WelSSiSKo : Welfare Domain Specific Model"


What is BaseModel ▼

👉 beomi/llama-2-ko-7b

Github ▼

👉 Github Repo

Training procedure ▼

The following bitsandbytes quantization config was used during training:

  • load_in_4bit: True
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: torch.bfloat16
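
For reference, the settings above can be reproduced with the `BitsAndBytesConfig` class from `transformers`, as in the minimal sketch below. Only the base model id `beomi/llama-2-ko-7b` and the quantization values come from this card; the rest is an illustrative loading setup, not the original training script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization, matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")
```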

Framework versions ▼

  • PEFT 0.8.2
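
A minimal inference sketch with PEFT, assuming the adapter is published under the hypothetical repo id `Ash-Hun/WelSSiSKo` (check the actual repository name on the Hub) and reusing `base_model` and `tokenizer` from the snippet above:

```python
from peft import PeftModel

# Attach the LoRA adapter to the quantized base model
model = PeftModel.from_pretrained(base_model, "Ash-Hun/WelSSiSKo")  # hypothetical adapter repo id

prompt = "기초생활수급자가 받을 수 있는 복지 혜택을 알려주세요."  # example welfare-domain question (Korean)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```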