---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
language:
- ko
- en
tags:
- Economic
- Finance
---
# Model Details
Model Developers: Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)
### Model Description
This model is a language model specialized in economics and finance, fine-tuned on a variety of economics- and finance-related data.
The data sources are listed below; we do not release the training data itself because it was collected for research and policy purposes.
If you wish to use the original data, please contact the original authors directly for permission.
- **Developed by:** Sogang University SGEconFinlab(<https://sc.sogang.ac.kr/aifinlab/>)
- **License:** cc-by-nc-4.0
- **Base Model:** yanolja/KoSOLAR-10.7B-v0.2(<https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2>)
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

peft_model_id = "SGEcon/KoSOLAR-10.7B-v0.2_fin_v4"
config = PeftConfig.from_pretrained(peft_model_id)

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load the quantized base model, then attach the fine-tuned LoRA adapter
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map={"": 0})
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model.eval()
```
## Conducting Conversation
```python
import re

def gen(x):
    inputs = tokenizer(f"### 질문: {x}\n\n### 답변:", return_tensors='pt', return_token_type_ids=False)
    # Move the inputs to the GPU if one is available
    inputs = {k: v.to(device="cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}
    gened = model.generate(
        **inputs,
        max_new_tokens=256,                    # maximum number of newly generated tokens
        early_stopping=True,
        num_return_sequences=1,                # generate a single answer
        do_sample=True,                        # enable sampling for more varied answers
        eos_token_id=tokenizer.eos_token_id,   # stop at the EOS token
        temperature=0.9,                       # temperature controlling generation diversity
        top_p=0.8,                             # p value for nucleus sampling
        top_k=50                               # k value for top-k sampling
    )
    # Decode the generated sequence into output text
    decoded = tokenizer.decode(gened[0], skip_special_tokens=True).strip()
    # Keep only the text that follows the "### 답변:" marker
    answer_start_idx = decoded.find("### 답변:") + len("### 답변:")
    complete_answer = decoded[answer_start_idx:].strip()
    # Cut the answer at the last sentence-ending punctuation mark (. ? !),
    # dropping any incomplete trailing sentence
    match = re.search(r"[\.\?\!][^\.\?\!]*$", complete_answer)
    if match:
        complete_answer = complete_answer[:match.start() + 1].strip()
    return complete_answer
```
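
For instance, assuming the model, tokenizer, and `gen` above have been set up, a question can be asked like this (the question is the one used in the Example section below):

```python
# Minimal usage sketch (assumes model, tokenizer, and gen() are defined as above).
# The question translates to "Can you explain the role of a central bank?"
print(gen("중앙은행의 역할에 대해서 설명해줄래?"))
```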
## Training Details
We used QLoRA to fine-tune the base model.
Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that backpropagates gradients through a frozen, 4-bit-quantized pre-trained language model into low-rank adapters, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
The method combines NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which absorb memory spikes during mini-batch processing, so memory efficiency improves without sacrificing performance.
We also performed instruction tuning on the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from Hugging Face.
Instruction tuning is supervised fine-tuning in which each training example pairs an instruction (and any accompanying input) with the desired output; a formatting sketch follows below.
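
The exact training script is not published, but as a rough, hypothetical sketch of this pairing, each instruction and answer could be serialized into a single training string. The `### 질문: / ### 답변:` template is assumed here only because it is the format used at inference time above, and the example pair is made up for illustration:

```python
# Hypothetical sketch: turn an instruction/answer pair into a single training string,
# reusing the "### 질문: / ### 답변:" template shown in the inference code above.
def format_example(instruction: str, answer: str) -> str:
    return f"### 질문: {instruction}\n\n### 답변: {answer}"

sample = format_example(
    "기준금리란 무엇인가요?",                          # "What is the policy interest rate?" (made-up example)
    "기준금리는 중앙은행이 결정하는 정책 금리입니다."   # illustrative answer, not from the training data
)
print(sample)
```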
### Training Data
1. Bank of Korea: 700 Selected Economic and Financial Terms (<https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765>)
2. Financial Supervisory Service: FINE Financial Consumer Information Portal, Financial Glossary (<https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021>)
3. KDI Economic Information Center: Current Affairs Glossary (<https://eiec.kdi.re.kr/material/wordDic.do>)
4. The Korea Economic Daily / Hankyung.com: Hankyung Dictionary of Economic Terms (<https://terms.naver.com/list.naver?cid=42107&categoryId=42107>), Today's TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1>), Today's Junior TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5>), Saenggeul Saenggeul Hankyung (<https://sgsg.hankyung.com/tesat/study>)
5. Ministry of SMEs and Startups / Government of the Republic of Korea: Ministry of SMEs and Startups Specialized Terminology (<https://terms.naver.com/list.naver?cid=42103&categoryId=42103>)
6. Go Seong-sam / Beopmun Publishing: Dictionary of Accounting and Tax Terms (<https://terms.naver.com/list.naver?cid=51737&categoryId=51737>)
7. Mankiw's Principles of Economics, 8th edition: Word Index
8. kyujinpy/KOR-OpenOrca-Platypus-v3 (<https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3>)

The copyright of the data belongs to the original authors, so please contact them before reusing it.
### Training Hyperparameters
|Hyperparameter|SGEcon/KoSOLAR-10.7B-v0.2_fin_v4|
|------|---|
|LoRA method|LoRA|
|load_in_4bit|True|
|learning rate|1e-5|
|lr scheduler|linear|
|LoRA alpha|16|
|LoRA rank|16|
|LoRA dropout|0.05|
|optim|paged_adamw_32bit|
|target_modules|q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head|
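
The table maps naturally onto a `BitsAndBytesConfig`/`LoraConfig`/`TrainingArguments` setup. The sketch below only illustrates how these hyperparameters would typically be wired together; the actual training script is not released, and the batch size, epoch count, and output directory shown here are placeholders, not the values used:

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantized base model, as in the loading example above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings taken from the hyperparameter table
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "lm_head"],
    task_type="CAUSAL_LM",
)

# Optimizer and schedule from the table; batch size, epochs, and output_dir
# are placeholders for illustration only
training_args = TrainingArguments(
    output_dir="kosolar-fin-v4-qlora",
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    optim="paged_adamw_32bit",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,
)
```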
### Example
> Can you explain the role of a central bank?
>> A central bank is the institution that holds the authority to issue currency and to control the financial system. It formulates the nation's monetary, foreign-exchange, and financial policies while supervising and overseeing financial institutions such as commercial banks. The central bank acts as a lender to the government and to commercial banks, and commercial banks borrow funds from or place deposits with it. To carry out its monetary and credit policy, the central bank lends funds to, or takes deposits from, financial institutions. Alongside its role as a lender to commercial banks, it also supervises and oversees them. When a commercial bank extends a loan, rather than paying the full amount out directly, part or all of the loan may be taken back as a deposit and lent to or deposited with the central bank; raising the interest rate on deposits induces depositors to place their funds with the central bank. Conversely, when a commercial bank makes a loan, the loan amount is paid to the borrowing bank instead of being deposited by the lending bank.