---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# Model Details

Model Developers: Sogang University SGEconFinlab

### Model Description

This model is a language model specialized in economics and finance, trained on a variety of economics- and finance-related data. The data sources are listed below; we are not releasing the training data itself because it was collected for research and policy purposes. If you wish to use the original data rather than our training data, please contact the original authors directly for permission.

- **Developed by:** [Sogang University SGEconFinlab]
- **Language(s) (NLP):** [Ko/En]
- **License:** [apache-2.0]
- **Base Model:** [yanolja/KoSOLAR-10.7B-v0.2]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

peft_model_id = "SGEcon/KoSOLAR-10.7B-v0.2_fin_v4"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 4-bit (NF4) quantization to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map={"": 0}
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model.eval()


def gen(x):
    # Prompt format used at training time: "### 질문:" = "### Question:", "### 답변:" = "### Answer:"
    inputs = tokenizer(f"### 질문: {x}\n\n### 답변:", return_tensors='pt', return_token_type_ids=False)
    # Move the inputs to the GPU if one is available
    inputs = {k: v.to(device="cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}
    gened = model.generate(
        **inputs,
        max_new_tokens=256,
        num_return_sequences=4,  # generate 4 candidate answers (count is adjustable)
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,  # stop generation at the EOS token
        temperature=0.9,
        top_p=0.8,
        top_k=50
    )
    complete_answers = []
    for gen_seq in gened:
        decoded = tokenizer.decode(gen_seq, skip_special_tokens=True).strip()
        # Keep only the text after the first "### 답변:" ("Answer:") marker
        first_answer_start_idx = decoded.find("### 답변:") + len("### 답변:")
        temp_answer = decoded[first_answer_start_idx:].strip()
        # Truncate at a second "### 답변:" marker, if the model produced one
        second_answer_start_idx = temp_answer.find("### 답변:")
        if second_answer_start_idx != -1:
            complete_answer = temp_answer[:second_answer_start_idx].strip()
        else:
            complete_answer = temp_answer  # no second marker: return the whole answer
        complete_answers.append(complete_answer)

    return complete_answers
```

Calling `gen()` returns a list of four candidate answers: everything before the first `### 답변:` ("Answer:") marker is stripped from each decoded sequence, and any second generated turn is truncated.

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
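That estimate reduces to energy used (hardware power draw × hours of training) multiplied by the carbon intensity of the compute region's grid. A minimal sketch of the arithmetic; all numbers below are hypothetical placeholders, not this model's actual training footprint:

```python
# Hypothetical back-of-the-envelope carbon estimate (placeholder numbers only)
gpu_power_kw = 0.4       # assumed per-GPU power draw in kW
num_gpus = 4             # assumed number of GPUs
hours = 24               # assumed training time in hours
carbon_intensity = 0.45  # assumed grid intensity in kgCO2eq per kWh

energy_kwh = gpu_power_kw * num_gpus * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kgCO2eq")
```

The fields below are the inputs such an estimate needs.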
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]