---
license: cc-by-nc-4.0
base_model: KT-AI/midm-bitext-S-7B-inst-v1
tags:
- generated_from_trainer
model-index:
- name: lora-midm-7b-food-order-understanding
  results: []
---

# Model: Midm
Dataset: nsmc
(https://huggingface.co/datasets/nsmc)
Train samples: 3,000
Test samples: 1,000

## [Test Results]

**Accuracy: 89.00%**

**Confusion Matrix**

|                    | Actual Positive | Actual Negative |
|:------------------:|:---------------:|:---------------:|
| Predicted Positive | 474             | 76              |
| Predicted Negative | 34              | 416             |

**Evaluation Metrics**

| Metric            | Value |
|:-----------------:|:-----:|
| Precision         | 0.862 |
| Recall            | 0.933 |
| F1 Score          | 0.896 |

## [Performance Improvements]
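As a sanity check, the reported metrics can be recomputed directly from the confusion matrix above (the F1 score follows from the stated precision and recall):

```python
# Recompute the reported metrics from the confusion matrix above.
tp, fp = 474, 76   # predicted positive: correct / incorrect
fn, tn = 34, 416   # predicted negative: incorrect / correct

total = tp + fp + fn + tn
accuracy = (tp + tn) / total                        # 890 / 1000 = 0.890
precision = tp / (tp + fp)                          # 474 / 550 ≈ 0.862
recall = tp / (tp + fn)                             # 474 / 508 ≈ 0.933
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.896

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```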
Increasing the number of training samples from 2,000 to 2,500 and then to 3,000 improved performance by about 8%; additional gains were pursued by tuning TrainingArguments parameters such as max_steps.

---

# lora-midm-7b-food-order-understanding

This model is a fine-tuned version of [KT-AI/midm-bitext-S-7B-inst-v1](https://huggingface.co/KT-AI/midm-bitext-S-7B-inst-v1) on the [nsmc](https://huggingface.co/datasets/nsmc) dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained on 3,000 nsmc samples and evaluated on 1,000 held-out samples (see the test results above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
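The hyperparameters listed above can be expressed as a `transformers` / `peft` configuration. This is a minimal sketch, not the author's training script: the LoRA rank, alpha, and dropout are assumptions (the card does not state them), while the TrainingArguments values come directly from the card.

```python
# Sketch of the training setup described in this card.
# LoraConfig values (r, lora_alpha, lora_dropout) are ASSUMED, not from the card.
from transformers import TrainingArguments
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,              # assumed rank
    lora_alpha=32,    # assumed scaling
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lora-midm-7b-food-order-understanding",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size 2
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=300,                  # training_steps: 300
    fp16=True,                      # mixed precision (Native AMP)
    seed=42,
)
```

The default optimizer in `TrainingArguments` already matches the card's Adam with betas=(0.9, 0.999) and epsilon=1e-08.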