Home Life Improved Classifier (ONNX)

๊ฐœ์„ ๋œ ์ƒํ™œ ์นดํ…Œ๊ณ ๋ฆฌ ๋ถ„๋ฅ˜ ๋ชจ๋ธ (ONNX ๋ฒ„์ „)

๋ชจ๋ธ ์„ค๋ช…

์ด ๋ชจ๋ธ์€ ํ•œ๊ตญ์–ด ์ƒํ™œ ๊ด€๋ จ ์งˆ๋ฌธ์„ 8๊ฐœ ์นดํ…Œ๊ณ ๋ฆฌ๋กœ ๋ถ„๋ฅ˜ํ•˜๋Š” BERT ๊ธฐ๋ฐ˜ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

์นดํ…Œ๊ณ ๋ฆฌ

  1. ์ƒํ™œ๊ฒฝ์ œ/๊ณ„์•ฝ
  2. ์ƒํ™œ์ˆ˜๋ฆฌ/DIY
  3. ์Šค๋งˆํŠธํ™ˆ/๊ฐ€์ „
  4. ์š”๋ฆฌ/์‹ํ’ˆ๊ด€๋ฆฌ
  5. ์œก์•„/๋ฐ˜๋ ค๋™๋ฌผ
  6. ์ด์‚ฌ/์ธํ…Œ๋ฆฌ์–ด
  7. ์ฒญ์†Œ/์„ธํƒ
  8. ํ™˜๊ฒฝ/๊ฑด๊ฐ•

์„ฑ๋Šฅ

  • Test Accuracy: 87.5% (7 of 8 test cases)
  • +37.5 percentage points over the previous model (50% → 87.5%)

๊ฐœ์„  ์‚ฌํ•ญ

  • Hard negative learning to train on difficult cases
  • Data augmentation with compound keywords (e.g., "에어컨 필터 청소" / "air-conditioner filter cleaning")
  • Focal Loss (α=0.75, γ=1.5)
  • Early stopping to prevent overfitting

์‚ฌ์šฉ ๋ฐฉ๋ฒ•

import onnxruntime as ort
import numpy as np
from transformers import BertTokenizer

# ๋ชจ๋ธ ๋กœ๋“œ
tokenizer = BertTokenizer.from_pretrained("MongsangGa/home_life_improved-onnx")
session = ort.InferenceSession("model.onnx")

# ์ถ”๋ก 
text = "๊น€์น˜์ฐŒ๊ฐœ ๋ง›์žˆ๊ฒŒ ๋“์ด๋Š” ๋ฐฉ๋ฒ•"
inputs = tokenizer(text, truncation=True, padding="max_length", max_length=128, return_tensors="np")
ort_inputs = {
    "input_ids": inputs["input_ids"].astype(np.int64),
    "attention_mask": inputs["attention_mask"].astype(np.int64)
}
logits = session.run(None, ort_inputs)[0]
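The logits returned by session.run can be turned into a predicted category with a softmax and argmax. The snippet below uses hypothetical logits and assumes the output index order matches the category list above; verify the label mapping against the model's config before relying on it.

```python
import numpy as np

# Hypothetical logits for one input, shape (1, 8) as returned by session.run
logits = np.array([[0.1, 0.3, -0.2, 4.2, 0.0, 0.5, 1.1, -0.7]])

# Numerically stable softmax to convert logits into class probabilities
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

pred = int(probs.argmax(axis=-1)[0])       # index of the top-scoring category
confidence = float(probs[0, pred])          # its softmax probability
```

With the example logits, index 3 wins, which would correspond to 요리/식품관리 if the indices follow the listed order.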

ํ•™์Šต ์ƒ์„ธ

  • Base Model: klue/roberta-small
  • Learning Rate: 1e-5
  • Batch Size: 32
  • Focal Loss: ฮฑ=0.75, ฮณ=1.5
  • Early Stopping: patience=3
  • Training Data: 270,465 examples (original + augmented)
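The early-stopping rule above (patience=3) can be sketched as follows. This is a minimal illustration of the logic, tracking the best validation loss and stopping after three epochs without improvement; an actual run would typically use a framework callback.

```python
class EarlyStopper:
    """Stop training after `patience` epochs without validation improvement."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # consecutive epochs without improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

For example, validation losses 1.0, 0.9, 0.95, 0.96, 0.97 improve twice, then stall for three epochs, triggering the stop on the fifth.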

๋ผ์ด์„ ์Šค

Apache 2.0
