SetFit with mini1013/master_domain

This is a SetFit model that can be used for Text Classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
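
As a minimal sketch of this procedure (assuming setfit's standard Trainer API, with a two-example placeholder dataset standing in for the real training data):

from datasets import Dataset
from setfit import SetFitModel, Trainer

# Placeholder few-shot data; the real training set is described under Training Details.
train_dataset = Dataset.from_dict({
    "text": ["placeholder sock product title", "placeholder leg warmer product title"],
    "label": [0.0, 1.0],
})

# Loading a plain Sentence Transformer checkpoint initializes a fresh LogisticRegression head.
model = SetFitModel.from_pretrained("mini1013/master_domain")

trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the head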

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain (fine-tuned from klue/roberta-base)
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (labels 0.0 and 1.0)
  • Model size: 111M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055 (Efficient Few-Shot Learning Without Prompts)

Model Labels

Label 1.0 examples:
  • '자전거 등산 골프 겨울 발 다리토시 레그워머 브라운 디플코리아 (Digital Plus Korea)'
  • '국산 면 탁텔 겨울 방한 팔 다리 수면 토시 발 임산부 산후용품 수족냉증 겨울 방한 보온 기본 수면토시 그레이 세자매 양말'
  • '세븐다스 여자 레그워머 수면 여성 발토시 겨울 보온 SD001 그레이_FREE 아이보리'

Label 0.0 examples:
  • '[매장발송] 마리떼 11/6 배송 3PACK EMBROIDERY SOCKS multi OS 와이에스마켓'
  • '에브리데이 플러스 쿠션 트레이닝 크루 삭스(3켤레) SX6888-100 024'
  • '[롯데백화점]언더아머(백) 유니섹스 UA 코어 쿼터 양말 - 3켤레 1358344-100 1.LG 롯데백화점_'

Evaluation

Metrics

Label | Metric
all   | 0.7735
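
The metric name is not spelled out above; for SetFit cards it is typically accuracy. As a rough sketch of reproducing such a score (assuming accuracy, with eval_texts and eval_labels as placeholders for a real held-out split):

from setfit import SetFitModel

# Placeholders standing in for a real held-out split of product titles and 0.0/1.0 labels.
eval_texts = ["placeholder product title 1", "placeholder product title 2"]
eval_labels = [1.0, 0.0]

model = SetFitModel.from_pretrained("mini1013/master_cate_ac8")
preds = model.predict(eval_texts)
accuracy = sum(float(p) == float(y) for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy = {accuracy:.4f}")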

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_ac8")
# Run inference
preds = model("도톰 엄지 양말 발가락 여 타비 삭스 기모 보온 컬러 여자 두꺼운 무지 연브라운 김민주")
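
The call returns the predicted label for the input text (0.0 or 1.0 for this model); passing a list of strings should return one prediction per string.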

Training Details

Training Set Metrics

Training set | Min | Median | Max
Word count   | 4   | 10.82  | 24

Label | Training Sample Count
0.0   | 50
1.0   | 50

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
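
These correspond directly to setfit's TrainingArguments; as a sketch (distance_metric is left at its cosine default here):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# The hyperparameters above, expressed as setfit TrainingArguments.
# Tuples give separate values for the embedding phase and the classifier phase.
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,  # margin and distance_metric only affect margin-based losses
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)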

Training Results

Epoch  | Step | Training Loss | Validation Loss
0.0625 | 1    | 0.4226        | -
3.125  | 50   | 0.0022        | -
6.25   | 100  | 0.0001        | -
9.375  | 150  | 0.0001        | -
12.5   | 200  | 0.0001        | -
15.625 | 250  | 0.0001        | -
18.75  | 300  | 0.0001        | -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}