
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
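The second step can be sketched with scikit-learn. In this sketch, `encode` is a random stand-in for the fine-tuned bge-base-en-v1.5 body (it returns random 768-dimensional vectors instead of real embeddings), and the texts and labels are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def encode(texts):
    # Stand-in for the fine-tuned Sentence Transformer body:
    # maps each text to a 768-dim vector (bge-base's hidden size).
    return rng.normal(size=(len(texts), 768))

# Step 2: fit the LogisticRegression head on embedded training texts.
train_texts = ["grounded answer ..."] * 8 + ["ungrounded answer ..."] * 8
train_labels = [1] * 8 + [0] * 8
head = LogisticRegression(max_iter=1000)
head.fit(encode(train_texts), train_labels)

# Inference: embed the new text, then classify the embedding.
preds = head.predict(encode(["a new answer to evaluate"]))
```

In the real model, `encode` is the contrastively fine-tuned Sentence Transformer, so the embeddings of same-label texts are pulled together before the head is fit.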

Model Details

Model Description

Model Sources

Model Labels

Label Examples
Label 1
  • 'The answer accurately matches the document, confirming that a dime holds a monetary value of 10 cents, which is one-tenth of a dollar. This answer is relevant to the question and appropriately grounded in the provided document.\n\nThe final evaluation:'
  • 'The answer states that "Father Josh Carrier held the professorship of Chemistry and Physics at Notre Dame." However, the document clearly mentions "Father Joseph Carrier, C.S.C." as holding those positions, not Father Josh Carrier. There is no mention of a Father Josh Carrier in the provided document, making this a case of incorrect naming.\n\nFinal evaluation:'
  • 'The answer provided is well-aligned with the information from the document. It gives multiple methods for grinding weed without a grinder such as using scissors in a shot glass, a kitchen knife on a cutting board, and a coffee bean grinder. These methods reflect the detailed instructions provided in the document. \n\nFurthermore, it covers several alternative grinding techniques like using scissors in a glass, a pill bottle with a coin, a coffee grinder, a kitchen knife, and a mortar and pestle, which are all mentioned in the document.\n\nOverall, the answer accurately uses the information from the document to provide a comprehensive response to the question.\n\nFinal evaluation: ****'
Label 0
  • "The answer provided effectively responds to the question by listing relevant factors clearly and directly, which are also supported by the content in the provided document. \n\nMy reasoning includes:\n\n1. Relevance to Document: The factors mentioned in the answer such as aligning the learning opportunity with personal development goals, roles and responsibilities, and investment value align with the guidance provided in Document 1.\n\n2. Clarity and Completeness: The answer is clear, logically structured, and comprehensive. It lists each factor separately, making it easy to understand and follow.\n\n3. Additional Insights: The answer also brings in the importance of self-assessment, feedback, and staying informed, which are closely related to effectively determining the alignment of learning opportunities with one's role and goals. This is supported by Document 2, which discusses the importance of checking one's work, job and role, and staying informed.\n\nGiven the direct correlation between the response and the document, along with its clarity and thoroughness, the answer is well-founded and robust.\n\nFinal evaluation:"
  • 'The answer provided directly addresses the question and correctly lists the specific goals expected for editorial/content team members in their first month. The stated goals are clearly grounded in the text from the provided document. Therefore, the answer is clearly relevant and accurate.\n\n**Final Result: **'
  • 'The answer provided mentions some of the amenities that were lacking as per the visitor’s experience described in the document, namely the lack of a fridge, air conditioning, towels, soap, and a sufficient number of TV channels. Additionally, the absence of a restaurant was correctly highlighted.\n\nHowever, the statement "all hotels built before 2000 are legally required to have these amenities" is not mentioned in the document and seems to be an unsupported addition by the answer. This detracts from the overall accuracy.\n\nGiven that the answer mostly aligns well with the document but includes an erroneous legal claim not supported by the document, the final evaluation is ‘’.'

Evaluation

Metrics

Label Accuracy
all 0.6716
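Label accuracy here is plain classification accuracy: the fraction of evaluation examples whose predicted label matches the gold label. A minimal illustration with hypothetical labels:

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc = accuracy_score(y_true, y_pred)  # 6 of 8 correct -> 0.75
```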

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_newrelic_gpt-4o_cot-few_shot_remove_final_evaluation_e1_larger_train_1727")
# Run inference
preds = model("The answer succinctly addresses the question by stating that finance@ORGANIZATION_2.<89312988> should be contacted for questions about travel reimbursement. This is correctly derived from the provided document, which specifies that questions about travel costs and reimbursements should be directed to the finance email.

Final evaluation:")

Training Details

Training Set Metrics

Training set Min Median Max
Word count 16 76.8061 301
Label Training Sample Count
0 126
1 137

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
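These hyperparameters map onto SetFit's TrainingArguments. A sketch of how such a run could be configured, with parameter names taken from the list above (not re-verified against this exact SetFit version):

```python
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(16, 16),            # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```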

Training Results

Epoch Step Training Loss Validation Loss
0.0015 1 0.2816 -
0.0760 50 0.2483 -
0.1520 100 0.2315 -
0.2280 150 0.1599 -
0.3040 200 0.0965 -
0.3799 250 0.0284 -
0.4559 300 0.006 -
0.5319 350 0.0036 -
0.6079 400 0.0031 -
0.6839 450 0.0026 -
0.7599 500 0.0025 -
0.8359 550 0.0024 -
0.9119 600 0.0023 -
0.9878 650 0.0022 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Model size: 109M params (Safetensors, F32)