
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps (a minimal training sketch is shown after this list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
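
The sketch below illustrates that two-step procedure with SetFit's Trainer API; the example texts are hypothetical placeholders, not the model's actual training data:

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot examples; the real training set is summarized under "Training Details".
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is well supported by the document...\n\nFinal Evaluation:",
        "Reasoning:\nThe answer contradicts the document...\n\nFinal Evaluation:",
    ],
    "label": [1, 0],
})

# bge-base-en-v1.5 body with SetFit's default LogisticRegression head
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
# Step 1 (contrastive fine-tuning of the embedding body) and step 2 (fitting the head) both run here.
trainer.train()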

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2
  • Model size: 109M parameters (Safetensors, F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label 0 examples:
  • "Reasoning:\nThe answer provides key insights into the reasons behind the Denver Nuggets' offensive outburst in January, specifically mentioning the comfort and effectiveness of the team, the coaching strategy of taking the first available shot in the rhythm of the offense, and emphasizing pushing the ball after both makes and misses. These points are directly supported by the document. However, the inclusion of information about a new training technique involving virtual reality is not supported by the provided document and thus detracts from the answer's accuracy and relevance.\n\nFinal Evaluation:"
  • 'Reasoning:\nWhile the provided answer attempts to address the differences between film and digital photography, it contains several inaccuracies and inconsistencies with the document. The answer incorrectly states that film under-exposes better and compresses the range into the bottom end, whereas the document clearly states that film over-exposes better and compresses the range into the top end. Additionally, it mentions that the digital sensors capture all three colors at each point, which is inaccurate as per the document; it states digital sensors capture only one color at each point and then interpolate the data to create an RGB image. Moreover, the answer states the comparison is to 5MP digital sensors whereas the document talks about 10MP sensors.\n\nThese inaccuracies undermine the grounding and reliability of the answer.\n\nFinal Evaluation:'
  • "Reasoning:\nThe provided answer addresses a topic entirely unrelated to the given question. The question is about the main conflict in the third book of the Arcana Chronicles by Kresley Cole, but the answer discusses the result of an MMA event featuring Antonio Rogerio Nogueira. There is no connection between the document's content and the question, leading to a clear lack of context grounding, relevance, and conciseness.\n\nFinal Evaluation: \nEvaluation:"
Label 1 examples:
  • 'Reasoning:\nThe answer provided effectively draws upon the document to list several best practices that a web designer can incorporate into their client discovery and web design process to avoid unnecessary revisions and conflicts. Each practice mentioned—getting to know the client and their needs, signing a detailed contract, and communicating honestly about extra charges—is directly supported by points raised in the document. The answer is relevant, concise, and closely aligned with the specific question asked.\n\nFinal Evaluation:'
  • "Reasoning:\nThe answer provided correctly identifies the author's belief on what creates a connection between the reader and the characters in a story. It states that drawing from the author's own painful and emotional experiences makes the story genuine and relatable, thus engaging the reader. This is consistent with the document, which emphasizes the importance of authenticity and emotional depth derived from the author's personal experiences to make characters and their struggles real to the reader. The answer is relevant to the question and concisely captures the essence of the author's argument without includingunnecessary information.\n\nFinal Evaluation:"
  • 'Reasoning:\nThe answer correctly identifies Mauro Rubin as the CEO of JoinPad during the event at Talent Garden Calabiana, Milan. This is well-supported by the context provided in the document, which specifically mentions Mauro Rubin, the JoinPad CEO, taking the stage at the event. The answer is relevant to the question and concise.\n\nFinal Evaluation: \nEvaluation:'

Evaluation

Metrics

Label | Accuracy
all   | 0.9333
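
A figure like this can be recomputed from the model's predictions on a held-out split. The snippet below is only a sketch: eval_texts and eval_labels are hypothetical placeholders for an evaluation set that is not included in this card.

from setfit import SetFitModel

model = SetFitModel.from_pretrained("Netta1994/setfit_baai_rag_ds_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remove_fi")

# Placeholder evaluation data; substitute the actual held-out split.
eval_texts = ["Reasoning:\nThe answer is fully supported by the document...\n\nFinal Evaluation:"]
eval_labels = [1]

preds = model.predict(eval_texts)
accuracy = sum(int(p) == int(y) for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"Accuracy: {accuracy:.4f}")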

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_rag_ds_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remove_fi")
# Run inference
preds = model("Reasoning:
The answer directly addresses the question by stating that China's Ning Zhongyan won the gold medal in the men's 1,500m final at the speed skating World Cup. This information is clearly found in the document, which confirms Ning's achievement at the event in Stavanger, Norway. The answer is concise, relevant, and well-supported by the given context, avoiding extraneous details.

Final Evaluation:")

Training Details

Training Set Metrics

Training set | Min | Median | Max
Word count   | 33  | 74.0   | 125

Label | Training Sample Count
0     | 34
1     | 37

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
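
These entries mirror the fields of SetFit's TrainingArguments. As a rough illustration (not necessarily the exact invocation used to train this model), they could be passed to the trainer like so:

from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
# loss (CosineSimilarityLoss) and distance_metric (cosine distance) are SetFit's defaults.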

Training Results

Epoch  | Step | Training Loss | Validation Loss
0.0056 | 1    | 0.2349        | -
0.2809 | 50   | 0.2502        | -
0.5618 | 100  | 0.1567        | -
0.8427 | 150  | 0.0112        | -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1
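
To approximate this environment, the Python packages above can be pinned at install time (a suggested command; install PyTorch with the matching CUDA build separately for your platform):

pip install "setfit==1.1.0" "sentence-transformers==3.1.0" "transformers==4.44.0" "datasets==2.19.2" "tokenizers==0.19.1"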

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}