
SetFit with sentence-transformers/all-mpnet-base-v2

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps, sketched in code after the list:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
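
A minimal sketch of this two-step procedure with the SetFit Trainer API (the training texts and arguments below are illustrative placeholders, not the data actually used; the real settings are listed under Training Hyperparameters):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot examples; the real training set used "yes"/"no" labeled reader comments.
train_dataset = Dataset.from_dict({
    "text": [
        "A comment engaging with the article's subject.",
        "An off-topic aside.",
    ],
    "label": ["yes", "no"],
})

# Start from the pretrained embedding model; SetFit attaches the LogisticRegression head.
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
# train() runs step 1 (contrastive fine-tuning of the body) and
# step 2 (fitting the classification head on the tuned embeddings).
trainer.train()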

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (yes, no)
  • Model size: 109M parameters (F32 tensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label: yes
  • 'The First Afterlife of Pope Benedict XVI Pope Benedict’s legacy will be felt across decades or even centuries. The first pope to resign was Celestine V, born Pietro Da Morrone, who was living the life of a pious hermit when he was elevated to the papacy in 1294, in his 80s, to break a two-year deadlock in the College of Cardinals. Feeling overmastered by the job, he soon resigned in the expectation that he could return to his monastic existence. Instead, he was imprisoned by his successor, Boniface VIII, who feared that some rival faction might make Celestine an antipope. Pope Benedict’s legacy will be felt across decades or even centuries.\n'
  • 'Here is the statement on Ratzinger's death from SNAP, an organization representing victims of abuse from the Catholic Church: "In our view, the death of Pope Benedict XVI is a reminder that, much like John Paul II, Benedict was more concerned about the church’s deteriorating image and financial flow to the hierarchy versus grasping the concept of true apologies followed by true amends to victims of abuse. The rot of clergy sexual abuse of children and adults, even their own professed religious, runs throughout the Catholic church, to every country, and we now have incontrovertible evidence, all the way to the top.Any celebration that marks the life of abuse enablers like Benedict must end. It is past time for the Vatican to refocus on change: tell the truth about known abusive clergy, protect children and adults, and allow justice to those who have been hurt. Honoring Pope Benedict XVI now is not only wrong. It is shameful.It is almost a year after a report into decades of abuse allegations by a law firm in Germany has shown that Pope Benedict XVI did not take action against abusive priests in four child abuse cases while he was Archbishop (Josef Ratzinger). In our view, Pope Benedict XVI is taking decades of the church’s darkest secrets to his grave with him..."https://www.snapnetwork.org/snap_reacts_to_the_death_of_pope_benedict_xvi\n'
  • 'I found the statement "While Benedict felt that celebrations of the new Mass were frequently unedifying and even banal..." to be flawed when compared to the post-Vatican II church in the USA. Where was Benedict when Catholic churches in the US held "folk masses" using guitars and keyboard instruments instead of organs? My confirmation ceremony in 1970 in NJ was held that way and one I remember clearly to this very day.If Benedict was really looking for the "cosmic" dimension of the liturgy, maybe he should have attended Maronite Catholic Masses on a regular basis. At the very least, he would have observed the Maronites' liturgical rituals which date back into the First Century A.D., not to mention the use of Aramaic, the language of Christ, during the consecration of the Eucharist.\n'

Label: no
  • 'I’m not a Rogan fan, but this is just a few adventurers adventuring. As long as they are experienced divers with the proper equipment and aware of the dangers, who knows what they might find at the bottom of the East River? Inquiring minds want to know.\n'
  • 'Mike DiNovi One thing we can agree on is that, depending on our individual backgrounds, there are for each of us, a whole treasure trove of "missing" words. Often but not always, these words may be found in the crosswords, particularly older crosswords. But to ask for all of them to be included here would be asking to change the whole gestalt, as well as immodest. But I think longtime players of the Bee still see some value and delight in posting missing words. It may not be included in the official list but the Hive gives them currency. I personally enjoy all the "missing" word posts, however redundant they often are. I find in them some commonality and I have learned also probably several dozen new words - unofficial and official - including many chemistry words. I was a lousy chemistry student but I absolutely love the vocabulary of it.\n'
  • 'If "work on what comes next" and "innovation" means expanding the definition of "life" to mean all stages of life from womb to tomb (e.g., paternal leave, pre-school, school choice, basic universal healthcare, baby bonds, etc.) that would be a positive and hopefully inclusive step forward that might bridge the awful divide we see across the country and even in the comments of this article.\n'

Evaluation

Metrics

Label    Accuracy
all      1.0
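
A minimal sketch of how such an accuracy figure could be recomputed with the model's predict method (eval_texts and eval_labels are placeholders, not the card's actual evaluation split):

from sklearn.metrics import accuracy_score
from setfit import SetFitModel

model = SetFitModel.from_pretrained("davidadamczyk/setfit-model-10")
eval_texts = ["A held-out reader comment ..."]  # placeholder evaluation texts
eval_labels = ["yes"]                           # placeholder gold labels
print(accuracy_score(eval_labels, model.predict(eval_texts)))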

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("davidadamczyk/setfit-model-10")
# Run inference
preds = model("Willy Stone  Wouldn't it be a pity if all ancient art could only be seen in the location where it was made?
")

Training Details

Training Set Metrics

Training set    Min    Median     Max
Word count      15     123.625    286

Label    Training Sample Count
no       18
yes      22

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 120
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
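
These bullets mirror fields of setfit.TrainingArguments; below is a sketch of the assumed one-to-one mapping (the correspondence is an assumption based on the parameter names, and distance_metric is left at its cosine-distance default since it only affects triplet-style losses):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=120,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)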

Training Results

Epoch     Step    Training Loss    Validation Loss
0.0017    1       0.365            -
0.0833    50      0.1213           -
0.1667    100     0.0018           -
0.25      150     0.0004           -
0.3333    200     0.0002           -
0.4167    250     0.0002           -
0.5       300     0.0001           -
0.5833    350     0.0001           -
0.6667    400     0.0001           -
0.75      450     0.0001           -
0.8333    500     0.0001           -
0.9167    550     0.0001           -
1.0       600     0.0001           -

Framework Versions

  • Python: 3.10.13
  • SetFit: 1.1.0
  • Sentence Transformers: 3.0.1
  • Transformers: 4.45.2
  • PyTorch: 2.4.0+cu124
  • Datasets: 2.21.0
  • Tokenizers: 0.20.0
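
To reproduce this environment, the versions above can be pinned at install time (a suggested command, not part of the original card):

pip install setfit==1.1.0 sentence-transformers==3.0.1 transformers==4.45.2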

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}