---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- Precision_micro
- Precision_weighted
- Precision_samples
- Recall_micro
- Recall_weighted
- Recall_samples
- F1-Score
- accuracy
widget:
- text: >-
    Amended proposal for a Regulation of the European Parliament and of the
    Council on establishing the framework for achieving climate neutrality and
    amending Regulation (EU) 2018/1999 (European Climate Law). COM(2020) 563
    (currently undergoing the EU internal legislative process)↩︎. Council
    conclusions of 7 March 2011 on European Pact for Gender Equality
    (2011-2020)↩︎. Council conclusions of 9 April 2019, Towards an ever more
    sustainable Union by 2030↩︎. Council conclusions of 15 May 2017 on
    Indigenous Peoples↩︎. Regulation (EU) 2018/1999↩︎.
- text: >-
    Development of 15,000 ha of shallows and irrigated areas and their
    exploitation for the intensive rice cultivation system. Agriculture,
    water. 705. 28. Development of research on health and climate change:
    total of three activities. Health. 690. 29. Audit of plans to develop all
    classified or protected forests for updating purposes. Forests-land use.
    685. 30. Strengthening of capabilities to forecast and respond to
    phenomena associated with climate change: creation of an MT health care
    monitoring centre. Health. 680. 31. Participative development of
    sustainable land.
- text: >-
    The Ministry of Health notes that any adaptation work should prioritise
    vulnerable populations. It also considers that more work is needed in
    health system planning, to accommodate a potential increase in migrants
    and refugees
- text: >-
    The overall outcome is to ensure that projects and programmes are gender
    responsive: meaning that it aims to go beyond gender sensitivity to
    actively promote gender equality and women’s empowerment. The country is
    committed to achieving SDG 5: Gender equality by promoting low carbon
    development where men and women contributions to climate change mitigation
    and adaptation are recognized and valued, existing gender inequalities are
    reduced and opportunities for effective empowerment for women are
    promoted.
- text: >-
    Cities depend heavily on other cities and regions to provide them with
    indispensable services such as food, water and energy and the
    infrastructure to deliver them. Ecosystem services from surrounding
    regions provide fresh air, store or drain flood water as well as drinking
    water
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: Precision_micro
      value: 0.7692307692307693
      name: Precision_Micro
    - type: Precision_weighted
      value: 0.7748199704721445
      name: Precision_Weighted
    - type: Precision_samples
      value: 0.7692307692307693
      name: Precision_Samples
    - type: Recall_micro
      value: 0.7692307692307693
      name: Recall_Micro
    - type: Recall_weighted
      value: 0.7692307692307693
      name: Recall_Weighted
    - type: Recall_samples
      value: 0.7692307692307693
      name: Recall_Samples
    - type: F1-Score
      value: 0.7692307692307693
      name: F1-Score
    - type: accuracy
      value: 0.7692307692307693
      name: Accuracy
---
SetFit with sentence-transformers/all-mpnet-base-v2
This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a OneVsRestClassifier instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
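As a rough sketch of that two-stage flow (this is not the script used for this checkpoint; the dataset file names, split names, and label columns below are placeholders), a comparable model can be trained with the setfit Trainer:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset with a "text" column and a multi-hot "label" column.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

# "one-vs-rest" attaches a OneVsRestClassifier head, matching this model.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()     # step 1: contrastive fine-tuning of the embedding body,
                    # step 2: fitting the classification head on its embeddings
metrics = trainer.evaluate()
```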
Model Details
Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
- Classification head: a OneVsRestClassifier instance
- Maximum Sequence Length: 384 tokens
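The body and head described above can be inspected on a loaded model; the attribute names below come from the setfit and sentence-transformers APIs, and the printed values should correspond to what this card reports:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("leavoigt/vulnerability_target")

# Sentence Transformer body and its truncation length (384 tokens per this card)
print(type(model.model_body).__name__, model.model_body.max_seq_length)

# Classification head (a scikit-learn OneVsRestClassifier per this card)
print(type(model.model_head).__name__)
```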
Model Sources
- Repository: SetFit on GitHub
- Paper: Efficient Few-Shot Learning Without Prompts
- Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
Evaluation
Metrics
Label | Precision_Micro | Precision_Weighted | Precision_Samples | Recall_Micro | Recall_Weighted | Recall_Samples | F1-Score | Accuracy |
---|---|---|---|---|---|---|---|---|
all | 0.7692 | 0.7748 | 0.7692 | 0.7692 | 0.7692 | 0.7692 | 0.7692 | 0.7692 |
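Micro, weighted, and samples averaging are the standard multi-label aggregation modes from scikit-learn. As a sketch (the gold and predicted label matrices below are made up, since the evaluation split used here is not published), metrics of this kind can be computed like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Made-up multi-hot arrays: one row per example, one column per label.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1]])

for avg in ("micro", "weighted", "samples"):
    print(avg, precision_score(y_true, y_pred, average=avg),
          recall_score(y_true, y_pred, average=avg))
print("f1  ", f1_score(y_true, y_pred, average="micro"))
print("acc ", accuracy_score(y_true, y_pred))  # exact-match (subset) accuracy
```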
Uses
Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("leavoigt/vulnerability_target")
# Run inference
preds = model("The Ministry of Health notes that any adaptation work should prioritise vulnerable populations. It also considers that more work is needed in health system planning, to accommodate a potential increase in migrants and refugees")
```
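Continuing from the snippet above, SetFitModel also exposes predict_proba if you want per-label scores rather than hard predictions; the input sentence here is just one of the widget examples from the metadata:

```python
# Per-label probabilities from the OneVsRestClassifier head
probs = model.predict_proba([
    "Cities depend heavily on other cities and regions to provide them with "
    "indispensable services such as food, water and energy and the "
    "infrastructure to deliver them."
])
print(probs)
```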
Training Details
Training Set Metrics
Training set | Min | Median | Max |
---|---|---|---|
Word count | 15 | 72.4819 | 238 |
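These figures are presumably simple whitespace word counts over the training texts; a minimal sketch of how such statistics can be computed (the texts below are placeholders, since the training set is not published):

```python
import statistics

# Placeholder training texts; substitute the real training split.
train_texts = [
    "The Ministry of Health notes that any adaptation work should prioritise vulnerable populations.",
    "Cities depend heavily on other cities and regions for food, water and energy.",
]

word_counts = [len(text.split()) for text in train_texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
```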
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
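These entries mirror the fields of setfit's TrainingArguments. A reconstruction of the same configuration might look as follows (this is inferred from the card, not the original training script; the distance_metric and margin values listed above only apply to triplet-style losses and match the library defaults, so they are omitted here):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,   # cosine_distance / margin=0.25 are the defaults
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```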
Training Results
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.0012 | 1 | 0.2938 | - |
0.0602 | 50 | 0.2188 | - |
0.1205 | 100 | 0.1733 | - |
0.1807 | 150 | 0.1578 | - |
0.2410 | 200 | 0.02 | - |
0.3012 | 250 | 0.0028 | - |
0.3614 | 300 | 0.0004 | - |
0.4217 | 350 | 0.0011 | - |
0.4819 | 400 | 0.0008 | - |
0.5422 | 450 | 0.0005 | - |
0.6024 | 500 | 0.0002 | - |
0.6627 | 550 | 0.0002 | - |
0.7229 | 600 | 0.0004 | - |
0.7831 | 650 | 0.0332 | - |
0.8434 | 700 | 0.0003 | - |
0.9036 | 750 | 0.0003 | - |
0.9639 | 800 | 0.0004 | - |
Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.25.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.13.3
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```