Model Card for unsup-simcse-bert-base-uncased
Model Details
Model Description
unsup-simcse-bert-base-uncased is a sentence-embedding model: a BERT-base encoder trained with the unsupervised SimCSE contrastive objective on randomly sampled English Wikipedia sentences (see Training Details below).
- Developed by: Princeton NLP group
- Shared by [Optional]: Hugging Face
- Model type: Feature Extraction
- Language(s) (NLP): English
- License: More information needed
- Related Models:
- Parent Model: BERT
- Resources for more information:
- GitHub Repo: https://github.com/princeton-nlp/SimCSE
- Associated Paper: https://arxiv.org/abs/2104.08821
Uses
Direct Use
This model can be used for the task of feature extraction, i.e. producing sentence embeddings; see How to Get Started with the Model below for an example.
Downstream Use [Optional]
More information needed
Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
Training Details
Training Data
The model creators note in the GitHub repository:
We train unsupervised SimCSE on 10⁶ randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).
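The unsupervised objective described in the associated paper encodes each sentence twice, relying on independent dropout masks as minimal data augmentation, and trains with an InfoNCE-style contrastive loss over in-batch negatives (the paper reports a temperature of 0.05). Below is a minimal sketch of that loss, not the authors' training code; the function name and tensor shapes are illustrative.

import torch
import torch.nn.functional as F

def unsup_simcse_loss(emb_a, emb_b, temperature=0.05):
    # emb_a and emb_b are (batch, dim) embeddings of the SAME sentences,
    # produced by two forward passes so that dropout acts as augmentation.
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    # Cosine similarities between all pairs in the batch, scaled by temperature.
    sim = emb_a @ emb_b.T / temperature
    # The positive for sentence i is its own second view, i.e. label i.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)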
Training Procedure
Preprocessing
More information needed
Speeds, Sizes, Times
More information needed
Evaluation
Testing Data, Factors & Metrics
Testing Data
The model creators note in the associated paper:
Our evaluation code for sentence embeddings is based on a modified version of SentEval. It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting and reports Spearman's correlation. See the associated paper (Appendix B) for evaluation details.
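For illustration, Spearman's correlation compares the ranking of the model's similarity scores with the ranking of human judgments. A minimal sketch with made-up scores (not the SentEval pipeline):

from scipy.stats import spearmanr

# Hypothetical cosine similarities from the model and gold STS annotations
# (0-5 scale) for the same sentence pairs.
model_scores = [0.92, 0.31, 0.75, 0.10]
gold_scores = [4.8, 1.2, 4.1, 0.4]

rho, _ = spearmanr(model_scores, gold_scores)
print(f"Spearman's correlation: {rho:.3f}")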
Factors
More information needed
Metrics
More information needed
Results
More information needed
Model Examination
The model creators note in the associated paper:
Uniformity and alignment. We also observe that (1) though pre-trained embeddings have good alignment, their uniformity is poor (i.e., the embeddings are highly anisotropic); (2) post-processing methods like BERT-flow and BERT-whitening greatly improve uniformity but also suffer a degeneration in alignment; (3) unsupervised SimCSE effectively improves uniformity of pre-trained embeddings whereas keeping a good alignment; (4) incorporating supervised data in SimCSE further amends alignment.
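Alignment and uniformity here are the metrics of Wang and Isola (2020), which the paper adopts: alignment is the expected distance between embeddings of positive pairs, and uniformity is the log of the average pairwise Gaussian potential. A minimal sketch of the two metrics under those definitions (not the authors' evaluation code):

import torch

def alignment(x, y, alpha=2):
    # x, y: L2-normalized (n, dim) embeddings of positive pairs.
    # Lower is better: positives should stay close together.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    # x: L2-normalized (n, dim) embeddings. Lower is better: points
    # should spread uniformly over the unit hypersphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()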
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Nvidia 3090 GPUs with CUDA 11
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
Technical Specifications [optional]
Model Architecture and Objective
More information needed
Compute Infrastructure
More information needed
Hardware
More information needed
Software
More information needed
Citation
BibTeX:
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
Glossary [optional]
More information needed
More Information [optional]
More information needed
Model Card Authors [optional]
Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team
Model Card Contact
If you have any questions related to the code or the paper, feel free to email Tianyu (tianyug@cs.princeton.edu) and Xingcheng (yxc18@mails.tsinghua.edu.cn). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please describe the problem in detail so we can help you better and more quickly!
How to Get Started with the Model
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")
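Continuing from the snippet above, a short sketch of extracting and comparing sentence embeddings; the sentences are illustrative, and pooling via pooler_output follows the example in the SimCSE repository (mean pooling is another common choice):

import torch

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # Pool each sentence to a single vector via the model's pooler output.
    embeddings = model(**inputs).pooler_output

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.3f}")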