---
license: apache-2.0
---

# **m**utual **i**nformation **C**ontrastive **S**entence **E**mbedding (**miCSE**)

[![arXiv](https://img.shields.io/badge/arXiv-2211.04928-29d634.svg)](https://arxiv.org/abs/2211.04928)

Language model of the arXiv preprint "_**miCSE**: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings_".

The **miCSE** language model is trained for sentence similarity computation. Training imposes alignment between the attention patterns of different views (embeddings of augmentations) during contrastive learning. Learning sentence embeddings with **miCSE** thus entails enforcing syntactic consistency across the augmented views of every sentence, which makes contrastive self-supervised learning more sample-efficient. Sentence representations correspond to the embedding of the _**[CLS]**_ token.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sap-ai-research/<----Enter Model Name---->")
model = AutoModel.from_pretrained("sap-ai-research/<----Enter Model Name---->")
```

A full embedding example appears at the end of this card.

## Benchmark

Model results on the SentEval benchmark (an evaluation sketch appears at the end of this card):

| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg.  |
|:-----:|:-----:|:-----:|:-----:|:-----:|:------------:|:---------------:|:-----:|
| 71.71 | 83.09 | 75.46 | 83.13 | 80.22 | 79.70        | 73.62           | 78.13 |

## Citations

If you use this code in your research or want to refer to our work, please cite:

```bibtex
@article{Klein2022miCSEMI,
  title={miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings},
  author={Tassilo Klein and Moin Nabi},
  journal={ArXiv},
  year={2022},
  volume={abs/2211.04928}
}
```

## Authors

- [Tassilo Klein](https://tjklein.github.io/)
- [Moin Nabi](https://moinnabi.github.io/)
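
## Example: computing sentence similarity

As a usage sketch (not code from the paper), the snippet below shows one way to obtain sentence embeddings from the _**[CLS]**_ token and compare two sentences with cosine similarity. The model identifier is the placeholder from the Usage section and must be filled in; the example sentences are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder model identifier (fill in the released model name).
model_name = "sap-ai-research/<----Enter Model Name---->"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = [
    "A man is playing a guitar.",
    "Someone is performing music on stage.",
]

# Tokenize both sentences into one padded batch.
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Sentence representation = embedding of the [CLS] token (first position).
embeddings = outputs.last_hidden_state[:, 0, :]

# Cosine similarity between the two sentence embeddings.
similarity = torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:2]).item()
print(f"Cosine similarity: {similarity:.4f}")
```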
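
## Example: SentEval evaluation (sketch)

The sketch below outlines how scores like those in the Benchmark table could be obtained with the [SentEval toolkit](https://github.com/facebookresearch/SentEval). It assumes SentEval and its STS data are installed locally; `PATH_TO_DATA` is a placeholder, and the `params` values are illustrative defaults, not the exact configuration behind the reported numbers.

```python
import torch
import senteval  # https://github.com/facebookresearch/SentEval
from transformers import AutoTokenizer, AutoModel

model_name = "sap-ai-research/<----Enter Model Name---->"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def prepare(params, samples):
    # No task-specific preparation is needed for this encoder.
    return

def batcher(params, batch):
    # SentEval passes each batch as a list of tokenized sentences.
    sentences = [" ".join(tokens) for tokens in batch]
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Use the [CLS] token embedding as the sentence representation.
    return outputs.last_hidden_state[:, 0, :].numpy()

# 'task_path' must point to SentEval's downloaded data directory (placeholder).
params = {"task_path": "PATH_TO_DATA", "usepytorch": True, "kfold": 10}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(["STS12", "STS13", "STS14", "STS15", "STS16",
                   "STSBenchmark", "SICKRelatedness"])
print(results)
```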