---
library_name: zeroshot_classifier
tags:
- transformers
- sentence-transformers
- zeroshot_classifier
license: mit
datasets:
- claritylab/UTCD
language:
- en
pipeline_tag: zero-shot-classification
metrics:
- accuracy
---

# Zero-shot Vanilla Bi-Encoder

This is a [sentence-transformers](https://www.SBERT.net) model. It was introduced in the Findings of ACL'23 paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. The code for training and evaluating this model can be found [here](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).

## Model description

This model is intended for zero-shot text classification. It was trained as a baseline via the dual-encoding classification framework, using the aspect-normalized [UTCD](https://huggingface.co/datasets/claritylab/UTCD) dataset.

- **Finetuned from model:** [`bert-base-uncased`](https://huggingface.co/bert-base-uncased)

## Usage

You can use the model like this:

```python
>>> from sentence_transformers import SentenceTransformer, util as sbert_util

>>> model = SentenceTransformer(model_name_or_path='claritylab/zero-shot-vanilla-bi-encoder')
>>> text = "I'd like to have this track onto my Classical Relaxations playlist."
>>> labels = [
...     'Add To Playlist', 'Book Restaurant', 'Get Weather', 'Play Music', 'Rate Book',
...     'Search Creative Work', 'Search Screening Event'
... ]

>>> text_embed = model.encode(text)
>>> label_embeds = model.encode(labels)

>>> scores = [sbert_util.cos_sim(text_embed, lb_embed).item() for lb_embed in label_embeds]
>>> print(scores)
[0.7219685912132263, -0.011121425777673721, 0.04929959028959274, 0.6653788089752197, 0.07093366980552673, 0.2897151708602905, 0.06133288890123367]
```
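To turn the cosine-similarity scores into a classification decision, pick the label with the highest score. A minimal sketch in plain Python, reusing the scores printed in the example above so no model download is needed:

```python
# Labels and scores copied from the example output above.
labels = [
    'Add To Playlist', 'Book Restaurant', 'Get Weather', 'Play Music', 'Rate Book',
    'Search Creative Work', 'Search Screening Event',
]
scores = [
    0.7219685912132263, -0.011121425777673721, 0.04929959028959274, 0.6653788089752197,
    0.07093366980552673, 0.2897151708602905, 0.06133288890123367,
]

# The prediction is the label whose embedding is most similar to the text embedding.
best_idx = max(range(len(labels)), key=lambda i: scores[i])
print(labels[best_idx])  # -> Add To Playlist
```

Note that the raw cosine similarities are not calibrated probabilities; for a ranked shortlist of candidate labels, sort the label indices by score instead of taking only the maximum.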