Contrastive Learning Model
This is a contrastive learning model built on distilbert-base-uncased. It returns the [CLS] token embedding of the input text, which can be used for similarity comparisons.
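For reference, the following is a generic sketch of what a [CLS]-pooling encoder of this kind can look like. The class name CLSPoolingEncoder is an illustrative assumption; the actual ContrastiveModel is defined in this repository's modeling.py and may differ.
import torch.nn as nn
from transformers import AutoModel, PretrainedConfig
class CLSPoolingEncoder(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, config: PretrainedConfig):
        super().__init__()
        # Backbone transformer (distilbert-base-uncased for this model)
        self.encoder = AutoModel.from_config(config)
    def forward(self, input_ids, attention_mask=None, **kwargs):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Use the hidden state of the first ([CLS]) token as the sentence embedding
        return outputs.last_hidden_state[:, 0]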
Usage
from transformers import AutoTokenizer, AutoConfig
from modeling import ContrastiveModel  # Import the model class directly from this repo's modeling.py
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("SajayR/contrastive-model")
config = AutoConfig.from_pretrained("SajayR/contrastive-model")
model = ContrastiveModel(config)
model.load_state_dict(torch.load("pytorch_model.bin"))  # path to the downloaded checkpoint
model.eval()
# Prepare input
text = "Hello, world!"
inputs = tokenizer(text, return_tensors="pt")
# Get embeddings (shape: [batch_size, hidden_size])
with torch.no_grad():
    embeddings = model(**inputs)
The model returns a tensor of shape [batch_size, hidden_size] containing the [CLS] token embedding for each input.
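For example, to compare two sentences, encode them in one batch and take the cosine similarity of their embeddings. This is a minimal sketch that reuses the tokenizer and model loaded above; the second sentence is just an illustrative input.
import torch.nn.functional as F
texts = ["Hello, world!", "Hi there, planet!"]
batch = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model(**batch)  # shape: [2, hidden_size]
# Cosine similarity between the two [CLS] embeddings
score = F.cosine_similarity(emb[0:1], emb[1:2]).item()
print(f"similarity: {score:.3f}")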