Contrastive user encoder (multi-post)

This model is a DistilBertModel obtained by fine-tuning distilbert-base-uncased with an author-based triplet loss.

Details

Training and evaluation details are provided in our EMNLP Findings paper:

  • Rocca, R., & Yarkoni, T. (2022). Language as a fingerprint: Self-supervised learning of user encodings using transformers. To appear in Findings of the Association for Computational Linguistics: EMNLP 2022.

Training

We fine-tuned DistilBERT on triplets consisting of:

  • a set of Reddit submissions from a given user (10 posts, called "anchors") - see rbroc/contrastive-user-encoder-singlepost for an equivalent model trained on a single anchor;
  • an additional post from the same user (a "positive example");
  • a post from a different, randomly selected user (the "negative example")
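Under an assumed data layout (a dict mapping each user id to a list of their posts), triplet construction can be sketched as below. The names `user_posts` and `sample_triplet` are illustrative, not taken from the paper's released code:

```python
import random

def sample_triplet(user_posts, user, n_anchors=10, rng=None):
    """Sample one (anchors, positive, negative) triplet for a given user.

    user_posts: dict mapping user id -> list of posts (assumed layout;
    the paper's actual data pipeline may differ).
    """
    rng = rng or random.Random()
    # Draw n_anchors + 1 distinct posts: anchors plus one positive example
    picks = rng.sample(user_posts[user], n_anchors + 1)
    anchors, positive = picks[:n_anchors], picks[n_anchors]
    # Negative example: a post from a different, randomly selected user
    other_user = rng.choice([u for u in user_posts if u != user])
    negative = rng.choice(user_posts[other_user])
    return anchors, positive, negative
```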

To compute the loss, we use the [CLS] encodings of the anchors, the positive example, and the negative example from the last layer of the DistilBERT encoder. We average the anchor encodings feature-wise and minimize the triplet loss $$\max(||\overline{f(A)} - f(p)|| - ||\overline{f(A)} - f(n)|| + \alpha, 0)$$

where:

  • $\overline{f(A)}$ is the feature-wise average of the anchor encodings;
  • $f(n)$ is the [CLS] encoding of the negative example;
  • $f(p)$ is the [CLS] encoding of the positive example;
  • $\alpha$ is a tunable margin hyperparameter, set here to $\alpha = 1.0$.
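As a framework-agnostic sketch, the standard triplet objective with feature-wise anchor averaging, $\max(d_{pos} - d_{neg} + \alpha, 0)$, looks like the following (NumPy here; names are illustrative, the actual training code may differ):

```python
import numpy as np

def triplet_loss(anchor_encodings, positive, negative, margin=1.0):
    """Triplet loss over [CLS] encodings (illustrative NumPy sketch).

    anchor_encodings: array of shape (n_anchors, hidden_dim)
    positive, negative: arrays of shape (hidden_dim,)
    """
    # Feature-wise average of the anchor encodings
    anchor_mean = np.asarray(anchor_encodings).mean(axis=0)
    d_pos = np.linalg.norm(anchor_mean - positive)  # distance to positive example
    d_neg = np.linalg.norm(anchor_mean - negative)  # distance to negative example
    # Hinge: zero once the negative is at least `margin` farther than the positive
    return max(d_pos - d_neg + margin, 0.0)
```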

Evaluation and usage

The model yields performance advantages on downstream user-based classification tasks.

We encourage usage and benchmarking on tasks involving:

  • prediction of user traits (e.g., personality);
  • extraction of user-aware text encodings (e.g., style modeling);
  • contextualized text modeling, where standard text representations are complemented with compact user representations
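As a minimal example of such downstream usage, per-post [CLS] encodings (assumed to have already been extracted with the model) can be averaged into compact user representations and used for nearest-centroid user identification. The helper names below are hypothetical:

```python
import numpy as np

def user_centroid(post_encodings):
    """Average per-post [CLS] encodings into one compact user representation."""
    return np.asarray(post_encodings).mean(axis=0)

def nearest_user(centroids, post_encoding):
    """Assign a held-out post to the user with the closest centroid.

    centroids: dict mapping user id -> averaged user encoding
    """
    return min(centroids, key=lambda u: np.linalg.norm(centroids[u] - post_encoding))
```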

Limitations

Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment.

Furthermore, our self-supervised approach imposes little or no control over biases, which the models may actively exploit as part of their heuristics in contrastive and downstream tasks.
