Flattery Prediction from Text

This model was fine-tuned to predict flattery in transcripts of English earnings calls. It was introduced in the paper "This Paper Had the Smartest Reviewers -- Flattery Detection Utilising an Audio-Textual Transformer-Based Approach", which was accepted at INTERSPEECH 2024.

Model Details

Model Description

This is a fine-tuned variant of RoBERTa-base. It was trained on a dataset of single sentences uttered in business calls, each labeled for flattery in a binary manner. The training set comprises 7167 sentences; a further 1878 sentences were used as the development set. For more details, please refer to the paper (TODO), in particular Section 2 for the dataset, Section 3.1 for the training procedure, and Section 4.1 for the results. The checkpoint provided here was trained on human gold-standard transcripts. It achieves Unweighted Average Recall (UAR) values of .8512 and .8865 on the development and test partitions, respectively.
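UAR is the recall averaged over both classes with equal weight, independent of class frequency. The following is a minimal sketch of how it can be computed for binary labels; the labels and predictions shown are hypothetical and not taken from the dataset.

from sklearn.metrics import recall_score

# hypothetical gold labels (1 = flattery) and model predictions
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# UAR = macro-averaged recall, i.e., the unweighted mean of per-class recall
uar = recall_score(y_true, y_pred, average='macro')
print(f'UAR: {uar:.4f}')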

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]

Uses

The following snippet illustrates how to use the model.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch import sigmoid
import torch

# initialize model and tokenizer
checkpoint = "chrlukas/flattery_prediction_text"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# predict flattery in a sentence
example = 'This is a great example!'    # should predict flattery
tokenized = tokenizer(example, return_tensors='pt')
with torch.no_grad():
    logits = model(**tokenized).logits
# the model outputs a single logit; sigmoid turns it into a flattery probability
prediction = sigmoid(logits).item()
flattery = prediction >= 0.5    # binary decision at a 0.5 threshold
print(f'Flattery detected? {flattery}')
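
Several sentences can also be scored at once by letting the tokenizer pad a batch. The sketch below assumes the same single-logit output as in the snippet above; the example sentences are made up.

# score a batch of sentences (example sentences are hypothetical)
sentences = ['You always ask the smartest questions.', 'Revenue declined by three percent.']
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    probs = sigmoid(model(**batch).logits).squeeze(-1)
for sentence, p in zip(sentences, probs.tolist()):
    print(f'{p:.3f}  flattery={p >= 0.5}  |  {sentence}')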

Bias, Risks, and Limitations

The model is trained on a highly domain-specific dataset sourced from earnings calls, i.e., typically conversations between business analysts and CEOs of US companies. Hence, it cannot be expected to generalize well to other domains and contexts.

Citation

BibTeX:

TODO
