
MentalRoBERTa

MentalRoBERTa is a model initialized with RoBERTa-Base (12 layers, hidden size 768, 12 attention heads) and pretrained on mental health-related posts collected from Reddit.

We follow the standard pretraining protocols of BERT and RoBERTa with Hugging Face's Transformers library.

We use four NVIDIA Tesla V100 GPUs to train the two language models (MentalBERT and MentalRoBERTa). We set the batch size to 16 per GPU, evaluate every 1,000 steps, and train for 624,000 iterations. Training with four GPUs takes around eight days.
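As a rough illustration, continued masked-language-model pretraining with the Transformers Trainer might look like the sketch below. Only the per-GPU batch size, evaluation interval, and total step count come from the description above; the corpus file, checkpoint interval, and the other settings are placeholders, not the authors' actual training script.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Initialize from RoBERTa-Base, as described above.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hypothetical corpus: one Reddit post per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "reddit_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking, as in standard RoBERTa pretraining.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mental-roberta-base",
    per_device_train_batch_size=16,  # batch size of 16 per GPU
    max_steps=624_000,               # 624,000 training iterations
    save_steps=10_000,               # placeholder checkpoint interval
    # Evaluating every 1,000 steps, as described, would additionally
    # require an eval_dataset and evaluation_strategy="steps".
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()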

More domain-specific pretrained models for mental health are available at https://huggingface.co/AIMH.

Usage

Load the model via Hugging Face's Transformers library:

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("mental/mental-roberta-base")
model = AutoModel.from_pretrained("mental/mental-roberta-base")
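
For instance, the encoder can be used as a feature extractor. This is an illustrative sketch, not part of the original card; the example sentence and mean pooling are arbitrary choices.

import torch

# Encode a sentence and pool the final hidden states into one vector.
inputs = tokenizer("I have been feeling down lately.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, 768).
sentence_embedding = outputs.last_hidden_state.mean(dim=1)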

To minimize the influence of worrying mask predictions, this model is gated. To download a gated model, you'll need to be authenticated with a Hugging Face account and accept the access conditions; see the Hugging Face Hub documentation on gated models for details.
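One way to authenticate (a minimal sketch; the token is a placeholder for your own Hugging Face access token) is via the huggingface_hub library:

from huggingface_hub import login

# Prompts for an access token; alternatively pass token="hf_..." directly,
# or run `huggingface-cli login` in a terminal. After authenticating and
# accepting the conditions on the model page, from_pretrained works as usual.
login()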

Paper

For more details, refer to the paper MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare.

@inproceedings{ji2022mentalbert,
  title     = {{MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare}},
  author    = {Shaoxiong Ji and Tianlin Zhang and Luna Ansari and Jie Fu and Prayag Tiwari and Erik Cambria},
  year      = {2022},
  booktitle = {Proceedings of LREC}
}

Social Impact

We train and release masked language models for mental health to facilitate the automatic detection of mental disorders in online social content for non-clinical use. The models may help social workers identify individuals potentially in need of early prevention. However, the model predictions are not psychiatric diagnoses. We recommend that anyone suffering from mental health issues call a local mental health helpline and seek professional help if possible.

Data privacy is an important issue, and we try to minimize the privacy impact of using social media posts for model training. During data collection, we use only anonymous posts that are manifestly available to the public. We do not collect user profiles, even though they are also manifestly public online. We have not attempted to identify or interact with any of the anonymous users. Although the data are collected from the open web, they are stored securely with password protection. There may also be bias, fairness, uncertainty, and interpretability issues in the data collection and model training; evaluating these issues is an essential direction for future research.
