
Model Card for mental-flan-t5-xxl

This is a fine-tuned large language model for mental health prediction via online text data.

Model Details

Model Description

We fine-tune a FLAN-T5-XXL model on 4 high-quality text datasets (6 tasks in total) for the mental health prediction scenario: Dreaddit, DepSeverity, SDCNL, and CSSRS-Suicide. We have a separate model fine-tuned from Alpaca, namely Mental-Alpaca, shared here.

  • Developed by: Northeastern University Human-Centered AI Lab
  • Model type: Sequence-to-sequence text generation
  • Language(s) (NLP): English
  • License: Apache 2.0 License
  • Finetuned from model: FLAN-T5-XXL

Model Sources

  • Paper: Mental-LLM: Leveraging large language models for mental health prediction via online text data (arXiv:2307.14385)

Uses

Direct Use

The model is intended to be used for research purposes only, in English. It has been fine-tuned for mental health prediction via online text data. Detailed information about the fine-tuning process and prompts can be found in our paper. Use of this model should also comply with the restrictions of FLAN-T5-XXL.
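As a rough illustration (the wording below is a hypothetical sketch; the actual prompt templates are specified in the paper), prediction is posed as a text-to-text task: the input is an instruction plus the online post, and the model generates a short answer.

# Hypothetical prompt sketch; see the paper for the actual templates.
post = "I feel overwhelmed and can't sleep anymore."
prompt = f'Consider this post: "{post}" Question: Does the poster suffer from stress? Answer yes or no.'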

Out-of-Scope Use

Out-of-scope uses of this model follow those specified for FLAN-T5-XXL.

Bias, Risks, and Limitations

This model inherits the bias, risks, and limitations of FLAN-T5-XXL; please refer to the FLAN-T5-XXL model card for details.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the tokenizer and the fine-tuned sequence-to-sequence model.
tokenizer = T5Tokenizer.from_pretrained("NEU-HAI/mental-flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("NEU-HAI/mental-flan-t5-xxl")
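To run a prediction, tokenize a prompt and decode the generated output. A minimal sketch, assuming a prompt built as illustrated above; max_new_tokens is an arbitrary choice:

# Tokenize the prompt and generate a short prediction.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))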

Training Details and Evaluation

Detailed information about our work can be found in our paper.

Citation

@article{xu2023leveraging,
  title={Mental-LLM: Leveraging large language models for mental health prediction via online text data},
  author={Xu, Xuhai and Yao, Bingsheng and Dong, Yuanzhe and Gabriel, Saadia and Yu, Hong and Ghassemi, Marzyeh and Hendler, James and Dey, Anind K and Wang, Dakuo},
  journal={arXiv preprint arXiv:2307.14385},
  year={2023}
}