---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 75975910.63587219
num_examples: 185574
- name: test
num_bytes: 18994182.36412781
num_examples: 46394
download_size: 53587175
dataset_size: 94970093
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: Suicidal Tendency Prediction Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for "vibhorag101/suicide_prediction_dataset_phr"
- The dataset is sourced from Reddit and is available on Kaggle.
- The dataset contains text with binary labels for suicide or non-suicide.
- The dataset was cleaned with the following steps (a rough code sketch of these steps appears after this list):
    - Converted to lowercase.
    - Removed numbers and special characters.
    - Removed URLs, emojis and accented characters.
    - Removed any word contractions.
    - Removed extra whitespace, collapsing repeated spaces into a single space.
    - Collapsed any character repeated consecutively more than 3 times.
    - Tokenized the text, lemmatized it, and then removed stopwords (excluding "not").
- The `class_label` column was renamed to `label` for use with the Trainer API (see the loading sketch at the end of this card).
- The evaluation set had ~23k samples and the training set ~186k samples, i.e. an 80:10:10 (train:test:val) split.
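
The original cleaning script is not included in this card, so the following is only a minimal sketch of the listed steps using NLTK and regular expressions. The exact regexes, the contraction list, and the processing order are assumptions, not the authors' code.

```python
import re
import unicodedata

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time NLTK resources (newer NLTK versions may also require "punkt_tab").
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

# Tiny illustrative contraction map; the original preprocessing likely used a
# fuller list or a dedicated library.
CONTRACTIONS = {"don't": "do not", "can't": "cannot", "i'm": "i am", "it's": "it is"}

STOP_WORDS = set(stopwords.words("english")) - {"not"}  # keep "not"
LEMMATIZER = WordNetLemmatizer()


def clean_text(text: str) -> str:
    text = text.lower()
    # Expand contractions before stripping punctuation.
    for contraction, expanded in CONTRACTIONS.items():
        text = text.replace(contraction, expanded)
    # Remove URLs.
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)
    # Strip accents; this also drops emojis and other non-ASCII symbols.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    # Remove numbers and special characters.
    text = re.sub(r"[^a-z\s]", " ", text)
    # Collapse any character repeated more than 3 times in a row.
    text = re.sub(r"(.)\1{3,}", r"\1\1\1", text)
    # Collapse extra whitespace.
    text = re.sub(r"\s+", " ", text).strip()
    # Tokenize, lemmatize, and drop stopwords (except "not").
    tokens = [LEMMATIZER.lemmatize(tok) for tok in word_tokenize(text) if tok not in STOP_WORDS]
    return " ".join(tokens)


print(clean_text("I'm feeling soooooo tired... visit https://example.com!!!"))
```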
### Note
Since this dataset was preprocessed and stopwords and punctuation marks such as "?!" were removed, some texts may carry incorrect labels, as the preprocessing can change the meaning relative to the original text.
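
Because the `label` column is stored as a string, it typically needs to be mapped to integer ids before training with the Trainer API. Below is a minimal loading sketch; the id mapping is illustrative and derived from the data itself, not a documented convention of this dataset.

```python
from datasets import load_dataset

# Load both splits from the Hub.
ds = load_dataset("vibhorag101/suicide_prediction_dataset_phr")

# `label` is a string feature (see the schema above); build an id mapping
# from the values actually present rather than hard-coding them.
label_names = sorted(set(ds["train"]["label"]))
label2id = {name: i for i, name in enumerate(label_names)}

# Convert labels to integer ids so the column works with the Trainer API,
# which expects numeric labels.
ds = ds.map(lambda ex: {"label": label2id[ex["label"]]})

print(label2id)
print(ds["train"][0])
```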