---
annotations_creators:
  - crowdsourced
language_creators:
  - machine-generated
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - multi-class-classification
paperswithcode_id: hate-speech-and-offensive-language
pretty_name: HateOffensive
tags:
  - hate-speech-detection
dataset_info:
  features:
    - name: total_annotation_count
      dtype: int32
    - name: hate_speech_annotations
      dtype: int32
    - name: offensive_language_annotations
      dtype: int32
    - name: neither_annotations
      dtype: int32
    - name: label
      dtype:
        class_label:
          names:
            '0': hate-speech
            '1': offensive-language
            '2': neither
    - name: tweet
      dtype: string
  splits:
    - name: train
      num_bytes: 2811298
      num_examples: 24783
  download_size: 2546446
  dataset_size: 2811298
---

# Dataset Card for HateOffensive

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

### Dataset Summary

This dataset contains 24,783 English tweets, each annotated by multiple crowdsourced annotators as hate speech, offensive language, or neither.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English (en)

## Dataset Structure

### Data Instances

```
{
  "total_annotation_count": 3,
  "hate_speech_annotations": 0,
  "offensive_language_annotations": 0,
  "neither_annotations": 3,
  "label": 2,  # "neither"
  "tweet": "!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out..."
}
```
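As a rough sketch, such instances can be loaded and inspected with the 🤗 `datasets` library (the `hate_offensive` identifier is taken from this repository's name):

```python
from datasets import load_dataset

# Load the dataset; only a "train" split is provided.
dataset = load_dataset("hate_offensive", split="train")

print(dataset.features)  # declared features, including the ClassLabel for "label"
print(dataset[0])        # first example, similar to the instance shown above
```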

### Data Fields

- `total_annotation_count`: (int32) number of CrowdFlower (CF) users who annotated the tweet (minimum of 3; more users annotated a tweet when the judgments were determined to be unreliable)
- `hate_speech_annotations`: (int32) number of users who judged the tweet to be hate speech
- `offensive_language_annotations`: (int32) number of users who judged the tweet to be offensive language
- `neither_annotations`: (int32) number of users who judged the tweet to be neither hate speech nor offensive language
- `label`: (ClassLabel) class label assigned by the majority of CF users (0: `hate-speech`, 1: `offensive-language`, 2: `neither`)
- `tweet`: (string) the tweet text
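Since `label` is a `ClassLabel` feature, its integer values can be mapped to and from the names above; a minimal sketch, assuming the dataset is loaded as in the previous example:

```python
from datasets import load_dataset

dataset = load_dataset("hate_offensive", split="train")
label_feature = dataset.features["label"]  # a ClassLabel feature

# Convert between integer labels and their string names.
print(label_feature.names)               # ['hate-speech', 'offensive-language', 'neither']
print(label_feature.int2str(2))          # 'neither'
print(label_feature.str2int("neither"))  # 2
```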

### Data Splits

This dataset is not split; only a `train` split with 24,783 examples is available.
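Users who need evaluation data can derive their own splits from the single `train` split; a hedged sketch using `Dataset.train_test_split`:

```python
from datasets import load_dataset

dataset = load_dataset("hate_offensive", split="train")

# Hold out part of the data as a test set; the 80/20 ratio and seed are
# illustrative choices, not something prescribed by the dataset.
splits = dataset.train_test_split(test_size=0.2, seed=42)
print(len(splits["train"]), len(splits["test"]))
```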

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

Usernames are not anonymized in the dataset.
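Because of this, downstream users may want to mask @-mentions before training or redistributing models; one possible preprocessing sketch (the `@USER` placeholder is an illustrative choice, not something the dataset prescribes):

```python
import re
from datasets import load_dataset

dataset = load_dataset("hate_offensive", split="train")

def mask_usernames(example):
    # Replace @-mentions with a generic placeholder (illustrative choice).
    example["tweet"] = re.sub(r"@\w+", "@USER", example["tweet"])
    return example

masked = dataset.map(mask_usernames)
print(masked[0]["tweet"])
```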

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

MIT License

### Citation Information

```
@inproceedings{hateoffensive,
  title     = {Automated Hate Speech Detection and the Problem of Offensive Language},
  author    = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar},
  booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},
  series    = {ICWSM '17},
  year      = {2017},
  location  = {Montreal, Canada},
  pages     = {512-515}
}
```

### Contributions

Thanks to @MisbahKhan789 for adding this dataset.