---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HatemojiBuild
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
extra_gated_prompt: >-
  We have deactivated the automatic preview for this dataset because it contains
  hate speech. If you want to see the preview, you can continue.
---
# Dataset Card for HatemojiBuild
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Content Warning

This dataset contains examples of hateful language.
## Dataset Description and Details
- Repository: https://github.com/HannahKirk/Hatemoji
- Paper: https://arxiv.org/abs/2108.05921
- Point of Contact: hannah.kirk@oii.ox.ac.uk
### Dataset Summary
HatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations. HatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach, collected in three consecutive rounds. Our work follows on from Vidgen et al. (2021), *Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection* (http://arxiv.org/abs/2012.15761), who collected four rounds of textual adversarial examples; the R1-R4 data is available at https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset. The entries in HatemojiBuild are labeled by round (R5-R7). The text of each entry is given with its gold-standard label, derived from the majority agreement of three annotators. Each original entry is paired with one perturbation, so the rows of the dataset come in matched original-perturbation pairs. We also provide granular labels of the type and target of hate for hateful entries.
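As a quick-start sketch, the dataset can be loaded with the Hugging Face `datasets` library. The hub ID `HannahRoseKirk/HatemojiBuild` and the presence of a `train` split are assumptions here; adjust them to wherever the data is hosted:

```python
from datasets import load_dataset

# Hub ID assumed; change it if the dataset is hosted under a different name.
dataset = load_dataset("HannahRoseKirk/HatemojiBuild")

# Inspect one adversarially-generated entry and its gold-standard label
# (assumes a "train" split is exposed).
example = dataset["train"][0]
print(example["text"], example["label_gold"])
```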
### Supported Tasks

Hate Speech Detection
### Languages

English
## Dataset Structure
### Data Instances

5,912 adversarially-generated instances.
### Data Fields

- `entry_id`: The unique ID of the entry (assigned to each of the 5,912 cases generated).
- `text`: The text of the entry.
- `type`: The type of hate assigned to hateful entries.
- `target`: The target of hate assigned to hateful entries.
- `round.base`: The round in which the entry was generated.
- `round.set`: The round, plus whether the entry came from an original statement (a) or a perturbation (b).
- `set`: Whether the entry is an original or a perturbation.
- `split`: The randomly-assigned train/dev/test split used in our work (80:10:10).
- `label_gold`: The gold-standard label (hateful/non-hateful) of the test case.
- `matched_text`: The text of the paired perturbation. Each original entry has one perturbation (see the pairing sketch below).
- `matched_id`: The unique entry ID of the paired perturbation.
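Because each original entry is paired with exactly one label-flipping perturbation, the `matched_id` field can be used to join the two halves of each pair. A minimal sketch, assuming the rows have been read into a pandas DataFrame with the fields above and that the `set` field takes the values `original` and `perturbation`:

```python
import pandas as pd

def pair_originals_with_perturbations(df: pd.DataFrame) -> pd.DataFrame:
    """Join each original entry to its paired perturbation via matched_id.

    Assumes `set` takes the values "original" and "perturbation".
    """
    originals = df[df["set"] == "original"]
    perturbations = df[df["set"] == "perturbation"].set_index("entry_id")
    return originals.join(
        perturbations[["text", "label_gold"]].add_suffix("_perturbed"),
        on="matched_id",
    )
```

Alternatively, the `matched_text` field already carries the paired text inline, so a join is only needed when other fields of the perturbation are required.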
### Data Splits

Train, validation and test, assigned at random in an 80:10:10 ratio (recorded in the `split` field).
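If all 5,912 rows are loaded into one table, the `split` field recovers the partitions. Continuing the pandas sketch above (the split names `train`, `dev` and `test` are assumptions):

```python
# df: DataFrame with the fields described above (see the pairing sketch).
# Partition the full table by the randomly-assigned 80:10:10 split field.
splits = {name: df[df["split"] == name] for name in ("train", "dev", "test")}
print({name: len(part) for name, part in splits.items()})
```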
## Dataset Creation

### Curation Rationale

The genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is to address model weaknesses on emoji-based hate, i.e., to "build" better models. 50% of the 5,912 test cases are hateful. 50% of the entries in the dataset are original content and 50% are perturbations.
### Source Data

#### Initial Data Collection and Normalization

We used an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running from 24th May to 11th June. Each round contains approximately 2,000 entries, and each original entry input to the interface is paired with an offline perturbation. Data was synthetically generated by a team of trained annotators, i.e., not sampled from social media.
#### Who are the source language producers?
The language producers are also the annotators.
### Annotations

#### Annotation process
We implemented three successive rounds of data generation and model re-training to create the HatemojiBuild dataset. In each round, we tasked a team of 10 trained annotators with entering content that the model-in-the-loop would misclassify; we refer to this model as the target model. Annotators were instructed to generate linguistically diverse entries while ensuring each entry was (1) realistic, (2) clearly hateful or non-hateful and (3) contained at least one emoji. Each entry was first given a binary label of hateful or non-hateful, and hateful content was assigned secondary labels for the type and target of hate. Each entry was validated by two additional annotators, and an expert resolved disagreements. After validation, annotators created a perturbation for each entry that flips the label. To maximize similarity between originals and perturbations, annotators could either substitute an emoji while keeping the text fixed, or keep the emoji fixed and minimally change the surrounding text. Each perturbation received two additional annotations, and disagreements were again resolved by the expert. This weekly cycle of annotator tasks was repeated over three consecutive weeks.
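As a hypothetical illustration of how a gold-standard label emerges from three annotations (the original annotator plus two validators), with unresolved cases escalated to the expert; the helper below is a sketch, not the authors' actual pipeline:

```python
from collections import Counter

def gold_label(annotations: list[str]) -> str | None:
    """Return the strict-majority label from a set of annotations.

    Returns None when no label wins a strict majority; in HatemojiBuild,
    disagreements were resolved by an expert annotator.
    """
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes > len(annotations) / 2 else None

# With three binary annotations, a majority always exists:
assert gold_label(["hateful", "hateful", "non-hateful"]) == "hateful"
```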
#### Who are the annotators?

Ten annotators were recruited to work for three weeks and paid £16/hour. An expert annotator was recruited for quality-control purposes and paid £20/hour, making 11 annotators in total. All annotators received a training session prior to data collection and had previous experience working on hate speech projects. A daily "stand-up" meeting was held every morning to communicate feedback and update guidelines as rounds progressed, and annotators could contact the research team at any point via a messaging platform. Of the 11 annotators, 8 were between 18-29 years old and 3 between 30-39 years old. The completed education level was high school for 3 annotators, an undergraduate degree for 1 annotator, a taught graduate degree for 4 annotators and a post-graduate research degree for 3 annotators. 6 annotators were female and 5 were male. Annotators came from a variety of nationalities: 7 were British, with the others Jordanian, Irish, Polish and Spanish. 7 annotators identified as ethnically White, and the remainder came from various ethnicities including Turkish, Middle Eastern, and Mixed White and South Asian. 4 annotators were Muslim, and the others identified as Atheist or as having no religious affiliation. 9 annotators were native English speakers and 2 were non-native but fluent. The majority of annotators (9) used emoji and social media more than once per day. 10 annotators had seen others targeted by abuse online, and 7 had been personally targeted.
### Personal and Sensitive Information

HatemojiBuild contains synthetic statements, so it holds no personal information. It does, however, contain harmful examples of emoji-based hate which could be disturbing or damaging to view.
## Considerations for Using the Data

### Social Impact of Dataset

HatemojiBuild contains challenging emoji examples which have "tricked" state-of-the-art transformer models. Malicious actors could take inspiration from it to bypass current detection systems on internet platforms, or, in principle, to train a generative hate speech model. However, the dataset also helps to build model robustness to emoji-based hate, so it can be used to mitigate harm to victims before a model is deployed.
### Discussion of Biases

Annotators were given substantial freedom in the targets of hate, resulting in 54 unique targets and 126 unique intersections of these. The entries from R5-R7 contain 1,082 unique emoji out of the 3,521 defined in the Unicode Standard as of September 2020. This diversity helped to mitigate biases in classification towards certain targets, but biases likely remain, especially since HatemojiBuild was designed for English-language use of emoji.
### Other Known Limitations

While annotators were trained on real-world examples of emoji-based hate from Twitter, the entries in HatemojiBuild are synthetically generated and so may deviate from real-world instances of emoji-based hate.
## Additional Information

### Dataset Curators
The dataset was curated by the lead author (Hannah Rose Kirk), using the Dynabench platform.
### Licensing Information

Creative Commons Attribution 4.0 International Public License. For full details, see: https://github.com/HannahKirk/Hatemoji/blob/main/LICENSE
### Citation Information

If you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.

```bibtex
@article{kirk2021hatemoji,
  title={Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate},
  author={Kirk, Hannah Rose and Vidgen, Bertram and R{\"o}ttger, Paul and Thrush, Tristan and Hale, Scott A},
  journal={arXiv preprint arXiv:2108.05921},
  year={2021}
}
```
### Contributions
Thanks to @HannahKirk for adding this dataset.