---
license: cc-by-4.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: TuPy-Dataset
language_bcp47:
- pt-BR
tags:
- hate-speech-detection
configs:
- config_name: multilabel
  data_files:
  - split: train
    path: multilabel/multilabel_train.csv
  - split: test
    path: multilabel/multilabel_test.csv
- config_name: binary
  data_files:
  - split: train
    path: binary/binary_train.csv
  - split: test
    path: binary/binary_test.csv
---

# Portuguese Hate Speech Expanded Dataset (TuPyE)
TuPyE, an enhanced iteration of TuPy, encompasses a compilation of 43,668 meticulously annotated documents specifically 
selected for the purpose of hate speech detection within diverse social network contexts. 
This augmented dataset integrates supplementary annotations and amalgamates with datasets sourced from 
[Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), 
[Leite et al. (2020)](https://arxiv.org/abs/2010.04543), 
and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972),
complemented by an infusion of 10,000 original documents from the [TuPy-Dataset](https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset).

In light of the limited availability of annotated data in Portuguese compared to English, 
TuPyE is committed to expanding and enhancing existing datasets. 
This augmentation facilitates the development of advanced hate speech detection models using machine learning (ML) 
and natural language processing (NLP) techniques.
This repository is organized as follows:

```sh
root.
    ├── binary     : binary dataset (including training and testing split)
    ├── multilabel : multilabel dataset (including training and testing split)
    └── README.md  : documentation and card metadata
```
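
As an illustration, the split files can be read directly with pandas once the repository has been downloaded locally; this is a minimal sketch, assuming the working directory is the repository root and the file paths follow the configuration declared in this card.

```python
import pandas as pd

# Read the binary-configuration splits from a local copy of the repository.
# Paths follow the layout shown above; adjust them if the files live elsewhere.
train_df = pd.read_csv("binary/binary_train.csv")
test_df = pd.read_csv("binary/binary_test.csv")

print(train_df.shape, test_df.shape)
print(train_df.columns.tolist())
```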
## Security measures
To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user", and any references to external websites have been removed.

## Annotation and voting process


Regarding the unpublished part of the TuPyE dataset, we utilized a simple voting process to generate the binary matrices. Each document underwent three separate evaluations. 
If a document received two or more identical classifications, the assigned value was set to 1; otherwise, it was marked as 0. 
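A minimal sketch of this voting rule is shown below; the annotator column names are hypothetical and serve only to illustrate the two-out-of-three aggregation.

```python
import pandas as pd

# Hypothetical annotations: one row per document, one column per annotator,
# with 1 indicating that the annotator flagged the content and 0 otherwise.
annotations = pd.DataFrame({
    "annotator_1": [1, 0, 1],
    "annotator_2": [1, 0, 0],
    "annotator_3": [0, 1, 1],
})

# A label is set to 1 when at least two of the three annotators agree.
majority_label = (annotations.sum(axis=1) >= 2).astype(int)
print(majority_label.tolist())  # [1, 0, 1]
```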
The annotated raw data can be accessed in the [project repository](https://github.com/Silly-Machine/TuPy-Dataset). 
The following table offers a brief summary of the annotators' profiles and qualifications:

#### Table 1 – Annotators

| Annotator   | Gender | Education                                 | Political orientation | Color |
|-------------|--------|-------------------------------------------|-----------------------|-------|
| Annotator 1 | Female | Ph.D. candidate in civil engineering      | Far-left              | White |
| Annotator 2 | Male   | Master's candidate in human rights        | Far-left              | Black |
| Annotator 3 | Female | Master's degree in behavioral psychology  | Liberal               | White |
| Annotator 4 | Male   | Master's degree in behavioral psychology  | Right-wing            | Black |
| Annotator 5 | Female | Ph.D. candidate in behavioral psychology  | Liberal               | Black |
| Annotator 6 | Male   | Ph.D. candidate in linguistics            | Far-left              | White |
| Annotator 7 | Female | Ph.D. candidate in civil engineering      | Liberal               | White |
| Annotator 8 | Male   | Ph.D. candidate in civil engineering      | Liberal               | Black |
| Annotator 9 | Male   | Master's degree in behavioral psychology  | Far-left              | White |





## Data integration

To consolidate data from prominent works on automatic hate speech detection in Portuguese, we built the database by merging the labeled document sets of [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), [Leite et al. (2020)](https://arxiv.org/abs/2010.04543), and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972). To ensure consistency and compatibility across sources, the following guidelines were applied for text integration:

1. Fortuna et al. (2019) constructed a database of 5,670 tweets, each labeled by three distinct annotators for the presence or absence of hate speech. To maintain consistency, we applied a simple majority-voting process for document classification.

2. The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. The study covered six types of toxic speech: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. To align with our definition of hate speech, texts containing solely offensive and/or obscene language were excluded from the hate speech categorization (a minimal sketch of this rule is shown at the end of this section). Following this criterion, a simple majority-voting process was applied for classification.

3. Vargas et al. (2022) compiled 7,000 comments extracted from the Instagram platform, labeled by three annotators. These data had already undergone a simple majority-voting process, so no additional classification procedure was required.

The application of these integration guidelines yields a unified database that supports the development and refinement of automatic hate speech detection systems for the Portuguese language.
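
The sketch below illustrates the exclusion rule applied to the Leite et al. (2020) labels; the column names are illustrative and do not reproduce the original corpus schema.

```python
import pandas as pd

# Illustrative label matrix: one row per tweet, one column per toxic-speech
# category, after majority voting over the three evaluators.
labels = pd.DataFrame({
    "homophobia": [0, 0, 1],
    "racism":     [0, 0, 0],
    "xenophobia": [0, 0, 0],
    "offensive":  [1, 1, 1],
    "obscene":    [0, 1, 0],
    "misogyny":   [0, 0, 0],
})

# A tweet counts as hate speech only if it carries at least one category
# beyond offensive and/or obscene language.
hate_columns = ["homophobia", "racism", "xenophobia", "misogyny"]
labels["hate"] = (labels[hate_columns].sum(axis=1) > 0).astype(int)
print(labels["hate"].tolist())  # [0, 0, 1]
```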

## Data structure
A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned a value of 0 when aggressive or hateful content is absent and a value of 1 when it is present. These values represent the annotators' consensus on the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, LGBTphobia, political, racism, religious intolerance, misogyny, xenophobia, and other content. An example from the multilabel TuPy dataset is shown below:

```python
{
    "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
    "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1, "body shame": 0, "capacitism": 0, "lgbtphobia": 0,
    "political": 1, "racism": 0, "religious intolerance": 0, "misogyny": 0, "xenophobia": 0, "other": 0
}
```
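
The dataset can also be loaded with the Hugging Face Datasets library. The repository id below is an assumption and should be replaced with the actual Hub path of this dataset; the configuration names and splits follow this card's configuration.

```python
from datasets import load_dataset

# Hypothetical repository id; replace with the actual Hub path of this dataset.
# The configuration names ("binary", "multilabel") and splits ("train", "test")
# follow the card metadata.
dataset = load_dataset("Silly-Machine/TuPyE-Dataset", "multilabel")

print(dataset["train"][0])      # a dictionary like the example above
print(dataset["test"].num_rows)
```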

# Dataset content

Table 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents.

#### Table 2 - Count of non-aggressive and aggressive documents

| Label                | Count  |
|----------------------|--------|
| Non-aggressive       | 31121  |
| Aggressive - Not hate| 3180   |
| Aggressive - Hate    | 9367   |
| Total                | 43668  |

Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.

#### Table 3 - Hate categories count

| Label                    | Count |
|--------------------------|-------|
| Ageism                   | 57    |
| Aporophobia              | 66    |
| Body shame               | 285   |
| Capacitism               | 99    |
| LGBTphobia               | 805   |
| Political                | 1149  |
| Racism                   | 290   |
| Religious intolerance    | 108   |
| Misogyny                 | 1675  |
| Xenophobia               | 357   |
| Other                    | 4476  |
| Total                    | 9367  |

# BibTeX citation

This dataset can be cited as follows:

```bibtex
@misc {silly-machine_2023,
	author       = { {Silly-Machine} },
	title        = { TuPy-Dataset (Revision de6b18c) },
	year         = 2023,
	url          = { https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset },
	doi          = { 10.57967/hf/1529 },
	publisher    = { Hugging Face }
}
```

# Acknowledgments
The TuPy project is the result of Felipe Oliveira's thesis research and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro ([UFRJ](https://ufrj.br/)) and the Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering ([COPPE](https://coppe.ufrj.br/)).