Dataset viewer preview (structure only): two columns — image (widths roughly 853 px to 4.16k px) and label (class label with 2 classes: 0 = bullying, 1 = normal).

Data Description

The BullyingAct dataset was created to train models for detecting bullying behavior in school environments. Because no publicly available datasets focus specifically on bullying detection, we generated this data ourselves by staging various scenarios in a controlled school environment. The dataset contains images of different school activities, each labeled as either bullying or normal for classification tasks.

Data Collection Process

The images were collected by enacting different scenarios within a school setting, with actors simulating both bullying and non-bullying situations. The scenarios were designed to capture various forms of bullying, including physical confrontations, intimidation, and exclusion, as well as normal school activities with no bullying behavior.

Dataset Contents

The dataset contains two categories: bullying and normal.

Bullying: This category includes over 86 images of staged school interactions depicting bullying behavior.

Normal: This category includes 17 images. This category is smaller because users can supplement it with any non-bullying images of their own.
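
Below is a minimal sketch of loading the dataset and inspecting the class labels with the Hugging Face datasets library. The column names image and label and the 0 = bullying / 1 = normal mapping come from the viewer preview above; the split name "train" is an assumption.

```python
# Minimal loading/inspection sketch; "train" split name is an assumption.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Zoooora/BullyingAct", split="train")

# The label column is a ClassLabel feature: 0 = bullying, 1 = normal.
label_feature = ds.features["label"]
print(label_feature.names)

# Count images per class.
counts = Counter(ds["label"])
for idx, n in sorted(counts.items()):
    print(f"{label_feature.names[idx]}: {n} images")
```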

Usage

This dataset is intended for training and evaluating image classification models for bullying detection. It can also be used for related tasks such as object detection or scene understanding in a school context. The goal is to help develop technologies that can assist in early detection and intervention in bullying incidents.
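
As an illustration of the classification use case, here is a minimal fine-tuning sketch assuming PyTorch and torchvision are available. The ResNet-18 backbone, image size, batch size, and learning rate are illustrative choices and not part of the dataset itself.

```python
# Illustrative binary classification sketch (bullying vs. normal).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms, models
from datasets import load_dataset

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

ds = load_dataset("Zoooora/BullyingAct", split="train")

def collate(batch):
    # Each example is a dict with a PIL "image" and an integer "label".
    images = torch.stack([transform(ex["image"].convert("RGB")) for ex in batch])
    labels = torch.tensor([ex["label"] for ex in batch])
    return images, labels

loader = DataLoader(ds, batch_size=8, shuffle=True, collate_fn=collate)

# Replace the classifier head with a 2-class output (bullying, normal).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```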

Ethical Considerations

The data collection was conducted in a responsible manner, with consent from all participants involved in the simulations. No real bullying was conducted during data collection. This dataset should be used with ethical considerations in mind, particularly in regard to privacy and the potential implications of deploying AI models for surveillance in educational environments.

Citation

If you use this dataset, please cite it as follows:

[Zoooora/BullyingAct]
