---
language:
- ru
tags:
- toxic comments classification
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
size_categories:
- 10K<n<100K
---

## General concept of the dataset

Sensitive topics are topics with a high chance of sparking a toxic conversation: homophobia, politics, racism, etc. This dataset covers 18 such topics.

More details can be found in [this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/), presented at the Workshop on Balto-Slavic NLP at the EACL 2021 conference.

The paper describes the first version of the dataset; this repository hosts the latest version, which is significantly larger and properly filtered.
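
To get a quick feel for the data, here is a minimal sketch of loading it with the Hugging Face `datasets` library. The repository ID below is a placeholder (this card does not state it), and the actual column names depend on the dataset schema.

```python
# A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# NOTE: "<org>/<dataset-name>" is a placeholder, not the real repository ID.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset-name>")

# Inspect the available splits and one labelled example; the text and
# topic-label column names depend on the actual schema.
print(ds)
print(ds["train"][0])
```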

## Licensing Information

This dataset is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@inproceedings{babakov-etal-2021-detecting,
    title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
    author = "Babakov, Nikolay  and
      Logacheva, Varvara  and
      Kozlova, Olga  and
      Semenov, Nikita  and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
    pages = "26--36",
    abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```