---
license: openrail++
datasets:
- ukr-detect/ukr-toxicity-dataset-seminatural
language:
- uk
widget:
- text: Ти неймовірна!
base_model:
- FacebookAI/xlm-roberta-base
---
## Binary toxicity classifier for Ukrainian
This model is an ["xlm-roberta-base"](https://huggingface.co/xlm-roberta-base) instance fine-tuned on the semi-automatically collected [Ukrainian toxicity classification dataset](https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset).

The evaluation metrics for binary toxicity classification on the test set are:

| Metric | Value |
|-----------|-------|
| F1-score | 0.99 |
| Precision | 0.99 |
| Recall | 0.99 |
| Accuracy | 0.99 |
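
These scores can be roughly re-checked against the dataset listed in the front matter. The sketch below is not the authors' exact evaluation protocol: the split and column names (`test`, `text`, `label`) and integer labels are assumptions, so verify them against the dataset card before running.

```python
# Hedged evaluation sketch (not the authors' exact protocol): assumes the
# dataset from the front matter exposes a "test" split with "text" and
# integer "label" columns -- adjust the names if the dataset card differs.
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="ukr-detect/ukr-toxicity-classifier")

test = load_dataset("ukr-detect/ukr-toxicity-dataset-seminatural", split="test")
preds = classifier(list(test["text"]), batch_size=32, truncation=True)

# Map the pipeline's string labels back to integer ids so they can be
# compared with the dataset's (assumed integer) labels.
label2id = classifier.model.config.label2id
pred_ids = [label2id[p["label"]] for p in preds]
print(classification_report(test["label"], pred_ids))
```
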
## How to use:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="ukr-detect/ukr-toxicity-classifier")
```
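
If the pipeline wrapper is not convenient, the model can also be called through the lower-level Transformers API. This is a minimal sketch assuming the standard sequence-classification head; the label strings come from the model's config and may be generic (e.g. `LABEL_0`/`LABEL_1`) if `id2label` is not customized there.

```python
# Minimal lower-level sketch using the raw tokenizer and model; the example
# sentence is the widget text from this card's front matter.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "ukr-detect/ukr-toxicity-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Ти неймовірна!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```
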
## Citation
```bibtex
@inproceedings{dementieva-etal-2024-toxicity,
    title = "Toxicity Classification in {U}krainian",
    author = "Dementieva, Daryna  and
      Khylenko, Valeriia  and
      Babakov, Nikolay  and
      Groh, Georg",
    editor = {Chung, Yi-Ling  and
      Talat, Zeerak  and
      Nozza, Debora  and
      Plaza-del-Arco, Flor Miriam  and
      R{\"o}ttger, Paul  and
      Mostafazadeh Davani, Aida  and
      Calabrese, Agostina},
    booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.woah-1.19",
    doi = "10.18653/v1/2024.woah-1.19",
    pages = "244--255",
    abstract = "The task of toxicity detection is still a relevant task, especially in the context of safe and fair LMs development. Nevertheless, labeled binary toxicity classification corpora are not available for all languages, which is understandable given the resource-intensive nature of the annotation process. Ukrainian, in particular, is among the languages lacking such resources. To our knowledge, there has been no existing toxicity classification corpus in Ukrainian. In this study, we aim to fill this gap by investigating cross-lingual knowledge transfer techniques and creating labeled corpora by: (i){\textasciitilde}translating from an English corpus, (ii){\textasciitilde}filtering toxic samples using keywords, and (iii){\textasciitilde}annotating with crowdsourcing. We compare LLMs prompting and other cross-lingual transfer approaches with and without fine-tuning offering insights into the most robust and efficient baselines.",
}
```