---
license: cc-by-sa-4.0
task_categories:
- text-classification
task_ids:
  - multi-label-classification
language:
- fi
multilinguality:
- translation
tags:
- toxicity
- multi-label
source_datasets:
- extended|jigsaw_toxicity_pred
size_categories:
- 100K<n<1M
---


### Dataset Summary

This dataset is a DeepL-based machine-translated Finnish version of the Jigsaw toxicity dataset. The original data comes from the Kaggle competition [Jigsaw Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data).
The dataset poses a multi-label text classification problem and includes the labels `identity_attack`, `insult`, `obscene`, `severe_toxicity`, `threat` and `toxicity`.

#### Example data

```
{
"label_identity_attack": 0,
"label_insult": 0,
"label_obscene": 0,
"label_severe_toxicity": 0,
"label_threat": 0,
"label_toxicity": 0,
"lang": "fi-deepl",
"text": "\" \n\n Hei Pieter Pietersen, ja tervetuloa Wikipediaan!   \n\n Tervetuloa Wikipediaan! Toivottavasti viihdyt tietosanakirjassa ja haluat jäädä tänne. Ensimmäiseksi voit lukea johdannon. \n\n Jos sinulla on kysyttävää, voit kysyä minulta keskustelusivullani - autan mielelläni. Tai voit kysyä kysymyksesi Uusien avustajien ohjesivulla. \n\n - \n Seuraavassa on lisää resursseja, jotka auttavat sinua tutkimaan ja osallistumaan maailman suurinta tietosanakirjaa.... \n\n  Löydät perille:  \n\n  \n * Sisällysluettelo \n\n * Osastohakemisto \n\n  \n  Tarvitsetko apua?  \n\n  \n * Kysymykset - opas siitä, mistä voi esittää kysymyksiä. \n * Huijausluettelo - pikaohje Wikipedian merkintäkoodeista. \n\n * Wikipedian 5 pilaria - yleiskatsaus Wikipedian perustaan. \n * The Simplified Ruleset - yhteenveto Wikipedian tärkeimmistä säännöistä. \n\n  \n  Miten voit auttaa:  \n\n  \n * Wikipedian avustaminen - opas siitä, miten voit auttaa. \n\n * Yhteisöportaali - Wikipedian toiminnan keskus. \n\n  \n  Lisää vinkkejä...   \n\n  \n * Allekirjoita viestisi keskustelusivuilla neljällä tildillä (~~~~). Tämä lisää automaattisesti \"\"allekirjoituksesi\"\" (käyttäjänimesi ja päivämääräleima). Myös Wikipedian tekstinmuokkausikkunan yläpuolella olevassa työkalupalkissa oleva painike tekee tämän.  \n\n * Jos haluat leikkiä uusilla Wiki-taidoillasi, Hiekkalaatikko on sinua varten.  \n\n  \n  Onnea ja hauskaa. \""
}
```
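
The dataset can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch for inspecting a record like the one above; it assumes a standard Python environment with `datasets` installed.

```
from datasets import load_dataset

# Load the Finnish machine-translated Jigsaw toxicity dataset from the Hub.
dataset = load_dataset("TurkuNLP/jigsaw_toxicity_pred_fi")

# Inspect one training example: six binary label_* fields, the language tag,
# and the translated comment text.
example = dataset["train"][0]
print(example["text"][:200])
print({k: v for k, v in example.items() if k.startswith("label_")})
```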

### Data Fields

Fields prefixed with `label_` take the value `0` when that category of toxicity is *not* present in the text and `1` when it is.

- `label_identity_attack`: an `int64` feature.
- `label_insult`: an `int64` feature.
- `label_obscene`: an `int64` feature.
- `label_severe_toxicity`: an `int64` feature.
- `label_threat`: an `int64` feature.
- `label_toxicity`: an `int64` feature.
- `lang`: a `string` feature.
- `text`: a `string` feature.
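
Because the labels are stored as separate 0/1 columns rather than a single list, a small helper can collect them into one multi-hot vector per example. The function below is an illustrative sketch; the helper name and the `labels` output field are ours, not part of the dataset.

```
# The six toxicity categories, in a fixed order; names match the
# label_* fields described above.
LABELS = [
    "label_identity_attack",
    "label_insult",
    "label_obscene",
    "label_severe_toxicity",
    "label_threat",
    "label_toxicity",
]

def to_multi_hot(example):
    # Gather the per-category 0/1 columns into a single float vector,
    # the format multi-label classification heads typically expect.
    example["labels"] = [float(example[name]) for name in LABELS]
    return example

# dataset = dataset.map(to_multi_hot)  # apply to every example
```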


### Data Splits

The splits are the same as in the original English data.
| dataset   |  train | test |
| -------- | -----: | ---------: |
| TurkuNLP/jigsaw_toxicity_pred_fi| 159571 | 63978 |

### Evaluation Results

Results from fine-tuning [TurkuNLP/bert-large-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-v1) for multi-label toxicity detection. The fine-tuned model can be found 
| dataset              | F1-micro    | Precision | Recall |
| -------------------- | ----: |  ---: | ----: |
| TurkuNLP/jigsaw_toxicity_pred_fi | 0.66 | 0.58 | 0.76 |

<!--- Base results from fine-tuning [bert-large-cased](https://huggingface.co/bert-large-cased) on the original English data for multi-label toxicity detection.
| dataset              | F1-micro    | Precision | Recall |
| -------------------- | ----: | ---: | ----: |
| jigsaw_toxicity_pred | 0.69 | 0.59 | 0.81 | --->
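
The exact training setup is described in the paper cited below. As a rough illustration only, a multi-label fine-tuning run with the `transformers` Trainer could be configured along the following lines; the hyperparameters and threshold here are placeholders, not the values behind the reported scores.

```
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-large-finnish-cased-v1")
model = AutoModelForSequenceClassification.from_pretrained(
    "TurkuNLP/bert-large-finnish-cased-v1",
    num_labels=6,
    problem_type="multi_label_classification",  # BCE loss over the six labels
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = 1 / (1 + np.exp(-logits)) > 0.5  # sigmoid + 0.5 threshold
    p, r, f1, _ = precision_recall_fscore_support(labels, preds, average="micro")
    return {"precision": p, "recall": r, "f1_micro": f1}

# args = TrainingArguments(output_dir="toxicity-fi", num_train_epochs=3)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,   # tokenized train split with "labels"
#                   eval_dataset=...,    # tokenized test split with "labels"
#                   compute_metrics=compute_metrics)
# trainer.train()
```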

### Considerations for Using the Data
Due to DeepL's terms and conditions, this dataset **must not be used for any machine translation work**, i.e., for machine translation system development or evaluation of any kind. More generally, we ask that you do not pair the original English data with these translations except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.

### Licensing Information
Contents of this repository are distributed under the 
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). 
Copyright of the dataset contents belongs to the original copyright holders.

### Citing
To cite this dataset, use the following BibTeX entry.

```
@inproceedings{eskelinen-etal-2023-toxicity,
    title = "Toxicity Detection in {F}innish Using Machine Translation",
    author = "Eskelinen, Anni  and
      Silvala, Laura  and
      Ginter, Filip  and
      Pyysalo, Sampo  and
      Laippala, Veronika",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.68",
    pages = "685--697",
    abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.",
}
```