annieske committed on
Commit 011b777
1 Parent(s): 4365bea

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -18,7 +18,7 @@ size_categories:
  ### Dataset summary
 
  This dataset is a DeepL -based machine translated version of the Jigsaw toxicity dataset for Finnish. The dataset is originally from a Kaggle competition https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data.
- The dataset poses a multi-label text classification problem.
+ The dataset poses a multi-label text classification problem and includes the labels `identity_attack, insult, obscene, severe_toxicity, threat and toxicity`.
 
  #### Example data
 
@@ -26,7 +26,6 @@ The dataset poses a multi-label text classification problem.
  {
    "id": "879ad7bdba4cedaa",
    "label_identity_attack": 0,
- # for these "0 or 1"?
    "label_insult": 0,
    "label_obscene": 0,
    "label_severe_toxicity": 0,
@@ -39,6 +38,8 @@ The dataset poses a multi-label text classification problem.
 
  ### Data fields
 
+ Fields marked as `label_` have either `0` to convey *not* having that category of toxicity in the text and `1` to convey having that category of toxicity present in the text.
+
  - `id`: a `string` feature.
  - `label_identity_attack`: a `int32` feature.
  - `label_insult`: a `int32` feature.
@@ -46,7 +47,7 @@ The dataset poses a multi-label text classification problem.
  - `label_severe_toxicity`: a `int32` feature.
  - `label_threat`: a `int32` feature.
  - `label_toxicity`: a `int32` feature.
- - `lang`: a `string` feature,
+ - `lang`: a `string` feature.
  - `text`: a `string` feature.
 
 
@@ -56,7 +57,7 @@ The splits are the same as in the original English data.
 
  | | train | test |
  | -------- | -----: | ---------: |
- | Jigsaw toxicity data | 130319 | 11873 |
+ | Jigsaw toxicity data | 159571 | 63978 |
 
  ### Considerations for Using the Data
  Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
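
As an illustrative aside (not part of the commit), the multi-label setup described in the card can be sketched with the `datasets` library. This is a minimal example under assumptions: the Hub repository ID below is a placeholder for wherever this dataset is published, and the `add_multihot` helper is invented for the illustration.

```python
from datasets import load_dataset

# Placeholder Hub ID -- replace with the actual repository name of this dataset.
DATASET_ID = "your-namespace/jigsaw-toxicity-fi"

# The six 0/1 label columns listed under "Data fields".
LABEL_COLUMNS = [
    "label_identity_attack",
    "label_insult",
    "label_obscene",
    "label_severe_toxicity",
    "label_threat",
    "label_toxicity",
]

def add_multihot(example):
    # Collect the per-category 0/1 flags into one multi-hot target vector.
    example["labels"] = [example[col] for col in LABEL_COLUMNS]
    return example

dataset = load_dataset(DATASET_ID)        # expects "train" and "test" splits
dataset = dataset.map(add_multihot)

print(dataset["train"][0]["text"])        # Finnish comment text
print(dataset["train"][0]["labels"])      # e.g. [0, 0, 0, 0, 0, 0]
```

Because the task is multi-label, a single comment can carry several toxicity categories at once, which is why the labels stay as independent binary flags rather than one class index.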