laurahanu committed
Commit e0fd7dc
1 Parent(s): a982808

update readme

Files changed (1):
  1. README.md +322 -0
README.md ADDED
@@ -0,0 +1,322 @@
+ <div align="center">
+
+ # 🙊 Detoxify
+ ## Toxic Comment Classification with ⚡ Pytorch Lightning and 🤗 Transformers
+
+ ![CI testing](https://github.com/unitaryai/detoxify/workflows/CI%20testing/badge.svg)
+ ![Lint](https://github.com/unitaryai/detoxify/workflows/Lint/badge.svg)
+
+ </div>
+
+ ![Examples image](examples.png)
+
+ ## Description
+
+ Trained models & code to predict toxic comments on 3 Jigsaw challenges: Toxic Comment Classification, Unintended Bias in Toxic Comments, and Multilingual Toxic Comment Classification.
+
+ Built by [Laura Hanu](https://laurahanu.github.io/) at [Unitary](https://www.unitary.ai/), where we are working to stop harmful content online by interpreting visual content in context.
+
+ Dependencies:
+ - For inference:
+   - 🤗 Transformers
+   - ⚡ Pytorch Lightning
+ - For training, you will also need:
+   - Kaggle API (to download the data)
+
+
+ | Challenge | Year | Goal | Original Data Source | Detoxify Model Name | Top Kaggle Leaderboard Score | Detoxify Score |
+ |-|-|-|-|-|-|-|
+ | [Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) | 2018 | build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate | Wikipedia Comments | `original` | 0.98856 | 0.98636 |
+ | [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) | 2019 | build a model that recognizes toxicity and minimizes unintended bias with respect to mentions of identities, using a dataset labelled for identity mentions and a metric designed to measure unintended bias | Civil Comments | `unbiased` | 0.94734 | 0.93639 |
+ | [Jigsaw Multilingual Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification) | 2020 | build effective multilingual models | Wikipedia Comments + Civil Comments | `multilingual` | 0.9536 | 0.91655* |
+
+ *This score is not directly comparable since it was obtained on the provided validation set rather than the test set. It will be updated when the test labels are made available.
+
+ It is also worth noting that the top leaderboard scores were achieved with model ensembles; the purpose of this library is to provide something user-friendly and straightforward to use.
+ ## Limitations and ethical considerations
+
+ If words associated with swearing, insults, or profanity are present in a comment, it is likely to be classified as toxic regardless of the tone or intent of the author, e.g. humorous or self-deprecating. This could introduce biases against already vulnerable minority groups.
+
+ The intended use of this library is for research purposes, for fine-tuning on carefully constructed datasets that reflect real-world demographics, and/or to help content moderators flag harmful content more quickly.
+
+ Some useful resources about the risk of different biases in toxicity or hate speech detection are:
+ - [The Risk of Racial Bias in Hate Speech Detection](https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf)
+ - [Automated Hate Speech Detection and the Problem of Offensive Language](https://arxiv.org/pdf/1703.04009.pdf%201.pdf)
+ - [Racial Bias in Hate Speech and Abusive Language Detection Datasets](https://arxiv.org/pdf/1905.12516.pdf)
+
+ ## Quick prediction
+
+ The `multilingual` model has been trained on 7 different languages, so it should only be tested on: `english`, `french`, `spanish`, `italian`, `portuguese`, `turkish` or `russian`.
+
+ ```bash
+ # install detoxify
+
+ pip install detoxify
+
+ ```
+ ```python
+
+ from detoxify import Detoxify
+
+ # each model takes in either a string or a list of strings
+
+ results = Detoxify('original').predict('example text')
+
+ results = Detoxify('unbiased').predict(['example text 1','example text 2'])
+
+ input_text = ['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']
+ results = Detoxify('multilingual').predict(input_text)
+
+ # optional: display the results nicely (requires `pip install pandas`)
+
+ import pandas as pd
+
+ print(pd.DataFrame(results, index=input_text).round(5))
+
+ ```
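+
+ `predict` returns a dictionary that maps each label to a score (or to a list of scores, one per comment, when a list of strings is passed in). As a minimal sketch of downstream use, reusing the `input_text` and `results` variables from the block above and a purely hypothetical cut-off of 0.5:
+
+ ```python
+ THRESHOLD = 0.5  # hypothetical cut-off, tune for your own use case
+
+ # flag any comment whose score exceeds the threshold for at least one label
+ flagged = [
+     text
+     for i, text in enumerate(input_text)
+     if any(scores[i] > THRESHOLD for scores in results.values())
+ ]
+ print(flagged)
+ ```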
+ For more details, check the Prediction section below.
+
+ ## Labels
+ All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators, according to the following schema:
+ - **Very Toxic** (a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective)
+ - **Toxic** (a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective)
+ - **Hard to Say**
+ - **Not Toxic**
+
+ More information about the labelling schema can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).
+
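+ For the Civil Comments data in particular, the toxicity value is, roughly, the fraction of annotators who rated the comment toxic or very toxic; see the Kaggle data page linked above for the exact construction. A purely illustrative sketch:
+
+ ```python
+ # Illustration only: the real aggregation is defined on the Kaggle data page above.
+ ratings = ["toxic", "not_toxic", "very_toxic", "not_toxic", "not_toxic"]
+ toxicity = sum(r in ("toxic", "very_toxic") for r in ratings) / len(ratings)
+ print(toxicity)  # 0.4
+ ```
+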
+ ### Toxic Comment Classification Challenge
+ This challenge includes the following labels:
+
+ - `toxic`
+ - `severe_toxic`
+ - `obscene`
+ - `threat`
+ - `insult`
+ - `identity_hate`
+
+ ### Jigsaw Unintended Bias in Toxicity Classification
+ This challenge has 2 types of labels: the main toxicity labels and some additional identity labels that represent the identities mentioned in the comments.
+
+ Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation.
+
+ - `toxicity`
+ - `severe_toxicity`
+ - `obscene`
+ - `threat`
+ - `insult`
+ - `identity_attack`
+ - `sexual_explicit`
+
+ Identity labels used:
+ - `male`
+ - `female`
+ - `homosexual_gay_or_lesbian`
+ - `christian`
+ - `jewish`
+ - `muslim`
+ - `black`
+ - `white`
+ - `psychiatric_or_mental_illness`
+
+ A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).
+
+ ### Jigsaw Multilingual Toxic Comment Classification
+
+ Since this challenge combines the data from the previous 2 challenges, it includes all of the labels above; however, the final evaluation is only on:
+
+ - `toxicity`
+
+ ## How to run
+
+ First, install the dependencies:
+ ```bash
+ # clone project
+
+ git clone https://github.com/unitaryai/detoxify
+
+ # create virtual env
+
+ python3 -m venv toxic-env
+ source toxic-env/bin/activate
+
+ # install project
+
+ pip install -e detoxify
+ cd detoxify
+
+ # for training
+ pip install -r requirements.txt
+
+ ```
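+
+ To check that the editable install worked, you can run a quick sanity check (the first run fetches the `original` checkpoint, so it needs an internet connection):
+
+ ```python
+ from detoxify import Detoxify
+
+ # should print a dictionary of label scores for the example comment
+ print(Detoxify('original').predict('hello world'))
+ ```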
+
+ ## Prediction
+
+ Trained models summary:
+
+ | Model name | Transformer type | Data from |
+ |:--:|:--:|:--:|
+ | `original` | `bert-base-uncased` | Toxic Comment Classification Challenge |
+ | `unbiased` | `roberta-base` | Unintended Bias in Toxicity Classification |
+ | `multilingual` | `xlm-roberta-base` | Multilingual Toxic Comment Classification |
+
+ For a quick prediction, you can run the example script on a comment directly or on a txt file containing a list of comments.
+ ```bash
+
+ # load model via torch.hub
+
+ python run_prediction.py --input 'example' --model_name original
+
+ # load model from a checkpoint path
+
+ python run_prediction.py --input 'example' --from_ckpt_path model_path
+
+ # save results to a .csv file
+
+ python run_prediction.py --input test_set.txt --model_name original --save_to results.csv
+
+ # to see usage
+
+ python run_prediction.py --help
+
+ ```
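+
+ Assuming the txt file holds one comment per line (a hypothetical format, check `run_prediction.py` for the exact one), the input could be built like this:
+
+ ```python
+ # hypothetical helper: write one comment per line to the input file
+ comments = ["example text 1", "example text 2"]
+ with open("test_set.txt", "w") as f:
+     f.write("\n".join(comments))
+ ```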
+
+ Checkpoints can be downloaded from the latest release or via the Pytorch hub API with the following names:
+ - `toxic_bert`
+ - `unbiased_toxic_roberta`
+ - `multilingual_toxic_xlm_r`
+ ```python
+ model = torch.hub.load('unitaryai/detoxify', 'toxic_bert')
+ ```
+
+ Importing `detoxify` in Python:
+
+ ```python
+
+ from detoxify import Detoxify
+
+ results = Detoxify('original').predict('some text')
+
+ results = Detoxify('unbiased').predict(['example text 1','example text 2'])
+
+ input_text = ['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']
+ results = Detoxify('multilingual').predict(input_text)
+
+ # to display the results nicely
+
+ import pandas as pd
+
+ print(pd.DataFrame(results, index=input_text).round(5))
+
+ ```
+
+ ## Training
+
+ If you do not already have a Kaggle account:
+ - you need to create one to be able to download the data
+
+ - go to My Account and click on Create New API Token; this will download a `kaggle.json` file
+
+ - make sure this file is located in `~/.kaggle`
+
+ ```bash
+
+ # create data directory
+
+ mkdir jigsaw_data
+ cd jigsaw_data
+
+ # download data
+
+ kaggle competitions download -c jigsaw-toxic-comment-classification-challenge
+
+ kaggle competitions download -c jigsaw-unintended-bias-in-toxicity-classification
+
+ kaggle competitions download -c jigsaw-multilingual-toxic-comment-classification
+
+ ```
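+
+ The Kaggle CLI saves each competition as a zip archive; a minimal sketch for unpacking them, assuming the default `<competition>.zip` file names (adjust the target paths to wherever the training configs expect the data):
+
+ ```python
+ import zipfile
+ from pathlib import Path
+
+ # extract each downloaded archive into a folder named after the competition
+ for archive in Path(".").glob("jigsaw-*.zip"):
+     with zipfile.ZipFile(archive) as zf:
+         zf.extractall(archive.stem)
+ ```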
+ ## Start Training
+ ### Toxic Comment Classification Challenge
+
+ ```bash
+
+ python create_val_set.py
+
+ python train.py --config configs/Toxic_comment_classification_BERT.json
+ ```
+ ### Unintended Bias in Toxicity Challenge
+
+ ```bash
+
+ python train.py --config configs/Unintended_bias_toxic_comment_classification_RoBERTa.json
+
+ ```
+ ### Multilingual Toxic Comment Classification
+
+ This model is trained in 2 stages: first on all of the available data, and then only on the translated versions of the first challenge.
+
+ The [translated data](https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api) can be downloaded from Kaggle in French, Spanish, Italian, Portuguese, Turkish, and Russian (the languages available in the test set).
+
+ ```bash
+
+ # stage 1
+
+ python train.py --config configs/Multilingual_toxic_comment_classification_XLMR.json
+
+ # stage 2
+
+ python train.py --config configs/Multilingual_toxic_comment_classification_XLMR_stage2.json
+
+ ```
+ ### Monitor progress with TensorBoard
+
+ ```bash
+
+ tensorboard --logdir=./saved
+
+ ```
+ ## Model Evaluation
+
+ ### Toxic Comment Classification Challenge
+
+ This challenge is evaluated on the mean AUC score of all the labels.
+
+ ```bash
+
+ python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
+
+ ```
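+
+ For reference, the challenge metric is simply the column-wise ROC AUC averaged over the six labels. A minimal sketch, assuming `y_true` and `y_pred` are arrays of shape `(n_samples, n_labels)` (hypothetical names, not part of the repo):
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_auc_score
+
+ # mean ROC AUC across the label columns
+ mean_auc = np.mean([
+     roc_auc_score(y_true[:, i], y_pred[:, i])
+     for i in range(y_true.shape[1])
+ ])
+ print(mean_auc)
+ ```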
+ ### Unintended Bias in Toxicity Challenge
+
+ This challenge is evaluated on a novel bias metric that combines different AUC scores to balance overall performance with unintended bias. More information on this metric can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation).
+
+ ```bash
+
+ python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
+
+ # to get the final bias metric
+ python model_eval/compute_bias_metric.py
+
+ ```
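+
+ The repo's `model_eval/compute_bias_metric.py` handles this for you; the sketch below only illustrates how the final score is put together (overall AUC plus generalized power means of the subgroup, BPSN and BNSP AUCs, each weighted 0.25 with power -5, as described on the Kaggle evaluation page). Column names such as `prediction` are hypothetical:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ from sklearn.metrics import roc_auc_score
+
+ POWER, WEIGHT = -5, 0.25  # values used in the Kaggle evaluation
+
+ def power_mean(values, p=POWER):
+     return np.power(np.mean(np.power(values, p)), 1 / p)
+
+ def bias_aucs(df, identity, label="toxicity", pred="prediction"):
+     subgroup = df[identity] >= 0.5
+     positive = df[label] >= 0.5
+     masks = {
+         # AUC restricted to comments mentioning the identity
+         "subgroup": subgroup,
+         # background positives + subgroup negatives
+         "bpsn": (subgroup & ~positive) | (~subgroup & positive),
+         # background negatives + subgroup positives
+         "bnsp": (subgroup & positive) | (~subgroup & ~positive),
+     }
+     return {name: roc_auc_score(positive[mask], df.loc[mask, pred]) for name, mask in masks.items()}
+
+ def final_bias_metric(df, identities):
+     per_identity = pd.DataFrame([bias_aucs(df, identity) for identity in identities])
+     overall = roc_auc_score(df["toxicity"] >= 0.5, df["prediction"])
+     return WEIGHT * overall + sum(
+         WEIGHT * power_mean(per_identity[col]) for col in ("subgroup", "bpsn", "bnsp")
+     )
+ ```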
+ ### Multilingual Toxic Comment Classification
+
+ This challenge is evaluated on the AUC score of the main toxicity label.
+
+ ```bash
+
+ python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
+
+ ```
+
+ ### Citation
+ ```
+ @misc{Detoxify,
+   title={Detoxify},
+   author={Hanu, Laura and {Unitary team}},
+   howpublished={Github. https://github.com/unitaryai/detoxify},
+   year={2020}
+ }
+ ```