---
license: afl-3.0
---

# ParaDetox: Detoxification with Parallel Data

This repository contains information about the ParaDetox dataset, the first parallel corpus for the detoxification task, as well as models and an evaluation methodology for the detoxification of English texts. The original paper, ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/), was presented at the ACL 2022 main conference.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform in three steps:
* *Task 1:* **Generation of Paraphrases**: the first crowdsourcing task asks workers to eliminate toxicity in a given sentence while keeping its content.
* *Task 2:* **Content Preservation Check**: we show workers the generated paraphrases along with the original sentences and ask them to indicate whether the two have close meanings.
* *Task 3:* **Toxicity Check**: finally, we check whether the workers succeeded in removing toxicity.

The whole pipeline is illustrated in the following scheme:

![](https://github.com/skoltech-nlp/paradetox/blob/main/img/generation_pipeline_blue.jpg)

All these steps ensure high data quality and keep the collection process automated. For more details, please refer to the original paper.

## ParaDetox Dataset

As a result, we obtained paraphrases for 11,939 toxic sentences (on average, 1.66 paraphrases per sentence), for a total of 19,766 paraphrases. The whole dataset can be found [here](https://github.com/skoltech-nlp/paradetox/blob/main/paradetox/paradetox.tsv). Examples of samples from the ParaDetox dataset:

![](https://github.com/skoltech-nlp/paradetox/blob/main/img/paraphrase_example.png)

In addition to the full ParaDetox dataset, we also release the [samples](https://github.com/skoltech-nlp/paradetox/blob/main/paradetox/paradetox_cannot_rewrite.tsv) that annotators marked as "cannot rewrite" in *Task 1* of the crowdsourcing pipeline.
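
The released file is a plain TSV, so it can be inspected with the standard library alone. A minimal sketch (the column names `toxic` and `neutral` are our assumption here; check the actual header of `paradetox.tsv` before relying on them):

```python
import csv
import io

# Toy stand-in for paradetox.tsv; hypothetical two-column layout
# (toxic source sentence, detoxified paraphrase).
sample_tsv = (
    "toxic\tneutral\n"
    "sentence A\tparaphrase A1\n"
    "sentence A\tparaphrase A2\n"
    "sentence B\tparaphrase B1\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
toxic_sources = {row["toxic"] for row in rows}

# One toxic sentence can have several paraphrases; over the full corpus
# this ratio is the "1.66 paraphrases per sentence" reported above.
paraphrases_per_sentence = len(rows) / len(toxic_sources)
print(paraphrases_per_sentence)  # 1.5 on this toy sample
```

Reading the real file works the same way, with `open("paradetox.tsv")` in place of the in-memory sample.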

## Detoxification Evaluation

The automatic evaluation of the models is based on three parameters:
* *style transfer accuracy* (**STA**): the percentage of non-toxic outputs identified by a style classifier. We pretrained a toxicity classifier on Jigsaw data and released it in a HuggingFace🤗 [repo](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier).
* *content preservation* (**SIM**): the cosine similarity between the embeddings of the original text and the output, computed with the model of [Wieting et al. (2019)](https://aclanthology.org/P19-1427/).
* *fluency* (**FL**): the percentage of fluent sentences identified by a RoBERTa-based classifier of linguistic acceptability trained on the [CoLA dataset](https://nyu-mll.github.io/CoLA/).
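
Each score is a simple aggregate over per-sentence classifier verdicts or embeddings. A minimal illustration of how the three numbers are formed (function names and inputs are ours, not the official evaluation code, which also handles batching and the actual classifiers):

```python
import math

def sta(nontoxic_flags):
    # STA: fraction of outputs the toxicity classifier labels non-toxic.
    return sum(nontoxic_flags) / len(nontoxic_flags)

def sim(emb_src, emb_out):
    # SIM: cosine similarity between source and output sentence embeddings
    # (plain lists here; the paper uses Wieting et al. (2019) embeddings).
    dot = sum(a * b for a, b in zip(emb_src, emb_out))
    norm = math.sqrt(sum(a * a for a in emb_src)) * math.sqrt(sum(b * b for b in emb_out))
    return dot / norm

def fl(fluent_flags):
    # FL: fraction of outputs the CoLA-based classifier labels fluent.
    return sum(fluent_flags) / len(fluent_flags)

print(sta([1, 1, 0, 1]))           # 0.75
print(sim([1.0, 2.0], [2.0, 4.0])) # 1.0 (parallel vectors)
```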

All the code used in our experiments to evaluate different detoxification models can be run via this Colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xTqbx7IPF8bVL2bDCfQSDarA43mIPefE?usp=sharing)

## Detoxification model

We released a **new SOTA** for the detoxification task -- a BART (base) model trained on the ParaDetox dataset -- in a HuggingFace🤗 repository [here](https://huggingface.co/SkolkovoInstitute/bart-base-detox).
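
A minimal usage sketch for the released checkpoint, assuming the `transformers` library is installed (the wrapper function `detoxify` and its defaults are ours; see the model card for the authoritative usage, and note the authors' models have also appeared under the `s-nlp` namespace on the Hub):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def detoxify(text: str, model_name: str = "SkolkovoInstitute/bart-base-detox") -> str:
    # Load the tokenizer and seq2seq model, then rewrite `text`
    # as a non-toxic paraphrase via standard generation.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `detoxify(...)` downloads the checkpoint from the Hub on first use.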

You can also check out our [demo](https://detoxifier.nlp.zhores.net/junction/).

## Citation

```
@inproceedings{logacheva-etal-2022-paradetox,
    title = "{P}ara{D}etox: Detoxification with Parallel Data",
    author = "Logacheva, Varvara  and
      Dementieva, Daryna  and
      Ustyantsev, Sergey  and
      Moskovskiy, Daniil  and
      Dale, David  and
      Krotova, Irina  and
      Semenov, Nikita  and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.469",
    pages = "6804--6818",
    abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```

## Contacts

If you find an issue, do not hesitate to report it via [GitHub Issues](https://github.com/skoltech-nlp/paradetox/issues).

For any questions, please contact Daryna Dementieva (daryna.dementieva@skoltech.ru).