# ParaDetox: Detoxification with Parallel Data (English)
 
This repository contains information about the ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models and an evaluation methodology for the detoxification of English texts. The original paper, ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/), was presented at the ACL 2022 main conference.
 
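Since the corpus is hosted on the HuggingFace Hub, it can be pulled with the `datasets` library. A minimal loading sketch; the repository id `s-nlp/paradetox` and the `train` split are assumptions here, so check the dataset card for the exact identifiers:

```python
from datasets import load_dataset

# Repository id is an assumption; see the dataset card for the exact one.
paradetox = load_dataset("s-nlp/paradetox")

# Inspect the splits and one toxic/detoxified pair
# (column names vary, so print the raw row).
print(paradetox)
print(paradetox["train"][0])
```
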
## ParaDetox Collection Pipeline
 
## Detoxification evaluation

The automatic evaluation of the models was based on three parameters:
* *style transfer accuracy* (**STA**): the percentage of non-toxic outputs identified by a style classifier. We pretrained a toxicity classifier on Jigsaw data and released it in the HuggingFace🤗 [repo](https://huggingface.co/s-nlp/roberta_toxicity_classifier); see the usage sketch after this list.
35
  * *content preservation* (**SIM**): cosine similarity between the embeddings of the original text and the output computed with the model of [Wieting et al. (2019)](https://aclanthology.org/P19-1427/).
36
- * *fluency* (**FL**): percentage of fluent sentences identified by a RoBERTa-based classifier of linguistic acceptability trained on the [CoLA dataset](https://nyu-mll.github.io/CoLA/).
37
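
As an illustration, STA can be computed by running the released toxicity classifier over model outputs. A minimal sketch with the `transformers` pipeline; the `neutral`/`toxic` label names are assumptions here, so check the model card:

```python
from transformers import pipeline

# The RoBERTa toxicity classifier released on the Hub.
clf = pipeline("text-classification", model="s-nlp/roberta_toxicity_classifier")

# Hypothetical detoxification outputs to score.
outputs = [
    "He is not the smartest person.",
    "Shut up, you complete idiot.",
]
preds = clf(outputs)

# STA = fraction of outputs classified as non-toxic
# (label names assumed to be "neutral"/"toxic").
sta = sum(p["label"] == "neutral" for p in preds) / len(preds)
print(preds)
print(f"STA: {sta:.2f}")
```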
 
All the code used in our experiments to evaluate different detoxification models can be run via this Colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xTqbx7IPF8bVL2bDCfQSDarA43mIPefE?usp=sharing)
 
## Detoxification model
The at-the-time SOTA for the English text detoxification task -- a BART (base) model trained on the ParaDetox dataset -- is released in the HuggingFace🤗 [repo](https://huggingface.co/s-nlp/bart-base-detox).
 
You can also check out our [web demo](https://detoxifier.nlp.zhores.net/junction/).
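
A minimal generation sketch with `transformers`, assuming the checkpoint follows the standard BART sequence-to-sequence interface; the decoding parameters below are illustrative, not necessarily the paper's exact setup:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "s-nlp/bart-base-detox"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

toxic = "you are a complete idiot if you believe that"
inputs = tokenizer(toxic, return_tensors="pt")

# Beam search is an illustrative decoding choice.
ids = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```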
 
## Citation
 
## Contacts
 
If you find an issue, do not hesitate to report it on [GitHub Issues](https://github.com/s-nlp/paradetox/issues).
 
For any questions, or to obtain the TEST SET, please contact Daryna Dementieva (dardem96@gmail.com), Daniil Moskovskiy (Daniil.Moskovskiy@skoltech.ru), or Alexander Panchenko (a.panchenko@skol.tech).