---
license: apache-2.0
datasets:
- grammarly/pseudonymization-data
- cnn_dailymail
- imdb
language:
- en
metrics:
- f1
- bleu
pipeline_tag: text2text-generation
---

# Model Card for Seq2Seq Pseudonymization Models

<!-- Provide a quick summary of what the model is/does. -->
This repository contains the files for two Transformer-based Seq2Seq models used in our paper: https://aclanthology.org/2023.trustnlp-1.20/.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Oleksandr Yermilov, Vipul Raheja, Artem Chernodub
- **Model type:** Seq2Seq
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** BART

### Model Sources

- **Paper:** https://aclanthology.org/2023.trustnlp-1.20/

## Uses

These models can be used for anonymizing English-language datasets.
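
A minimal inference sketch using the 🤗 Transformers `text2text-generation` pipeline; the model ID below is a placeholder for the checkpoint in this repository, not its actual name.

```python
from transformers import pipeline

# "grammarly/pseudonymization-seq2seq" is a placeholder model ID; replace it with
# the actual checkpoint ID of this repository.
pseudonymizer = pipeline("text2text-generation", model="grammarly/pseudonymization-seq2seq")

text = "John Smith flew from New York to Paris last Monday."
output = pseudonymizer(text, max_length=128)
print(output[0]["generated_text"])  # text with named entities replaced by pseudonyms
```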

## Bias, Risks, and Limitations

Please check the Limitations section in our paper.


## Training Details

### Training Data

https://huggingface.co/datasets/grammarly/pseudonymization-data/tree/main/seq2seq

### Training Procedure 

1. Gather text data from Wikipedia.
2. Preprocess it using NER-based pseudonymization (a rough sketch of this step follows the list).
3. Fine-tune a BART model on the translation task of mapping "original" text to its "pseudonymized" counterpart.
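
The sketch below illustrates the NER-based pseudonymization step, assuming spaCy's `en_core_web_sm` model and a toy pseudonym pool; the actual preprocessing in the paper may rely on different NER models and replacement strategies.

```python
import spacy

# Illustrative only: the entity labels and pseudonym pools below are assumptions.
nlp = spacy.load("en_core_web_sm")
PSEUDONYMS = {
    "PERSON": ["Alex Johnson", "Maria Lopez"],
    "GPE": ["Springfield", "Riverton"],
    "ORG": ["Acme Corp", "Globex"],
}

def pseudonymize(text: str) -> str:
    """Replace recognized named entities with pseudonyms from a fixed pool."""
    doc = nlp(text)
    pieces, last = [], 0
    for i, ent in enumerate(doc.ents):
        if ent.label_ in PSEUDONYMS:
            pieces.append(text[last:ent.start_char])
            pieces.append(PSEUDONYMS[ent.label_][i % len(PSEUDONYMS[ent.label_])])
            last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

# Each training example pairs the original sentence with its pseudonymized version.
original = "Barack Obama visited Kyiv in 2023."
print({"original": original, "pseudonymized": pseudonymize(original)})
```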


#### Training Hyperparameters

We train the models for 3 epochs using the `AdamW` optimizer with a learning rate of α = 2 × 10<sup>-5</sup> and a batch size of 8.
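
A hedged sketch of how these hyperparameters map onto a 🤗 Transformers training configuration; the output directory is arbitrary, and the exact BART variant and data wiring are not specified here.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from above expressed as Trainer arguments; the Trainer uses AdamW
# by default. "bart-pseudonymization" is just a placeholder output directory.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-pseudonymization",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    predict_with_generate=True,
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized "original" → "pseudonymized" pairs.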

## Evaluation

### Factors & Metrics

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

There is no ground truth for named entities in the data on which this model was trained. Instead, we check whether a word is a named entity using one of the NER systems (spaCy or FLAIR).

#### Metrics


We measure how much of the text is changed by our model. Specifically, we compare the translated text to the original word by word and assign each word to one of the following categories:
1. True positive (TP): a named entity that was changed to another named entity.
2. True negative (TN): a word that is not a named entity and was not changed.
3. False positive (FP): a word that is not a named entity but was changed to another word.
4. False negative (FN): a named entity that was not changed to another named entity.

We calculate the F<sub>1</sub> score from these values.
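
A simplified sketch of this word-level evaluation, assuming spaCy as the reference NER system and a naive whitespace alignment between the original and translated text; the exact alignment used in the paper may differ.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: spaCy as the reference NER system

def word_change_f1(original: str, translated: str) -> float:
    """Count TP/TN/FP/FN over whitespace-aligned words and return the F1 score."""
    entity_words = {t.text for t in nlp(original) if t.ent_type_}
    tp = tn = fp = fn = 0
    for orig_word, new_word in zip(original.split(), translated.split()):
        is_entity = orig_word in entity_words
        changed = orig_word != new_word
        if is_entity and changed:
            tp += 1      # named entity changed to another word
        elif not is_entity and not changed:
            tn += 1      # non-entity left untouched
        elif not is_entity and changed:
            fp += 1      # non-entity changed to another word
        else:
            fn += 1      # named entity left unchanged
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```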


## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@inproceedings{yermilov-etal-2023-privacy,
    title = "Privacy- and Utility-Preserving {NLP} with Anonymized data: A case study of Pseudonymization",
    author = "Yermilov, Oleksandr  and
      Raheja, Vipul  and
      Chernodub, Artem",
    booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.trustnlp-1.20",
    doi = "10.18653/v1/2023.trustnlp-1.20",
    pages = "232--241",
    abstract = "This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques better to balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.",
}
```

## Model Card Contact

Oleksandr Yermilov (oleksandr.yermilov@ucu.edu.ua).