<div align="center">

<h1>Deepfake Text Detection in the Wild</h1>

</div>

<div align="center">

<img src="https://img.shields.io/badge/Version-1.0.0-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/License-CC%20BY%204.0-green.svg" alt="License">
<img src="https://img.shields.io/github/stars/yafuly/DeepfakeTextDetect?color=yellow" alt="Stars">
<img src="https://img.shields.io/github/issues/yafuly/DeepfakeTextDetect?color=red" alt="Issues">

<!-- **Authors:** -->
<br>

_**Yafu Li<sup>†</sup><sup>‡</sup>, Qintong Li<sup>§</sup>, Leyang Cui<sup>¶</sup>, Wei Bi<sup>¶</sup>,<br>**_

_**Longyue Wang<sup>¶</sup>, Linyi Yang<sup>‡</sup>, Shuming Shi<sup>¶</sup>, Yue Zhang<sup>‡</sup><br>**_

<!-- **Affiliations:** -->

_<sup>†</sup> Zhejiang University,
<sup>‡</sup> Westlake University,
<sup>§</sup> The University of Hong Kong,
<sup>¶</sup> Tencent AI Lab_

A comprehensive benchmark dataset designed to assess the proficiency of deepfake text detectors in real-world scenarios.

</div>

## 📌 Table of Contents
- [Introduction](#-introduction)
- [Activities](#-activities)
- [Dataset Description](#-dataset-description)
- [Try Detection](#computer--try-detection)
- [How to Get the Data](#-how-to-get-the-data)
- [Citation](#-citation)
<!-- - [Contributing](#-contributing) -->

## 🚀 Introduction
Recent advances in large language models (LLMs) have enabled them to generate text at a level comparable to that of humans.
These models show powerful capabilities across a wide range of content types, including news article writing, story generation, and scientific writing.
Such capability further narrows the gap between human-authored and machine-generated text, underscoring the importance of deepfake text detection to avoid risks such as fake-news propagation and plagiarism.
In practical scenarios, however, a detector faces texts from various domains or LLMs without knowing their sources.

To this end, we build **a wild testbed for deepfake text detection** by gathering texts from various human writings and deepfake texts generated by different LLMs.
This repository contains the data for evaluating the deepfake detection methods described in our paper, [Deepfake Text Detection in the Wild](https://arxiv.org/abs/2305.13242).
Welcome to test your detection methods on our testbed!

## 📅 Activities

- **May 25, 2023**: Initial dataset release, including texts from 10 domains and 27 LLMs, contributing to 6 testbeds with increasing detection difficulty.
- 🎉 **June 19, 2023**: Two 'wilder' testbeds added! We go one step wilder by constructing an additional test set with texts from unseen domains generated by an unseen model, to evaluate detection ability in more practical scenarios.
We consider four new datasets, CNN/DailyMail, DialogSum, PubMedQA and IMDb, to test the detection of deepfake news, deepfake dialogues, deepfake scientific answers and deepfake movie reviews.
We sample 200 instances from each dataset and use a newly developed LLM, i.e., GPT-4, with specially designed prompts to create deepfake texts, establishing an "Unseen Domains & Unseen Model" scenario.
Previous work demonstrates that detection methods are vulnerable to being deceived by paraphrased texts.
Therefore, we also paraphrase each sentence individually, for both human-written and machine-generated texts, forming an even more challenging testbed.
We adopt gpt-3.5-turbo as the zero-shot paraphraser and consider all paraphrased texts as machine-generated.

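The sentence-level paraphrasing step described above can be sketched as follows. This is a minimal illustration with a stand-in paraphraser callable: the actual testbed uses gpt-3.5-turbo as the zero-shot paraphraser, and `paraphrase_text` is a hypothetical helper, not part of the released code.

```python
import re

def paraphrase_text(text, paraphrase_fn):
    """Paraphrase a text sentence by sentence, as in the wilder testbed.

    `paraphrase_fn` stands in for the zero-shot LLM paraphraser
    (gpt-3.5-turbo in the paper); here it is any str -> str callable.
    """
    # Naive sentence splitter; the real pipeline may segment differently.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(paraphrase_fn(s) for s in sentences if s)

# Toy stand-in "paraphraser" for illustration only.
demo = paraphrase_text("The cat sat. It purred.", lambda s: s.lower())
# All paraphrased outputs are labeled machine-generated in the testbed.
```
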
## 📝 Dataset Description

The dataset consists of **447,674** human-written and machine-generated texts from a wide range of sources in the wild:

- Human-written texts from **10 datasets** covering a wide range of writing tasks, e.g., news article writing, story generation, and scientific writing.
- Machine-generated texts produced by **27 mainstream LLMs** from 7 sources, e.g., OpenAI, LLaMA, and EleutherAI.
- **6 systematic testbeds** with increasing wildness and detection difficulty.
- **2 wilder test sets**: (1) texts collected from new datasets and generated by GPT-4; (2) paraphrased texts.

<!-- - **Size**: X GB
- **Number of Records**: X
- **Time Span**: YYYY-MM-DD to YYYY-MM-DD

Here's a brief overview of the types of data included:

1. **Feature 1**: Description of feature 1.
2. **Feature 2**: Description of feature 2.
3. **Feature 3**: Description of feature 3.
... -->

## :computer: Try Detection

### Model Access
Our Longformer detector, trained on the entire dataset, is accessible through [Huggingface](https://huggingface.co/nealcly/detection-longformer). Additionally, you can try detection directly using our [online demo](https://huggingface.co/spaces/yaful/DeepfakeTextDetect).

### Deployment
We have refined the decision threshold based on out-of-distribution settings. To ensure optimal performance, we recommend preprocessing texts before sending them to the detector.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from deployment import preprocess, detect

# Initialize the detector (use 'cuda:0' if a GPU is available).
device = 'cpu'
model_dir = "nealcly/detection-longformer"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir).to(device)

text = "..."  # the passage you want to classify

# Preprocess, then run detection.
text = preprocess(text)
result = detect(text, tokenizer, model, device)
```

### Detection Performance
#### In-distribution Detection
| Testbed | HumanRec | MachineRec | AvgRec | AUROC |
|---------|----------|------------|--------|-------|
| Domain-specific & Model-specific | 97.30% | 95.91% | 96.60% | 0.99 |
| Cross-domains & Model-specific | 95.25% | 96.94% | 96.10% | 0.99 |
| Domain-specific & Cross-models | 89.78% | 97.24% | 93.51% | 0.99 |
| Cross-domains & Cross-models | 82.80% | 98.27% | 90.53% | 0.99 |

#### Out-of-distribution Detection
| Testbed | HumanRec | MachineRec | AvgRec | AUROC |
|---------|----------|------------|--------|-------|
| Unseen Model Sets | 86.09% | 89.15% | 87.62% | 0.95 |
| Unseen Domains | 82.88% | 80.50% | 81.78% | 0.93 |

#### Wilder Testsets
| Testbed | HumanRec | MachineRec | AvgRec | AUROC |
|---------|----------|------------|--------|-------|
| Unseen Domains & Unseen Model | 88.78% | 84.12% | 86.54% | 0.94 |
| Paraphrase | 88.78% | 37.05% | 62.92% | 0.75 |

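In the performance tables, HumanRec and MachineRec are per-class recalls and AvgRec is their mean (e.g., 96.10% = (95.25% + 96.94%) / 2 in the cross-domains & model-specific row). A minimal sketch of computing these metrics, with a hypothetical helper that is not part of the released code:

```python
def per_class_recall(labels, preds, cls):
    """Fraction of class-`cls` examples the detector labeled correctly."""
    hits = sum(1 for y, p in zip(labels, preds) if y == cls and p == cls)
    total = sum(1 for y in labels if y == cls)
    return hits / total

# Toy predictions for illustration.
labels = ["human", "human", "human", "machine", "machine"]
preds  = ["human", "human", "machine", "machine", "machine"]

human_rec = per_class_recall(labels, preds, "human")      # 2 of 3 correct
machine_rec = per_class_recall(labels, preds, "machine")  # 2 of 2 correct
avg_rec = (human_rec + machine_rec) / 2
```
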
## 📥 How to Get the Data
### Download
The dataset is available for download at:
1. [Google Drive](https://drive.google.com/drive/folders/1p09vDiEvoA-ZPmpqkB2WApcwMQWiiMRl?usp=sharing)
2. [Tencent Weiyun](https://share.weiyun.com/JUWQxF4H)

The folder contains 4 packages:
1. source.zip: Source files of human-written texts and the corresponding texts generated by LLMs.
2. processed.zip: A refined version of "source" that filters out low-quality texts and encodes each text's source in the CSV file name. For example, "cmv_machine_specified_gpt-3.5-trubo.csv" contains texts from the CMV domain generated by the "gpt-3.5-trubo" model using specific prompts, while "cmv_human" contains human-written CMV texts. We suggest using this version to test your detection methods.
3. testbeds_processed.zip: 6 testbeds based on the "processed" version, which can be used directly to measure in-distribution and out-of-distribution detection performance.
4. wilder_testsets.zip: 2 wilder test sets with processed texts, aimed at (1) detecting deepfake text generated by GPT-4, and (2) detecting deepfake text in paraphrased versions.

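Since processed.zip encodes each file's domain, authorship label, and generator in its name, those fields can be recovered by splitting the file name. The helper below is hypothetical and the naming scheme is inferred from the example above, so verify it against the files you actually download:

```python
from pathlib import Path

def parse_name(csv_name):
    """Split a processed.zip file name such as
    'cmv_machine_specified_gpt-3.5-trubo.csv' into (domain, label, rest).

    Naming scheme inferred from the README example; verify against the
    downloaded files before relying on it.
    """
    parts = Path(csv_name).stem.split("_")
    domain, label = parts[0], parts[1]  # e.g. 'cmv', 'machine' or 'human'
    rest = "_".join(parts[2:])          # e.g. 'specified_gpt-3.5-trubo'
    return domain, label, rest

info = parse_name("cmv_machine_specified_gpt-3.5-trubo.csv")
# → ('cmv', 'machine', 'specified_gpt-3.5-trubo')
```
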
<!-- # 🤝 Contributing -->