zaitang committed on
Commit 4d22a83 · verified · 1 Parent(s): cb8fa1a

Update index.html

Files changed (1)
  1. index.html +20 -30
index.html CHANGED
@@ -3,10 +3,11 @@
 <head>
 <meta charset="utf-8">
 <meta name="description"
- content="Demo Page of BEYOND ICML 2024.">
- <meta name="keywords" content="BEYOND, Adversarial Examples, Adversarial Detection">
+ content="Demo Page of GREAT Score Neurips 2024.">
+ <meta name="keywords" content="GREAT Score, Adversarial robustness, Generative models">
 <meta name="viewport" content="width=device-width, initial-scale=1">
- <title>Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning</title>
+ <title>GREAT Score: Global Robustness Evaluation of
+ Adversarial Perturbation using Generative Models</title>
 
 <link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
 rel="stylesheet">
@@ -84,22 +85,16 @@
 <h1 class="title is-1 publication-title">Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning</h1>
 <div class="is-size-5 publication-authors">
 <span class="author-block">
- <a href="#" target="_blank">Zhiyuan He</a><sup>1*</sup>,</span>
- <span class="author-block">
- <a href="https://yangyijune.github.io/" target="_blank">Yijun Yang</a><sup>1*</sup>,</span>
+ <a href="#" target="_blank">ZAITANG LI</a><sup>1</sup>,</span>
 <span class="author-block">
 <a href="https://sites.google.com/site/pinyuchenpage/home" target="_blank">Pin-Yu Chen</a><sup>2</sup>,
 </span>
- <span class="author-block">
- <a href="https://cure-lab.github.io/" target="_blank">Qiang Xu</a><sup>1</sup>,
- </span>
 <span class="author-block">
 <a href="https://tsungyiho.github.io/" target="_blank">Tsung-Yi Ho</a><sup>1</sup>,
 </span>
 </div>
 
 <div class="is-size-5 publication-authors">
- <span class="author-block"><sup>*</sup>Equal contribution,</span>
 <span class="author-block"><sup>1</sup>The Chinese University of Hong Kong,</span>
 <span class="author-block"><sup>2</sup>IBM Research</span>
 </div>
@@ -108,7 +103,7 @@
 <div class="publication-links">
 <!-- PDF Link. -->
 <span class="link-block">
- <a href="https://arxiv.org/abs/2209.00005" target="_blank"
+ <a href="https://arxiv.org/abs/2304.09875" target="_blank"
 class="external-link button is-normal is-rounded is-dark">
 <span class="icon">
 <i class="fas fa-file-pdf"></i>
@@ -117,7 +112,7 @@
 </a>
 </span>
 <span class="link-block">
- <a href="https://arxiv.org/abs/2209.00005" target="_blank"
+ <a href="https://arxiv.org/abs/2304.09875" target="_blank"
 class="external-link button is-normal is-rounded is-dark">
 <span class="icon">
 <i class="ai ai-arxiv"></i>
@@ -180,19 +175,14 @@
 <h2 class="title is-3">Abstract</h2>
 <div class="content has-text-justified">
 <p>
- Deep Neural Networks (DNNs) have achieved excellent performance in various fields. However, DNNs’ vulnerability to
- Adversarial Examples (AE) hinders their deployments to safety-critical applications. In this paper, we present <strong>BEYOND</strong>,
- an innovative AE detection framework designed for reliable predictions. BEYOND identifies AEs by distinguishing the AE’s
- abnormal relation with its augmented versions, i.e. neighbors, from two prospects: representation similarity and label
- consistency. An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the
- label for its highly informative representation capacity compared to supervised learning models. We found clean samples
- maintain a high degree of representation similarity and label consistency relative to their neighbors, in contrast to AEs
- which exhibit significant discrepancies. We explain this observation and show that leveraging this discrepancy BEYOND can
- accurately detect AEs. Additionally, we develop a rigorous justification for the effectiveness of BEYOND. Furthermore, as a
- plug-and-play model, BEYOND can easily cooperate with the Adversarial Trained Classifier (ATC), achieving state-of-the-art
- (SOTA) robustness accuracy. Experimental results show that BEYOND outperforms baselines by a large margin, especially under
- adaptive attacks. Empowered by the robust relationship built on SSL, we found that BEYOND outperforms baselines in terms
- of both detection ability and speed.
+ Current studies on adversarial robustness mainly focus on aggregating <i>local</i> robustness results from a set of data samples to evaluate and rank different models. However, the local statistics may not well represent the true <i>global</i> robustness of the underlying unknown data distribution. To address this challenge, this paper makes the first attempt to present a new framework, called <strong>GREAT Score</strong>, for global robustness evaluation of adversarial perturbation using generative models. Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee on the sample complexity and the difference between the sample mean and the true mean. GREAT Score has several advantages: (1) Robustness evaluations using GREAT Score are efficient and scalable to large models, by sparing the need of running adversarial attacks. In particular, we show high correlation and significantly reduced computation cost of GREAT Score when compared to the attack-based model ranking on RobustBench<sup>1</sup>. (2) The use of generative models facilitates the approximation of the unknown data distribution. In our ablation study with different generative adversarial networks (GANs), we observe consistency between global robustness evaluation and the quality of GANs. (3) GREAT Score can be used for remote auditing of privacy-sensitive black-box models, as demonstrated by our robustness evaluation on several online facial recognition services.
+ </p>
+ </div>
+
+ <!-- References -->
+ <div class="content">
+ <p>
+ <sup>1</sup> Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., & Hein, M. (2021). RobustBench: a standardized adversarial robustness benchmark. In <i>Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)</i>. <a href="https://openreview.net/forum?id=SSKZPJCt7B" target="_blank">https://openreview.net/forum?id=SSKZPJCt7B</a>
 </p>
 </div>
 </div>
@@ -492,10 +482,10 @@
 <section class="section" id="BibTeX">
 <div class="container is-max-desktop content">
 <h2 class="title">BibTeX</h2>
- <pre><code>@article{he2024beyond,
- author = {Zhiyuan, He and Yijun, Yang and Pin-Yu, Chen and Qiang, Xu and Tsung-Yi, Ho},
- title = {Be your own neighborhood: Detecting adversarial example by the neighborhood relations built on self-supervised learning},
- journal = {ICML},
+ <pre><code>@article{li2024greatscore,
+ title = {GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models},
+ author = {Zaitang, Li and Pin-Yu, Chen and Tsung-Yi, Ho},
+ journal = {NeurIPS},
 year = {2024},
 }</code></pre>
 </div>
@@ -535,4 +525,4 @@
 </footer>
 
 </body>
- </html>
+ </html>
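
Note on the updated abstract: it describes GREAT Score as a global statistic, i.e. a mean certified attack-proof perturbation level over samples drawn from a generative model, estimated from a finite sample with a probabilistic guarantee on the deviation of the sample mean from the true mean. The sketch below is illustrative only and is not taken from the paper or this repository: fake_generator, fake_classifier, local_score, and global_score_estimate are hypothetical stand-ins, the per-sample margin score is a placeholder rather than the paper's certified score, and the deviation term uses a generic Hoeffding bound for scores assumed to lie in [0, 1].

# Illustrative sketch only: estimate a "global robustness" statistic as a
# sample mean over generated inputs, plus a Hoeffding-style confidence bound.
# The per-sample score below is a placeholder softmax margin, NOT the exact
# certified score defined in the GREAT Score paper.
import numpy as np

rng = np.random.default_rng(0)

def fake_generator(z):
    """Stand-in for a GAN generator G(z); returns a flat 'image' vector."""
    return np.tanh(z @ rng.standard_normal((z.shape[-1], 32)))

def fake_classifier(x):
    """Stand-in for a classifier under evaluation; returns class probabilities."""
    logits = x @ rng.standard_normal((x.shape[-1], 10))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def local_score(probs):
    """Placeholder per-sample score: top-1 minus runner-up probability,
    clipped at 0. Bounded in [0, 1], which the Hoeffding bound assumes."""
    top2 = np.sort(probs, axis=-1)[..., -2:]
    return np.clip(top2[..., 1] - top2[..., 0], 0.0, None)

def global_score_estimate(n_samples=2000, latent_dim=64, delta=0.05, bound=1.0):
    z = rng.standard_normal((n_samples, latent_dim))    # z ~ N(0, I)
    probs = fake_classifier(fake_generator(z))           # classify generated samples
    scores = local_score(probs)
    mean = scores.mean()
    # Hoeffding: with probability >= 1 - delta, |sample mean - true mean| <= eps
    eps = bound * np.sqrt(np.log(2.0 / delta) / (2.0 * n_samples))
    return mean, eps

mean, eps = global_score_estimate()
print(f"estimated global score: {mean:.4f} ± {eps:.4f} (95% confidence)")

The point of the sketch is the estimation pattern the abstract alludes to (sample from the generator, score each sample without running attacks, average, and attach a finite-sample confidence interval), not the exact scoring rule.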