# Understanding Metric Learning On Unit Hypersphere And Generating Better Examples For Adversarial Training

Anonymous authors
Paper under double-blind review

## Abstract

Recent works have shown that adversarial examples can improve the performance of representation learning tasks. In this paper, we boost the performance of deep metric learning (DML) models with adversarial examples generated by attacking two new objective functions: *intra-class alignment* and *hyperspherical uniformity*. These two objectives are motivated by our theoretical and empirical analysis of tuple-based metric losses on the hyperspherical embedding space. Our analytical results reveal that a) the metric losses on positive sample pairs are related to intra-class alignment; b) the metric losses on negative sample pairs serve as a uniformity regularization on the hypersphere. Based on this new understanding of DML models, we propose Adversarial Deep Metric Learning models with adversarial samples generated by the Alignment or Uniformity objective (ADML+A or ADML+U). With the same network structure and training settings, ADML+A and ADML+U consistently outperform the vanilla DML models and the baseline model, an adversarial DML model that attacks the triplet objective function, on four metric learning benchmark datasets.
## 1 Introduction

Deep metric learning (DML) has been applied to various computer vision tasks ranging from face recognition (Schroff et al., 2015; Liu et al., 2017) to zero-shot learning (Romera-Paredes & Torr, 2015; Bucher et al., 2016) and image retrieval (Song et al., 2016; Wu et al., 2017). It has proven to be one of the most effective methods for learning distance-preserving features of images. The intuition of DML is to pull the embeddings of positive image pairs together and push negative pairs apart, where the embedding function can be a deep neural network. Most of the metric losses in DML are tuple-based (Schroff et al., 2015; Song et al., 2016; Wu et al., 2017; Wang et al., 2019) or classification-based (Movshovitz-Attias et al., 2017; Kim et al., 2020; Boudiaf et al., 2020), and these different losses have been shown to achieve similar performance in recent reviews of DML (Roth et al., 2020; Musgrave et al., 2020). One common ground of existing DML models is that the embedding space is a unit hypersphere. It is widely known that achieving uniformity on the hypersphere can increase the generalization of models and preserve as much information as possible (Bachman et al., 2019; Liu et al., 2018; 2021; Hjelm et al., 2018); an objective function that leads to uniformity is called a uniformity regularization. Meanwhile, the downstream tasks in DML favor models with small intra-class alignment (Wu et al., 2017; Wang et al., 2019). In this work, we investigate these two properties, *intra-class alignment* and *hyperspherical uniformity*, for tuple-based metric losses. We derive a theoretical analysis for the triplet loss to prove that the triplet loss on positive sample pairs minimizes the *intra-class alignment* by mapping all samples from one class to the same vector, while the triplet loss on negative sample pairs achieves *hyperspherical uniformity*. We further conduct empirical studies to show that the same statement also holds for other tuple-based metric losses.

We utilize our new understanding of DML to design novel robust DML methods that enhance performance via improved adversarial training. Adversarial training aims at improving the robustness of models toward certain types of attacks by training with perturbed samples. In parallel, as shown in recent work on classification tasks (Xie et al., 2020), adversarial training can also enhance the clean accuracy by improving model generalization. We believe it is also possible to improve the clean performance of metric learning models with adversarial samples. Following our new insights on positive and negative metric losses, we generate perturbations by attacking the alignment or uniformity objective, and create adversarial DML models trained with both normal and perturbed samples. Our experimental results show that the new adversarial DML models can significantly boost the clean performance.

The major contributions of our paper can be summarized as follows:

- We analyze the intra-class alignment and hyperspherical uniformity for tuple-based metric losses, and establish the connections between these two properties and the positive/negative metric losses.
- Based on our new analysis and understanding, we propose two new adversarial DML models, ADML+A and ADML+U, via attacking the alignment or uniformity objective. ADML+A and ADML+U improve the **clean performance** on benchmarks significantly.

Difference between our work and adversarial DML. We notice that there are some works (Panum et al., 2021; Mao et al., 2019) on the adversarial DML topic. These works focus on improving the robustness of DML against adversarial attacks. In our work, however, we use the adversarial samples to improve the clean performance rather than the robustness to adversarial attacks.
## 2 Related Works

Deep metric learning. There are mainly two kinds of metric losses in DML: tuple-based and classification-based losses. Tuple-based losses include the contrastive loss (Hadsell et al., 2006), triplet loss (Schroff et al., 2015), margin loss (Wu et al., 2017), and multi-similarity loss (Wang et al., 2019), where the objective function is based on the distance between positive pairs and negative pairs. In classification-based losses, the learning objective does not depend on the positive or negative pairs but on a fixed (Boudiaf et al., 2020) or learnable proxy (Kim et al., 2020). Recent reviews of metric learning methods (Roth et al., 2020; Musgrave et al., 2020) conclude that much of the reported improvement in DML performance is due to different training strategies and unfair comparisons; the original contrastive loss and triplet loss still achieve results comparable to other metric losses under the same network and training strategies. In our experiments we apply the training framework of Roth et al. (2020) to ensure a fair comparison.

Learning with hyperspherical uniformity. Hyperspherical learning regards learning tasks where the embedding space is a unit hypersphere. The uniformity of the hypersphere represents the diversity of vectors on the sphere: it encourages vectors to be spaced with angles as large as possible so that they can be evenly distributed on the hypersphere (Liu et al., 2018). Achieving hyperspherical uniformity can help prevent overfitting and improve the generalization of neural networks (Liu et al., 2021). Objective functions that lead to uniformity on the hypersphere are called uniformity regularizations.

Hyperspherical embedding is widely applied in representation learning tasks such as contrastive representation learning (Oord et al., 2018; Hjelm et al., 2018) and DML (Wu et al., 2017; Liu et al., 2017). Wang & Isola (2020) showed that the objective function in contrastive representation learning optimizes for intra-class alignment and uniformity together.

Adversarial examples improve clean performance. In classification tasks, it is well known that the clean accuracy of an adversarially trained model is typically worse than that of the normal model. However, Xie et al. (2020) showed that adversarial samples can be used to improve the clean accuracy of image classification models. According to Jiang et al. (2020), training with adversarial samples can help improve the clean performance of contrastive learning models on downstream classification tasks; the authors presented adversarial attacks based on the objective of contrastive learning, and their method achieved improvements in both clean and robust performance. It is believed that adversarial examples contain extra features, so the generalization of models augmented with adversarial examples is increased (Ilyas et al., 2019; Salman et al., 2020; Xie et al., 2020), which contributes to better clean performance.
## 3 Alignment And Hyperspherical Uniformity In Tuple-Based Metric Losses

In this section, we study tuple-based metric losses on the unit hypersphere embedding space. We assume there are $n$ classes $X_1, \cdots, X_n$ in the training set and each class $X_i$ contains the same number of samples. Denote by $f: \mathbb{R}^d \to S^{k-1}$ the encoder, where $S^{k-1}$ is the surface of a $k$-dimensional unit ball. Let $p_{data}(\cdot)$ be the data distribution over $\mathbb{R}^d$, $p_{pos}(\cdot,\cdot)$ the distribution of positive pairs over $\mathbb{R}^d \times \mathbb{R}^d$, and $p_{tri}(\cdot,\cdot,\cdot)$ the distribution of triplets over $\mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^d$, where the first two entries have the same label and the third entry is a sample from a different class. Please note that all detailed proofs are included in the supplementary material, Appendix E. We also conduct experiments to validate our theoretical analysis; the details are in Appendix C.

The major intuition of DML is to pull the representations of similar samples together and push dissimilar samples apart. Thus, we reformulate the metric losses as the combination of two parts:

- **Positive metric loss**: minimizes the distance between embedded positive sample pairs.
- **Negative metric loss**: maximizes the distance between embedded negative sample pairs.

Although the positive metric losses have different representations in different DML models, they share one common optimal solution pattern, where samples from the same class are mapped to the same feature vector. Thus, we define the alignment loss as minimizing the intra-class distance.
Definition 1. *(Intra-class alignment) The expectation of the intra-class distance is given by:*

$$\mathcal{L}_{alignment}(f;X,p_{pos}):=\mathbb{E}_{(x,y)\sim p_{pos}}\left[\|f(x)-f(y)\|_{2}^{2}\right]\tag{1}$$

The minimum of this loss is achieved when the samples with the same label are encoded to the same embedding.

Proposition 1. *If the support set of the data distribution is connected and the support set of each class distribution is closed, the minimum of $\mathcal{L}_{alignment}$ is reached when all samples are projected to the same vector.*
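To make Eq. (1) concrete, here is a minimal PyTorch-style sketch of a batched alignment loss; the tensor layout (one positive pair per batch index) is our own illustrative assumption, not the paper's released code:

```python
import torch

def alignment_loss(emb_x: torch.Tensor, emb_y: torch.Tensor) -> torch.Tensor:
    """Monte-Carlo estimate of Eq. (1): mean squared L2 distance over a
    batch of positive pairs (emb_x[i], emb_y[i]). Both inputs are
    (batch, dim) embeddings assumed to lie on the unit hypersphere."""
    return (emb_x - emb_y).pow(2).sum(dim=1).mean()
```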
In Section C.1, we conduct empirical studies to verify our analysis. Results in Table 7 show that samples are roughly projected to the same vector if only the positive metric losses are used.

The negative metric losses aim at positioning the embeddings of dissimilar samples as far apart as possible. However, because the embedding space of DML is a unit hypersphere, where the maximum distance between two points is 2, it is not possible to separate all negative embeddings with a large margin. In fact, on $S^{k-1}$ the number of points with pairwise distance greater than or equal to $\sqrt{2}$ is at most $2k$, and the embedding dimension $k$ is always much smaller than the number of feature vectors, so it is impossible to make all distances between negative pairs exceed $\sqrt{2}$. Therefore, investigating the properties of negative metric losses on the unit hypersphere is an interesting and important topic. We believe the negative metric losses are closely related to uniformity on the hypersphere, and our experimental results support this argument.

Definition 2. *(Hyperspherical uniformity) The embedded samples should be evenly distributed on the spherical surface.*
In practice, hyperspherical uniformity can be achieved by optimizing a uniformity regularization. There exist many different representations of the regularization. We utilize the hyperspherical energy (HE) (Liu et al., 2018):

$$E(s,X)=\begin{cases}\mathbb{E}_{x\sim p_{data},\,y\sim p_{data}}\left[\|f(x)-f(y)\|_{2}^{-s}\,1_{x\neq y}\right],&s>0,\\ \mathbb{E}_{x\sim p_{data},\,y\sim p_{data}}\left[\log\left(\|f(x)-f(y)\|_{2}^{-1}\,1_{x\neq y}\right)\right],&s=0,\end{cases}\tag{2}$$

where $\|f(x)-f(y)\|^{-s}$ is known as the Riesz s-kernel and $s$ controls the penalty on small distances between two feature vectors, and the Gaussian hyperspherical energy (G-HE) (Liu et al., 2018):

$$E_{G}(s,X)=\log\mathbb{E}_{x\sim p_{data},\,y\sim p_{data}}\left[e^{-s\|f(x)-f(y)\|_{2}^{2}}\right],\quad s>0\tag{3}$$

in the experiments for comparison. The values of HE and G-HE can also be used as measurements of the uniformity of the embedded samples: we expect the value to be small in order to achieve good hyperspherical uniformity. We also want to mention that simply maximizing the distance between samples will not lead to hyperspherical uniformity; the detailed discussion is in Section B.2.

Because finding the optimal solution of the HE or G-HE problem is NP-hard (Liu et al., 2018), we are not able to calculate the exact positions of vectors that are evenly distributed on the sphere. We provide a preliminary insight into how finite vectors should be uniformly distributed on the unit hypersphere, and our conclusion is consistent with the empirical results. Since there exist many different tuple-based metric losses, analyzing all of them theoretically is impossible in this work. In Section 3.1, we provide the theoretical analysis of the triplet loss. The analysis of the linear loss can be found in Section B.2. In Appendix C we show the empirical results on four popularly used tuple-based metric losses to verify our statement.
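As an illustration, the following sketch estimates the Gaussian hyperspherical energy of Eq. (3) from a batch of embeddings; it is a minimal Monte-Carlo version written for exposition, with names and defaults of our choosing:

```python
import torch

def gaussian_hyperspherical_energy(emb: torch.Tensor, s: float = 2.0) -> torch.Tensor:
    """Batch estimate of G-HE (Eq. 3): log E[exp(-s * ||f(x) - f(y)||^2)]
    over distinct pairs. `emb` is (batch, dim), assumed L2-normalized."""
    sq_dists = torch.cdist(emb, emb).pow(2)            # pairwise squared distances
    off_diag = ~torch.eye(emb.shape[0], dtype=torch.bool, device=emb.device)
    return torch.log(torch.exp(-s * sq_dists[off_diag]).mean())
```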
## 3.1 Triplet Metric Losses

In this subsection, we provide our theoretical analysis of the triplet metric losses under the following assumptions.

Assumption. Distributions $p_{data}$, $p_{pos}$, $p_{tri}$ should satisfy:

- Random positive sampling: $\forall x, y,\ p_{pos}(x,y) = p_{data}(x)\,p_{data}(y\,|\,X_x)$, where $p_{data}(\cdot\,|\,X_x)$ is the conditional pdf of $p_{data}$ on the set $X_x$ of samples similar to $x$, i.e. $p_{data}(\cdot\,|\,X_x) = \frac{p_{data}(\cdot)}{p_{data}(X_x)}$.
- Random negative sampling: $p_{tri}(x,y,x^{-}) = p_{pos}(x,y)\,p_{data}^{-}(x^{-})$, where $p_{data}^{-}(x^{-}) = \frac{p_{data}(x^{-})}{\int_{x^{-}} p_{data}(x^{-})\,dx^{-}}$.
- Class-balanced learning: $p_{data}(X_i) = \frac{1}{n}$, so that $\int_{x^{-}} p_{data}(x^{-})\,dx^{-} = \frac{n-1}{n}$,

where $x^{-}$ is a negative sample w.r.t. $x$ and $n$ is the number of classes.
Definition 3. *(Triplet loss)*

$$\mathcal{L}_{triplet}(f,\tau):=\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}\left[\left(\|f(x)-f(y)\|_{2}^{2}-\|f(x)-f(x^{-})\|_{2}^{2}+\tau\right)_{+}\right]\tag{4}$$
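For concreteness, a minimal batched sketch of Eq. (4); the per-index triplet layout and margin default are illustrative assumptions:

```python
import torch

def triplet_loss(anchor, positive, negative, tau: float = 0.2) -> torch.Tensor:
    """Eq. (4) over a batch of triplets. All inputs are (batch, dim)
    embeddings on the unit hypersphere; tau is the margin."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)   # ||f(x) - f(y)||^2
    d_neg = (anchor - negative).pow(2).sum(dim=1)   # ||f(x) - f(x-)||^2
    return torch.clamp(d_pos - d_neg + tau, min=0).mean()
```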
Triplet loss can be rewritten into the form of the naive linear loss with a different distribution of triplets. We consider a new distribution:

$$p_{tri}^{\prime}=\begin{cases}0,&\text{when }\|f(x)-f(y)\|_{2}^{2}-\|f(x)-f(x^{-})\|_{2}^{2}+\tau<0,\\ C\,p_{tri},&\text{else,}\end{cases}$$

where $C = 1/\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}\left[1_{\{\|f(x)-f(y)\|_2^2-\|f(x)-f(x^{-})\|_2^2+\tau\geq 0\}}\right]$. Then $\mathcal{L}_{triplet}(f,\tau) = \mathcal{L}_{linear}(f; X, p'_{tri}) = \mathbb{E}_{(x,y,x^{-})\sim p'_{tri}}\left[\|f(x)-f(y)\|_2^2 - \|f(x)-f(x^{-})\|_2^2\right]$.

Apparently the positive part of the triplet loss minimizes the intra-class distance under the new distribution $p'_{tri}$, which has a similar effect as the alignment loss with $p_{tri}$, as shown in the experiments (Table 7). Now we focus on the negative part of the triplet loss.
Theorem 1. *Denote the probability density function (pdf) of $d^2(x,y) := \|f(x)-f(y)\|_2^2$ w.r.t. $y \sim p_{data}(\cdot\,|\,X_x)$ by $q(d^2(x,y))$. Then the pdf of $u = \|f(x)-f(y)\|_2^2 - \|f(x)-f(x')\|_2^2 + \tau$ with fixed $x, x'$ is $q(u-\tau+d^2(x,x'))$. Let $S(x,x') = \int_0^\infty q(u-\tau+d^2(x,x'))\,du \in [0,1]$, then*

$$-\mathbb{E}_{(x,y,x^{-})\sim p^{\prime}_{tri}}\left[\|f(x)-f(x^{-})\|_{2}^{2}\right]=-\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},\,x^{\prime}\sim p_{data}}\left[\|f(x)-f(x^{\prime})\|_{2}^{2}\,S(x,x^{\prime})\right]+\frac{1}{n-1}\mathbb{E}_{(x,x^{\prime})\sim p_{pos}}\left[\|f(x)-f(x^{\prime})\|_{2}^{2}\,S(x,x^{\prime})\right]$$
The negative triplet loss consists of two parts, where the first part dominates the second because $n$ is always large in practice. The first part is actually a weighted unbiased regularization with weight $S(x,x')$. We think $S(x,x')$ may help the unbiased regularization achieve hyperspherical uniformity. Because the closed form of $q(d^2(x,y))$ is intractable, it is impossible to analyze $S(x,x')$ theoretically without any assumptions.
We assume $q(d^2(x,y))$ is exponentially distributed and show that the gradient flow of the negative triplet loss is asymptotically equal to the gradient of the Gaussian hyperspherical energy. Therefore, in this case the negative triplet loss can lead to hyperspherical uniformity.

Proposition 2. *Assume $q(d^2(x,y)) = \frac{1}{A}e^{-A d^2(x,y)}$ and $S(x,x') = \frac{1}{A}e^{-A(d^2(x,x')-\tau)}$; then for the network parameter $\theta$ we have*

$$-\nabla_{\theta}\mathbb{E}_{(x,y,x^{-})\sim p_{tri}^{\prime}}\left[\|f(x)-f(x^{-})\|_{2}^{2}\right]=\frac{e^{A\tau}n}{A^{2}(n-1)}\nabla_{\theta}E_{G}(A,X)+O\!\left(\frac{1}{n}\right)$$

*i.e., the negative triplet loss has asymptotically the same gradient as the Gaussian hyperspherical energy.*

Despite the theoretical analysis, we also empirically show that the negative triplet loss achieves hyperspherical uniformity without any assumption on $S(x,x')$. Besides, the negative parts of other metric losses are also shown to achieve hyperspherical uniformity in our empirical study. The details are shown in Appendix C.

In summary, the tuple-based metric losses on the unit hypersphere are closely related to intra-class alignment and hyperspherical uniformity. The positive metric losses target minimizing the intra-class alignment, and the negative metric losses try to keep all samples distributed uniformly on the hypersphere.

Connection to adversarial examples and adversarial training. The goal of adversarial examples is to fool the neural network by reducing the model performance. Attacking the alignment loss, which positions the embeddings of similar samples apart, or attacking the uniformity loss, which pulls dissimilar samples together, can destroy the representation learned by DML models. Thus the alignment and uniformity losses are suitable objectives for generating adversarial examples.
## 4 Designing New Adversarial DML Models Based On Our Better Understanding Of DML Loss

In this section, we introduce our new adversarial DML models: adversarial DML with the alignment or uniformity objective (ADML+A or ADML+U). Before introducing the details of our models, we share our motivation for designing the ADML+A/U models by answering the following questions:

How can adversarial training help improve DML models? One of the most reasonable explanations is that training with adversarial examples brings additional features to neural networks. For example, compared with clean images, adversarial examples make network representations more consistent with salient data features and human perception (Tsipras et al., 2018). Another possible reason is that adversarial examples can be regarded as a data augmentation method, which prevents overfitting of the neural networks. Augmentation techniques similar to adversarial training, e.g., masking out regions in images (DeVries & Taylor, 2017) or adding Gaussian noise to them (Lopes et al., 2019), can help achieve better performance on image recognition tasks.

Why do we need new objectives for adversarial DML models? Based on our analysis in Section 3, the DML embedding of each image depends on all other positive and negative samples from the perspective of the alignment and uniformity objectives. Therefore, if we want to generate the adversarial sample $x'$ for one image $x$, we need to push the adversarial sample away from the samples similar to $x$ (maximize the alignment loss w.r.t. $x'$), or pull the adversarial sample close to the samples dissimilar to $x$ (maximize the uniformity loss w.r.t. $x'$). The existing adversarial DML models (Duan et al., 2018; Panum et al., 2021) generate adversarial samples by attacking the triplet loss (Eq. 4). In this case, only one positive and one negative sample are used to generate the adversarial sample $x'$, which is less powerful than the alignment/uniformity objective (which utilizes more positive or negative samples). Our experimental results in Table 1 also show that the adversarial examples generated by the alignment/uniformity objectives are more powerful than those from the triplet objective. Thus it is critical to design new attack objectives for DML models that can take advantage of the representation information from more data samples.
Adversarial training. We first recall the standard tuple-based DML training setting. Denote the metric loss function by $\mathcal{L}(\cdot;\theta)$, where $\theta$ is the model parameters; our learning objective is:

$$\min_{\theta}\mathbb{E}_{(x,x^{+},x^{-})\sim p}\left[\mathcal{L}((x,x^{+},x^{-});\theta)\right]$$

In the regular adversarial training framework (Madry et al., 2017), we train networks with perturbed samples from a distribution $p^{(adv)}$:

$$\min_{\theta}\mathbb{E}_{(x,x^{+},x^{-})\sim p^{(adv)}}\left[\mathcal{L}((x,x^{+},x^{-});\theta)\right]$$

As our goal is to improve the DML performance on clean images by leveraging the regularization power of adversarial examples, we treat adversarial images as additional data augmentations and train networks with a mixture of adversarial examples and clean images. Our learning objective is

$$\min_{\theta}\left(\mathbb{E}_{(x,x^{+},x^{-})\sim p}\left[\mathcal{L}((x,x^{+},x^{-});\theta)\right]+\lambda\,\mathbb{E}_{(x,x^{+},x^{-})\sim p^{(adv)}}\left[\mathcal{L}((x,x^{+},x^{-});\theta)\right]\right)\tag{5}$$

where $\lambda$ is the strength of the adversarial training.
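A minimal sketch of one training step under the mixed objective in Eq. (5); `metric_loss` and `make_adversarial` are placeholders standing in for whichever metric loss and attack (Alg. 1/2) are plugged in:

```python
def training_step(model, batch, metric_loss, make_adversarial, lam: float = 0.1):
    """One step of Eq. (5): clean metric loss plus lambda-weighted loss
    on adversarially perturbed anchors (illustrative sketch)."""
    anchors, positives, negatives = batch
    clean_loss = metric_loss(model, anchors, positives, negatives)
    adv_anchors = make_adversarial(model, anchors, positives, negatives)
    adv_loss = metric_loss(model, adv_anchors, positives, negatives)
    return clean_loss + lam * adv_loss
```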
Generating adversarial samples. We use the $l_\infty$ PGD-FGSM (Madry et al., 2017) method to generate adversarial samples. Since DML models require class-balanced batches for training, we propose to generate perturbations by maximizing the intra-batch alignment or uniformity. Given a sample $x_0$ and a DML model $f$, denote by $S_{pos}$ a batch of positive samples of $x_0$; the $l_\infty$ FGSM adversarial sample of $x_0$ is generated by maximizing the alignment objective via:

$$\mathrm{ADV}(x_{0}):=\Pi_{B_{\infty}(x_{0},\epsilon)}\left(x_{0}+\alpha\nabla_{x_{0}}\mathcal{L}_{alignment}(f;S_{pos})\right)\tag{6}$$

where $\Pi_{B_{\infty}(x,\epsilon)}$ is the projection onto the $l_\infty$ ball centered at $x$ with radius $\epsilon$, and $\alpha$ is the attack strength. The gradient $\nabla_{x^{adv}}\mathcal{L}_{alignment}(f;S_{pos})$ is

$$\begin{split}\nabla_{x^{adv}}\mathcal{L}_{alignment}(f,S_{pos})&=\frac{1}{|S_{pos}|}\nabla_{x^{adv}}\Big(\sum_{x\in S_{pos}}\|f(x^{adv})-f(x)\|_{2}^{2}\Big)\\ &=\frac{2}{|S_{pos}|}\big(\nabla_{x}f(x)|_{x=x^{adv}}\big)^{T}\sum_{x\in S_{pos}}\big(f(x^{adv})-f(x)\big).\end{split}$$
Analogously, denote by $S_{neg}$ a batch of negative samples of $x_0$; the adversarial sample generated by maximizing the uniformity objective (Eq. 2 and Eq. 3) is:

$$\mathrm{ADV}(x_{0}):=\Pi_{B_{\infty}(x_{0},\epsilon)}\left(x_{0}+\alpha\nabla_{x_{0}}\mathcal{L}_{uniformity}(f;S_{neg})\right)\tag{7}$$

The gradient $\nabla_{x^{adv}}\mathcal{L}_{uniformity}(f;S_{neg})$ is

$$\begin{split}\nabla_{x^{adv}}\mathcal{L}_{uniformity}(f,S_{neg})&=\frac{1}{|S_{neg}|}\nabla_{x^{adv}}\Big(\sum_{x\in S_{neg}}\exp\big(-\|f(x^{adv})-f(x)\|_{2}^{2}\big)\Big)\\ &=-\frac{2}{|S_{neg}|}\big(\nabla_{x}f(x)|_{x=x^{adv}}\big)^{T}\sum_{x\in S_{neg}}\big(f(x^{adv})-f(x)\big)\exp\big(-\|f(x^{adv})-f(x)\|_{2}^{2}\big).\end{split}$$
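For reference, a sketch of the intra-batch uniformity term that the gradient above differentiates; the function and variable names are our own, and the negatives are treated as fixed targets:

```python
import torch

def uniformity_attack_loss(f, x_adv: torch.Tensor, neg_batch: torch.Tensor) -> torch.Tensor:
    """Objective maximized in Eq. (7): mean of exp(-||f(x_adv) - f(x)||^2)
    over a batch of negatives S_neg (a Gaussian-kernel similarity)."""
    emb_adv = f(x_adv)                  # (1, dim), kept in the autograd graph
    emb_neg = f(neg_batch).detach()     # (batch, dim), fixed targets
    sq_dist = (emb_adv - emb_neg).pow(2).sum(dim=-1)
    return torch.exp(-sq_dist).mean()
```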
In the PGD-FGSM method, we update the adversarial samples iteratively by $x^{(l+1)} = \mathrm{ADV}(x^{(l)})$ for $L$ steps. The output perturbed samples $x^{(L)}$ are used in our adversarial training objective Eq. 5. In ADML+A we use the alignment loss (Eq. 1) to generate adversarial samples, and in ADML+U we use the Gaussian uniformity loss G-HE (Eq. 3). We include the algorithms of ADML+A and ADML+U in the appendix (Alg. 1 and Alg. 2). In experiments, we apply the multi-similarity loss as the metric loss $\mathcal{L}$ together with the alignment or uniformity attack objective; both models achieve significantly better results on benchmarks (Table 2 and Table 3).
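Putting the pieces together, here is an illustrative $l_\infty$ PGD loop maximizing the alignment objective (Eq. 6 iterated for $L$ steps). The gradient-sign step and pixel clamping follow the usual PGD recipe (Madry et al., 2017); all names and the [0, 1] pixel range are our assumptions:

```python
import torch

def pgd_alignment_attack(f, x0, pos_batch, eps=0.0314, alpha=0.007, steps=7):
    """Generate adversarial samples by iterating Eq. (6).

    f: encoder mapping images to unit-norm embeddings.
    x0: (batch, C, H, W) clean images; pos_batch: matching positive samples.
    """
    pos_emb = f(pos_batch).detach()                  # positives are fixed targets
    x_adv = x0.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (f(x_adv) - pos_emb).pow(2).sum(dim=1).mean()   # alignment, Eq. (1)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)             # project to l-inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixels
    return x_adv.detach()
```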
## 5 Experiments

In our experiments, we first conduct empirical studies to verify the theoretical analysis results in Sec. 3 (the results are discussed in Appendix C). After that, we show that the alignment and uniformity objectives help generate better adversarial examples than the triplet loss, and then we compare the natural and robust performance of our adversarial DML models with the state-of-the-art methods. The *clean performance* is the performance of DML models evaluated with clean samples, while the *robust performance* is evaluated with adversarial samples.

## 5.1 Experimental Setup

Datasets. We test our model on four DML benchmarks: CUB200-2011 (Wah et al., 2011), CARS196 (Krause et al., 2013), Online-product (Song et al., 2016), and In-shop (Liu et al., 2016). We follow the previous work (Song et al., 2016) and (Liu et al., 2016) for the train-test split. The statistics of these datasets are introduced in Section D.2.

Table 1: Performance of a pretrained DML model under adversarial samples generated by attacking the triplet, alignment, or uniformity loss. Lower score indicates better quality of adversarial samples.

|                  | CUB200-2011 |       |       | CARS196 |       |       | Online-products |       |       |
|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Attack objective | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C |
| No attack        | 62.40 | 67.21 | 23.56 | 77.59 | 66.64 | 23.83 | 77.53 | 89.98 | 41.12 |
| Triplet          | 28.33 | 44.48 | 6.61  | 22.42 | 34.30 | 3.47  | 53.77 | 83.43 | 25.21 |
| Alignment        | 8.71  | 23.42 | 0.46  | 10.99 | 17.11 | 0.55  | 8.82  | 80.05 | 1.49  |
| Uniformity       | 13.47 | 32.87 | 3.54  | 14.30 | 24.07 | 2.05  | 4.68  | 79.90 | 0.51  |

Training framework. In all experiments, we use the DML framework from Roth et al. (2020) for training. This framework enables us to train and evaluate DML models under the same settings and ensures a fair comparison of model performance. The backbone network is ResNet50 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012) with frozen Batch-Normalization layers; the embedding dimension is 128. The initial learning rate is 0.00001 with no scheduling, and the batch size is 112. Experiments are performed on a 24GB Nvidia Tesla P40.

Baseline models. We compare ADML+A/U with state-of-the-art DML models. First, we select three of the best DML models according to Roth et al. (2020): margin loss with distance sampling (Wu et al., 2017), multi-similarity loss (Wang et al., 2019), and triplet loss with distance sampling. Besides, we take two more SOTA DML models which were published recently but not included in Roth et al. (2020)'s work: Proxy-Anchor loss (Kim et al., 2020) and Cross-Entropy loss (Boudiaf et al., 2020). Next, we apply the Info-NCE loss (Wang & Isola, 2020), a contrastive learning objective, as one of the baselines. Finally, we compare our models with the only existing adversarial DML model (Duan et al., 2018), ADML+T (triplet objective).

Evaluation metrics. We measure the performance of DML and ADML models with Recall@k (R@k) (Jegou et al., 2010), Normalized Mutual Information (NMI) (Christopher et al., 2008), and Mean Average Precision measured on recall (mAP@k) (Musgrave et al., 2020). The details of these metrics are introduced in Section D.3.
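As a reference for the main retrieval metric, a minimal Recall@1 computation over an embedded test set could look like the sketch below; this is our simplified illustration, while the reported numbers follow the framework of Roth et al. (2020):

```python
import torch

def recall_at_1(emb: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of queries whose nearest neighbor (excluding the query
    itself) shares the query's label. `emb` is (N, dim) L2-normalized."""
    dists = torch.cdist(emb, emb)
    dists.fill_diagonal_(float("inf"))   # never match a sample to itself
    nn_idx = dists.argmin(dim=1)
    return (labels[nn_idx] == labels).float().mean().item()
```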
## 5.2 Compare The Quality Of Adversarial Examples With Different Objectives

Settings. The threatened model is a pretrained DML model with triplet loss and distance sampling (Wu et al., 2017), which is one of the most competitive DML models according to Roth et al. (2020). We consider three different attack objectives for generating adversarial samples: triplet loss (Eq. 4), alignment loss (Eq. 1), and Gaussian hyperspherical uniformity (Eq. 3). We use $l_\infty$ PGD-FGSM attacks with strength $\epsilon = 0.0314$, $L = 7$ steps, and step size $\alpha = 0.007$; we keep these settings for all three attack objectives in our experiments.

Results. In Table 1, the adversarial samples generated by the alignment or uniformity objectives are significantly stronger than the samples generated by the triplet loss. This indicates that adversarial samples from alignment or uniformity contain more features that are not captured by the vanilla DML models. Thus we believe adversarial DML models with the alignment or uniformity objective could generalize better than the vanilla DML models or ADML with the triplet loss. Our experimental results in Table 2 and Table 3, which show that ADML+A and ADML+U outperform the baseline models on metric learning tasks, also support our analysis.

We also notice that using both A and U in ADML has similar performance as ADML+T, which suggests we should use the attack objectives (A or U) separately. In Fig. 2 we also show the T-SNE plot of the embeddings generated by the vanilla DML and ADML+A, which shows ADML+A better separates different classes.
Table 2: Comparison of ADML models with baseline DML models on Online-product and In-shop. The model settings and training parameters are the same for all models.

|                   | Online-product |       |       | In-shop |       |       |
|-------------------|-------|-------|-------|-------|-------|-------|
| Models            | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C |
| ImageNet pretrain | 48.51 | 84.24 | 17.37 | 21.62 | 76.53 | 4.02  |
| Linear            | 20.53 | 81.20 | 5.78  | 16.03 | 75.81 | 2.47  |
| Triplet-D         | 77.41 ± 0.19 | 90.04 ± 0.05 | 41.05 ± 0.14 | 87.31 ± 0.18 | 89.76 ± 0.09 | 28.45 ± 0.17 |
| Margin            | 77.66 ± 0.14 | 89.93 ± 0.06 | 41.41 ± 0.12 | 87.56 ± 0.15 | 89.93 ± 0.07 | 28.45 ± 0.13 |
| Multi-Similarity  | 77.75 ± 0.11 | 90.00 ± 0.04 | 41.39 ± 0.10 | 87.33 ± 0.20 | 89.85 ± 0.12 | 29.61 ± 0.16 |
| Proxy-Anchor      | 77.11 ± 0.13 | 89.90 ± 0.05 | 40.98 ± 0.15 | 87.14 ± 0.17 | 89.41 ± 0.05 | 28.11 ± 0.11 |
| Cross-Entropy     | 76.92 ± 0.36 | 89.82 ± 0.11 | 41.31 ± 0.42 | 86.75 ± 0.32 | 89.71 ± 0.13 | 28.38 ± 0.51 |
| Info-NCE          | 76.21 ± 0.15 | 89.71 ± 0.04 | 39.42 ± 0.09 | 86.24 ± 0.14 | 89.62 ± 0.04 | 27.94 ± 0.17 |
| ADML+T            | 77.13 ± 0.11 | 89.59 ± 0.03 | 40.75 ± 0.07 | 87.47 ± 0.12 | 89.65 ± 0.10 | 29.05 ± 0.12 |
| ADML+A            | 78.12 ± 0.16 | 89.95 ± 0.04 | 41.56 ± 0.11 | 87.94 ± 0.15 | 89.93 ± 0.05 | 30.12 ± 0.15 |
| ADML+U            | 78.01 ± 0.12 | 89.97 ± 0.03 | 41.21 ± 0.12 | 87.86 ± 0.18 | 89.57 ± 0.08 | 29.93 ± 0.16 |
| ADML+A+U          | 77.41 ± 0.15 | 89.88 ± 0.07 | 40.91 ± 0.14 | 87.65 ± 0.12 | 89.71 ± 0.09 | 29.72 ± 0.13 |
Table 3: Comparison of ADML models with baseline DML models on CUB200-2011 and CARS196. The model settings and training parameters are the same for all models.

|                   | CUB200-2011 |       |       | CARS196 |       |       |
|-------------------|-------|-------|-------|-------|-------|-------|
| Models            | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C |
| ImageNet pretrain | 43.77 | 57.56 | 8.99  | 36.39 | 37.96 | 4.93  |
| Linear            | 38.42 | 43.28 | 7.64  | 32.45 | 35.12 | 3.48  |
| Triplet-D         | 62.31 ± 0.41 | 67.23 ± 0.34 | 23.29 ± 0.25 | 79.08 ± 0.41 | 66.02 ± 0.33 | 24.02 ± 0.31 |
| Margin            | 62.42 ± 0.36 | 67.11 ± 0.49 | 23.54 ± 0.21 | 78.11 ± 0.32 | 66.87 ± 0.35 | 23.94 ± 0.27 |
| Multi-Similarity  | 62.73 ± 0.61 | 67.45 ± 0.39 | 22.65 ± 0.34 | 79.94 ± 0.28 | 67.59 ± 0.43 | 24.12 ± 0.25 |
| Proxy-Anchor      | 64.16 ± 0.48 | 67.84 ± 0.37 | 23.91 ± 0.32 | 80.13 ± 0.33 | 67.31 ± 0.41 | 23.86 ± 0.26 |
| Cross-Entropy     | 61.58 ± 0.31 | 66.67 ± 0.39 | 22.25 ± 0.20 | 78.41 ± 0.39 | 66.35 ± 0.31 | 23.63 ± 0.34 |
| Info-NCE          | 61.79 ± 0.51 | 66.91 ± 0.42 | 22.43 ± 0.28 | 77.52 ± 0.37 | 66.75 ± 0.57 | 23.41 ± 0.22 |
| ADML+T            | 64.37 ± 0.43 | 68.13 ± 0.49 | 24.05 ± 0.30 | 80.88 ± 0.46 | 66.47 ± 0.51 | 23.91 ± 0.39 |
| ADML+A            | 66.02 ± 0.35 | 68.78 ± 0.37 | 24.46 ± 0.23 | 81.95 ± 0.38 | 67.97 ± 0.49 | 24.21 ± 0.28 |
| ADML+U            | 65.46 ± 0.40 | 68.60 ± 0.33 | 24.58 ± 0.28 | 82.06 ± 0.36 | 68.21 ± 0.35 | 24.82 ± 0.34 |
| ADML+A+U          | 64.24 ± 0.38 | 67.73 ± 0.45 | 23.88 ± 0.26 | 80.95 ± 0.41 | 67.64 ± 0.39 | 23.85 ± 0.30 |
## 5.3 Adversarial DML Models Improve Clean Performance

Settings. We train all DML models for 100 epochs. For our adversarial DML models we apply ADML+A and ADML+U. The adversarial training strength $\lambda$ in ADML+A/U is 0.1 for CUB200-2011 and 0.15 for CARS196. For generating adversarial examples we use $l_\infty$ PGD-FGSM attacks with strength $\epsilon = 0.0314$, $L = 7$ steps, and step size $\alpha = 0.007$. For Online-products and In-shop we use $\lambda = 0.005$, $\epsilon = 0.01$, $L = 5$, and $\alpha = 0.003$. The ImageNet model is the model with only ImageNet pretraining.

Results. In Table 2 and Table 3, our ADML+A and ADML+U models outperform the SOTA metric learning models. ADML+A/U improve R@1 by over 1% on CUB200-2011 and CARS196, and also achieve considerable improvement on the Online-product and In-shop datasets. Besides, the performance of ADML+A/U under the NMI and mAP@C metrics is also comparable to or better than the SOTA. Since all the models are trained under the same framework and settings, we can conclude that adversarial training helps to enhance the natural performance of DML models.
## 5.4 Robustness Performance Of Adversarial DML Models

We evaluate the robustness of our ADML+A and ADML+U models against attacks on the alignment objective, which is the strongest DML attack according to our experiment in Section 5.2, on the CUB200-2011 and CARS196 datasets. For baseline models we use the margin, multi-similarity, and ADML+Triplet models.

![8_image_0.png](8_image_0.png)

Figure 1: Performance of ADML+A and ADML+U with different adversarial training strength λ on CUB200-2011 and CARS196.

The settings of the alignment attack are the same as in Section 5.2; the settings of ADML+A/U and the baseline models are the same as in Section 5.3. As seen in Table 4, ADML+T performs slightly better than the vanilla DML models, while ADML+A and ADML+U outperform the baseline models by a large margin under the alignment attacks. Since ADML+A takes advantage of adversarial training with the alignment loss, it is reasonable that ADML+A performs slightly better than ADML+U under alignment attacks.
Table 4: Robust performance of DML and ADML models under adversarial samples generated by attacking the alignment loss.

|                  | CUB200-2011 |       |       | CARS196 |       |       | Online-Products |       |       |
|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Models           | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C | R@1   | NMI   | mAP@C |
| Margin           | 8.04  | 24.59 | 0.58  | 8.39  | 17.17 | 0.56  | 8.27  | 80.02 | 1.34  |
| Multi-similarity | 8.90  | 24.13 | 0.51  | 11.95 | 18.32 | 0.55  | 8.93  | 80.06 | 1.45  |
| ADML+T           | 11.58 | 25.26 | 0.74  | 25.35 | 21.22 | 1.31  | 10.69 | 80.20 | 1.74  |
| ADML+A           | 17.37 | 29.16 | 1.27  | 39.97 | 26.05 | 2.54  | 13.98 | 80.40 | 2.01  |
| ADML+U           | 15.10 | 27.89 | 1.06  | 33.09 | 24.48 | 2.30  | 11.32 | 80.29 | 1.85  |
## 5.5 Ablation Study

Strength of adversarial training. In this part, we evaluate the effect of the adversarial training strength $\lambda$ on the performance of ADML+A and ADML+U. The backbone is DML with the multi-similarity loss. We expect the performance to first increase with $\lambda$ and then decrease: when $\lambda$ is close to 0, the ADML models can hardly learn the extra features of adversarial examples and the improvement is small; when $\lambda$ is large, the extra features of adversarial examples dominate the DML models and the performance on clean features becomes poor. The model settings are the same as in Section 5.3. The experimental results illustrated in Fig. 1 are consistent with our analysis: when increasing the adversarial training strength, the clean performance of ADML+A and ADML+U first improves and then decreases. Table 5 shows the robustness of the ADML+A and ADML+U models with adversarial training strength $\lambda$ under alignment attacks. Both models become increasingly robust against alignment attacks.

Table 5: Robustness with different adversarial training strength λ on CUB200-2011 under the alignment attacks. The metric is Recall@1.

| λ       | 0    | 0.2   | 0.4   | 0.6   | 0.8   | 1.0   |
|---------|------|-------|-------|-------|-------|-------|
| ADML+A  | 8.90 | 19.15 | 22.51 | 24.47 | 25.29 | 25.93 |
| ADML+U  | 8.90 | 16.21 | 18.36 | 20.23 | 21.06 | 21.55 |

ADML on different metric losses. In this experiment, we apply our ADML approach with the triplet, alignment, uniformity, and alignment+uniformity objectives to different metric losses, including the triplet, margin, multi-similarity, and Info-NCE losses. The metric is Recall@1. The model settings are the same as in Section 5.3. From Table 6, we can observe that all ADML methods boost the performance of all DML losses. ADML+A achieves the most significant improvement across all attack objectives and metric losses.

![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

Figure 2: T-SNE visualization of embeddings generated by (a) a vanilla DML model and (b) an ADML+A model on CUB200-2011.
Table 6: Recall@1 of different metric learning losses with ADML methods on CUB200-2011.

| Metric loss      | Vanilla | ADML+T | ADML+A | ADML+U |
|------------------|---------|--------|--------|--------|
| Triplet          | 62.29   | 63.68  | 65.11  | 64.72  |
| Margin           | 62.48   | 64.26  | 65.92  | 65.61  |
| Multi-similarity | 62.71   | 64.45  | 66.13  | 65.58  |
| Info-NCE         | 61.42   | 62.74  | 64.01  | 63.85  |
T-SNE evaluation of ADML+A and vanilla DML. We visualize the embeddings of the first 10 classes of CUB200-2011 generated by a vanilla DML model (with multi-similarity loss) and an ADML+A model (with multi-similarity loss). The Recall@1 of the DML model and the ADML+A model is 62.71 and 66.13, respectively. Fig. 2(a) plots the T-SNE result for the vanilla DML model and Fig. 2(b) plots the T-SNE result for ADML+A. Comparing the two figures, we can see that ADML+A better separates the (red, green, orange) samples.
## 6 Discussion And Conclusion

Limitation. a) We only provide theoretical analysis of the linear and triplet losses; studying other tuple-based metric losses theoretically is also interesting and can provide further insight into metric learning problems. b) Our adversarial DML models improve performance by increasing model generalization. If the model generalization is already good, adversarial training cannot enhance the performance significantly.

Broader Impact. Adversarial training is designed to improve the robustness of the model. In this work, we show that with a suitable perturbation size and training weight, adversarial training can enhance the natural performance simultaneously. Our work sheds light on the possibility of designing models that are both accurate and robust. Meanwhile, there are some concerns raised about adversarial training, e.g., unfairness (Xu et al., 2021); we believe our models can be further improved by combining the related approaches.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

![10_image_2.png](10_image_2.png)

Figure 3: Hardness (the performance of a model under a given attack) and the clean performance of ADML under triplet loss and alignment loss. We see that there is no significant correlation between the hardness of adversarial examples and the clean performance. Thus, the special property of the alignment and uniformity losses (using multiple positive/negative examples) should be the key to the improvement in clean performance.

In this work, we investigated two important properties, intra-class alignment and hyperspherical uniformity, of tuple-based metric losses on the unit sphere. Based on our new understanding, we designed two novel adversarial DML models, ADML+A and ADML+U, where the perturbations are generated by maximizing the alignment loss or the uniformity loss. Our ADML+A and ADML+U improve both the natural and robust DML performance by enhancing model generalization.
## References

Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. *arXiv preprint arXiv:1906.00910*, 2019.

Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, and Ismail Ben Ayed. A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses. In *European Conference on Computer Vision*, pp. 548–564. Springer, 2020.

Maxime Bucher, Stéphane Herbin, and Frédéric Jurie. Improving semantic embedding consistency by metric learning for zero-shot classification. In *European Conference on Computer Vision*, pp. 730–746. Springer, 2016.

Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze, et al. Introduction to information retrieval. *An Introduction to Information Retrieval*, 151(177):5, 2008.

Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. *arXiv preprint arXiv:1708.04552*, 2017.

Yueqi Duan, Wenzhao Zheng, Xudong Lin, Jiwen Lu, and Jie Zhou. Deep adversarial metric learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2780–2789, 2018.

Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In *2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)*, volume 2, pp. 1735–1742. IEEE, 2006.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. *arXiv preprint arXiv:1808.06670*, 2018.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *arXiv preprint arXiv:1905.02175*, 2019.

Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 33(1):117–128, 2010.

Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. *arXiv preprint arXiv:2010.13337*, 2020.

G. A. Kabatjanskii and V. I. Levenstein. Bounds for packings on a sphere and in space. *Problemy Peredachi Informatsii*, 1978.

Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3238–3247, 2020.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)*, Sydney, Australia, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in Neural Information Processing Systems*, 25:1097–1105, 2012.

Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep hypersphere embedding for face recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 212–220, 2017.

Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song. Learning towards minimum hyperspherical energy. *arXiv preprint arXiv:1805.09298*, 2018.

Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, and Adrian Weller. Learning with hyperspherical uniformity. In *International Conference on Artificial Intelligence and Statistics*, pp. 1180–1188. PMLR, 2021.

Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2016.

Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, and Ekin D Cubuk. Improving robustness without sacrificing accuracy with patch gaussian augmentation. *arXiv preprint arXiv:1906.02611*, 2019.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017.

Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. *Advances in Neural Information Processing Systems*, 32, 2019.

Yair Movshovitz-Attias, Alexander Toshev, Thomas K Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 360–368, 2017.

Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. A metric learning reality check. In *European Conference on Computer Vision*, pp. 681–699. Springer, 2020.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018.

Thomas Kobber Panum, Zi Wang, Pengyu Kan, Earlence Fernandes, and Somesh Jha. Exploring adversarial robustness of deep metric learning. *arXiv preprint arXiv:2102.07265*, 2021.

Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In *International Conference on Machine Learning*, pp. 2152–2161. PMLR, 2015.

Karsten Roth, Timo Milbich, Samarth Sinha, Prateek Gupta, Bjorn Ommer, and Joseph Paul Cohen. Revisiting training strategies and generalization performance in deep metric learning. In *International Conference on Machine Learning*, pp. 8242–8252. PMLR, 2020.

Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? *arXiv preprint arXiv:2007.08489*, 2020.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 815–823, 2015.

Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2016.

Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. There is no free lunch in adversarial robustness (but there are unexpected benefits). *arXiv preprint arXiv:1805.12152*, 2(3), 2018.

C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011.

Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning*, pp. 9929–9939. PMLR, 2020.

Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R Scott. Multi-similarity loss with general pair weighting for deep metric learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5022–5030, 2019.

Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. Sampling matters in deep embedding learning. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2840–2848, 2017.

Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan L Yuille, and Quoc V Le. Adversarial examples improve image recognition. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 819–828, 2020.

Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, and Jiliang Tang. To be robust or to be fair: Towards fairness in adversarial training. In *International Conference on Machine Learning*, pp. 11492–11501. PMLR, 2021.
## A Code

We use the open-source DML training framework (Apache License 2.0) https://github.com/Confusezius/Deep-Metric-Learning-Baselines. We include the code of our Alg. 1 and Alg. 2 in the supplementary material. All datasets used in this paper are available online.
## B Theoretical Explanation Of Positive And Negative Metric Losses
## B.1 Number Of Nearly Orthogonal Vectors In A High-Dimensional Sphere

Lemma 1. *(Kabatjanskii-Levenstein bound (Kabatjanskii & Levenstein, 1978)) For a hypersphere* $S^{k-1} \subset \mathbb{R}^{k}$, *there exist at least* $k^{M}$ *vectors with pairwise distances in the range* $\sqrt{2} \pm O\left(\sqrt{\frac{M \log k}{k}}\right)$.
Lemma 1 bounds the number of nearly orthogonal vectors we can take from a high-dimensional sphere. If k = 128 and M = 3, the number of nearly orthogonal vectors can exceed one million, which is far larger than the size of the benchmark datasets X used in DML. Following the earlier discussion, we know the pairwise distance cannot be larger than √2 for all negative pairs. We therefore expect that uniformity regularization on the DML benchmarks leads to an embedding space in which the feature vectors are nearly orthogonal. The pairwise distance distributions in Fig. 4 under hyperspherical regularization also support this analysis.
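As a quick numerical illustration of this concentration phenomenon (our sketch, not an experiment from the paper), one can sample random unit vectors in $\mathbb{R}^{128}$ and observe that their pairwise distances cluster tightly around √2:

```python
import numpy as np

# Sample random unit vectors in R^k; for large k their pairwise
# distances concentrate around sqrt(2), as Lemma 1 suggests.
rng = np.random.default_rng(0)
k, m = 128, 2000                                  # dimension, number of vectors
x = rng.standard_normal((m, k))
x /= np.linalg.norm(x, axis=1, keepdims=True)     # project onto the unit sphere

# Pairwise Euclidean distances via the Gram matrix: d^2 = 2 - 2 * x_i^T x_j.
gram = x @ x.T
d = np.sqrt(np.clip(2.0 - 2.0 * gram[np.triu_indices(m, k=1)], 0.0, None))

print(f"mean distance: {d.mean():.4f}  (sqrt(2) = {np.sqrt(2):.4f})")
print(f"std  distance: {d.std():.4f}")
```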
## B.2 Naive Linear Metric Losses
With the same assumption as in Section 3.1, we can prove that the positive part of the linear loss minimizes the intra-class distance, while the negative part maximizes an unbiased term that does not achieve hyperspherical uniformity. The theoretical results are summarized in the theorem below.

Definition 4. *(Naive linear loss)*
$$\mathcal{L}_{l i n e a r}(f;X,p_{t r i}):=\mathbb{E}_{(x,y,x^{-})\sim p_{t r i}}\left[||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}\right]\,.$$
Definition 5. *(Unbiased regularization)*
$${\mathcal{L}}_{u n b i a s e d}(f;X,p_{d a t a}):=||\mathbb{E}_{x\sim p_{d a t a}}[f(x)]||_{2}^{2}\,.$$
The minimum of this loss is reached when the centroid coincides with the origin.
Theorem 2. *Naive linear loss consists of the alignment loss and the unbiased loss with a constant multiplier:*
$\textit{Positive part:}\ \mathbb{E}_{(x,y,x^-)\sim p_{tri}}\left[||f(x)-f(y)||_2^2\right]=\mathcal{L}_{\textit{alignment}}$ $\textit{Negative part:}\ -\mathbb{E}_{(x,y,x^-)\sim p_{tri}}\left[||f(x)-f(x^-)||_2^2\right]=\dfrac{n}{n-1}(2\mathcal{L}_{\textit{unbiased}}-2)+\dfrac{1}{n-1}\mathcal{L}_{\textit{alignment}}$
Combining them, we have $\mathcal{L}_{linear} = \frac{n}{n-1}\left(2\mathcal{L}_{unbiased} + \mathcal{L}_{alignment} - 2\right)$.
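This identity can be checked numerically on synthetic data. The sketch below (our illustration, not code from the paper) draws unit-norm embeddings for a balanced toy dataset of n classes, evaluates both sides of the identity with exhaustive pair averages, and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n, per, k = 10, 20, 16                   # classes, samples per class, dimension
f = rng.standard_normal((n * per, k))
f /= np.linalg.norm(f, axis=1, keepdims=True)    # embeddings on the unit sphere
labels = np.repeat(np.arange(n), per)

# All pairwise squared distances.
d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
same = labels[:, None] == labels[None, :]

L_align = d2[same].mean()                # positive pairs (independent same-class draws)
L_linear = L_align - d2[~same].mean()    # minus the negative-pair term
L_unbiased = (f.mean(0) ** 2).sum()      # squared norm of the centroid

rhs = n / (n - 1) * (2 * L_unbiased + L_align - 2)
print(f"L_linear = {L_linear:.6f}, n/(n-1)(2*L_unbiased + L_align - 2) = {rhs:.6f}")
```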
Because the number of classes $n$ is always large and the magnitudes of $\mathcal{L}_{unbiased} - 1$ and $\mathcal{L}_{alignment}$ are similar, the dominant objective in the negative linear loss is the unbiased regularization. Thus simply maximizing the distance between negative pairs leads to the unbiased regularization instead of hyperspherical uniformity.
In Fig. 4(e) and Fig. 4(c) of the appendix, we can observe the difference between the models trained with the negative linear loss and with the uniformity regularization. We also show an interesting connection between the naive linear loss and linear discriminant analysis (LDA) in Section E.5.
Remark. The naive linear loss does not work at all in practice (see the experimental results of the Linear model in Table 2 and Table 3). Based on our analysis, we believe this is because the linear loss does not optimize hyperspherical uniformity. In the next section, we present our theoretical analysis of the triplet loss, a simple variant of the linear loss. We find that the triplet loss does optimize hyperspherical uniformity, which could be the reason it works well empirically.
## C Empirical Study Of Positive And Negative Metric Losses
In this section, we empirically study the effect of tuple-based metric losses on positive or negative pairs, focusing on four tuple-based losses: the naive linear, triplet, margin, and multi-similarity losses. In the experiments we train all DML models with either positive metric losses or negative metric losses.
## C.1 DML Models With Positive Metric Losses
Settings. We train 4 DML models with the linear, triplet, margin, and multi-similarity losses on the positive sample pairs; the gradient flow of the negative pairs is stopped, as sketched below. We train all models for 50 epochs and compare the average distance between positive/negative/all pairs of embedded samples.
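A minimal sketch of such gradient stopping for a triplet-style loss (our illustration; the framework's actual implementation may differ):

```python
import torch

def positive_only_triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss in which the gradient of the negative pair is stopped.

    Detaching the negative-pair distance keeps the loss value unchanged but
    removes the negative term from the gradient, so only the intra-class
    alignment is optimized.
    """
    d_pos = (anchor - positive).pow(2).sum(-1)
    d_neg = (anchor - negative).pow(2).sum(-1).detach()  # no gradient flows here
    return torch.relu(d_pos - d_neg + margin).mean()
```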
Results. In Table 7, the DML models trained with only the positive metric losses have average pairwise distances close to 0. Therefore, minimizing the intra-class alignment alone leads to a model that maps all samples to (nearly) the same feature vector.
Table 7: Comparison of DML models trained with the positive metric losses on CUB200-2011 and CARS196. We calculate the average distance (Avgdist), the average distance of positive pairs (AvgdistPos), and the average distance of negative pairs (AvgdistNeg) of the embedded samples.

|         | AvgdistPos (CUB) | AvgdistNeg (CUB) | Avgdist (CUB) | AvgdistPos (CARS) | AvgdistNeg (CARS) | Avgdist (CARS) |
|---------|------------------|------------------|---------------|-------------------|-------------------|----------------|
| Linear  | 2.010e-3         | 3.190e-3         | 3.178e-3      | 1.497e-3          | 3.162e-3          | 3.183e-3       |
| Triplet | 1.934e-3         | 3.187e-3         | 3.174e-3      | 1.476e-3          | 3.163e-3          | 3.184e-3       |
| Margin  | 7.565e-2         | 8.200e-2         | 8.192e-2      | 3.195e-2          | 3.468e-2          | 3.465e-2       |
| MS      | 2.558e-3         | 3.324e-3         | 3.316e-3      | 2.242e-3          | 3.264e-3          | 3.277e-3       |
## C.2 Compare Negative Metric Losses With Uniformity Regularization
Settings. We compare 8 different models in this experiment: a) the original model with only ImageNet pretraining; b) four models trained with the linear, triplet, margin, and multi-similarity losses on negative pairs; and c) three models trained with the HE(s=0), HE(s=1), and G-HE(s=1) regularizations defined in Eq. 2 and Eq. 3. We train all models for 50 epochs. For the models with negative metric losses, the gradient flow of the positive pairs is stopped.
Evaluation Metrics. We use the regularization scores HE(s=0) (Eq. 2) and G-HE(s=1) (Eq. 3) to measure the uniformity of the embedded samples on the hypersphere; a smaller score indicates better uniformity. We also check the average pairwise distance of the different models, which we expect to be close to √2 for good hyperspherical uniformity. A sketch of these scores is given below.
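The exact forms of Eq. 2 and Eq. 3 are given in the main text; as an illustration, the magnitudes reported in Table 8 are consistent with HE(s=0) being the mean negative log pairwise distance and G-HE(s=1) the mean Gaussian potential exp(−d²). A sketch under that assumption:

```python
import torch

def uniformity_scores(emb):
    """Average pairwise distance, HE(s=0), and G-HE(s=1) of a set of embeddings.

    Assumes HE(s=0) = mean(-log d) and G-HE(s=1) = mean(exp(-d^2)) over all
    distinct pairs, which matches the magnitudes reported in Table 8.
    """
    emb = torch.nn.functional.normalize(emb, dim=-1)   # project onto the hypersphere
    iu = torch.triu_indices(len(emb), len(emb), offset=1)
    d = torch.cdist(emb, emb)[iu[0], iu[1]]            # all pairwise distances
    return d.mean(), (-d.log()).mean(), (-d.pow(2)).exp().mean()

# Well-spread embeddings give Avgdist near sqrt(2) ~ 1.414.
avgdist, he0, ghe1 = uniformity_scores(torch.randn(512, 128))
print(f"Avgdist {avgdist:.4f} | HE(s=0) {he0:.4f} | G-HE(s=1) {ghe1:.4f}")
```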
Results. In the experiments we compare the regularization strength of the negative metric losses with HE and G-HE. Fig. 4 and Fig. 5 illustrate the pairwise distance distributions of the test samples on the CUB200-2011 and CARS196 datasets. The DML models with negative metric losses have pairwise distance distributions similar to those of the uniformity regularization methods; the only exception is the negative naive linear loss, which does not lead to hyperspherical uniformity according to our analysis in Section B.2. We also include the distance distribution before training (Fig. 4(a)) as a reference. In addition, we compare these models under the hyperspherical uniformity metrics. In Table 8, the negative metric losses achieve results comparable to the uniformity regularization methods, and G-HE(s=1) outperforms the other models.
Table 8: Comparison of DML models trained with negative metric losses or uniformity regularization under the hyperspherical uniformity metrics. The details of the models and training settings are in Section C.2.

|                            | Avgdist (CUB) | HE(s=0) (CUB) | G-HE(s=1) (CUB) | Avgdist (CARS) | HE(s=0) (CARS) | G-HE(s=1) (CARS) |
|----------------------------|---------------|---------------|-----------------|----------------|----------------|------------------|
| ImageNet                   | 0.7220        | 0.3368        | 0.5939          | 0.6557         | 0.4326         | 0.6497           |
| Linear                     | 1.2398        | -0.1670       | 0.2535          | 1.3194         | -0.1686        | 0.2449           |
| Triplet                    | 1.3877        | -0.3254       | 0.1496          | 1.4039         | -0.3376        | 0.1420           |
| Margin                     | 1.3929        | -0.3297       | 0.1465          | 1.4016         | -0.3363        | 0.1426           |
| MS                         | 1.3825        | -0.3221       | 0.1509          | 1.3985         | -0.3338        | 0.1441           |
| HE regularization (s=0)    | 1.4001        | -0.3347       | 0.1438          | 1.4064         | -0.3395        | 0.1411           |
| HE regularization (s=1)    | 1.4009        | -0.3355       | 0.1432          | 1.4068         | -0.3398        | 0.1407           |
| G-HE regularization (s=1)  | 1.4028        | -0.3369       | 0.1424          | 1.4077         | -0.3406        | 0.1402           |
![15_image_0.png](15_image_0.png)
Figure 4: Illustration of pairwise distance distributions of the embedded CUB200-2011 samples generated by DML models trained with negative metric losses or uniformity regularization. The details of models and training settings are in Section C.2.
![15_image_1.png](15_image_1.png)
Figure 5: Illustration of pairwise distance distributions of the embedded CARS196 samples generated by DML models trained with negative metric losses or uniformity regularization. The details of the models and training settings are in Section C.2.
## D ADML Algorithms And Experimental Settings

## D.1 Algorithm Of ADML+A And ADML+U
Algorithm 1 ADML+A

Input: training set $X$; number of epochs $N$; original classifier $f$; weight of adversarial training $\lambda$; PGD attack steps $L$; PGD attack strength $\epsilon$.
Initialize the class-balanced sampler $S$;
for epoch $i \in \{1, \ldots, N\}$ do
  for mini-batch $S \in \{S_1, \ldots, S_n\}$ do
    $S^{(0)} = S$;
    Generate adversarial samples of $S$ with the PGD-FGSM attack:
    for $t \in 0 : L-1$ do
      $S^{(t+1)} = \Pi_{B_\infty(S^{(0)},\epsilon)}\left(S^{(t)} + \alpha \nabla_{S^{(t)}} \mathcal{L}_{alignment}(f, S^{(t)})\right)$;
    end
    $S_{adv} = S^{(L)}$;
    Calculate the objective function $\mathcal{L}_{total} = \mathcal{L}(f, S) + \lambda \mathcal{L}(f, S_{adv})$, where $\mathcal{L}$ can be an arbitrary metric loss; in our paper we use the margin and multi-similarity losses;
    Update the network parameters of $f$ with $\mathcal{L}_{total}$;
  end
end
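A PyTorch sketch of the inner PGD attack of Alg. 1 (our illustration; the hyperparameter values are placeholders, and while Alg. 1 writes the raw gradient step $\alpha\nabla$, common PGD-FGSM implementations take its sign, as we do here):

```python
import torch

def pgd_alignment_attack(f, x, x_pos, eps=8/255, alpha=2/255, steps=5):
    """Inner loop of ADML+A: perturb x to *increase* the alignment loss.

    `f` is the embedding network and `x_pos` holds a positive sample for
    each anchor in `x`. Returns the adversarial batch S_adv.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (f(x_adv) - f(x_pos)).pow(2).sum(-1).mean()   # alignment loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()              # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)         # project onto B_inf(x, eps)
            x_adv = x_adv.clamp(0, 1)                        # keep a valid pixel range
    return x_adv.detach()

# The outer objective of Alg. 1 then combines clean and adversarial batches:
# L_total = metric_loss(f(x), labels) + lam * metric_loss(f(x_adv), labels)
```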
Algorithm 2 ADML+U

Input: training set $X$; number of epochs $N$; original classifier $f$; weight of adversarial training $\lambda$; PGD attack steps $L$; PGD attack strength $\epsilon$.
Initialize the class-balanced sampler $S$;
for epoch $i \in \{1, \ldots, N\}$ do
  for mini-batch $S \in \{S_1, \ldots, S_n\}$ do
    $S^{(0)} = S$;
    Generate adversarial samples of $S$ with the PGD-FGSM attack:
    for $t \in 0 : L-1$ do
      $S^{(t+1)} = \Pi_{B_\infty(S^{(0)},\epsilon)}\left(S^{(t)} + \alpha \nabla_{S^{(t)}} \mathcal{L}_{uniformity}(f, S^{(t)})\right)$;
    end
    $S_{adv} = S^{(L)}$;
    Calculate the objective function $\mathcal{L}_{total} = \mathcal{L}(f, S) + \lambda \mathcal{L}(f, S_{adv})$, where $\mathcal{L}$ can be an arbitrary metric loss; in our paper we use the margin and multi-similarity losses;
    Update the network parameters of $f$ with $\mathcal{L}_{total}$;
  end
end
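ADML+U differs from ADML+A only in the loss the attacker ascends. Assuming the Gaussian-potential form of the uniformity objective discussed earlier (a sketch; Eq. 3's exact form is in the main text), the attack loss on a batch of embeddings could look like:

```python
import torch

def batch_uniformity_loss(emb):
    """Uniformity objective for the ADML+U attack: the mean Gaussian
    potential exp(-d^2) over all distinct pairs in the batch (assumed form)."""
    emb = torch.nn.functional.normalize(emb, dim=-1)
    d2 = torch.cdist(emb, emb).pow(2)
    off_diag = ~torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    return d2[off_diag].neg().exp().mean()
```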
## D.2 Dataset Details
- **CUB200-2011** contains 200 species of birds and 11,788 images (Wah et al., 2011). We use the first 100 species as the training set and the rest as the test set (license unknown).

- **CARS196** has 196 models of cars and 16,185 images (Krause et al., 2013). We use the first 98 models as the training set and the rest as the test set (ImageNet license, https://image-net.org/download.php).

- **Online-product** includes 22,634 classes of products and 120,053 images (Song et al., 2016). We use the first 11,318 classes as the training set and the rest as the test set (license unknown).

- **In-shop** contains 7,982 classes of clothing and 54,624 images (Liu et al., 2016). We use the first 3,997 classes as the training set and the rest as the test set. The test set is further partitioned into a query set with 14,218 images of 3,985 classes and a gallery set with 12,612 images of 3,985 classes. All four benchmarks follow the same class-disjoint protocol, sketched after this list.
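A minimal sketch of the class-disjoint split (first classes train, rest test), assuming the dataset is available as a list of (image_path, class_id) pairs (`cub_samples` below is a hypothetical name):

```python
def class_disjoint_split(samples, num_train_classes):
    """Split (image_path, class_id) pairs so that the training and test
    sets share no classes: the first classes train, the rest test."""
    train = [(p, c) for p, c in samples if c < num_train_classes]
    test = [(p, c) for p, c in samples if c >= num_train_classes]
    return train, test

# e.g. CUB200-2011: species 0-99 for training, 100-199 for testing
# train, test = class_disjoint_split(cub_samples, num_train_classes=100)
```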
## D.3 Evaluation Metrics
We use the following metrics to evaluate the DML models on the retrieval and clustering downstream tasks.
Recall@k. For the retrieval task we apply the Recall@k (R@k) metric (Jegou et al., 2010). For a test set $M := \{(x_1, y_1), \cdots, (x_n, y_n)\}$, the index set of the $k$ nearest neighbours of a sample $x_i$ is given by $S_k(x_i) := \arg\min_{|S|=k} \sum_{j\in S, j\neq i} ||f(x_i) - f(x_j)||_2$, and
$$\mathrm{R@k}:={\frac{1}{n}}\sum_{i=1}^{n}1_{\{\exists j\in S_{k}(x_{i}),y_{j}=y_{i}\}}.$$
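A small NumPy transcription of R@k (our sketch), checking for each query whether any of its k nearest neighbours shares the query's label:

```python
import numpy as np

def recall_at_k(emb, labels, k):
    """R@k: fraction of queries with at least one same-class sample
    among their k nearest neighbours (the query itself is excluded)."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # exclude j = i
    nn = np.argsort(d, axis=1)[:, :k]              # k nearest neighbours per query
    hits = (labels[nn] == labels[:, None]).any(axis=1)
    return hits.mean()
```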
NMI. We use the Normalized Mutual Information (NMI) (Christopher et al., 2008) to measure the quality of the clustering task. We use K-means to generate clusters of the embedded samples and obtain the label assignment Γ = {γ1*, ..., γ*n} from the clustering. Denoting the ground-truth labels by Ω = {y1*, ..., y*n}, the NMI is computed as NMI(Ω, Γ) = I(Ω, Γ)/[(H(Ω) + H(Γ))/2], where I(·, ·) is the mutual information function and H(·) is the entropy function.
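This arithmetic-mean normalization matches scikit-learn's default, so the clustering evaluation can be sketched as:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def nmi_score(emb, labels, n_classes):
    """Cluster the embeddings with K-means and score the assignment
    against the ground-truth labels with NMI."""
    assignment = KMeans(n_clusters=n_classes, n_init=10).fit_predict(emb)
    return normalized_mutual_info_score(labels, assignment)
```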
mAP@C. Following (Musgrave et al., 2020), we also include the mean average precision measured on recall (mAP@k) as an additional metric. We first compute the recalled samples, determined by the k-nearest-neighbour ranking, and then compute the mAP score following the standard mAP procedure. mAP@C is the mean over the class-wise average precision@k_c, where k_c is the number of samples in class c, i.e., we recall only the k_c nearest neighbours. Following the notation of Recall@k, the value of mAP@C is given by
$$mAP@C:={\frac{1}{n}}\sum_{c\in C}\sum_{y_{q}=c}{\frac{|\{x_{i}\in S_{k_{c}}(x_{q})\mid y_{i}=y_{q}\}|}{k_{c}}}\,.$$
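A direct NumPy transcription of this formula (our sketch):

```python
import numpy as np

def map_at_c(emb, labels):
    """mAP@C: for each query, recall its k_c nearest neighbours (k_c = size
    of the query's class) and average the precision over all queries."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude the query itself
    precisions = []
    for q in range(len(emb)):
        k_c = int((labels == labels[q]).sum())       # class size of the query
        nn = np.argsort(d[q])[:k_c]                  # S_{k_c}(x_q)
        precisions.append((labels[nn] == labels[q]).mean())
    return float(np.mean(precisions))
```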
## E Missing Proofs

## E.1 Proof Of 1
Denote the support sets of the class distributions by $S_1, \ldots, S_n$. According to Definition 1, the minimum of the alignment loss is reached when the encoder $f^*$ maps all samples in one class to the same feature vector, i.e. $\forall i,\ f^*(S_i) = \{v_i\}$. For arbitrary $i, j$, because $\cup_{k=1}^{n} S_k$ is connected and each $S_k$ is closed, we can select a sequence of sets $S_{k_0}, \ldots, S_{k_m}$ such that $S_{k_0} = S_i$, $S_{k_m} = S_j$, and $S_{k_l} \cap S_{k_{l+1}} \neq \emptyset$ for $l \in [m-1]$. By $S_{k_l} \cap S_{k_{l+1}} \neq \emptyset$ we have $v_{k_l} = v_{k_{l+1}}$ for all $l \in [m-1]$, thus $\forall i, j,\ f^*(S_i) = f^*(S_j)$. So all samples are projected to the same feature vector.
## E.2 Proof Of Theorem 2
Proof. Recall that the naive linear loss is given by

$$\mathcal{L}_{linear}(f;X,p_{tri}):=\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}\left[||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}\right].$$

Consider the positive part:

$$\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[||f(x)-f(y)||_{2}^{2}]=\mathbb{E}_{(x,y)\sim p_{pos}}[||f(x)-f(y)||_{2}^{2}]=\mathcal{L}_{alignment}\tag{8}$$

Consider the negative part:

$$-\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[||f(x)-f(x^{-})||_{2}^{2}]=2\,\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[f(x)^{T}f(x^{-})]-2$$

and

$$\begin{aligned}
\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[f(x)^{T}f(x^{-})]
&=\mathbb{E}_{(x,y)\sim p_{pos}}\left[f(x)^{T}\,\mathbb{E}_{x^{-}\sim p_{data}^{-}}[f(x^{-})]\right]\\
&=\mathbb{E}_{x\sim p_{data}}\left[f(x)^{T}\frac{1}{\int_{x^{-}}p_{data}(x^{-})dx^{-}}\left(\mathbb{E}_{x'\sim p_{data}}[f(x')]-\mathbb{E}_{x'\sim p_{data}}[f(x')1_{x'\in X_{x}}]\right)\right]\\
&=\frac{n}{n-1}\left(\mathbb{E}_{x\sim p_{data}}\left[f(x)^{T}\mathbb{E}_{x'\sim p_{data}}[f(x')]\right]-\mathbb{E}_{x\sim p_{data}}\left[f(x)^{T}p_{data}(X_{x})\,\mathbb{E}_{x'\sim p_{data}(\cdot|X_{x})}[f(x')]\right]\right)\\
&=\frac{n}{n-1}\mathbb{E}_{x\sim p_{data}}[f(x)]^{T}\mathbb{E}_{x\sim p_{data}}[f(x)]-\frac{1}{n-1}\mathbb{E}_{(x,y)\sim p_{pos}}[f(x)^{T}f(y)]\\
&=\frac{n}{n-1}\mathcal{L}_{unbiased}+\frac{1}{2(n-1)}(\mathcal{L}_{alignment}-2)
\end{aligned}\tag{9}$$

where we denote the class of sample $x$ by $X_x$.

Combining Eq. 8 and Eq. 9 together, we have

$$\begin{split}\mathcal{L}_{linear}(f)&=\frac{2n}{n-1}\mathcal{L}_{unbiased}+\frac{1}{n-1}(\mathcal{L}_{alignment}-2)-2+\mathcal{L}_{alignment}\\ &=\frac{n}{n-1}(2\mathcal{L}_{unbiased}+\mathcal{L}_{alignment}-2)\end{split}\tag{10}$$

$\square$
## E.3 Proof Of Theorem 3
Proof. The triplet loss is

$$\begin{aligned}
\mathcal{L}_{triplet}(f,\tau)&=\mathcal{L}_{linear}(f;X,p'_{tri})=\mathbb{E}_{(x,y,x^{-})\sim p'_{tri}}[||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}]\\
&=\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[(||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2})1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}+\tau\geq0\}}]\\
&=\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[||f(x)-f(y)||_{2}^{2}\,1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}+\tau\geq0\}}]\\
&\quad-\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[||f(x)-f(x^{-})||_{2}^{2}\,1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}+\tau\geq0\}}]
\end{aligned}$$

Consider the **negative part** (below, $1_{\{\cdot\}}$ abbreviates the indicator $1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x')||_{2}^{2}+\tau\geq0\}}$):

$$\begin{aligned}
&-\mathbb{E}_{(x,y,x^{-})\sim p_{tri}}[||f(x)-f(x^{-})||_{2}^{2}\,1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x^{-})||_{2}^{2}+\tau\geq0\}}]\\
&=-\mathbb{E}_{x\sim p_{data},x^{-}\sim p_{data}^{-}}[||f(x)-f(x^{-})||_{2}^{2}\,\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{\cdot\}}]]\\
&=-\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}[||f(x)-f(x')||_{2}^{2}\,\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{\cdot\}}]]\\
&\quad+\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}[||f(x)-f(x')||_{2}^{2}\,1_{x'\in X_{x}}\,\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{\cdot\}}]]\\
&=-\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}[||f(x)-f(x')||_{2}^{2}\,\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{\cdot\}}]]\\
&\quad+\frac{1}{n-1}\mathbb{E}_{(x,x')\sim p_{pos}}[||f(x)-f(x')||_{2}^{2}\,\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{\cdot\}}]]\\
&=-\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}[||f(x)-f(x')||_{2}^{2}\,S(x,x')]+\frac{1}{n-1}\mathbb{E}_{(x,x')\sim p_{pos}}[||f(x)-f(x')||_{2}^{2}\,S(x,x')]
\end{aligned}$$

where $X_x$ is the set of samples with the same label as $x$. The last equality is based on

$$S(x,x')=\int_{0}^{\infty}q(u+d^{2}(x,x')-\tau)\,du=\mathbb{E}_{q(d^{2}(x,y))}[1_{\{u\geq0\}}]=\mathbb{E}_{y\sim p_{data}(\cdot|X_{x})}[1_{\{||f(x)-f(y)||_{2}^{2}-||f(x)-f(x')||_{2}^{2}+\tau\geq0\}}]$$
## E.4 Proof Of 2
Proof. By $q(d^{2}(x,y))=\frac{1}{A}e^{-Ad^{2}(x,y)}$, the pdf of $u=d^{2}(x,y)-d^{2}(x,x')+\tau$ is $\frac{1}{A}e^{-A(u+d^{2}(x,x')-\tau)}$; then

$$S(x,x')={\frac{1}{A}}\int_{0}^{\infty}e^{-A(u+d^{2}(x,x')-\tau)}du={\frac{1}{A}}e^{-A(d^{2}(x,x')-\tau)}$$

Consider the gradient of the negative triplet loss $\mathbb{E}_{(x,y,x^{-})\sim p'_{tri}}[||f(x)-f(x^{-})||_{2}^{2}]$. During training we first sample from $p'_{tri}$ and then calculate the gradient, so the actual gradient flow is given by

$$-\mathbb{E}_{(x,y,x^{-})\sim p'_{tri}}[\nabla_{\theta}||f(x)-f(x^{-})||_{2}^{2}]$$

Analogously to the discussion in Section E.3, we have

$$\begin{aligned}
-\mathbb{E}_{(x,y,x^{-})\sim p'_{tri}}[\nabla_{\theta}||f(x)-f(x^{-})||_{2}^{2}]
&=-\frac{n}{n-1}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}[S(x,x')\,\nabla_{\theta}||f(x)-f(x')||_{2}^{2}]\\
&\quad+\frac{1}{n-1}\mathbb{E}_{(x,x')\sim p_{pos}}[S(x,x')\,\nabla_{\theta}||f(x)-f(x')||_{2}^{2}]\\
&=-\frac{e^{A\tau}n}{A(n-1)}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}\left[e^{-Ad^{2}(x,x')}\,\nabla_{d(x,x')}d^{2}(x,x')\,\frac{\partial d(x,x')}{\partial\theta}\right]+O\left(\frac{1}{n}\right)\\
&=-\frac{e^{A\tau}n}{A(n-1)}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}\left[e^{-Ad^{2}(x,x')}\,2d(x,x')\,\frac{\partial d(x,x')}{\partial\theta}\right]+O\left(\frac{1}{n}\right)\\
&=\frac{e^{A\tau}n}{A^{2}(n-1)}\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}\left[\nabla_{d(x,x')}\left(e^{-Ad^{2}(x,x')}\right)\frac{\partial d(x,x')}{\partial\theta}\right]+O\left(\frac{1}{n}\right)\\
&=\frac{e^{A\tau}n}{A^{2}(n-1)}\nabla_{\theta}\,\mathbb{E}_{x\sim p_{data},x'\sim p_{data}}\left[e^{-Ad^{2}(x,x')}\right]+O\left(\frac{1}{n}\right)\\
&=\frac{e^{A\tau}n}{A^{2}(n-1)}\nabla_{\theta}\,EG(A,X)+O\left(\frac{1}{n}\right)
\end{aligned}$$
## E.5 Connection Between Naive Linear Loss And LDA
The intuition of multi-class linear discriminant analysis (LDA) is to maximize the inter-class variance while minimizing the intra-class variance. In this section we show that the linear metric loss has a similar effect.
Definition 6. *(Total variation) For a random vector* $x$, *the total variation is*

$$TV(x):=tr(\mathbb{E}[(x-\mathbb{E}[x])(x-\mathbb{E}[x])^{T}])=\mathbb{E}[(x-\mathbb{E}[x])^{T}(x-\mathbb{E}[x])]=\mathbb{E}[x^{T}x]-\mathbb{E}[x]^{T}\mathbb{E}[x]\tag{11}$$
Definition 7. *(Centroid) Define the centroid of the samples in an arbitrary set* $Y$ *by*

$$c_{Y}:=\mathbb{E}_{y\sim p_{data}(\cdot|Y)}[f(y)]=\frac{1}{p_{data}(Y)}\mathbb{E}_{y\sim p_{data}}[f(y)1_{y\in Y}]\tag{12}$$

Proposition 3. *(Intra-class total variation) The intra-class total variation*

$$TV_{intra}(X):=\mathbb{E}_{x\sim p_{data}}[(f(x)-c_{X_{x}})^{T}(f(x)-c_{X_{x}})]$$

*is proportional to the alignment loss:*

$$\mathcal{L}_{aligned}(f)=2\,TV_{intra}(X)\tag{13}$$
Proof.

$$\begin{aligned}
TV_{intra}(X)&=\mathbb{E}_{x\sim p_{data}}[(f(x)-c_{X_{x}})^{T}(f(x)-c_{X_{x}})]\\
&=\sum_{i=1}^{n}p_{data}(X_{i})\left(\mathbb{E}_{x\sim p_{data}(\cdot|X_{i})}[f(x)^{T}f(x)]-c_{X_{i}}^{T}c_{X_{i}}\right)\\
&=\frac{1}{n}\sum_{i=1}^{n}\left(1-n\,\mathbb{E}_{x\sim p_{data},x'\sim p_{data}(\cdot|X_{x})}[f(x)^{T}f(x')1_{x\in X_{i}}]\right)\\
&=1-\mathbb{E}_{x\sim p_{data},x'\sim p_{data}(\cdot|X_{x})}\left[f(x)^{T}f(x')\sum_{i=1}^{n}1_{x\in X_{i}}\right]\\
&=1-\mathbb{E}_{(x,y)\sim p_{pos}}[f(x)^{T}f(y)]=\frac{1}{2}\mathcal{L}_{aligned}(f)
\end{aligned}\tag{14}$$
Proposition 4. *(Inter-class total variation) The inter-class total variation*

$$TV_{inter}(X):=\mathbb{E}_{x\sim p_{data}}[(c_{X_{x}}-c)^{T}(c_{X_{x}}-c)],$$

*where* $c$ *is the centroid of all samples, is proportional to the naive linear loss:*

$$\mathcal{L}_{linear}(f)=-\frac{2n}{n-1}TV_{inter}(X)\tag{15}$$
Proof.

$$TV_{inter}(X)=\mathbb{E}_{x\sim p_{data}}[c_{X_{x}}^{T}c_{X_{x}}]-c^{T}c\tag{16}$$

Firstly,

$$c={\frac{1}{p_{data}(X)}}\mathbb{E}_{x\sim p_{data}}[f(x)1_{x\in X}]=\mathbb{E}_{x\sim p_{data}}[f(x)]$$

Hence $c^{T}c=\mathcal{L}_{unbiased}(f)$.

Next,

$$\begin{aligned}
\mathbb{E}_{x\sim p_{data}}[c_{X_{x}}^{T}c_{X_{x}}]&=\sum_{i=1}^{n}p_{data}(X_{i})\,\mathbb{E}_{x\sim p_{data}(\cdot|X_{i})}[c_{X_{i}}^{T}c_{X_{i}}]=\sum_{i=1}^{n}p_{data}(X_{i})\,c_{X_{i}}^{T}c_{X_{i}}\\
&=\sum_{i=1}^{n}\mathbb{E}_{x\sim p_{data},y\sim p_{data}(\cdot|X_{x})}[f(x)^{T}f(y)1_{x\in X_{i}}]\\
&=\mathbb{E}_{x\sim p_{data},y\sim p_{data}(\cdot|X_{x})}\left[f(x)^{T}f(y)\sum_{i=1}^{n}1_{x\in X_{i}}\right]\\
&=\mathbb{E}_{(x,y)\sim p_{pos}}[f(x)^{T}f(y)]
\end{aligned}$$

Thus $\mathbb{E}_{x\sim p_{data}}[c_{X_{x}}^{T}c_{X_{x}}]=1-\frac{1}{2}\mathcal{L}_{aligned}(f)$, and $TV_{inter}(X)=1-\frac{1}{2}\mathcal{L}_{aligned}(f)-\mathcal{L}_{unbiased}(f)=-\frac{n-1}{2n}\mathcal{L}_{linear}(f)$.
Proposition 5. *(Total variation of the dataset) The total variation of the dataset* $X$,

$$TV_{total}(X):=\mathbb{E}_{x\sim p_{data}}[(f(x)-c)^{T}(f(x)-c)],$$

*where* $c$ *is the centroid of all samples, is related to the unbiased loss by*

$$\mathcal{L}_{unbiased}(f)=1-TV_{total}(X)\tag{17}$$
Proof.
$$TV_{total}(X)=\mathbb{E}_{x\sim p_{data}}[f(x)^{T}f(x)]-\mathbb{E}_{x\sim p_{data}}[f(x)]^{T}\mathbb{E}_{x\sim p_{data}}[f(x)]=1-\mathcal{L}_{unbiased}(f)\tag{18}$$
As a consistency check, the decomposition $TV_{total}(X)=TV_{intra}(X)+TV_{inter}(X)$ validates the proofs above: $\frac{1}{2}\mathcal{L}_{aligned}(f)+\left(1-\frac{1}{2}\mathcal{L}_{aligned}(f)-\mathcal{L}_{unbiased}(f)\right)=1-\mathcal{L}_{unbiased}(f)=TV_{total}(X)$.
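The decomposition can also be checked numerically (our sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, per, k = 8, 30, 16                        # classes, samples per class, dimension
f = rng.standard_normal((n * per, k))
f /= np.linalg.norm(f, axis=1, keepdims=True)
labels = np.repeat(np.arange(n), per)

c = f.mean(0)                                # global centroid
class_c = np.stack([f[labels == i].mean(0) for i in range(n)])

tv_total = ((f - c) ** 2).sum(1).mean()
tv_intra = ((f - class_c[labels]) ** 2).sum(1).mean()
tv_inter = ((class_c[labels] - c) ** 2).sum(1).mean()

print(f"TV_total            = {tv_total:.6f}")
print(f"TV_intra + TV_inter = {tv_intra + tv_inter:.6f}")   # equal up to float error
```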