RedTachyon committed on
Commit
f6c1104
1 Parent(s): c025d24

Upload folder using huggingface_hub

J9GzUw9HgA/10_image_0.png ADDED

Git LFS Details

  • SHA256: c031f88365ad8f351c0e11040a24bdca7f628fc33f72cd55426f8a8a501c5f05
  • Pointer size: 130 Bytes
  • Size of remote file: 27.5 kB
J9GzUw9HgA/10_image_1.png ADDED

Git LFS Details

  • SHA256: f4be8699e9f91335d4fcda4f71268dabfbb015582d8954d4e6fa6ed9b5af8fd4
  • Pointer size: 130 Bytes
  • Size of remote file: 13.2 kB
J9GzUw9HgA/11_image_0.png ADDED

Git LFS Details

  • SHA256: 86d8859049695dc784de237c59d430bbe7bbeb4132122bec4c768abfbff96911
  • Pointer size: 130 Bytes
  • Size of remote file: 27.2 kB
J9GzUw9HgA/8_image_0.png ADDED

Git LFS Details

  • SHA256: 09ba268875c09ecb4d04c21a4fa09ccae568ee1f21e8926bff569de331ff46e6
  • Pointer size: 130 Bytes
  • Size of remote file: 19 kB
J9GzUw9HgA/8_image_1.png ADDED

Git LFS Details

  • SHA256: 6c8502a1ee4b9ba34bc0660019f0c9ec27162c6d225cc74b557186ab53596a3f
  • Pointer size: 130 Bytes
  • Size of remote file: 10.5 kB
J9GzUw9HgA/9_image_0.png ADDED

Git LFS Details

  • SHA256: 6e888167c0102a5cb5d5e955cc1a906cae6707e2ad40be5b8df5ec22b90188ad
  • Pointer size: 130 Bytes
  • Size of remote file: 83.7 kB
J9GzUw9HgA/J9GzUw9HgA.md ADDED
@@ -0,0 +1,711 @@
1
+ # A New Stochastic Optimization Technique For Combating Data Poisoning Attacks
2
+
3
+ Anonymous authors Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ In this paper, we present new techniques for building and analyzing robust stochastic optimization algorithms. To solve the given d-dimensional optimization problem, our technique generates a sequence of random k-dimensional subproblems, where *k < d*, and solves them instead. Unlike traditional optimization analysis which exploits structural assumptions like convexity, Lipschitzness or Polyak-Lojasiewicz criterion of the loss function to obtain convergence rates, our analysis only uses the geometrical structure of the randomness used in the algorithm. This offers a wider applicability than traditional methods, and indeed it applies to all smooth loss functions. Moreover, our analysis identifies an important parameter of the minimizers of the loss function, which we call the *gap parameter*. This parameter dictates the convergence rates of our algorithm. We experimentally study the algorithm on linear regression, logistic regression, SVMs, and neural networks. Using these experiments, we argue that the gap parameter of a minimizer also controls its robustness to the presence of noise in the training data (popularly referred to as *data poisoning*). A modified algorithm which can control the effect of noise on its output is presented as well. Finally, we discuss how the choice of k affects the convergence and robustness of our algorithm.
8
+
9
+ ## 1 Introduction
10
+
11
We study methods for stochastic optimization in the setting where a subset of training data might be corrupted by an adversary. Stochastic methods in optimization have become an important workhorse in the practice of modern Machine Learning. These methods usually work on data collected from a variety of different sources (such as scraping the Internet). Naturally, an adversary can inject malicious data into this training set, and it might be very challenging to detect and remove this corrupted subset. In such a scenario, it is desirable to have algorithms which are immune to injections of small amounts of arbitrary corruptions.
12
+
13
In this paper, we propose a novel method for stochastic optimization which has the potential of addressing this problem for a wide class of loss functions.

Difficulty of the problem. The problem of learning good models under worst-case corruptions in training data is NP-hard even for simple problems like binary classification with half-spaces (Guruswami & Raghavendra, 2006). The popular way of dealing with these difficulties is to make distributional assumptions on the data or the noise added by the adversary (Khardon & Wachman, 2007). These distributional assumptions heavily dictate the design of appropriate algorithms in these settings. However, one doesn't always know what the distributions will look like in practice and, moreover, in the case of noise a determined attacker might tailor it to the specific training data at hand and hence invalidate any distributional assumptions made. This makes the problem of building optimization algorithms which are robust to worst-case noise in the training data seemingly intractable.
14
+
15
In this work, we propose a tractable way out of this difficulty. Instead of expecting our algorithm to behave perfectly on all input instances and be able to handle all worst-case noise (which makes the problem NP-hard), we build an algorithm that performs well on *most* instances and handles worst-case noise on these instances without putting any distributional assumptions on either the training data or the noise. By working well on *most* input instances we expect to capture all the instances that one could reasonably expect to see in practice, while leaving out the small fraction of instances which are often responsible for the computational hardness of a given problem (for example, the ones to which a reduction from an NP-hard problem like satisfiability might map). The idea is to develop an algorithm whose output does not change drastically under small perturbations in the training data. This algorithm is not designed to necessarily perform well on a small set of instances, even when some other algorithm might be able to solve these instances well. The hope is that this small set of instances is one that will not be seen in practice. Since characterizing the complexity landscape of various instances is a highly intricate and mathematically extremely challenging subject (Arora & Barak, 2009), the above discussion is meant to only give an intuitive understanding of our approach. As we shall see later in Section 6, we achieve our objective by identifying a crucial property (called the *gap parameter*) of the solutions of a given optimization problem. Our algorithm will only find solutions that have a large enough gap parameter, giving them good robustness properties under perturbations in the input data.
17
+
18
+ Our approach. Instead of working in the original dimension of the given optimization problem, our algorithm proceeds by solving the given problem in a sequence of random hyperplanes of a smaller dimension.
19
+
20
+ These hyperplanes are defined so that each successive one contains the solution obtained from the previous one. The algorithm stops either when the improvements in the successive hyperplanes become small enough or after a chosen number of iterations, and outputs the solution obtained in the last hyperplane. See Algorithms 1 and 2.
21
+
22
+ Significance of our approach. Our approach to stochastic optimization algorithms has interesting connections with the theory of expander graphs. The role of the gap parameter in our analysis is akin to that of the spectral gap of the Laplacian of a graph. Under very mild assumptions on this parameter (the logarithm of this parameter has to be polynomially bounded in the dimension of the problem) we obtain a polynomially bounded runtime for our algorithm (see Theorem 3).
23
+
24
+ The second eigenvalue is an important spectral quantity for graphs (Kowalski, 2019) and our analysis shows that a similar quantity for functions dictates how fast certain stochastic optimization algorithms can converge.
25
+
26
+ This is distinct from all previous analyses which rely on structural properties of the functions like convexity, Lipschitzness, or the Polyak-Lojasiewicz criterion to bound the rate of convergence. As far as we know, such a parallel between the very well-studied theory of random walks on expander graphs and stochastic optimization algorithms is entirely novel to our work.
27
+
28
The highlight of our analysis is Lemma 4, which is a statement about moving non-trivially away from the maximum of a function by random sampling. For a function whose domain is a Lie group which satisfies Kazhdan's Property (T), it gives a lower bound on the size of the set where the function takes a value non-trivially away from the maximum. This is a very general lemma (see Section 5) and we believe that it will find applicability in many future analyses.

Organization of this paper. In Section 2, we discuss existing approaches for stochastic optimization and dealing with perturbations in training data, popularly referred to as data poisoning attacks. In Section 3, we set up the notation and introduce all the concepts needed for the rest of the paper. In Section 4, we describe and discuss our stochastic optimization algorithm and give our convergence result. In Section 5, we discuss the main theoretical technique of our analysis. In Section 6, we discuss the robustness of our algorithm and demonstrate the efficacy of our approach with experiments in Section 7. Finally, in Section 8 we discuss the importance of our work and highlight the main takeaways from the paper.
29
+
30
+ ## 2 Related Work
31
+
32
+ In this section, we compare our approach to stochastic optimization with existing approaches as well as discuss the literature on data poisoning attacks in Machine Learning.
33
+
34
Comparison to existing stochastic techniques. Most of the existing literature focuses on either stochastic gradient descent or its popular variants like Adam (Kingma & Ba, 2014) and AdaGrad (Duchi et al., 2011). In stochastic gradient descent one picks a random subset of the data, computes the loss on this subset and uses the gradient of this loss to update the parameters of the model. Convergence for this scheme can be shown under assumptions like strong convexity (Moulines & Bach, 2011) or the Polyak-Lojasiewicz condition (Gower et al., 2021), and convergence to stationary points can be shown for non-convex functions which satisfy an expected smoothness assumption (Khaled & Richtárik, 2023). These convergence results rely crucially on the respective structural properties mentioned for the loss functions, while the randomness of picking a subset of the data usually worsens the convergence rates as compared to their deterministic counterparts (which work with the full training data in all iterations). Our approach is fundamentally different. Instead of subsets of the training data being the source of randomness, in our approach the randomness comes from the selection of random subspaces in which the given optimization problem is solved. In the particular case when the optimization problem is solving for optimal parameters in a Euclidean space, our method works in subspaces of the full space of the parameters. The only existing technique that has superficial similarities to this is the dropout method in deep learning (Srivastava et al., 2014). But even there one typically considers only subsets and not subspaces of the parameters. Note that the set of all subspaces of the parameter space is a much bigger space (being a smooth manifold) than the set of all of their subsets (which is a discrete set). In addition, dropout is a specialized technique that is only used in the context of deep learning.
36
+
37
Analytically, our analysis depends on the crucial fact that the space our randomness is drawn from forms a smooth manifold that is a quotient of a compact Lie group, and therefore in particular satisfies Kazhdan's Property (T). The only assumption we need on the loss function is that it is smooth. We do not need any other assumptions like convexity or Lipschitzness.

Data poisoning in Machine Learning. Many methods exist in the literature for dealing with data poisoning; see Tian et al. (2022) and Cinà et al. (2023) for excellent surveys. While there are a lot of methods which try to deal with data poisoning for specific models like linear regression, logistic regression, or neural networks, few methods exist which have general applicability. Data sanitization and some form of bagging and majority voting seem to be among these few general techniques. Data sanitization can be difficult as adversaries assemble more and more sophisticated forms of noise to make noisy data look indistinguishable from real data. Bagging and voting can decrease the amount of data available for training for a single model and can have unwanted accuracy trade-offs. Robust training, which augments the training data with poisoned instances to specifically train the model to handle such data, is another popular method to combat such attacks. All of these techniques deal with the preprocessing of the training data, and not with the actual learning process.
38
+
39
+ Techniques like Prasad et al. (2020) and Charikar et al. (2017), which deal with the learning process, have been developed in the robust statistics literature to mitigate the influence of noise in the training data. But these techniques tend to be intractable without making restrictive distributional or modeling assumptions. Our technique, which is primarily a new optimization algorithm, can be used either as an alternative to these existing techniques or in conjunction with them to provide enhanced protection against data poisoning attacks.
40
+
41
+ ## 3 Preliminaries
42
+
43
Notation. We use G to represent a Lie group and H to represent a subgroup of it. Moreover, we use G/H to represent the quotient of G w.r.t. H. For a treatment of Lie groups see Bump (2013). We use O(d) to represent the compact orthogonal group acting on $\mathbb{R}^d$. The product group O(k) × O(d − k) can naturally be identified as a subgroup of O(d). The quotient O(d)/(O(k) × O(d − k)) has a natural interpretation as the set of all k-dimensional subspaces of $\mathbb{R}^d$. This is a well-studied geometric object, popularly known as the Grassmannian (see Bendokat et al. (2024)). We denote it by $G_{k,d}$. We use the term k-plane to refer to a k-dimensional affine subspace, i.e., a k-dimensional hyperplane of $\mathbb{R}^d$, in the rest of the paper.

For us, $\ell : \mathbb{R}^d \to \mathbb{R}$ will be the smooth loss function we want to optimize. Here, smoothness means that ℓ is infinitely differentiable. We use η with various subscripts to represent subspaces or k-planes of appropriate dimensions (which will be clear from the context).

Measures on Lie Groups. Our Lie groups, like all locally compact Lie groups, have a left-invariant Haar measure which is unique up to scaling (Nachbin, 1976). This covers a wide range of Lie groups used in applications (Gallier & Quaintance, 2020). For results regarding the existence of invariant measures on compact Lie groups, their quotients (like the Grassmannians) and the validity of Fubini-style decompositions, see Chapter 1 of Sepanski (2006).
50
+
51
For a measurable subset A of a given measure space we use |A| to denote the measure of this set under the implied measure.

Kazhdan's Property (T). For a definition of this property see Section 3.1 of Rogawski & Lubotzky (1994). It is primarily defined for non-compact Lie groups; indeed, for compact Lie groups, such as the ones we are considering in this paper, the property is trivially satisfied. We bring it up here because of its impact on expander graphs and their random walks, which forms an important motivation for our work, and because we formulate our core lemma, Lemma 4, on non-compact Lie groups. We will only be working with the following consequence of the property in our proofs:

Lemma 1 [Remark 1.1.4 in Bekka et al. (2008)] *Let G be a locally compact Lie group that satisfies Kazhdan's Property (T). Then there exists a c > 0 such that for all functions $f : G \to \mathbb{R}$, square integrable w.r.t. a left-invariant Haar measure and satisfying $\int_G f = 0$, there exists a $\gamma \in G$ satisfying*

$$\|f-\gamma\cdot f\|^{2}\geq c\|f\|^{2},$$

*where the action of γ on f is defined by $(\gamma\cdot f)(x) = f(\gamma^{-1}\cdot x)$.*
58
+
59
Remark 2 The constant c > 0 in Lemma 1 is only dependent on the group and is popularly referred to as the Kazhdan constant of the group. We can choose c = 2 for compact Lie groups. The proof of Lemma 4 in Appendix A.2 includes a proof of this fact.

Noise model. We will study the robustness properties of our techniques in Section 6. We consider noise only in the training data matrix A. Noise may be introduced by perturbing a certain fraction of the rows of A with a noise matrix ∆ or by augmenting A with a small number of well-crafted data points. Generally, both these settings can be mathematically modelled as adding noise ∆ to A. This setting is popularly referred to as *data poisoning*. We study the behavior of our approach as the fraction of rows that ∆ corrupts increases. Note that we do not make any distributional assumptions on ∆; instead, we work with the worst-case ∆ by evaluating our technique against existing data poisoning attacks in the literature, which generate ∆ with full knowledge of A.
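As an illustration of this noise model, a minimal Python sketch of row-wise poisoning could look as follows; the helper name and the use of NumPy are illustrative choices, and the attacker-supplied rows are left abstract:

```python
import numpy as np


def poison_rows(A, delta_rows, fraction, rng=None):
    """Corrupt a `fraction` of the rows of the data matrix A with attacker-chosen rows of Delta."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    m = int(np.ceil(fraction * n))                       # number of rows the adversary may touch
    idx = rng.choice(n, size=m, replace=False)
    A_poisoned = A.copy()
    A_poisoned[idx] += delta_rows[:m]                    # A -> A + Delta on the chosen rows
    return A_poisoned, idx
```

Here the adversary is free to choose both the corrupted rows and the perturbation ∆ with full knowledge of A.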
62
+
63
+ ## 4 Our Results
64
+
65
+ In this section, we describe our random walk, which is a stochastic optimization technique applicable to any smooth loss function. Moreover, we provide a convergence result that works in this very general setting.
66
+
67
+ ## 4.1 Random Walk
68
+
69
The aim of any optimization algorithm is to find some critical point of ℓ, usually one of the global minima, i.e., find an $x^* \in \mathbb{R}^d$ such that

$$x^{*}\in\arg\min_{x\in\mathbb{R}^{d}}\ell(x).$$

Assume that we are given black-box access for solving the same problem but in a smaller-dimensional space, specifically a k-plane $\eta \subset \mathbb{R}^d$, i.e., we can find an $x^*_\eta$ such that

$$x_{\eta}^{*}\in\arg\min_{x\in\eta}\ell(x).$$

Our random walk is motivated by asking the question: Can we use this black box repeatedly for a sequence of k-planes $\eta_1, \eta_2, \ldots$ to find an $x^*$? This suggests a natural random walk as follows: start with some $x_1 \in \mathbb{R}^d$ and sample a random k-plane $\eta_1$ containing $x_1$; find an $x_2$ such that $x_2 \in \arg\min_{x\in\eta_1}\ell(x)$; in the i-th step, find a random k-plane $\eta_i$ containing $x_i$ and solve for $\arg\min_{x\in\eta_i}\ell(x)$; stop the algorithm after N steps. We state this more formally in Algorithm 1. This is a very natural random walk from a computational complexity theory perspective. It leverages the ability to solve several smaller-dimensional random problems when solving a bigger-dimensional problem.
83
+
84
This approach has been used to study other problems in the literature (see Section 10.1.2 in Arora & Barak (2009)), and has provided interesting insights into the structure of these problems. This approach is called random self-reducibility. To the best of our knowledge, our work is the first to study this approach for optimization problems in Euclidean space.

**Algorithm 1** Our Random Walk

**Require:** $\ell : \mathbb{R}^d \to \mathbb{R}$, $x_0 \in \mathbb{R}^d$, $d > k > 1$, $T \geq 0$, $N \geq 0$
1: **for** $i = 1, \ldots, N$ **do**
2: Sample $\eta_1, \ldots, \eta_T$ uniformly from $G_{k,d}$
3: $y_j \leftarrow \arg\min_{y \in x_{i-1}+\eta_j} \ell(y)$ for $j \in [T]$
4: $x_i \leftarrow \arg\min_{y \in \{y_1,\ldots,y_T\}} \ell(y)$
5: **end for**
6: **return** $x_N$
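As an illustration, a minimal Python sketch of Algorithm 1 could look as follows; here SciPy's general-purpose BFGS minimizer stands in for the black-box subproblem solver, a subspace is sampled uniformly from $G_{k,d}$ by orthonormalizing a Gaussian matrix, and the helper names are ours:

```python
import numpy as np
from scipy.optimize import minimize


def sample_subspace(d, k, rng):
    """Uniformly random k-dimensional subspace of R^d: orthonormalize a Gaussian d x k matrix."""
    basis, _ = np.linalg.qr(rng.standard_normal((d, k)))
    return basis  # columns form an orthonormal basis of the sampled subspace


def random_walk(loss, x0, k, N, T, seed=0):
    """Sketch of Algorithm 1: repeatedly minimize `loss` on random k-planes through the iterate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(N):
        candidates = []
        for _ in range(T):
            B = sample_subspace(d, k, rng)
            # Black-box solve of min_z loss(x + B z), i.e. the problem restricted to the k-plane x + eta.
            res = minimize(lambda z: loss(x + B @ z), np.zeros(k), method="BFGS")
            candidates.append(x + B @ res.x)
        x = min(candidates, key=loss)  # keep the best of the T candidate solutions
    return x
```

For example, `random_walk(lambda w: np.sum((A @ w - b) ** 2), np.zeros(d), k=5, N=20, T=10)` would run the walk on a least-squares loss; in practice the inner solver should be whatever technique best suits the model being learned.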
92
+
93
+ One of the surprising observations from the practice of modern ML is the ease of solving many seemingly intractable non-convex optimization problems. This justifies the use of a black-box to solve a problem of a smaller dimension in Algorithm 1. Since our random walk is designed to serve the dual purpose of optimization as well as learning a model robust to data poisoning, one would not be advised to use the same black-box to solve the original problem directly. The black box can be implemented with any of the existing techniques. We recommend using a technique best suited to the specific machine learning model that is being learned. We now study the convergence properties of Algorithm 1.
94
+
95
+ ## 4.2 Convergence Analysis
96
+
97
For the convergence analysis, we need to define a few auxiliary functions. We define $L : \mathbb{R}^d \times G_{k,d} \to \mathbb{R}$, $M : \mathbb{R}^d \to \mathbb{R}$, $m : \mathbb{R}^d \to \mathbb{R}$, $\Theta : \mathbb{R}^d \to \mathbb{R}$ and $\theta : \mathbb{R} \to \mathbb{R}$ as follows:

$$L(x,\eta):=\min_{y\in x+\eta}\ell(y),$$

$$M(x):=\max_{\eta\in G_{k,d}}L(x,\eta),\quad m(x):=\min_{\eta\in G_{k,d}}L(x,\eta),$$

$$\Theta(x):=\frac{\|L(x,\cdot)\|_{2}^{2}}{2|M(x)-m(x)|^{2}},\quad\theta(\alpha):=\min_{x\in\{x:\,\ell(x)=\alpha\}}\Theta(x).$$
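These quantities can also be estimated numerically by sampling subspaces. A rough Monte Carlo sketch is given below; it treats $\|\cdot\|_2$ as the L² norm under the uniform measure on $G_{k,d}$ and again uses a generic solver as the black box, both of which are illustrative assumptions rather than part of the definition:

```python
import numpy as np
from scipy.optimize import minimize


def estimate_theta(loss, x, k, num_samples=200, seed=0):
    """Monte Carlo estimate of Theta(x) = ||L(x, .)||_2^2 / (2 |M(x) - m(x)|^2).
    The L^2 norm is taken w.r.t. the uniform measure on G_{k,d} and approximated by sampling;
    M(x) and m(x) are approximated by the sampled extremes (assumed to differ)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    d = x.size
    values = []
    for _ in range(num_samples):
        B, _ = np.linalg.qr(rng.standard_normal((d, k)))          # random k-dimensional subspace
        res = minimize(lambda z: loss(x + B @ z), np.zeros(k), method="BFGS")
        values.append(res.fun)                                     # one sample of L(x, eta)
    values = np.asarray(values)
    M_hat, m_hat = values.max(), values.min()
    return float(np.mean(values ** 2) / (2.0 * (M_hat - m_hat) ** 2))
```

Such an estimate is only indicative, since the maximum and minimum over $G_{k,d}$ are approximated by the sampled extremes.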
108
+
109
We call θ the **gap function** of ℓ and θ(ℓ(x)) the **gap parameter** of the minimizer x. The gap function of ℓ plays a crucial role in our analysis and has very close connections with the spectral gap of a Laplacian on a graph. We discuss this connection in more detail in the next section. Our main convergence result is as follows:
110
Theorem 3 *Let $\ell : \mathbb{R}^d \to \mathbb{R}$ be a smooth loss function such that $\theta(\ell) \geq 1-\delta$ for some $\delta > 0$. Let $\alpha = \min_x \ell(x)$. For all $\epsilon_0$ and $\gamma$ in $(0,1)$, with $N = \frac{\log 1/\epsilon_0}{\log 2/\delta}$ and $T = \frac{\log N + \log 2/3\gamma}{\log 1/\delta}$, and with probability at least $1-\gamma$, Algorithm 1 finds an $x \in \mathbb{R}^d$ such that $\ell(x) - \alpha \leq \epsilon_0(\ell(x_0) - \alpha)$.*
115
+
116
+ We defer the proof of Theorem 3 to Appendix A.5 and discuss the main theoretical ideas behind it in Section 5. For now, there are several interesting points to note about Theorem 3:
117
+ 1. It only uses a smoothness assumption on the loss function ℓ. We believe that this assumption can be relaxed to a continuity assumption with a little bit more work. But for ease of exposition, we avoid it. In particular, note that we do not assume any bound on the Lipschitz constant of ℓ, which is quite unusual for convergence analysis in the optimization literature.
118
+
119
2. The dependence on all parameters is logarithmic. In contrast, the dependence on the relevant parameters (like the Lipschitz or Polyak-Lojasiewicz constant) is at least linear for gradient descent and its stochastic counterparts. Moreover, the dependence on ϵ0 for stochastic procedures is also always at least linear in 1/ϵ0, even in the very limited setting of convex functions (Garrigos & Gower, 2024). A small numerical illustration of these iteration counts is given after this list.
120
+
121
3. The analysis is non-local in the sense that at each iteration we directly track progress with respect to the global minimum value α. In typical analyses in the non-convex optimization literature, one uses bounds on the difference between consecutive iterates, i.e., $\ell(x_i) - \ell(x_{i-1})$.
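To illustrate the logarithmic dependence noted in point 2, the following small computation evaluates the iteration counts of Theorem 3 for concrete values, reading the fraction $\log 2/3\gamma$ as $\log(2/(3\gamma))$ and rounding up to integers (both are our reading of the statement):

```python
import math


def theorem3_schedule(eps0, gamma, delta):
    """Iteration counts from Theorem 3, reading the second fraction as log(2/(3*gamma))
    and rounding both counts up to integers."""
    N = math.ceil(math.log(1.0 / eps0) / math.log(2.0 / delta))
    T = math.ceil((math.log(N) + math.log(2.0 / (3.0 * gamma))) / math.log(1.0 / delta))
    return N, T


print(theorem3_schedule(eps0=1e-3, gamma=0.05, delta=0.5))  # -> (5, 7)
```

Even for a three-orders-of-magnitude reduction in the loss gap, both counts stay in the single digits under these (illustrative) parameter values.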
122
+
123
+ ## 5 Main Theoretical Technique
124
+
125
In this section we state and discuss Lemma 4, which forms our main theoretical technique. We state Lemma 4 more generally than is needed to prove Theorem 3: it is stated for any locally compact Lie group that satisfies Kazhdan's Property (T). We do this in order to emphasize the general nature of our result and to bring out the connection of this crucial lemma with Kazhdan's Property (T), which is a very important and extensively studied property of Lie groups (Bekka et al., 2008). A proof of Lemma 4 is presented in Appendix A.1.

Lemma 4 *Let G be a locally compact Lie group that satisfies Kazhdan's Property (T) with constant c. Fix a normalized left-invariant Haar measure on G. Let $f : G \to \mathbb{R}$ be a smooth function such that $\int_G f = 0$. Let $\alpha = \min_{g\in G} f(g)$, $\beta = \max_{g\in G} f(g)$ and $\epsilon = \frac{c\|f\|_2^2}{2|\beta-\alpha|^2}$. Then,*

$$\left|\{g:f(g)-\alpha\leq(1-\sqrt{\epsilon})(\beta-\alpha)\}\right|\geq\epsilon/2.$$
132
+
133
+ Contextualizing Lemma 4. The lemma gives a non-trivial lower bound on the probability of finding a point that is substantially away from the maximum of the function defined on a locally compact group G, by simply sampling a point randomly according to the fixed left-invariant Haar measure. The fundamental nature of this lemma should be compared with results like the Markov inequality or the Chebyshev inequality, which give a non-trivial lower bound on the probability of getting a value close to the mean by sampling a point according to the used probability distribution.
134
+
135
+ Generality of Lemma 4. Though this result is stated on a Lie group one can transfer it to other spaces which lack this structure, for example, the n-dimensional hypercube. This is possible because one can construct a smooth map from the n-dimensional hypercube to the n-dimensional torus, which is a compact Lie group. We state this here to demonstrate the generality of Lemma 4 but we do not provide the details because we do not use such a result in the paper. In the next section, in Lemma 6, we discuss how the result can be transferred to an appropriate quotient of a Lie group. We also note that an argument similar to the proof of Lemma 4 can be constructed for a discrete group like the boolean hypercube, further increasing the applicability of our result.
136
+
137
+ ## 5.1 Using Lemma 4 To Prove Theorem 3
138
+
139
+ In Algorithm 1 we sample from the Grassmannian, which is a quotient space of the compact Lie group O(d). We do not sample from the group directly. In Lemma 6 we show that a statement similar to Lemma 4 holds for our quotient space, also. The proof of Lemma 6 (which is presented in Appendix A.3) uses Lemma 4 adapted to the special case of compact Lie groups (presented in Corollary 5). Kazhdan's Property (T) is a concept for Lie groups and does not have an equivalent statement for their quotients.
140
+
141
Corollary 5 *Let G be a compact Lie group and let $f : G \to \mathbb{R}$ be a smooth function such that $\int_G f = 0$. Let $\alpha = \min_{g\in G} f(g)$, $\beta = \max_{g\in G} f(g)$ and $\epsilon = \frac{\|f\|_2^2}{|\beta-\alpha|^2}$. Then,*

$$\left|\{g:f(g)-\alpha\leq(1-\sqrt{\epsilon})(\beta-\alpha)\}\right|\geq\epsilon/2. \tag{1}$$

Lemma 6 *Let G be a compact Lie group and H a closed subgroup of G. Let $f : G/H \to \mathbb{R}$ be a smooth function such that $\int_{G/H} f = 0$. Let $\alpha = \min_{x\in G/H} f(x)$, $\beta = \max_{x\in G/H} f(x)$ and $\epsilon = \frac{\|f\|_2^2}{|\beta-\alpha|^2}$. Then,*

$$\left|\{x:f(x)-\alpha\leq(1-\sqrt{\epsilon})(\beta-\alpha)\}\right|\geq\epsilon/2.$$
151
+
152
A direct proof of Corollary 5 (which also establishes Lemma 1 for compact Lie groups) is presented in Appendix A.2 and the proof of Lemma 6 is presented in Appendix A.3.

Discussion on the gap parameter. One of the most important applications of Kazhdan's Property (T) is the first explicit construction of an expander graph in Margulis (1973). By virtue of their spectral gap (the difference between the first and second eigenvalue of the Laplacian), expander graphs have very good mixing properties, i.e., a random walk on an expander graph quickly gets distributed evenly across the graph (Rogawski & Lubotzky, 1994). The parameter ϵ in Lemmas 4-6 behaves very similarly to the spectral gap of an expander graph. It dictates how fast f can approach its minimum α. In fact, it plays a similar role in the proof of Theorem 3 as the spectral gap does in rapid mixing proofs. More specifically, the key parallel with expander graphs is that their graph adjacency matrix shrinks functions which are orthogonal to constants (e.g., Lemma 1 of Miller & Venkatesan (2006)). This is the same operating principle as in Lemmas 4-6. This is the reason why we call θ the gap function of ℓ.
158
+
159
+ ## 6 Robustness
160
+
161
+ For any smooth function ℓ, by Theorem 3 we know that Algorithm 1 converges towards its minimum α.
162
+
163
+ However, in practice, the algorithm might converge to a point different from the global minimum (see Section 7). In this section, we discuss why this can happen and what this means for the robustness of the solution obtained from Algorithm 1 under perturbations in the training data. The noise model we use in this section was described in Section 3.
164
+
165
+ ## 6.1 Ignoring A Small Set
166
+
167
Let f be a function on the Grassmannian. Consider the situation where there is a set U of small measure on which the function dips dramatically. In this case, the minimum of f outside of U may be substantially larger than the minimum of f over its entire domain, while the variance of f on this restricted space might still be almost the same as its variance on the entire domain. By only considering the space outside U, the gap parameter increases substantially. This means that the value of f at a random point on the Grassmannian has a higher probability of being close to the minimum outside of U than to the minimum over the entire space. Mathematically, this can be formalized as follows:

Lemma 7 *Let G be a compact Lie group with a normalized measure. Let H be a closed subgroup of it. Let G/H, the quotient of G with respect to H, have a normalized measure on it. Let $f : G/H \to \mathbb{R}$ be a smooth function such that $\int_{G/H} f = 0$. Let $\alpha = \min_{x\in G/H} f(x)$, $\beta = \max_{x\in G/H} f(x)$ and $\alpha' \in (\alpha, \beta)$. Set $U = \{x : f(x) < \alpha'\}$ and $\epsilon = \frac{\|f\|_2^2}{|\beta-\alpha'|^2} - \frac{2|U|\,|\beta-\alpha|^2}{|\beta-\alpha'|^2}$. Assume $\epsilon > 0$. Then,*

$$\left|\{x:f(x)-\alpha'\leq(1-\sqrt{\epsilon})(\beta-\alpha')\}\right|\geq\epsilon/2.$$
191
+
192
+ One way of interpreting this is that random sampling is blind to the bad behavior of the function on small sets in its domain. Leaving out the small set, we get a better gap parameter for the minimizers of our loss function ℓ that lie outside this set. In the next two subsections, we will discuss the implications of this for the robustness of Algorithm 1. But first we use Lemma 7 to give a new convergence result.
193
+
194
Define a function $\ell_{\alpha'}$ as $\ell_{\alpha'} := \max(\ell, \alpha')$ for some $\alpha' > \alpha$. With Lemma 7 in tow, we can now study the convergence properties of Algorithm 1 towards $\alpha'$ even when the algorithm uses ℓ in its execution. We therefore obtain the following theorem:
197
Theorem 8 *Let $\ell : \mathbb{R}^d \to \mathbb{R}$ be a smooth loss function and $\alpha = \min_x \ell(x)$. Choose $\alpha' > \alpha$ and set $\ell_{\alpha'} := \max(\ell, \alpha')$. Let $\theta_{\ell_{\alpha'}}$ be the gap function of $\ell_{\alpha'}$. Assume $\theta_{\ell_{\alpha'}} \geq 1-\delta$ for some $\delta > 0$. Then, for all $\epsilon_0$ and $\gamma$ in $(0,1)$, with $N = \frac{\log 1/\epsilon_0}{\log 2/\delta}$ and $T = \frac{\log N + \log 2/3\gamma}{\log 1/\delta}$, with probability at least $1-\gamma$, Algorithm 1 finds an $x \in \mathbb{R}^d$ such that*

$$\ell(x)-\alpha^{\prime}\leq\epsilon_{0}\left(\ell(x_{0})-\alpha^{\prime}\right).$$
212
Note that $\ell_{\alpha'}$, as defined, might not be a smooth function. But that does not matter since we only use it to compute $\theta_{\ell_{\alpha'}}$ theoretically. It has arbitrarily close smooth approximations that yield the same θ.
213
+
214
+ ## 6.2 Gap Parameter As A Measure Of Robustness
215
+
216
In the last section, we saw that leaving a "part" of the function out can increase the gap parameter of the minimizers of the loss function ℓ. In general, the target value $\alpha'$ can vary between the maximum and the minimum value of ℓ. When set to the maximum value, the gap parameter for the corresponding solutions will be 1, and when set to the minimum value, it will have the smallest possible value for this function. We hypothesize that for a solution x returned by Algorithm 1, its gap parameter dictates its robustness as a minimizer of ℓ. When an adversary introduces a perturbation ∆ to the data matrix, if it is able to corrupt the solutions on most of $G_{k,d}$, then the loss function is highly unstable, and there is little hope of building any protection against perturbations. But if we look at the class of loss functions for which most of this perturbation is limited to a small subset of $G_{k,d}$, then for such functions it is natural to aim to find solutions which lie outside of these easily corruptible subsets. Since, by Lemma 7, the gap parameter directly measures the size of the set that lies close to a given target value $\alpha'$, if this set is small, the solutions corresponding to this target value are more susceptible to noise and hence less robust. This is why it is reasonable to use the gap parameter as a measure of robustness. With this motivation we give a modification of Algorithm 1 which can be used to optimize ℓ up to an $\alpha'$ with a desired gap parameter. We present this in Algorithm 2 and prove that it finds the correct $\alpha'$ in Theorem 9. Note that Algorithm 2 does not need $\alpha'$ as an input parameter.
221
+
222
**Algorithm 2** Our Robust Random Walk

**Require:** $\ell : \mathbb{R}^d \to \mathbb{R}$, $x_0 \in \mathbb{R}^d$, $1 < k < d$, $0 < \theta_0 < 1/2$, $N > 0$
1: $T \leftarrow \frac{2N}{\log 1/(1-\theta_0)}$
2: **for** $i = 1, \ldots, N$ **do**
3: Sample $\eta_1, \ldots, \eta_T$ uniformly from $G_{k,d}$
4: $y_j \leftarrow \arg\min_{y \in x_{i-1}+\eta_j} \ell(y)$ for $j \in [T]$
5: $x_i \leftarrow \arg\min_{y \in \{y_1,\ldots,y_T\}} \ell(y)$
6: **end for**
7: **return** $x_N$
231
Theorem 9 *Let $\ell : \mathbb{R}^d \to \mathbb{R}$ be a smooth loss function. Then for all $N > 0$ and $0 < \theta_0 < 1/2$, Algorithm 2, with probability at least $1 - 3/2N$, converges to an $\alpha'$ with $\theta(\ell_{\alpha'}) \geq \theta_0$, i.e., it finds an x such that*

$$\ell(x)-\alpha^{\prime}\leq\left(1-\sqrt{2\theta_{0}}\right)^{N}\left(\ell(x_{0})-\alpha^{\prime}\right).$$
236
+
237
+ We provide a proof of this theorem in Appendix A.7.
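A sketch of Algorithm 2 then differs from the Algorithm 1 sketch only in how T is chosen; the snippet below assumes the `random_walk` helper from the earlier sketch is in scope and rounds T up to an integer:

```python
import math


def robust_random_walk(loss, x0, k, theta0, N, seed=0):
    """Sketch of Algorithm 2: the same walk, but T is fixed to 2N / log(1/(1 - theta0)).
    Assumes the `random_walk` helper from the Algorithm 1 sketch above is in scope."""
    assert 0.0 < theta0 < 0.5 and N > 0
    T = math.ceil(2 * N / math.log(1.0 / (1.0 - theta0)))  # line 1 of Algorithm 2, rounded up
    return random_walk(loss, x0, k=k, N=N, T=T, seed=seed)
```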
238
+
239
+ ## 6.3 Dependence Of Robustness On K
240
+
241
+ Up until now, we have discussed the convergence properties of the random walk, and identified the gap parameter as an important parameter controlling both the convergence and the robustness of the solution. In this section, we discuss how the choice of k, the dimension of the planes in which the optimization problem is solved, affects the algorithm and in turn informs the gap parameter of the solution retrieved.
242
+
243
+ This subsection is best read in conjunction with Section 7 where our experimental results are presented.
244
+
245
+ A general trend in our experiments, across a range of models, is that for smaller values of k the learned models usually have very good loss values and robustness properties. As k increases, the loss might improve, but at the cost of decreased robustness. For example, in experiments with neural networks, the models learned with a smaller value of k do drastically better on backdoor attacks than the models learned without Algorithm 1 while achieving similar accuracy to the latter on clean test data. As k decreases, the way the optimization problem is adapted to the respective Grassmannian changes, seemingly hiding solutions which are more susceptible to noise in the small sets as discussed in the last two sections. Surprisingly, the solutions retrieved still have close to optimal loss values. We believe that this robust behavior can be attributed to the difficulty of constructing perturbations which can simultaneously affect a large portion of random projections of the data matrix. Choosing k appropriately, we can control the trade-off between obtaining a solution with an optimal loss value and a solution with better robustness properties.
246
+
247
+ ![8_image_0.png](8_image_0.png)
248
+
249
+ ![8_image_1.png](8_image_1.png)
250
+
251
(a) Condition number = $10^2$ (b) Condition number = $10^4$ (c) Condition number = $10^5$

Figure 1: Plots for Algorithm 3 run on Linear Regression. We compare the loss of the solution retrieved for different values of k with the loss of the solutions retrieved by ridge regression with regularization parameters set to 5, 15 or 25. The dark lines correspond to the mean and the shaded area to one standard deviation over 10 runs of the experiment. We see that the linear regression models retrieved by Algorithm 3 have losses comparable to those of the regularized models learned with ridge regression.
253
+
254
+ ## 7 Experiments
255
+
256
+ In this section, we show the versatility of our technique by testing it on a wide range of models: Linear Regression, Logistic Regression, SVMs and Neural Networks. We use both synthetic as well as popular evaluation datasets.
257
+
258
Implementation details. To simplify the implementation, we work with a modification of Algorithm 1 for our experiments. This modification is presented as Algorithm 3 in Appendix A.8. It replaces hyperplanes in Algorithm 1 with subspaces, which are hyperplanes that pass through the origin.

Picking a random subspace. One important step in Algorithm 3, used in all the experiments below, is that of picking a random subspace containing a given vector $x \in \mathbb{R}^d$. To do this, we consider two different techniques:
260
1. In the first technique, we start by constructing a basis U for the space orthogonal to x by taking the singular vectors corresponding to non-trivial singular values of the matrix $I_d - xx^T/\|x\|^2$, where $I_d$ is the d × d identity matrix. We then sample a mean 0 and variance 1 gaussian i.i.d. matrix of size (d − 1) × (d − 1) and construct $V \in \mathbb{R}^{(d-1)\times(k-1)}$, the matrix whose columns are the top k − 1 left singular vectors of the randomly sampled matrix. Our desired random subspace then is the span of the column space of UV combined with x.
264
+
265
2. In the second technique, we start by constructing a d × k matrix U by keeping its first column as x and filling the rest of its entries with gaussian i.i.d. random variables. We then do a QR decomposition of U and use the orthonormal matrix obtained from this decomposition in our algorithms. Note that the span of this orthonormal matrix will always contain x.
266
+
267
+ While the first method will provably generate a uniformly random subspace containing x, the second method has no such guarantees. But the second method is computationally much faster when d is large and hence is used for all our deep learning experiments.
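Both constructions can be written in a few lines of NumPy; the sketch below assumes $x \neq 0$ and $1 < k < d$, and the helper names are ours:

```python
import numpy as np


def subspace_containing_x_svd(x, k, rng):
    """Technique 1: random k-dimensional subspace containing x via the projector construction."""
    d = x.size
    P = np.eye(d) - np.outer(x, x) / np.dot(x, x)       # projector onto the complement of x
    U = np.linalg.svd(P)[0][:, : d - 1]                 # basis of the space orthogonal to x
    G = rng.standard_normal((d - 1, d - 1))
    V = np.linalg.svd(G)[0][:, : k - 1]                 # top k-1 left singular vectors
    basis = np.column_stack([x / np.linalg.norm(x), U @ V])
    return np.linalg.qr(basis)[0]                       # orthonormal basis whose span contains x


def subspace_containing_x_qr(x, k, rng):
    """Technique 2: cheaper QR-based construction; the span always contains x."""
    d = x.size
    M = np.column_stack([x, rng.standard_normal((d, k - 1))])
    return np.linalg.qr(M)[0]                           # first column spans the same line as x
```

The QR-based construction costs roughly $O(dk^2)$ and avoids any $d \times d$ decomposition, which is consistent with it being the faster choice for large d.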
268
+
269
+ ## 7.1 Linear Regression
270
+
271
+ For linear regression experiments, we work with synthetic data in 100 dimensions with 1000 data points. The behavior of a linear regression instance is largely determined by the condition number of its data matrix.
272
+
273
+ Accordingly, we study the effect of our algorithm for data matrices with preselected condition numbers.
274
+
275
![9_image_0.png](9_image_0.png)

Figure 2: Plots for classifying pairs of digits from the MNIST dataset using the logistic regression and SVM models trained with Algorithm 3. We poison the datasets using SecML (Melis et al., 2019) and compare the accuracy of a solution retrieved by Algorithm 3, for various values of k, to the solution obtained by directly learning the classifier on the poisoned dataset (this corresponds to the baseline). For reference, we also give the accuracy of the model trained on the clean data in the plots. The dark lines correspond to the mean and the shaded area to one standard deviation over 10 runs of the experiment. We see across all the plots that training with Algorithm 3 yields models with substantially better accuracy in the presence of the data poisoning attacks.

For a given condition number, we generate an instance whose singular values are equally spaced between a top singular value of 100 and the corresponding least singular value. We generate a regressor vector by setting the last five values to 1 and by picking each of the other coordinates uniformly at random between 0 and 1. The idea here is that in real-world data, the top singular vectors usually correspond to the signal whereas the last singular vectors correspond to the noise. We might be able to get a solution with a lower loss by fitting to the last singular vectors, but this would be overfitting to the training data. We can avoid this by using some regularization technique like ridge regression (see Section 3.4.1 in Hastie et al. (2009)). Using this setting, we want to demonstrate that for an appropriate choice of k, Algorithm 1 retrieves solutions which have loss corresponding to different choices of the regularization parameters in ridge regression. We repeated the experiments 10 times and report the mean and standard deviation in our plots. The results are presented in Figure 1. This shows that linear regression models trained with Algorithm 3 avoid fitting to the noise in the problem, and hence can be expected to have robust behavior.
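A sketch of this synthetic setup is given below; the construction of the data matrix follows the description above, while forming the targets as a noiseless $b = Aw$ is an assumption we make for illustration:

```python
import numpy as np


def make_regression_instance(n=1000, d=100, top_sv=100.0, cond=1e4, seed=0):
    """Synthetic linear regression instance with a prescribed condition number:
    singular values equally spaced between top_sv and top_sv / cond, and a regressor
    whose last five coordinates are 1 and whose other coordinates are uniform in [0, 1]."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.standard_normal((n, d)))[0]    # random left singular vectors
    V = np.linalg.qr(rng.standard_normal((d, d)))[0]    # random right singular vectors
    s = np.linspace(top_sv, top_sv / cond, d)           # equally spaced singular values
    A = U @ np.diag(s) @ V.T
    w = rng.uniform(0.0, 1.0, size=d)
    w[-5:] = 1.0
    b = A @ w                                           # noiseless targets (an assumption here)
    return A, b, w
```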
281
+
282
+ ## 7.2 Logistic Regression And Svms
283
+
284
For binary classification experiments, we use a subset of the MNIST dataset by sampling 100 images corresponding to a pair of digits to construct our training dataset, and 500 images to construct our testing dataset. We then use SecML (Melis et al., 2019), a library for secure and explainable Machine Learning in Python, to poison the training dataset to degrade the performance of the learned classifier. The library implements the attack from Demontis et al. (2019) to generate poisoned datasets for logistic regression and the attack from Biggio et al. (2012) for SVMs. We study the effect of poisoning an increasing number of points on various choices of k for Algorithm 3. As a baseline, we compare this with the accuracy obtained by training the corresponding classifiers without Algorithm 3. We also give the accuracy for training the classifiers without Algorithm 3 on a dataset with no poisoned samples. We repeated the experiments 10 times and report the mean and standard deviation in our plots. The results are presented in Figure 2. As we can see, the models obtained from training with Algorithm 3 give much better accuracy than those trained without it. We also observe that the accuracy is generally better for smaller values of k. We note that SVM is not a "smooth" optimization problem per se, but Algorithms 1, 2 and 3 are still well defined for it.
285
+
286
+ ## 7.3 Neural Networks
287
+
288
In this section, we discuss the efficacy of Algorithm 3 against backdoor attacks in deep learning. The goal of a backdoor attack is to elicit a specific response from a trained network when a test image has a special patch of pixels (the backdoor) overlaid on it. This attack can be used to misclassify images during testing. To carry out such an attack, an adversary introduces into the training dataset a set of images with the backdoor attached to them and with their labels set to a desired label. The network then runs the risk of learning an association between the backdoor and the desired label, while ignoring the true label of the image entirely.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

(a) Accuracy on a clean test set (b) Accuracy on a poisoned test set (c) Accuracy of the poisoning attack

Figure 3: Accuracy plots for MNIST against a backdoor attack presented in Turner et al. (2018). A feedforward neural network is trained with Algorithm 3 for different values of k with poisoned samples in the training data. We report three metrics: 1) accuracy on a clean test set which doesn't contain any images with the backdoor, 2) accuracy on a poisoned test set which contains images with the backdoor, 3) accuracy of the attack, i.e., images with the backdoor getting classified as intended by the adversary. We compare the results against the same model trained directly (this corresponds to the baseline). The dark lines correspond to the mean and the shaded area to one standard deviation over 5 runs of the experiment. When 0.3 of the training data is poisoned, a modest decrease in the clean accuracy of the models trained using Algorithm 3 yields substantially better accuracy on the poisoned test set while also considerably decreasing the accuracy of the attack.
299
+
300
+ Attack from Turner et al. (2018). For our experiments, we use the implementation of this attack provided in the ART toolbox (Nicolae et al., 2018). In this attack the backdoor is inserted only into images corresponding to a target label. This is done to avoid the filtering of clearly mislabeled poisoned samples by human inspection. We work with the MNIST dataset. Our model is a fully connected MLP with three hidden layers and 100 neurons in each layer. For training, we use the Adam optimizer. A baseline model is trained on the poisoned dataset for 10 epochs. For training with Algorithm 3, we set N = 10. To solve the problem in the subspace selected in a given iteration of Algorithm 3, we train the network for 5 epochs. The models are evaluated on a clean test set as well as a poisoned test set which consists of images corrupted with the backdoor. We repeated the experiments 5 times and report the mean and standard deviation in our plots. The results are presented in Figure 3. The baseline model corresponds to k = 784, which is the full dimension of the problem.
301
+
302
+ Since the parameters of an MLP are distributed across different layers and neurons, treating them as part of the same Euclidean space and working with the subspaces of this single Euclidean space is unnatural. Instead, we work with the parameters of each neuron separately by treating them as living in their own Euclidean spaces, and sampling different subspaces for each of these spaces individually when running Algorithm 3.
303
+
304
This corresponds to working with a finite product of Grassmannians, which is still the quotient of a compact Lie group (the group now will be a product of the same number of orthogonal groups). All theoretical guarantees hold in this setting, since the underlying mathematics is based on compact Lie groups.
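Concretely, the per-neuron sampling described above can be sketched as follows, reusing the cheaper QR construction from the implementation details; the helper name and the choice to work only with incoming weights (ignoring biases) are illustrative simplifications:

```python
import numpy as np


def per_neuron_subspaces(weight_matrices, k, rng):
    """For every neuron's incoming-weight vector (a row of a layer's weight matrix),
    sample an orthonormal basis of a random k-dimensional subspace containing it,
    mirroring the product-of-Grassmannians view described above (biases ignored here)."""
    bases = []
    for W in weight_matrices:                            # one (out_dim, in_dim) array per layer
        layer_bases = []
        for w in W:                                      # each neuron lives in its own R^{in_dim}
            M = np.column_stack([w, rng.standard_normal((w.size, k - 1))])
            layer_bases.append(np.linalg.qr(M)[0])       # cheap QR construction (technique 2)
        bases.append(layer_bases)
    return bases
```

The inner black-box solve then optimizes k coefficients per neuron, i.e., each weight vector is updated as $w \leftarrow Qz$ for the learned coefficient vector z.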
305
+
306
As we can see in Figure 3, the models trained using Algorithm 3 are able to achieve an accuracy on the clean test data which is close to that of the models trained without it. At the same time, their accuracy on the poisoned test data is substantially higher and thus the success rate of the poisoning attack is substantially smaller. Specifically, when around 1/3 of the training data is poisoned, the accuracy of the models trained with Algorithm 3 with k ≤ 15 is greater than 80%, while those trained without it have an accuracy of less than 40% on the poisoned test data.

Attack from Saha et al. (2020). For this attack too, we use the implementation provided by Nicolae et al. (2018). The intent of the attack is the same as the previous one. It is constructed so that the poisoned image is close to the desired target image in feature space while being visually indistinguishable from its source image. In Saha et al. (2020), the authors show that the attack is robust against many existing defense mechanisms.
307
+
308
+ ![11_image_0.png](11_image_0.png)
309
+
310
(a) Accuracy on a clean test set (b) Accuracy on a poisoned test set (c) Accuracy of the poisoning attack

Figure 4: Accuracy plots for CIFAR-10 against the backdoor attack presented in Saha et al. (2020). A CNN-based architecture is fine-tuned with Algorithm 3 for different values of k on a training set which contains poisoned data points. We report three metrics: 1) accuracy on a clean test set which doesn't contain any images with the backdoor, 2) accuracy on a poisoned test set which contains images with the backdoor, 3) accuracy of the attack, i.e., images with the backdoor getting classified as intended by the adversary. We compare the results against the same model fine-tuned directly (this corresponds to the baseline). The dark lines correspond to the mean and the shaded area to one standard deviation over 5 runs of the experiment. We see that the models trained with Algorithm 3 not only have better accuracy on the clean test set, but also have better accuracy on the poisoned test set and are able to substantially decrease the accuracy of the attack.
316
+
317
For our experiments, we work with the CIFAR-10 dataset and the CNN-based architecture used by Nicolae et al. (2018) in their demonstration of the attack. We do not attempt to optimize any hyperparameters to improve the clean classification accuracy of the model. Instead, we choose to work with the experimental setup of Nicolae et al. (2018) to demonstrate the versatility of our technique. In their setup, the poisoned dataset is used only in the fine-tuning step, where all but the last fully connected layer (which has a hidden dimension of 4096) are frozen. We use Algorithm 3 on this last layer, modifying it in a way similar to what we did in the last section.
318
+
319
We pretrained a model for 200 epochs using SGD with learning rate 0.01, momentum 0.9 and weight decay $2\times10^{-4}$, reducing the learning rate by a factor of 0.1 after 100 and 150 epochs. For fine-tuning, we reinitialize the last layer with gaussian i.i.d. random variables and train for another 10 epochs. For fine-tuning with Algorithm 3, apart from reinitializing the last layer, we use N = 10 and, to solve the problem in the selected subspace of each iteration, we train for 1 epoch. We repeated the experiments 5 times and report the mean and standard deviation in our plots.
320
+
321
We present the results of our experiments in Figure 4 and consider three metrics: accuracy on a benign unpoisoned test set, accuracy on a poisoned test set, and the success rate of the attack on this poisoned test set. As we can see, Algorithm 3 does not affect the accuracy of the trained model on the benign samples, while drastically increasing its accuracy on poisoned samples and drastically decreasing the efficacy of the attack on the same samples, especially for smaller values of k.
322
+
323
+ ## 8 Discussion
324
+
325
In this paper, we present a new algorithm for robust stochastic optimization. We give a general convergence theorem for this algorithm, identify an important parameter of the analysis (the gap parameter) and experimentally study the robustness properties of our algorithm. We give a modification of our algorithm which can control the robustness of its output by controlling its gap parameter, and discuss the role of the subspace dimension k in our algorithm. We also present a general lemma which lower bounds the probability of the value of a function on a Lie group, at a random point, being non-trivially away from the maximum of that function. We believe that this lemma can be adapted or applied to other settings as well.
326
+
327
+ Apart from the goal of optimization and robustness, our random walk also has the potential to provide privacy properties because of its extensive use of randomness. Studying the privacy properties of our approach is an interesting future direction. We believe that developing algorithms which can address different requirements at the same time and work for a variety of optimization problems is necessary given the recent explosion in machine learning research.
328
+
329
+ ## References
330
+
331
+ S. Arora and B. Barak. *Computational Complexity: A Modern Approach*. Cambridge University Press, 2009.
332
+
333
+ ISBN 9781139477369. URL https://books.google.com/books?id=nGvI7cOuOOQC.
334
+
335
B. Bekka, P. de la Harpe, and A. Valette. *Kazhdan's Property (T)*. New Mathematical Monographs. Cambridge University Press, 2008. ISBN 9781139471084. URL https://books.google.com/books?id=QCftywollBMC.
339
+
340
+ Thomas Bendokat, Ralf Zimmermann, and P.-A. Absil. A grassmann manifold handbook: basic geometry and computational aspects. *Advances in Computational Mathematics*, 50(1), January 2024. ISSN 15729044. doi: 10.1007/s10444-023-10090-8. URL http://dx.doi.org/10.1007/s10444-023-10090-8.
341
+
342
+ Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines.
343
+
344
+ In *Proceedings of the 29th International Coference on International Conference on Machine Learning*,
345
+ ICML'12, pp. 1467–1474, Madison, WI, USA, 2012. Omnipress. ISBN 9781450312851.
346
+
347
+ D. Bump. *Lie Groups*. Graduate Texts in Mathematics. Springer New York, 2013. ISBN 9781461480242.
348
+
349
+ URL https://books.google.com/books?id=x2W4BAAAQBAJ.
350
+
351
+ Moses Charikar, Jacob Steinhardt, and Gregory Valiant. Learning from untrusted data. In *Proceedings of the* 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pp. 47–60, New York, NY,
352
+ USA, 2017. Association for Computing Machinery. ISBN 9781450345286. doi: 10.1145/3055399.3055491.
353
+
354
+ URL https://doi.org/10.1145/3055399.3055491.
355
+
356
+ Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, and Fabio Roli. Wild patterns reloaded:
357
+ A survey of machine learning security against training data poisoning. *ACM Comput. Surv.*, 55(13s), jul 2023. ISSN 0360-0300. doi: 10.1145/3585385. URL https://doi.org/10.1145/3585385.
358
+
359
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In *28th USENIX Security Symposium (USENIX Security 19)*, pp. 321–338, Santa Clara, CA, August 2019. USENIX Association. ISBN 978-1-939133-06-9. URL https://www.usenix.org/conference/usenixsecurity19/presentation/demontis.
362
+
363
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12(61):2121–2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html.
365
+
366
J. Gallier and J. Quaintance. *Differential Geometry and Lie Groups: A Computational Perspective*. Geometry and Computing. Springer International Publishing, 2020. ISBN 9783030460402. URL https://books.google.com/books?id=K3r3DwAAQBAJ.
369
+
370
+ Guillaume Garrigos and Robert M. Gower. Handbook of convergence theorems for (stochastic) gradient methods, 2024.
371
+
372
+ Robert M. Gower, Othmane Sebbouh, and Nicolas Loizou. SGD for structured nonconvex functions: Learning rates, minibatching and interpolation. In Arindam Banerjee and Kenji Fukumizu (eds.), *The 24th* International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of *Proceedings of Machine Learning Research*, pp. 1315–1323. PMLR, 2021. URL
373
+ http://proceedings.mlr.press/v130/gower21a.html.
374
+
375
+ Venkatesan Guruswami and Prasad Raghavendra. Hardness of learning halfspaces with noise. In *2006* 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), pp. 543–552, 2006. doi: 10.1109/FOCS.2006.33.
376
+
377
T. Hastie, R. Tibshirani, and J.H. Friedman. *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. Springer Series in Statistics. Springer, 2009. ISBN 9780387848846. URL https://books.google.com/books?id=eBSgoAEACAAJ.
378
+
379
+ S. Helgason. *Groups and Geometric Analysis: Integral Geometry, Invariant Differential Operators, and* Spherical Functions. Mathematical Surveys and Monographs. American Mathematical Society, 2022. ISBN
380
+ 9780821832110. URL https://books.google.com/books?id=ThZuEAAAQBAJ.
381
+
382
+ Ahmed Khaled and Peter Richtárik. Better theory for SGD in the nonconvex world. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=AU4qHN2VkS.
383
+
384
+ Survey Certification.
385
+
386
+ Roni Khardon and Gabriel Wachman. Noise tolerant variants of the perceptron algorithm. *J. Mach. Learn.*
387
+ Res., 8:227–248, may 2007. ISSN 1532-4435.
388
+
389
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *CoRR*, abs/1412.6980, 2014. URL https://api.semanticscholar.org/CorpusID:6628106.
390
+
391
+ E. Kowalski. *An Introduction to Expander Graphs*. Collection SMF / Cours spécialisés. Société Mathématique de France, 2019. ISBN 9782856298985. URL https://books.google.com/books?id=BkmAxQEACAAJ.
392
+
393
+ V. Lakshmibai. *Flag Varieties: An Interplay of Geometry, Combinatorics, and Representation Theory*.
394
+
395
+ Texts and Readings in Mathematics. Hindustan Book Agency, 2009. ISBN 9789386279415. URL https: //books.google.co.in/books?id=yfJdDwAAQBAJ.
396
+
397
+ G. Margulis. Explicit constructions of concentrators. *Problemy Peredachi Informatsii*, 9(4):71–80, 1973.
398
+
399
+ Marco Melis, Ambra Demontis, Maura Pintor, Angelo Sotgiu, and Battista Biggio. secml: A python library for secure and explainable machine learning. *arXiv preprint arXiv:1912.10013*, 2019.
400
+
401
+ Stephen D. Miller and Ramarathnam Venkatesan. Spectral analysis of pollard rho collisions. In Florian Hess, Sebastian Pauli, and Michael E. Pohst (eds.), *Algorithmic Number Theory, 7th International Symposium,*
402
+ ANTS-VII, Berlin, Germany, July 23-28, 2006, Proceedings, volume 4076 of *Lecture Notes in Computer* Science, pp. 573–581. Springer, 2006. doi: 10.1007/11792086\_40. URL https://doi.org/10.1007/
403
+ 11792086_40.
404
+
405
+ Eric Moulines and Francis Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K.Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 24. Curran Associates, Inc., 2011. URL https://proceedings.neurips.cc/paper_files/paper/2011/file/
406
+ 40008b9a5380fcacce3976bf7c08af5b-Paper.pdf.
407
+
408
+ L. Nachbin. *The Haar Integral*. University series in higher mathematics. R. E. Krieger Publishing Company, 1976. ISBN 9780882753744. URL https://books.google.com/books?id=8YspAQAAMAAJ.
409
+
410
+ Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, and Ben Edwards.
411
+
412
+ Adversarial robustness toolbox v1.2.0. *CoRR*, 1807.01069, 2018. URL https://arxiv.org/pdf/1807.
413
+
414
+ 01069.
415
+
416
+ Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, and Pradeep Ravikumar. Robust estimation via robust gradient estimation. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*,
417
+ 82(3):601–627, 2020.
418
+
419
+ J.D. Rogawski and A. Lubotzky. *Discrete Groups, Expanding Graphs and Invariant Measures*. Progress in Mathematics. Birkhäuser Basel, 1994. ISBN 9783764350758. URL https://books.google.com/books? id=aNURlzNuotEC.
420
+
421
+ Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. Hidden trigger backdoor attacks. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 11957–11965, 2020.
422
+
423
+ M.R. Sepanski. *Compact Lie Groups*. Graduate Texts in Mathematics. Springer New York, 2006. ISBN
424
+ 9780387302638. URL https://books.google.com/books?id=F3NgD_25OOsC.
425
+
426
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
427
+ A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):
428
+ 1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.
429
+
430
+ Zhiyi Tian, Lei Cui, Jie Liang, and Shui Yu. A comprehensive survey on poisoning attacks and countermeasures in machine learning. *ACM Comput. Surv.*, 55(8), dec 2022. ISSN 0360-0300. doi: 10.1145/3551636.
431
+
432
+ URL https://doi.org/10.1145/3551636.
433
+
434
+ Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Clean-label backdoor attacks. 2018.
435
+
436
+ ## A Appendix
437
+
438
+ We present the details left out from the main body of the paper here.
439
+
440
+ ## A.1 Proof Of Lemma 4
441
+
442
+ Proof. Since f is a non-constant smooth function with zero mean, we have, by Kazhdan's Property (T), that there exists a γ ∈ G such that
+
+ $$\|f-\gamma\cdot f\|^{2}\geq c\cdot\|f\|^{2}$$
+
+ where c is the Kazhdan constant of G. Define h : G → [0, 1] as $h(g)=\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|\beta-\alpha|^{2}}$. Let $U_{\epsilon}:=\{g:h(g)>\epsilon\}$.
447
+
448
+ Then, we have by Lebesgue integration,
449
+
450
+ $$\int\limits_{0}^{1}|U_{\epsilon}|d\epsilon=\int_{G}h(g)dg=\frac{\|f-\gamma\cdot f\|^{2}}{|\beta-\alpha|^{2}}\geq\frac{c\cdot\|f\|^{2}}{|\beta-\alpha|^{2}}.\tag{2}$$
451
+
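A quick numerical sanity check of the layer-cake identity used in (2) is given below. This snippet is our own illustration and not part of the paper; the function h and the discretization are arbitrary choices, with the group G replaced by the unit interval under the uniform measure.

```python
import numpy as np

# Layer-cake identity behind (2):  int_0^1 |{g : h(g) > eps}| d(eps) = int h(g) dg
# for h taking values in [0, 1], checked on [0, 1) with the uniform (normalized) measure.
g = np.linspace(0.0, 1.0, 20_000, endpoint=False)
h = 0.5 * (1.0 + np.sin(6.0 * np.pi * g)) * g        # an arbitrary smooth h with values in [0, 1]

eps_grid = np.linspace(0.0, 1.0, 1_000, endpoint=False)
layer_cake = np.mean([np.mean(h > eps) for eps in eps_grid])   # left-hand side (Riemann sum)
direct = np.mean(h)                                            # right-hand side

print(layer_cake, direct)   # the two values agree up to discretization error
```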
452
+ Since f is a smooth function, |Uϵ| is a continuous non-increasing function of ϵ. Moreover, |U0| = 1 and |U1| = 0. From this we have the following upper bound for any $\epsilon'\in[0,1]$,
+
+ $$\int\limits_{0}^{1}|U_{\epsilon}|d\epsilon\leq\epsilon^{\prime}+|U_{\epsilon^{\prime}}|.\tag{3}$$
457
+
458
+ Let $\epsilon'\in[0,1]$ be such that $|U_{\epsilon'}|=\epsilon'$. Substituting this $\epsilon'$ in (3) and using the lower bound from (2) we get $\epsilon'\geq\frac{c\cdot\|f\|^{2}}{2|\beta-\alpha|^{2}}$. For this $\epsilon'$, since $|U_{\epsilon'}|=\epsilon'$, we also have $|U_{\epsilon'}|\geq\frac{c\cdot\|f\|^{2}}{2|\beta-\alpha|^{2}}$. As $|U_{\epsilon'}|$ decreases as $\epsilon'$ increases, we can select $\epsilon'=\frac{c\cdot\|f\|^{2}}{2|\beta-\alpha|^{2}}$ and for this $\epsilon'$ we will still have $|U_{\epsilon'}|\geq\frac{c\cdot\|f\|^{2}}{2|\beta-\alpha|^{2}}$.
477
+
478
+ Now, using the definition of h, for every g ∈ Uϵ we have
479
+
480
+ $${\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|\beta-\alpha|^{2}}}\geq\epsilon^{\prime}.$$
481
+
482
+ Taking the denominator to the right-hand side and taking square roots on both sides, we get either $f(g)-\gamma\cdot f(g)>\sqrt{\epsilon'}(\beta-\alpha)$ or $\gamma\cdot f(g)-f(g)>\sqrt{\epsilon'}(\beta-\alpha)$.
487
+
488
+ Since both f(g) and γ · f(g) are at most β, we can replace f(g) by β in the first inequality and γ · f(g) by β in the second. We get
490
+
491
+ $$\beta-\gamma\cdot f(g)>\sqrt{\epsilon^{\prime}}(\beta-\alpha)\quad\mathrm{or}\quad\beta-f(g)>\sqrt{\epsilon^{\prime}}(\beta-\alpha).$$
492
+
493
+ On rearranging, we get
494
+
495
+ $$\gamma\cdot f(g)<\beta-\sqrt{\epsilon^{\prime}}(\beta-\alpha)\quad\mathrm{or}\quad f(g)<\beta-\sqrt{\epsilon^{\prime}}(\beta-\alpha).$$
496
+
497
+ Now subtract α on both sides to get
498
+
499
+ $$\gamma\cdot f(g)-\alpha<(1-\sqrt{\epsilon^{\prime}})(\beta-\alpha)\quad\mathrm{or}\quad f(g)-\alpha<(1-\sqrt{\epsilon^{\prime}})(\beta-\alpha).$$
500
+
501
+ Since the above statements are strict inequalities, when either of them is true there will exist a small ball around g, contained in Uϵ, such that the corresponding inequality is also true for every element of this small ball. Let $U^{1}_{\epsilon}$ be the union of such balls corresponding to the set on which the first inequality holds, and let $U^{2}_{\epsilon}$ be the corresponding union for the second inequality. By construction, both sets are open. They are also measurable, as the Haar measure is a Borel measure by definition.
504
+
505
+ Now, every element of Uϵ belongs to one of these two sets, hence $|U^{1}_{\epsilon}|+|U^{2}_{\epsilon}|\geq|U_{\epsilon}|$, and at least one of them has measure at least $|U_{\epsilon}|/2$. Since the measure is left-invariant, $|\gamma\cdot U^{2}_{\epsilon}|=|U^{2}_{\epsilon}|$.
+
+ This gives us the conclusion.
512
+
513
+ ## A.2 Proof Of Corollary 5
514
+
515
+ Proof. Consider the following integral for a non-constant zero-mean smooth function f : G → R,
516
+
517
+ $$\int_{G}\|f-\gamma\cdot f\|^{2}d\gamma=\int_{G}\int_{G}(f(g)-\gamma\cdot f(g))^{2}dgd\gamma$$
+ $$=\int_{G}\int_{G}\left(f(g)^{2}+f(\gamma^{-1}\cdot g)^{2}-2f(g)f(\gamma^{-1}\cdot g)\right)dgd\gamma$$
+ $$\stackrel{(a)}{=}\int_{G}f(g)^{2}dg+\int_{G}\int_{G}f(\gamma^{-1}\cdot g)^{2}dgd\gamma-2\int_{G}f(g)\int_{G}f(\gamma^{-1}\cdot g)d\gamma\,dg$$
+ $$\stackrel{(b)}{=}\int_{G}f(g)^{2}dg+\int_{G}f(g)^{2}dg-2\left(\int_{G}f(\gamma^{-1})d\gamma\right)\left(\int_{G}f(g)dg\right)$$
+ $$\stackrel{(c)}{=}2\int_{G}f(g)^{2}dg$$
+ $$=2\|f\|^{2}$$
518
+
519
+ where we change the order of integration for the last term in (a), use the invariance property of the Haar measure for compact Lie groups to simplify the second and third term in (b) (left invariance for the second term and right invariance for the third term) and use the fact that f is mean-zero for (c).
520
+
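The identity just derived can be checked numerically on the simplest compact Lie group, the circle S¹. The snippet below is our own illustration, not part of the paper; the zero-mean test function is an arbitrary choice, and translations are realized as cyclic shifts of a uniform grid.

```python
import numpy as np

# Check of  int_G ||f - gamma.f||^2 d(gamma) = 2 ||f||^2  on G = S^1 with the
# normalized Haar measure, where (gamma.f)(x) = f(x - gamma).
n = 2048
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(theta) + 0.3 * np.cos(3.0 * theta)        # zero-mean test function (illustrative)

norm_sq = np.mean(f ** 2)                            # ||f||^2 under the normalized measure
# Average of ||f - gamma.f||^2 over gamma, with translations realized as cyclic shifts.
avg = np.mean([np.mean((f - np.roll(f, s)) ** 2) for s in range(n)])

print(avg, 2.0 * norm_sq)                            # the two values agree up to float error
```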
521
+ Using the mean value theorem and the above calculation, we see that there exists a γ ∈ G such that
+
+ $$\|f-\gamma\cdot f\|^{2}\geq2\|f\|^{2}.\tag{4}$$
526
+ From here on we proceed exactly as we did for the proof of Lemma 4, but with fewer details. Define h : G → [0, 1] as $h(g)=\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|\beta-\alpha|^{2}}$. Let $U_{\epsilon}:=\{g:h(g)\geq\epsilon\}$. Then, we have by Lebesgue integration,
527
+
528
+ $$\int\limits_{0}^{1}|U_{\epsilon}|d\epsilon=\int_{G}h(g)d g=\frac{\|f-\gamma\cdot f\|^{2}}{|\beta-\alpha|^{2}}\geq\frac{2\|f\|^{2}}{|\beta-\alpha|^{2}}.$$
529
+
530
+ Since |Uϵ| is a non-increasing function of ϵ, with |U0| = 1 and |U1| = 0, we have $|U_{\epsilon}|\geq\frac{\|f\|^{2}}{|\beta-\alpha|^{2}}$ for $\epsilon=\frac{\|f\|^{2}}{|\beta-\alpha|^{2}}$.
533
+
534
+ Now, for g ∈ Uϵ we have, either
535
+
536
+ $$f(g)-\alpha\leq(1-\sqrt{\epsilon})(\beta-\alpha)\quad\mathrm{or,}\quad\gamma\cdot f(g)-\alpha\leq(1-\sqrt{\epsilon})(\beta-\alpha).$$
537
+
538
+ This gives us the conclusion.
539
+
540
+ ## A.3 Proof Of Lemma 6
541
+
542
+ We use the following lemma on the existence of an invariant measure on quotient spaces for our proof:
+
+ Lemma 10 (Theorem 1.9 and Remark on page 93 of Helgason (2022)) Let G be a compact Lie group and H a compact Lie subgroup of G. Then there exists a unique normalized left G-invariant measure dx on G/H such that for all f ∈ C(G)
543
+
544
+ $$\int_{G}f(g)d g=\int_{G/H}\int_{H}f(x\cdot h)d h d x$$
545
+
546
+ where dg and dh are normalized left-invariant measures on G and H respectively.
547
+ Proof of Lemma 6. Consider any function t : G/H → R. Since $G=\bigcup_{h\in H}\left((G/H)\cdot h\right)$, we can define a function t′ : G → R as t′(x · h) = t(x) for all h ∈ H and x ∈ G/H. Define f′ using f similarly. Corollary 5 gives us an estimate on the measure of the "good" subset of G for f′. We will use $U_{G}$ to denote this set and $U_{G/H}$ to denote a corresponding set on G/H for f, i.e.,
553
+
554
+ $$U_{G/H}:=\left\{x:{\frac{|f(x)-\gamma\cdot f(x)|^{2}}{|\beta-\alpha|^{2}}}\geq\epsilon\right\}$$
555
+
556
+ where ϵ here is the same as in Corollary 5, and α and β are the minimum and maximum values of f respectively. Note that they are also the minimum and maximum values of f′ respectively. This follows from the definition of f′.
559
+
560
+ Now, let dg, dh and dx be normalized measures on G, H and G/H respectively. Using Lemma 10 we can write
561
+
562
+ $$\int_{G}t^{\prime}(g)dg=\int_{G/H}\int_{H}t^{\prime}(x\cdot h)dhdx$$ $$=\int_{G/H}\int_{H}t(x)dhdx$$ $$=\left(\int_{G/H}t(x)dx\right)\left(\int_{H}dh\right)$$ $$=\left(\int_{G/H}t(x)dx\right).$$
563
+
564
+ Note that f′ is constant on the cosets gH of H in G. This means that for any g in the good set of f′, the good set contains the entire coset gH. Set $t=\mathbf{1}_{\{x\in U_{G/H}\}}$ in the above calculation; then $t'=\mathbf{1}_{\{g\in U_{G}\}}$.
+
+ We get that the measure of the good set of f on G/H is the same as the measure of the good set of f′ on G. This gives us the conclusion.
571
+
572
+ ## A.4 Proof Of Lemma 7
573
+
574
+ Proof of Lemma 7. We first prove an equivalent statement over the group G in Lemma 11; we can then use the same machinery as in proving Lemma 6 from Corollary 5 to transfer the estimate from the group to its quotient, which gives the full proof. The details are straightforward.
575
+
576
+ Lemma 11 Let G be a compact Lie group and let f : G → R be a smooth function such that $\int_{G}f=0$. Let $\alpha=\min_{g\in G}f(g)$, $\beta=\max_{g\in G}f(g)$ and $\alpha'\in(\alpha,\beta)$. Set $U=\{g:f(g)<\alpha'\}$ and $\epsilon=\frac{\|f\|^{2}}{2|\beta-\alpha'|^{2}}-\frac{2|U|\,|\beta-\alpha|^{2}}{|\beta-\alpha'|^{2}}$.
580
+
581
+ Then,
582
+ $$\left|\{g:f(g)-\alpha^{\prime}\leq(1-\sqrt{\epsilon})(\beta-\alpha^{\prime})\}\right|\geq\epsilon/2$$
583
+ Proof. The proof proceeds in a manner similar to that of Corollary 5; we provide the extra details needed here. Define h : G → [0, 1] as $h(g)=\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|\beta-\alpha'|^{2}}$ and let $V=U\cup\gamma^{-1}\cdot U$. We consider integrals over the space G \ V. To do this we use the normalized measure dg on G and divide it by |G \ V| so that the resulting measure is normalized over G \ V. We denote this measure by $dg_{V}$, and we use $|\cdot|_{V}$ to denote the size of a set with respect to this measure.
585
+
586
+ Now, let $U_{\epsilon}:=\{g:h(g)\geq\epsilon,\ g\in G\setminus V\}$. We use the measure $dg_{V}$ when measuring the size of the set $U_{\epsilon}$. We have,
587
+
588
+ $$\int_{0}^{1}|U_{\epsilon}|_{V}\,d\epsilon\stackrel{(a)}{=}\int_{G\setminus V}h(g)\,dg_{V}=\int_{G\setminus V}\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|\beta-\alpha'|^{2}}\,dg_{V}$$
+ $$\stackrel{(b)}{=}\frac{1}{|\beta-\alpha'|^{2}}\left(\int_{G\setminus V}\frac{|f(g)-\gamma\cdot f(g)|^{2}}{|G\setminus V|}\,dg\right)$$
+ $$=\frac{1}{|G\setminus V|\,|\beta-\alpha'|^{2}}\left(\int_{G}|f(g)-\gamma\cdot f(g)|^{2}\,dg-\int_{V}|f(g)-\gamma\cdot f(g)|^{2}\,dg\right)$$
+ $$\stackrel{(c)}{\geq}\frac{\|f-\gamma\cdot f\|^{2}-|V|\,|\beta-\alpha|^{2}}{|G\setminus V|\,|\beta-\alpha'|^{2}}$$
+ $$\stackrel{(d)}{\geq}\frac{2\|f\|^{2}-|V|\,|\beta-\alpha|^{2}}{|G\setminus V|\,|\beta-\alpha'|^{2}}$$
+
+ where we use Lebesgue integration in (a), change the measure from $dg_{V}$ to dg in (b), use the upper and lower bounds on f to get (c), and use (4) to get (d).
590
+
591
+ Since $|U_{\epsilon}|_{V}$ is a non-increasing function of ϵ, with $|U_{0}|_{V}=1$ and $|U_{1}|_{V}=0$, using the same ideas as in the proof of Corollary 5 we have $|U_{\epsilon}|_{V}\geq\frac{2\|f\|^{2}-|V|\,|\beta-\alpha|^{2}}{2|G\setminus V|\,|\beta-\alpha'|^{2}}$ for $\epsilon=\frac{2\|f\|^{2}-|V|\,|\beta-\alpha|^{2}}{2|G\setminus V|\,|\beta-\alpha'|^{2}}$. Moreover, since $|U_{\epsilon}|_{V}=\frac{|U_{\epsilon}|}{|G\setminus V|}$, we have $|U_{\epsilon}|\geq\frac{\|f\|^{2}}{|\beta-\alpha'|^{2}}-\frac{|V|\,|\beta-\alpha|^{2}}{|\beta-\alpha'|^{2}}\geq\frac{\|f\|^{2}}{|\beta-\alpha'|^{2}}-\frac{2|U|\,|\beta-\alpha|^{2}}{|\beta-\alpha'|^{2}}$.
599
+
600
+ Now, for g ∈ Uϵ we have, either
601
+
602
+ $$f(g)-\alpha^{\prime}\leq(1-\sqrt{\epsilon})(\beta-\alpha^{\prime})\quad\mathrm{or,}\quad\gamma\cdot f(g)-\alpha^{\prime}\leq(1-\sqrt{\epsilon})(\beta-\alpha^{\prime}).$$
603
+
604
+ Using the same argument as in the proof of Corollary 5 we get the conclusion.
605
+
606
+ ## A.5 Proof Of Theorem 3
607
+
608
+ Proof. We use the notation set up in Section 4.2 for this proof. At step i of Algorithm 1, from Lemma 6, we know that we can find an $\eta_{i}$ such that
+
+ $$L(x_{i-1},\eta_{i})-m(x_{i-1})\leq\left(1-\sqrt{2\Theta(x_{i-1})}\right)\left(M(x_{i-1})-m(x_{i-1})\right)\tag{5}$$
+
+ with probability at least $\Theta(x_{i-1})$. Since $\Theta(x_{i-1})\geq\theta(\ell)\geq 1-\delta$, and since we sample T points in each iteration and take the minimum over these samples, the probability that we find one such point amplifies to $1-\delta^{T}$.
612
+
613
+ The probability of this happening for all N iterations of the algorithm is $(1-\delta^{T})^{N}$. We want this probability to be greater than $1-\gamma$. Set $(1-\delta^{T})^{N}\geq 1-\gamma$ and take the logarithm on both sides. Rearranging, we get $N\log\frac{1}{1-\delta^{T}}\leq\log\frac{1}{1-\gamma}$. Now, we use the following approximations to simplify further:
617
+
618
+ $$\forall t\in[0,1),\quad t\leq\log{\frac{1}{1-t}}\leq t+{\frac{t^{2}}{2}}\leq{\frac{3t}{2}}.$$
619
+
620
+ Using these approximations, it is sufficient to work with $N\delta^{T}\leq 3\gamma/2$. Taking logarithms on both sides again and rearranging, we get
621
+
622
+ $$T\geq{\frac{\log N+\log2/3\gamma}{\log1/\delta}}.$$
623
+
624
+ This gives us a bound on the number of samples we need to draw in each iteration of Algorithm 1.
625
+
626
+ Now, we need two facts to proceed:
+
+ 1. m(x) is a constant function with value α.
+ 2. For all i ∈ [1, N], $M(x_{i})\leq L(x_{i-1},\eta_{i})$.
628
+
629
+ To prove the first, we proceed as follows. Recall $\alpha=\min_{x}\ell(x)$. Then for k ≥ 2, m(x) = α for all x. This is because, for any given x, there exists a k-plane that passes through x and a global minimum of ℓ.
630
+
631
+ To prove the second, notice that $x_{i}$ is an argmin of ℓ in the k-plane $x_{i-1}+\eta_{i}$. This k-plane will correspond to some $\eta\in G_{k,d}$ such that $x_{i}+\eta\cong x_{i-1}+\eta_{i}$. Moreover, on any other k-plane that contains $x_{i}$, the minimum value of ℓ is upper bounded by $\ell(x_{i})$. Hence $M(x_{i})=\max_{\eta}L(x_{i},\eta)\leq\ell(x_{i})=L(x_{i-1},\eta_{i})$.
632
+
633
+ Using the above two facts we can rewrite (5) as,
634
+
635
+ $$\ell(x_{i})-\alpha\leq\left(1-\sqrt{2\Theta(x_{i-1})}\right)(\ell(x_{i-1})-\alpha).$$
636
+
637
+ Chaining this inequality over all N steps, we get
638
+
639
+ $$\ell(x_{N})-\alpha\leq\prod_{i=1}^{N}\left(1-\sqrt{2\Theta(x_{i})}\right)(\ell(x_{0})-\alpha).$$
640
+
641
+ For convenience, we loosen this bound a bit by dropping the 2 inside the square root and substituting Θ(x) with 1 − δ to get,
642
+
643
+ $$\ell(x_{N})-\alpha\leq\left(1-\sqrt{1-\delta}\right)^{N}(\ell(x_{0})-\alpha).$$
644
+ We want $\left(1-\sqrt{1-\delta}\right)^{N}\leq\epsilon_{0}$. This gives us $N\geq\frac{\log\epsilon_{0}}{\log\left(1-\sqrt{1-\delta}\right)}$. This can be further simplified as follows:
648
+
649
+ $$\begin{array}{c}{{N\geq\frac{\log\epsilon_{0}}{\log(1-\sqrt{1-\delta})}}}\\ {{\stackrel{(a)}{\geq}\frac{\log\epsilon_{0}}{\log\delta/2}=\frac{\log1/\epsilon_{0}}{\log2/\delta}}}\end{array}$$
650
+
651
+ where we use the fact that $\sqrt{1-\delta}\leq 1-\delta/2$ for δ ≥ 0 in (a). This completes the proof.
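To make these bounds concrete, the following helper (our own illustration; the constants in the example call are arbitrary) evaluates the iteration count N and the per-iteration sample count T prescribed by the proof for given δ, γ and ε₀.

```python
import math

def theorem3_budget(delta, gamma, eps0):
    """Evaluate the bounds from the proof above:
    N >= log(1/eps0) / log(2/delta) iterations, and
    T >= (log N + log(2/(3*gamma))) / log(1/delta) samples per iteration."""
    n_iters = math.ceil(math.log(1.0 / eps0) / math.log(2.0 / delta))
    n_samples = math.ceil(
        (math.log(n_iters) + math.log(2.0 / (3.0 * gamma))) / math.log(1.0 / delta)
    )
    return n_iters, max(n_samples, 1)

# Example: delta = 0.1, failure probability gamma = 0.05, target accuracy eps0 = 1e-3.
print(theorem3_budget(delta=0.1, gamma=0.05, eps0=1e-3))
```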
652
+
653
+ ## A.6 Proof Of Theorem 8
654
+
655
+ Proof. The proof here is exactly the same as the proof of Theorem 3.
656
+
657
+ ## A.7 Proof Of Theorem 9
658
+
659
+ Proof. Let $\beta=\max_{x}\ell(x)$; then θ(ℓβ) = 1. Now, if θ(ℓα) ≥ θ0, the theorem is trivially true. So we suppose that this is not the case. Since θ as a function of α′ is continuous, there exists an α′ such that θ(ℓα′) ≥ θ0.
664
+
665
+ Now, let θ0 = 1 − δ. In step i of Algorithm 2, with probability greater than $1-\delta^{T}$ we find an $x_{i}$ such that
666
+
667
+ $$\ell(x_{i})-\alpha^{\prime}\leq\left(1-\sqrt{2(1-\delta)}\right)(\ell(x_{i-1})-\alpha^{\prime}).$$
668
+
669
+ By composition, after N steps, with probability greater than $(1-\delta^{T})^{N}$ we have,
671
+
672
+ $$\ell(x_{N})-\alpha^{\prime}\leq\left(1-\sqrt{2(1-\delta)}\right)^{N}(\ell(x_{0})-\alpha^{\prime})$$ $$=\left(1-\sqrt{2\theta_{0}}\right)^{N}(\ell(x_{0})-\alpha^{\prime}).$$
673
+
674
+ Now, we need to lower bound the probability of success, $(1-\delta^{T})^{N}$. To do so we consider the negative logarithm of this quantity,
676
+
677
+ $$-N\log\left(1-\left(1-\theta_{0}\right)^{T}\right)=N\log\left(\frac{1}{1-\left(1-\theta_{0}\right)^{T}}\right)$$
+ $$\stackrel{(a)}{\leq}\frac{3N(1-\theta_{0})^{T}}{2}$$
+ $$=\frac{3N}{2}2^{T\log(1-\theta_{0})}$$
+
+ where (a) follows from the inequality $\log\frac{1}{1-t}\leq\frac{3t}{2}$ for all $t>0$. Setting $T=\frac{2\log N}{\log 1/(1-\theta_{0})}$, we get
+
+ $$-N\log\left(1-(1-\theta_{0})^{T}\right)\leq\frac{3N}{2}2^{-2\log N}\leq\frac{3N}{2}\frac{1}{N^{2}}=\frac{3}{2N}.$$
+
+ Hence, the probability of success is at least $2^{-3/(2N)}\geq 1-3/(2N)$ for all N > 0. This gives us the theorem.
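For concreteness, the choice of T used in this argument is easy to evaluate. The snippet below is our own illustration (θ₀ is chosen arbitrarily); it prints T and the corresponding success-probability guarantee 1 − 3/(2N) for a few values of N.

```python
import math

def theorem9_samples(n_iters, theta0):
    """Per-iteration sample count T = 2 log N / log(1/(1 - theta0)) from the proof above,
    rounded up; with this T the success probability is at least 1 - 3/(2N)."""
    return math.ceil(2.0 * math.log(n_iters) / math.log(1.0 / (1.0 - theta0)))

for n in (10, 100, 1000):
    print(n, theorem9_samples(n, theta0=0.9), 1.0 - 3.0 / (2.0 * n))
```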
683
+
684
+ ## A.8 Implementation Details
+
+ **Algorithm 3** Random Walk for the experiments
+
+ **Require:** $\ell:\mathbb{R}^{d}\to\mathbb{R}$, $x_{0}\in\mathbb{R}^{d}$, $1<k<d$, $N>0$
+ 1: **for** $i=1,\dots,N$ **do**
+ 2: $\bar{x}_{i-1}\leftarrow x_{i-1}/\|x_{i-1}\|$
+ 3: Sample $\eta$ uniformly from $G_{k-1,d-1}$
+ 4: $x_{i}\leftarrow\arg\min_{y\in\pi(\bar{x}_{i-1},\eta)}\ell(y)$
+ 5: **end for**
+ 6: **return** $x_{N}$
691
+ Instead of working with Algorithms 1 and 2 as they are, we modify them slightly to make them more implementation-friendly. We make two modifications:
692
+ 1. First, redefine the function L defined in Section 4.2 by changing its domain. To do this, define $\pi:G_{1,d}\times G_{k-1,d-1}\to G_{k,d}$ as follows. For $(x,\eta)$ in the domain of π, pick a fixed basis, represented by a matrix $U\in\mathbb{R}^{d\times(d-1)}$, for the space $x^{\perp}$ orthogonal to x in $\mathbb{R}^{d}$. Note that $x^{\perp}$ is a (d − 1)-dimensional space. Pick η from $G_{k-1,d-1}$ and use a matrix $V\in\mathbb{R}^{(d-1)\times(k-1)}$ to represent it as a subspace of $\mathbb{R}^{d-1}$. Then construct a (k − 1)-dimensional subspace of $x^{\perp}$ by considering the subspace spanned by UV. Note that this subspace lives in $\mathbb{R}^{d}$ and is orthogonal to x. The image of $(x,\eta)$ under π is the k-dimensional space spanned by x and $\eta_{x^{\perp}}$.
702
+
703
+ Now, define L : G1,d × Gk−1,d−1 → R as follows:
704
+
705
+ $$L(x,\eta):=\operatorname*{min}_{y\in\pi(x,\eta)}\ell(y)$$
706
+
707
+ 2. Second, we do not sample multiple subspaces in each iteration. This decreases the computational complexity of the algorithm and is motivated by empirical observations.
708
+
709
+ We present the modification of Algorithm 1 that we use in our experiments in Algorithm 3.
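For concreteness, a minimal NumPy/SciPy sketch of Algorithm 3 follows. It is our own illustration rather than the authors' code: the function and variable names, the use of scipy.optimize.minimize (Nelder–Mead) as the inner k-dimensional solver, and the least-squares objective in the usage example are all assumptions. The k-plane π(x̄, η) is built exactly as described in item 1 above, from an orthonormal basis U of x̄⊥ and a uniformly random (d − 1) × (k − 1) orthonormal matrix V.

```python
import numpy as np
from scipy.optimize import minimize

def random_subspace_walk(loss, x0, k, n_iters, rng=None):
    """Sketch of Algorithm 3: at each step, minimize `loss` over the k-plane spanned by
    the current (normalized) iterate and a uniformly random (k-1)-dimensional subspace
    of its orthogonal complement."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = x.shape[0]
    for _ in range(n_iters):
        x_bar = x / np.linalg.norm(x)                     # step 2: normalize the iterate
        # Orthonormal basis U (d x (d-1)) of the complement of x_bar, taken from a full
        # QR factorization of the single column x_bar.
        Q, _ = np.linalg.qr(x_bar.reshape(-1, 1), mode="complete")
        U = Q[:, 1:]
        # Step 3: uniformly random (k-1)-dimensional subspace of R^(d-1), represented by
        # an orthonormal matrix V (the span of a QR-factored Gaussian matrix is uniform).
        V, _ = np.linalg.qr(rng.standard_normal((d - 1, k - 1)))
        B = np.column_stack([x_bar, U @ V])               # d x k basis of pi(x_bar, eta)
        # Step 4: minimize over the k-plane, i.e. over the coefficients c in R^k.
        c0 = np.zeros(k)
        c0[0] = np.linalg.norm(x)                         # warm start: B @ c0 equals x
        res = minimize(lambda c: loss(B @ c), c0, method="Nelder-Mead")
        x = B @ res.x
    return x

# Usage: a least-squares objective in d = 50 dimensions with k = 3.
gen = np.random.default_rng(0)
A, b = gen.standard_normal((80, 50)), gen.standard_normal(80)
loss = lambda y: float(np.sum((A @ y - b) ** 2))
x_out = random_subspace_walk(loss, x0=np.ones(50), k=3, n_iters=300, rng=0)
print(loss(x_out), loss(np.linalg.lstsq(A, b, rcond=None)[0]))  # walk vs. exact least squares
```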
710
+
711
+ The reason we do not work with this formulation in the theoretical analysis is to avoid using unnecessary theoretical concepts that might obscure intuition. To theoretically analyze Algorithm 3, the mathematically correct manifold to use is a degenerate flag manifold (Lakshmibai, 2009) instead of $G_{1,d}\times G_{k-1,d-1}$. The theoretical analysis still remains the same, as it is mostly concerned with the use of the Grassmannian as the second space in the product manifold. However, this version of the algorithm is much easier to implement since it eliminates the affine component present in Algorithms 1 and 2.
J9GzUw9HgA/J9GzUw9HgA_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 20,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 20,
14
+ "code": 1,
15
+ "table": 0,
16
+ "equations": {
17
+ "successful_ocr": 47,
18
+ "unsuccessful_ocr": 2,
19
+ "equations": 49
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }