|
# When Is Momentum Extragradient Optimal? A Polynomial-Based Analysis |
|
|
|
Junhyung Lyle Kim⋆ *jlylekim@rice.edu* Rice University, Department of Computer Science; Gauthier Gidel *gauthier.gidel@umontreal.ca* Université de Montréal, Department of Computer Science and Operations Research, Mila - Quebec AI Institute, Canada CIFAR AI Chair; Anastasios Kyrillidis *anastasios@rice.edu* Rice University, Department of Computer Science; Fabian Pedregosa *pedregosa@google.com* Google DeepMind. Reviewed on OpenReview: *https://openreview.net/forum?id=ZLVbQEu4Ab*
|
|
|
## Abstract |
|
|
|
The extragradient method has gained popularity due to its robust convergence properties for differentiable games. Unlike single-objective optimization, game dynamics involve complex interactions reflected by the eigenvalues of the game vector field's Jacobian scattered across the complex plane. This complexity can cause the simple gradient method to diverge, even for bilinear games, while the extragradient method achieves convergence. Building on the recently proven accelerated convergence of the momentum extragradient method for bilinear games (Azizian et al., 2020b), we use a polynomial-based analysis to identify three distinct scenarios where this method exhibits further accelerated convergence. These scenarios encompass situations where the eigenvalues reside on the (positive) real line, lie on the real line alongside complex conjugates, or exist solely as complex conjugates. Furthermore, we derive the hyperparameters for each scenario that achieve the fastest convergence rate. |
|
|
|
## 1 Introduction |
|
|
|
While most machine learning problems are formulated as minimization problems, a growing number of works rely instead on game formulations that involve multiple players and objectives. Examples of such problems include generative adversarial networks (GANs) (Goodfellow et al., 2014), actor-critic algorithms (Pfau & Vinyals, 2016), sharpness-aware minimization (Foret et al., 2021), and fine-tuning language models from human feedback (Munos et al., 2023). This increasing interest in game formulations motivates further theoretical exploration of differentiable games.
|
|
|
Optimizing differentiable games presents challenges absent in minimization problems due to the interplay of multiple players and objectives. Notably, the game Jacobian's eigenvalues are distributed on the complex plane, exhibiting richer dynamics compared to single-objective minimization, where the Hessian eigenvalues are restricted to the real line. Consequently, even for simple bilinear games, standard algorithms like the gradient method fail to converge (Mescheder et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019). |
|
|
|
Fortunately, the extragradient method (EG), originally introduced by Korpelevich (1976), offers a solution. Unlike the gradient method, EG demonstrably converges for bilinear games (Tseng, 1995).
|
⋆Authors after JLK are listed in alphabetical order. |
|
|
|
This paper extends Kim et al. (2022) presented at the NeurIPS 2022 Optimization for Machine Learning Workshop. |
|
|
|
This has sparked extensive research analyzing EG from different perspectives, including variational inequality (Gidel et al., 2018; Gorbunov et al., 2022), stochastic (Li et al., 2021), and distributed (Liu et al., 2020; Beznosikov et al., 2021) settings.
|
|
|
Most existing works, including those mentioned earlier, analyze EG and relevant algorithms by assuming some structure on the objectives, such as (strong) monotonicity or Lipschitzness (Solodov & Svaiter, 1999; Tseng, 1995; Daskalakis & Panageas, 2018; Ryu et al., 2019; Azizian et al., 2020a). Such assumptions, in the context of differentiable games, confine the distribution of the eigenvalues of the game Jacobian; for instance, strong monotonicity implies a lower bound on the real part of the eigenvalues, and the Lipschitz assumption implies an upper bound on the magnitude of the eigenvalues of the Jacobian. |
|
|
|
Building upon the limitations of prior assumptions, Azizian et al. (2020b) showed that the key factor for effectively analyzing game dynamics lies in the spectrum of the Jacobian on the complex plane. Through a polynomial-based analysis, they demonstrated that first-order methods can sometimes achieve faster rates using momentum. This is achieved by replacing the smoothness and monotonicity assumptions with more precise assumptions on the distribution of the Jacobian eigenvalues, represented by simple shapes like ellipses or line segments. Notably, Azizian et al. (2020b) proved that for bilinear games, the extragradient method with momentum achieves an accelerated convergence rate. |
|
|
|
In this work, we take a different approach by asking the *reverse question*: for what shapes of the Jacobian spectrum does the momentum extragradient (MEG) method achieve optimal performance? This reverse analysis allows us to study the behavior of MEG in specific settings depending on the hyperparameter setup, encompassing: |
|
- *Minimization*, where all Jacobian eigenvalues lie on the positive real line. |
|
|
|
- *Regularized bilinear games*, where all eigenvalues are complex conjugates. |
|
|
|
- *Intermediate case*, where eigenvalues are both on the real line and as complex conjugates (illustrated in Figure 1). |
|
|
|
Our contributions can be summarized as follows:

- **Characterizing MEG convergence modes**: We derive the residual polynomials of MEG for affine game vector fields and identify three distinct convergence modes based on hyperparameter settings. This analysis can then be applied to different eigenvalue structures of the Jacobian (see Theorem 3).
|
|
|
- **Optimal hyperparameters and convergence rates**: For each eigenvalue structure, we derive the optimal hyperparameters of MEG and its (asymptotic) convergence rates. For minimization, MEG exhibits "super-acceleration," where a constant improvement upon the classical lower bound rate is attained,1 similarly to the gradient method with momentum (GDM) with cyclical step sizes (Goujaud et al., 2022). For the other two cases involving imaginary eigenvalues, MEG exhibits accelerated convergence rates with the derived optimal hyperparameters.
|
|
|
- **Comparison with other methods**: We compare MEG's convergence rates with the gradient (GD), GDM, and extragradient (EG) methods. For the considered game classes, none of these methods achieve (asymptotically) accelerated rates (Corollaries 1 and 2), unlike MEG. In Section 7, we validate our findings through numerical experiments, including scenarios with slight deviations from our initial assumptions.
|
|
|
## 2 Problem Setup And Related Work |
|
|
|
Following Letcher et al. (2019); Balduzzi et al. (2018), we define the $n$-player differentiable game as a family of twice continuously differentiable losses $\ell_i : \mathbb{R}^d \to \mathbb{R}$, for $i = 1, \ldots, n$. Player $i$ controls the parameter $w^{(i)} \in \mathbb{R}^{d_i}$. We denote the concatenated parameters by $w = [w^{(1)}, \ldots, w^{(n)}] \in \mathbb{R}^d$, where $d = \sum_{i=1}^{n} d_i$.
|
|
|
1Note that achieving this improvement is possible by having additional information beyond just the largest (smoothness) and smallest (strong convexity) eigenvalues of the Jacobian.
|
|
|
![2_image_0.png](2_image_0.png) |
|
|
|
Figure 1: *Convergence rates of MEG in terms of the game Jacobian eigenvalues.* The step sizes for MEG, $h$ and $\gamma$, and the momentum parameter $m$ are set up according to each case of Theorem 3, illustrating three distinct convergence modes of MEG. For each case, the red line indicates the robust region (c.f., Definition 1) where MEG achieves the optimal convergence rate.
|
For this problem, a Nash equilibrium satisfies:

$$w^{(i)}_{\star} \in \operatorname*{arg\,min}_{w^{(i)} \in \mathbb{R}^{d_i}} \ell_i\big(w^{(i)}, w^{(\neg i)}_{\star}\big) \quad \forall i \in \{1, \ldots, n\},$$

where the notation $\cdot^{(\neg i)}$ denotes all indices except $i$. We also define the vector field $v$ of the game as the concatenation of the individual gradients, $v(w) = [\nabla_{w^{(1)}} \ell_1(w) \cdots \nabla_{w^{(n)}} \ell_n(w)]^{\top}$, and denote its associated Jacobian by $\nabla v$.
|
|
|
Unfortunately, finding Nash equilibria for general games remains an *intractable problem* (Shoham & Leyton-Brown, 2008; Letcher et al., 2019).2 Therefore, instead of directly searching for Nash equilibria, we focus on finding *stationary points* of the game's vector field $v$. This approach is motivated by the fact that any Nash equilibrium necessarily corresponds to a stationary point of the gradient dynamics. In other words, we aim to solve the following problem:
|
|
|
$$\mathrm{Find}\quad w^{\star}\in\mathbb{R}^{d}\quad\mathrm{such\ that}\quad v(w^{\star})=0.\tag{1}$$
|
Notation. $\Re(z)$ and $\Im(z)$ respectively denote the real and the imaginary part of a complex number $z$. The spectrum of a matrix $M$ is denoted by $\mathrm{Sp}(M)$, and its spectral radius by $\rho(M) := \max\{|\lambda| : \lambda \in \mathrm{Sp}(M)\}$. $M \succ 0$ denotes that $M$ is a positive-definite matrix. $\mathbb{C}_+$ denotes the complex plane with positive real part, and $\mathbb{R}_+$ denotes the positive real numbers.
|
|
|
## 2.1 Related Work |
|
|
|
The extragradient method, originally introduced in Korpelevich (1976), is a popular algorithm for solving (unconstrained) variational inequality problems of the form (1) (Gidel et al., 2018). Several works study the convergence rate of EG for (strongly) monotone problems (Tseng, 1995; Solodov & Svaiter, 1999; Nemirovski, 2004; Monteiro & Svaiter, 2010; Mokhtari et al., 2020; Gorbunov et al., 2022). Under similar settings, stochastic variants of EG are studied in Palaniappan & Bach (2016); Hsieh et al. (2019; 2020); Li et al. (2021). However, as mentioned earlier, assumptions like (strong) monotonicity or Lipschitzness may not accurately represent how the Jacobian eigenvalues are distributed.
|
|
|
Instead, we make more fine-grained assumptions on these eigenvalues, to obtain the optimal hyperparameters and convergence rates for MEG via a polynomial-based analysis. Such analysis dates back to the development of the conjugate gradient method (Hestenes & Stiefel, 1952), and is still actively used; for instance, to derive lower bounds (Arjevani & Shamir, 2016), to develop accelerated decentralized algorithms (Berthier et al., 2020), and to analyze average-case performance (Pedregosa & Scieur, 2020; Domingo-Enrich et al., 2021).
|
|
|
To that end, we use the following lemma (Chihara, 2011), which elucidates the connection between first-order methods and (residual) polynomials when the vector field $v$ is affine. First-order methods are those in which the sequence of iterates $w_t$ lies in the span of previous gradients: $w_t \in w_0 + \mathrm{span}\{v(w_0), \ldots, v(w_{t-1})\}$.
|
|
|
2Formulating Nash equilibrium search as a nonlinear complementarity problem makes it inherently difficult, classified as PPAD-hard (Daskalakis et al., 2009; Letcher et al., 2019). |
|
Lemma 1 (Chihara (2011)). *Let $w_t$ be the iterate generated by a first-order method after $t$ iterations, with $v(w) = Aw + b$. Then, there exists a real polynomial $P_t$, of degree at most $t$, satisfying:*

$$w_{t}-w^{\star}=P_{t}(A)(w_{0}-w^{\star})\,,\tag{2}$$

*where $P_t(0) = 1$, and $v(w^{\star}) = Aw^{\star} + b = 0$.*
|
|
|
By taking $\ell_2$-norms, (2) further implies the following worst-case convergence rate:

$$\|w_{t}-w^{\star}\| = \|P_{t}(A)(w_{0}-w^{\star})\| \leqslant \|P_{t}(Z\Lambda Z^{-1})\| \cdot \|w_{0}-w^{\star}\| \leqslant \sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}(\lambda)|\cdot\|Z\|\|Z^{-1}\|\cdot\|w_{0}-w^{\star}\|,\tag{3}$$

where $A = Z\Lambda Z^{-1}$ is the diagonalization of $A$,3 and the constant $\|Z\|\|Z^{-1}\|$ disappears if $A$ is a normal matrix. Hence, the worst-case convergence rate of a first-order method can be analyzed by studying the associated residual polynomial $P_t$ evaluated at the eigenvalues $\lambda$ of the Jacobian $\nabla v = A$, distributed over the set $\mathcal{S}^{\star}$.
|
|
|
Unlocking Faster Rates Through Fine-Grained Spectral Shapes. While Azizian et al. (2020b) characterized lower bounds and optimality for certain first-order methods under simple spectral shapes, we posit that a more granular understanding of $\mathcal{S}^{\star}$ could unlock even faster convergence rates. By meticulously analyzing the residual polynomials of MEG, we identify specific spectral shapes where MEG exhibits optimal performance. This approach resonates with recent advancements in the optimization literature (Oymak, 2021; Goujaud et al., 2022), which demonstrate that knowledge beyond merely the largest and smallest eigenvalues (i.e., smoothness and strong convexity) can lead to accelerated convergence in convex smooth minimization.
|
|
|
## 3 Momentum Extragradient Via Chebyshev Polynomials |
|
|
|
In this section, we delve into the intricate dynamics of the momentum extragradient method (MEG) by harnessing the power of residual polynomials and Chebyshev polynomials. |
|
|
|
MEG iterates according to the following update rule: |
|
|
|
$$\mathrm{(MEG)}\quad w_{t+1}=w_{t}-h v(w_{t}-\gamma v(w_{t}))+m(w_{t}-w_{t-1})\,,\tag{4}$$
|
|
|
where h is the step size, γ is the extrapolation step size, and m is the momentum parameter. |
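To make the recursion concrete, below is a minimal NumPy sketch of the MEG iteration (4) for an affine vector field $v(w) = Aw + b$; the matrix, vector, and hyperparameter values are placeholders and not the optimal settings derived later.

```python
# Minimal sketch of the MEG update (4) for an affine vector field v(w) = A w + b.
# A, b, and the hyperparameters (h, gamma, m) are illustrative placeholders.
import numpy as np

def momentum_extragradient(A, b, w0, h, gamma, m, num_iters=1000):
    v = lambda w: A @ w + b                    # affine game vector field
    w_prev, w = w0.copy(), w0.copy()           # no momentum contribution on the first step
    for _ in range(num_iters):
        w_extra = w - gamma * v(w)             # extrapolation step with step size gamma
        w_next = w - h * v(w_extra) + m * (w - w_prev)
        w_prev, w = w, w_next
    return w
```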
|
|
|
The extragradient method (EG), which serves as the foundation for MEG, was originally proposed by Korpelevich (1976) for saddle point problems. It has garnered renewed interest due to its remarkable ability to converge in certain differentiable games, such as bilinear games, where the standard gradient method falters (Gidel et al., 2019; Azizian et al., 2020b;a). |
|
|
|
For completeness, we recall the gradient method with momentum (GDM):

$$\mathrm{(GDM)}\quad w_{t+1}=w_{t}-h v(w_{t})+m(w_{t}-w_{t-1})\,,\tag{5}$$

from which the gradient method (GD) can be obtained by setting $m = 0$.
|
|
|
As a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b), MEG's behavior can be elegantly analyzed through the lens of residual polynomials, as established in Lemma 1. The following theorem unveils the specific residual polynomials associated with MEG: |
|
Theorem 1 (Residual polynomials of MEG and their Chebyshev representation). Consider the momentum extragradient method (MEG) in (4) *with a vector field of the form* v(w) = Aw + b. The residual polynomials associated with MEG can be expressed as follows: |
|
|
|
$$\tilde{P}_{0}(\lambda)=1,\quad\tilde{P}_{1}(\lambda)=1-\frac{h\lambda(1-\gamma\lambda)}{1+m},\quad\mbox{and}\quad\tilde{P}_{t+1}(\lambda)=(1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda).$$ |
|
3Note that almost all matrices are diagonalizable over $\mathbb{C}$, in the sense that the set of non-diagonalizable matrices has Lebesgue measure zero (Hetzel et al., 2007).
|
|
|
|
*Remarkably, these polynomials can be elegantly rewritten in terms of Chebyshev polynomials of the first and second kind, denoted by $T_t(\cdot)$ and $U_t(\cdot)$, respectively:*

$$P_{t}^{\mathrm{MEG}}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right),\ \text{where}\ \sigma(\lambda)\equiv\sigma(\lambda;h,\gamma,m)=\frac{1+m-h\lambda(1-\gamma\lambda)}{2\sqrt{m}}.\tag{6}$$

The term $\sigma(\lambda)$, which encapsulates the interplay between step sizes, momentum, and eigenvalues, is referred to as the *link function*.
|
The residual polynomials of MEG and GDM, intriguingly, share a similar structure but differ in their link functions. Below are the residual polynomials of GDM, expressed in Chebyshev polynomials (Pedregosa, 2020): |
|
|
|
$$P_{t}^{\rm GDM}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\xi(\lambda))+\frac{1-m}{1+m}U_{t}(\xi(\lambda))\right),\quad\mbox{where}\quad\xi(\lambda)=\frac{1+m-h\lambda}{2\sqrt{m}}.\tag{7}$$ |
|
|
|
Notice that the residual polynomials of MEG in (6) and that of GDM in (7) are identical, except for the link functions σ(λ) and ξ(λ), which enter as arguments in Tt(·) and Ut(·). |
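As a quick sanity check that the recurrence of Theorem 1 and the Chebyshev closed form (6) agree, the short numerical comparison below can be used; the eigenvalue and hyperparameter values are arbitrary placeholders.

```python
# Numerical check that the recurrence of Theorem 1 matches the Chebyshev closed form (6).
# The values of h, gamma, m, lam, and t are arbitrary placeholders.
import numpy as np

def chebyshev_T_U(t, z):
    """T_t(z) and U_t(z) via their three-term recurrences (valid for complex z)."""
    if t == 0:
        return 1.0, 1.0
    T_prev, T = 1.0, z
    U_prev, U = 1.0, 2 * z
    for _ in range(t - 1):
        T_prev, T = T, 2 * z * T - T_prev
        U_prev, U = U, 2 * z * U - U_prev
    return T, U

h, gamma, m, lam, t = 0.1, 0.02, 0.25, 3.0 + 1.0j, 12

# Recurrence from Theorem 1.
P_prev, P = 1.0, 1 - h * lam * (1 - gamma * lam) / (1 + m)
for _ in range(t - 1):
    P_prev, P = P, (1 + m - h * lam * (1 - gamma * lam)) * P - m * P_prev

# Closed form (6) through the link function sigma.
sigma = (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))
T, U = chebyshev_T_U(t, sigma)
P_closed = m ** (t / 2) * (2 * m / (1 + m) * T + (1 - m) / (1 + m) * U)

print(abs(P - P_closed))   # ~0 up to floating-point error
```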
|
|
|
The differences in these link functions are paramount because the behavior of Chebyshev polynomials hinges decisively on their argument's domain: |
|
Lemma 2 (Goujaud & Pedregosa (2022)). *Let $z$ be a complex number, and let $T_t(\cdot)$ and $U_t(\cdot)$ be the Chebyshev polynomials of the first and second kind, respectively. The sequence $\left\{\frac{2m}{1+m}T_{t}(z)+\frac{1-m}{1+m}U_{t}(z)\right\}_{t\geqslant0}$ grows exponentially in $t$ for $z\notin[-1,1]$, while for $z\in[-1,1]$, the following bounds hold:*

$$|T_{t}(z)|\leqslant1\quad\text{and}\quad|U_{t}(z)|\leqslant t+1.\tag{8}$$
|
Therefore, to study the optimal convergence behavior of MEG, we are interested in the case where the set of step sizes and the momentum parameters lead to |σ(λ; h, γ, m)| ⩽ 1 so that we can use the bounds in (8). |
|
|
|
We will refer to those sets of eigenvalues and hyperparameters as the *robust region*, as defined below. |
|
|
|
Definition 1 (Robust region of MEG). *Consider the MEG method in (4), expressed via Chebyshev polynomials as in (6). We define the set of eigenvalues and hyperparameters such that the image of the link function $\sigma(\lambda; h, \gamma, m)$ lies in the interval $[-1, 1]$ as the* **robust region**, *and denote it with* $\sigma^{-1}([-1,1])$.
|
|
|
Although polynomial-based analysis requires the assumption that the vector field is affine, it captures intuitive insights into how various algorithms behave in different settings, as we remark below. |
|
|
|
Remark 1. *From the definition of $\xi(\lambda)$ in (7), one can infer why negative momentum can help the convergence of GDM (Gidel et al., 2019) when $\lambda \in \mathbb{R}_+$: it forces GDM to stay within the robust region, $|\xi(\lambda)| \leqslant 1$. One can also infer the divergence of GDM in the presence of complex eigenvalues, unless, for instance, complex momentum is used (Lorraine et al., 2022). Similarly, the residual polynomial of GD is $P_t^{\mathrm{GD}}(\lambda) = (1 - h\lambda)^{t}$ (Goujaud & Pedregosa, 2022, Example 4.2), which can easily diverge in the presence of complex eigenvalues; this can potentially be alleviated by using complex step sizes. On the contrary, thanks to the quadratic link function of MEG in (6), it can converge for much wider subsets of complex eigenvalues.*

By analyzing the residual polynomials of MEG, we can also characterize the asymptotic convergence rate of MEG for any combination of hyperparameters, as summarized in the next theorem.

Theorem 2 (Asymptotic convergence rate of MEG). *Suppose $v(w) = Aw + b$. The asymptotic convergence rate of MEG in (4) is:*4
|
|
|
$$\limsup_{t\to\infty}\sqrt[2t]{\frac{\|w_{t}-w^{\star}\|}{\|w_{0}-w^{\star}\|}}=\begin{cases}\sqrt[4]{m},&\text{if}\quad\bar{\sigma}\leqslant1\quad\text{(robust region)};\\ \sqrt[4]{m}\,\big(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\big)^{1/2},&\text{if}\quad\bar{\sigma}\in\Big(1,\frac{1+m}{2\sqrt{m}}\Big);\\ \geqslant1\ \text{(no convergence)},&\text{otherwise},\end{cases}\tag{9}$$

*where $\bar{\sigma}=\sup_{\lambda\in\mathcal{S}^{\star}}|\sigma(\lambda;h,\gamma,m)|$, and $\sigma(\lambda;h,\gamma,m)\equiv\sigma(\lambda)$ is the link function of MEG defined in* (6).
|
4The reason why we take the 2t-th root is to normalize by the number of vector field computations; we compare in Section 4 the asymptotic rate of MEG in (9) with other gradient methods that use a single vector field computation in the recurrences, such as GD and GDM. |
|
The optimal hyperparameters for MEG that we obtain in Section 4 minimize the asymptotic convergence rate above. Note that the optimal hyperparameters vary based on the set $\mathcal{S}^{\star}$, which we detail in Section 3.2.
|
|
|
## 3.1 Three Modes Of The Momentum Extragradient |
|
|
|
Within the robust region of MEG, we can compute its worst-case convergence rate based on (3) as follows: |
|
|
|
$$\sup_{\lambda\in\mathcal{S}^{*}}|P_{t}^{\text{MEG}}(\lambda)|\leqslant\,m^{t/2}\Big{(}\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{*}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{*}}|U_{t}(\sigma(\lambda))|\Big{)}\tag{10}$$ $$\leqslant\,m^{t/2}\Big{(}\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\Big{)}\leqslant m^{t/2}(t+1).$$ |
|
|
|
Since the Chebyshev polynomial expressions of MEG in (6) and that of GDM5 are identical except for the link functions, the convergence rate in (10) applies to both MEG and GDM, as long as the link functions |σ(λ)| and |ξ(λ)| are bounded by 1. As a result, we see that the asymptotic convergence rate in (9) only depends on the momentum parameter m, when the hyperparameters are restricted to the robust region. This fact was utilized in tuning GDM for strongly convex quadratic minimization (Zhang & Mitliagkas, 2019). |
|
|
|
The robust region of MEG can be described with the four extreme points below (derivation in the appendix): |
|
|
|
$$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\quad\text{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.\tag{11}$$ |
|
|
|
The above four points and their intermediate values characterize the set of Jacobian eigenvalues $\lambda$ that can be mapped to $[-1, 1]$. The distribution of these eigenvalues can vary in three different modes depending on the selected hyperparameters of MEG, as stated in the following theorem.

Theorem 3. *Consider the momentum extragradient method in (4), expressed with the Chebyshev polynomials as in (6). Then, the robust region (c.f., Definition 1) has the following three modes:*
|
|
|
- *Case 1:* If $\frac{h}{4\gamma}\geqslant(1+\sqrt{m})^{2}$, then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ are all real numbers;

- *Case 2:* If $(1-\sqrt{m})^{2}\leqslant\frac{h}{4\gamma}<(1+\sqrt{m})^{2}$, then $\sigma^{-1}(-1)$ are complex, and $\sigma^{-1}(1)$ are real;

- *Case 3:* If $(1-\sqrt{m})^{2}>\frac{h}{4\gamma}$, then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ are all complex numbers.
|
Remark 2. *Theorem 3 offers guidance on how to set up the hyperparameters for MEG. This depends on the Jacobian spectrum of the game problem being considered. For instance, if one observes only real eigenvalues (i.e., the problem is in fact minimization), the main step size $h$ should be at least $4\times$ larger than the extrapolation step size $\gamma$, based on the condition $\frac{h}{4\gamma} \geqslant (1+\sqrt{m})^{2}$.*
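As an illustration of Remark 2 and of the extreme points in (11), the small helper below (our own, not from the paper) reports which mode of Theorem 3 a given hyperparameter triple falls into, together with the endpoints $\sigma^{-1}(\pm 1)$; the example values are placeholders.

```python
# Helper (ours) classifying the mode of Theorem 3 and computing the endpoints (11).
import numpy as np

def meg_mode(h, gamma, m):
    ratio = h / (4 * gamma)
    if ratio >= (1 + np.sqrt(m)) ** 2:
        return "Case 1: two real intervals"
    if ratio >= (1 - np.sqrt(m)) ** 2:
        return "Case 2: cross-shaped robust region"
    return "Case 3: two complex-conjugate segments"

def robust_region_endpoints(h, gamma, m):
    """sigma^{-1}(-1) and sigma^{-1}(+1) from (11), returned as complex pairs."""
    center = 1 / (2 * gamma)
    disc_m1 = np.sqrt(complex(center ** 2 - (1 + np.sqrt(m)) ** 2 / (h * gamma)))
    disc_p1 = np.sqrt(complex(center ** 2 - (1 - np.sqrt(m)) ** 2 / (h * gamma)))
    return (center - disc_m1, center + disc_m1), (center - disc_p1, center + disc_p1)

print(meg_mode(h=0.1, gamma=0.02, m=0.25))  # -> Case 2, since 0.25 <= h/(4*gamma)=1.25 < 2.25
```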
|
|
|
We illustrate Theorem 3 in Figure 1. We first set the hyperparameters according to each condition in Theorem 3. We then discretize the interval $[-1, 1]$ and plot $\sigma^{-1}([-1,1])$ for each case, represented by red lines. We can see that the quadratic link function induced by MEG allows interesting eigenvalue dynamics to be mapped onto the $[-1,1]$ segment, such as the cross shape observed in Case 2. Moreover, although MEG exhibits the best rates within the robust region, it does not necessarily diverge outside of it, as in the second case of Theorem 2. We illustrate the convergence region of MEG measured by $\sqrt[2t]{|P_t(\lambda)|} < 1$ from (6) for $t = 2000$, with different colors indicating varying convergence rates, which slow down as one moves away from the robust region. Interestingly, Figure 1 (right) shows that MEG can also converge in the absence of monotonicity (i.e., in the presence of Jacobian eigenvalues with negative real part) (Gorbunov et al., 2023).
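Such a rate map can be reproduced, under the affine model, by evaluating $\sqrt[2t]{|P_t(\lambda)|}$ over a grid of complex eigenvalues using the recurrence of Theorem 1; a sketch is given below, where the grid ranges and the (reduced) value of $t$ are illustrative choices of ours.

```python
# Sketch (ours) of the rate map behind Figure 1: |P_t(lambda)|^(1/(2t)) on a complex grid.
import numpy as np

def meg_rate_map(h, gamma, m, re_lim=(-2.0, 20.0), im_lim=(-10.0, 10.0), n=300, t=500):
    re = np.linspace(*re_lim, n)
    im = np.linspace(*im_lim, n)
    lam = re[None, :] + 1j * im[:, None]           # grid of candidate eigenvalues
    coeff = 1 + m - h * lam * (1 - gamma * lam)    # equals 2*sqrt(m)*sigma(lambda)
    P_prev = np.ones_like(lam)
    P = 1 - h * lam * (1 - gamma * lam) / (1 + m)
    for _ in range(t - 1):                         # recurrence of Theorem 1
        P_prev, P = P, coeff * P - m * P_prev
    # Entries may overflow to inf where MEG diverges; those points lie outside the region.
    return np.abs(P) ** (1 / (2 * t))              # values < 1 mark the convergence region
```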
|
|
|
## 3.2 Robust Region-Induced Problem Cases |
|
|
|
We classify problem classes into three distinct cases based on Theorem 3, each reflecting a different mode of the robust region (Figure 2): |
|
5Asymptotically, GDM enjoys a $\sqrt{m}$ convergence rate instead of the $\sqrt[4]{m}$ of MEG, as it uses a single vector field computation per iteration instead of two. However, these are not directly comparable, as the values of $m$ that correspond to the robust region are not the same.
|
|
|
|
![6_image_0.png](6_image_0.png) |
|
|
|
Figure 2: *Illustration of the three spectrum models where MEG achieves accelerated convergence rates.* |
|
Case 1: The problem reduces to minimization, where the Jacobian eigenvalues are distributed on the (positive) real line, but as a *union* of two intervals. We can model such a spectrum as:
|
|
|
$$\operatorname{Sp}(\nabla v)\subset{\mathcal{S}}_{1}^{\star}=[\mu_{1},L_{1}]\cup[\mu_{2},L_{2}]\subset\mathbb{R}_{+}.\tag{12}$$
|
The above generalizes the Hessian spectrum that arises when minimizing $\mu$-strongly convex and $L$-smooth functions, i.e., $\lambda \in [\mu, L]$. This spectrum can be obtained from (12) by setting $\mu_1 = \mu$, $L_2 = L$, and $L_1 = \mu_2$. It has been empirically observed that, during DNN training, a few eigenvalues of the Hessian sometimes have significantly larger magnitudes (Papyan, 2020). In such cases, (12) can be more precise than a single interval $[\mu, L]$. In particular, Goujaud et al. (2022) utilized (12), and showed that GDM with alternating step sizes can achieve a (constant factor) improvement over the traditional lower bound for strongly convex and smooth quadratic objectives.
|
|
|
In Section 4, we show that MEG enjoys a similar improvement. To that end, we define the following quantities, following Goujaud et al. (2022), which will be used to obtain the convergence rate of MEG in (18) for this problem class.
|
|
|
$$\zeta:={\frac{L_{2}+\mu_{1}}{L_{2}-\mu_{1}}}={\frac{1+\tau}{1-\tau}},\quad{\mathrm{and}}\quad R:={\frac{\mu_{2}-L_{1}}{L_{2}-\mu_{1}}}\in[0,1).\tag{13}$$
|
|
|
Here, $\zeta$ is the ratio between the center of $\mathcal{S}_1^{\star}$ and its radius, and $\tau := \mu_1/L_2$ is the inverse condition number. $R$ is the relative gap between $\mu_2 - L_1$ and $L_2 - \mu_1$, which becomes $0$ if $\mu_2 = L_1$ (i.e., $\mathcal{S}_1^{\star}$ becomes $[\mu_1, L_2]$).
|
|
|
Case 2: In this case, the Jacobian eigenvalues are distributed both on the real line and as complex conjugates, exhibiting a *cross-shaped* spectrum. We model this spectrum as: |
|
|
|
$$\mathrm{Sp}(\nabla v)\subset{\mathcal{S}}_{2}^{\star}=[\mu,L]\cup\{z\in\mathbb{C}:\Re(z)=c^{\prime}>0,\ \Im(z)\in[-c,c]\}.\tag{14}$$
|
|
|
The first set $[\mu, L]$ denotes a segment on the real line, reminiscent of the Hessian spectrum for minimizing $\mu$-strongly convex and $L$-smooth functions. The second set has a fixed real component ($c' > 0$), along with imaginary components symmetric across the real line (i.e., complex conjugates), as the Jacobian is real.
|
|
|
This is a strict generalization of the purely imaginary interval $\pm[ai, bi]$ commonly considered in the bilinear games literature (Liang & Stokes, 2019; Azizian et al., 2020b; Mokhtari et al., 2020). While many recent papers on bilinear games cite GANs (Goodfellow et al., 2014) as a motivation, Berard et al. (2020, Figure 4) empirically show that the spectrum of GANs is not contained in the imaginary axis; the cross-shaped spectrum model above might be closer to some of the observed GAN spectra.

Case 3: In this case, the Jacobian eigenvalues are distributed only as complex conjugates, with a fixed real component, exhibiting a *shifted imaginary* spectrum. We model this spectrum as:
|
|
|
$$\operatorname{Sp}(\nabla v)\subset{\mathcal{S}}_{3}^{\star}=[c+ai,c+bi]\cup[c-ai,c-bi]\subset\mathbb{C}_{+}.\tag{15}$$

Again, (15) generalizes bilinear games, where the spectrum reduces to $\pm[ai, bi]$ with $c = 0$.
|
|
|
Examples of Cases 2 and 3 in quadratic games. To understand these spectra better, we provide examples using quadratic games. Consider the following two-player quadratic game, where $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$ are the parameters controlled by each player, whose loss functions respectively are:
|
|
|
$$\ell_{1}(x,y)=\frac{1}{2}x^{\top}S_{1}x+x^{\top}M_{12}y+x^{\top}b_{1}\quad\text{and}\quad\ell_{2}(x,y)=\frac{1}{2}y^{\top}S_{2}y+y^{\top}M_{21}x+y^{\top}b_{2},\tag{16}$$ |
|
|
|
where S1, S2 ≻ 0. Then, the vector field can be written as: |
|
|
|
$$v(x,y)=\begin{bmatrix}S_{1}x+M_{12}y+b_{1}\\ M_{21}x+S_{2}y+b_{2}\end{bmatrix}=Aw+b,\text{where}A=\begin{bmatrix}S_{1}&M_{12}\\ M_{21}&S_{2}\end{bmatrix},\text{}w=\begin{bmatrix}x\\ y\end{bmatrix},\text{and}b=\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}.\tag{17}$$ |
|
|
|
If $S_1 = S_2 = 0$ and $M_{12} = -M_{21}^{\top}$, the game Jacobian $\nabla v = A$ has only purely imaginary eigenvalues (Azizian et al., 2020b, Lemma 7), recovering bilinear games.

As the second and the third spectrum models in (14) and (15) generalize bilinear games, we can consider more complex quadratic games, where $S_1$ and $S_2$ do not have to be $0$. Specifically, when $M_{12} = -M_{21}^{\top}$ and they share common bases with $S_1$ and $S_2$ as specified in the proposition below, $\mathrm{Sp}(A)$ has the cross-shaped spectrum in (14) of Case 2 or the shifted imaginary spectrum in (15) of Case 3.
|
|
|
Proposition 1. *Let $A$ be a matrix of the form $\begin{bmatrix} S_1 & B \\ -B^{\top} & S_2 \end{bmatrix}$, where $S_1, S_2 \succ 0$. Without loss of generality, assume that $\dim(S_1) > \dim(S_2) = d$. Then,*

- *Case 2:* $\mathrm{Sp}(A)$ has a cross shape if there exist orthonormal matrices $U, V$ and diagonal matrices $D_1, D_2$ such that $S_1 = U\mathrm{diag}(a, \ldots, a, D_1)U^{\top}$, $S_2 = V\mathrm{diag}(a, \ldots, a)V^{\top}$, and $B = UD_2V^{\top}$.

- *Case 3:* $\mathrm{Sp}(A)$ has a shifted imaginary shape if there exist orthonormal matrices $U, V$ and a diagonal matrix $D_2$ such that $S_1 = U\mathrm{diag}(a, \ldots, a)U^{\top}$, $S_2 = V\mathrm{diag}(a, \ldots, a)V^{\top}$, and $B = UD_2V^{\top}$.
|
|
|
We can interpret Case 3 as a *regularized* bilinear game, where $S_1$ and $S_2$ are diagonal matrices with a constant eigenvalue. This implies that the players cannot control their parameters $x$ and $y$ arbitrarily, which can be seen in the loss functions in (16), where $S_1$ and $S_2$ appear in the terms $x^{\top}S_1 x$ and $y^{\top}S_2 y$. Case 2 can be interpreted similarly, but player 1 (without loss of generality) has more flexibility in its parameter choice due to the additional diagonal matrix $D_1$ in the eigenvalue decomposition of $S_1$.
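To make Proposition 1 concrete, the sketch below (our own construction with illustrative sizes and values, not from the paper) builds a quadratic-game Jacobian of the form (17) whose spectrum is cross-shaped as in Case 2, and prints its eigenvalues.

```python
# Sketch (ours): a game Jacobian A with the cross-shaped spectrum of Case 2 (Proposition 1).
import numpy as np
rng = np.random.default_rng(0)

d1, d2, a = 6, 4, 5.0                                     # dim(S1) > dim(S2) = d2; shared real part a
U, _ = np.linalg.qr(rng.standard_normal((d1, d1)))        # orthonormal bases U and V
V, _ = np.linalg.qr(rng.standard_normal((d2, d2)))

D1 = np.diag(rng.uniform(1.0, 10.0, d1 - d2))             # real eigenvalues of the cross
D2 = np.zeros((d1, d2))
np.fill_diagonal(D2, rng.uniform(0.5, 3.0, d2))           # imaginary offsets of the conjugate pairs

S1 = U @ np.block([[a * np.eye(d2), np.zeros((d2, d1 - d2))],
                   [np.zeros((d1 - d2, d2)), D1]]) @ U.T
S2 = V @ (a * np.eye(d2)) @ V.T
B = U @ D2 @ V.T

A = np.block([[S1, B], [-B.T, S2]])                       # game Jacobian as in (17), M12 = -M21^T
print(np.round(np.linalg.eigvals(A), 3))                  # entries of D1, plus a +/- i * diag(D2)
```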
|
|
|
## 4 Optimal Parameters And Convergence Rates |
|
|
|
In this section, we obtain the optimal hyperparameters of MEG (in the sense that they achieve the fastest asymptotic convergence rate), for each spectrum model discussed in the previous section. |
|
|
|
Case 1: minimization. When the condition in Case 1 of Theorem 3 holds (i.e., $\frac{h}{4\gamma} \geqslant (1+\sqrt{m})^{2}$), both $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ (and their intermediate values) lie on the real line, forming a union of two intervals (see Figure 1, left). The robust region in this case, denoted $\sigma^{-1}_{\mathrm{Case 1}}([-1,1])$, is expressed as:
|
|
|
$$\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{R}_{+}.$$
|
|
|
For this case, the optimal hyperparameters of MEG in terms of the worst-case asymptotic convergence rate in (9) can be set as below.

Theorem 4 (Case 1). *Consider solving (1) for games where the Jacobian has the spectrum in (12). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:*
|
|
|
$$h=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}},\ \ \gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}},\ \ \ \text{and}\ \ \ m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}.$$
|
|
|
Recalling (9), we immediately get the asymptotic convergence rate from Theorem 4. Further, this formula can be simplified in the ill-conditioned regime, where the inverse condition number τ := µ1/L2 → 0: |
|
|
|
$$\sqrt[4]{m}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{1-R^{2}}}+o(\sqrt{\tau}).\tag{18}$$ |
|
|
|
From (18), we see that MEG achieves an accelerated convergence rate $1 - O(\sqrt{\tau})$, which is known to be "optimal" for this function class, and can be asymptotically achieved by GDM6 (Polyak, 1987) (see also Theorem 8 with $\theta = 1$). Surprisingly, this rate can be further improved by the factor $\sqrt{1-R^2}$, exhibiting the "super-acceleration" phenomenon enjoyed by GDM with (optimal) cyclical step sizes (Goujaud et al., 2022). Note that achieving this improvement is possible by having additional information beyond just the largest ($L_2$) and smallest ($\mu_1$) eigenvalues of the Hessian.
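A short numerical check of Theorem 4, with illustrative interval endpoints of ours satisfying $\mu_1 + L_2 = \mu_2 + L_1$, is given below: with the prescribed $h$, $\gamma$, $m$, the link function maps the outer endpoints to $+1$ and the inner endpoints to $-1$, so $\mathcal{S}_1^{\star}$ sits exactly on the boundary of the robust region.

```python
# Check (ours) that the Case 1 hyperparameters of Theorem 4 place S1* on the robust-region boundary.
import numpy as np

mu1, L1, mu2, L2 = 1.0, 2.0, 9.0, 10.0                        # illustrative: mu1 + L2 == mu2 + L1

h = 4 * (mu1 + L2) / (np.sqrt(mu2 * L1) + np.sqrt(mu1 * L2)) ** 2
gamma = 1 / (mu1 + L2)
m = ((np.sqrt(mu2 * L1) - np.sqrt(mu1 * L2)) /
     (np.sqrt(mu2 * L1) + np.sqrt(mu1 * L2))) ** 2

sigma = lambda lam: (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))
print([round(sigma(lam), 8) for lam in (mu1, L1, mu2, L2)])   # -> [1.0, -1.0, -1.0, 1.0]
print("asymptotic rate per vector-field call:", m ** 0.25)    # cf. (18)
```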
|
|
|
Case 2: cross-shaped spectrum. If the condition in Case 2 of Theorem 3 is satisfied (i.e., $(1-\sqrt{m})^{2} \leqslant \frac{h}{4\gamma} < (1+\sqrt{m})^{2}$), then $\sigma^{-1}(-1)$ are complex, while $\sigma^{-1}(1)$ are real (c.f., Figure 1, middle). We can write the robust region $\sigma^{-1}_{\mathrm{Case 2}}([-1,1])$ as:
|
|
|
$$\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\,\mathbb{R}_{+}}\bigcup\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\,\mathbb{C}_{+}}.$$
|
Here, the first interval lies on $\mathbb{R}_+$, as the square root term is real; conversely, in the second interval, the square root term is imaginary, with the fixed real component $\frac{1}{2\gamma}$. We summarize the optimal hyperparameters for this case in the next theorem.

Theorem 5 (Case 2). *Consider solving (1) for games where the Jacobian has a cross-shaped spectrum as in (14). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:*
|
|
|
$$h=\frac{16(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}},\quad\ \gamma=\frac{1}{\mu+L},\quad\ \mathrm{and}\quad\ m=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{2}.$$ |
|
|
|
We get the asymptotic rate from Theorem 5, which simplifies in the ill-conditioned regime τ := µ/L → 0 as: |
|
|
|
$$\sqrt[4]{m}=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{(2c/L)^{2}+1}}+o(\sqrt{\tau}).\tag{19}$$ |
|
|
|
We see that MEG achieves an accelerated convergence rate $1 - O(\sqrt{\mu/L})$, as long as $c = O(L)$. We remark that this rate is optimal in the following sense: the lower bound for problems with the cross-shaped spectrum in (14) must be no faster than the existing one for minimizing $\mu$-strongly convex and $L$-smooth functions, as the former class is strictly more general. Since we reach the same asymptotic optimal rate, this must be optimal.

Case 3: shifted imaginary spectrum. Lastly, if the condition in Case 3 of Theorem 3 is satisfied (i.e., $\frac{h}{4\gamma} < (1-\sqrt{m})^{2}$), then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ (and the intermediate values) are all complex conjugates (c.f., Figure 1, right). We can write the robust region $\sigma^{-1}_{\mathrm{Case 3}}([-1,1])$ as:
|
|
|
$$\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{C}_{+}.$$
|
|
|
We modeled such a spectrum in (15), which generalizes bilinear games, where the spectrum reduces to $\pm[ai, bi]$ (i.e., with $c = 0$). We summarize the optimal hyperparameters for this case below.
|
|
|
6Precisely, GDM with optimal step size and momentum asymptotically achieves a $1 - 2\sqrt{\tau} + o(\sqrt{\tau})$ convergence rate, as $\tau \to 0$ (Goujaud & Pedregosa, 2022, Proposition 3.3).
|
Theorem 6 (Case 3). *Consider solving* (1) *for games where the Jacobian has a shifted imaginary spectrum* in (15)*. For this problem, the optimal hyperparameters for the momentum extragradient method in* (4) *are:* |
|
|
|
$$h=\frac{8c}{(\sqrt{c^{2}+a^{2}}+\sqrt{c^{2}+b^{2}})^{2}},\quad\gamma=\frac{1}{2c},\quad\text{and}\quad m=\left(\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\right)^{2}.$$
|
|
|
Similarly to before, we compute the asymptotic convergence rate from Theorem 6 using (9).
|
|
|
$${\sqrt[4]{m}}=\left({\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}}\right)^{1/2}=\left(1-{\frac{2{\sqrt{c^{2}+a^{2}}}}{{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}}}\right)^{1/2}.\tag{20}$$
|
|
|
Note that by setting $c = 0$, the rate in (20) matches the lower bound for bilinear games, $\sqrt{\tfrac{b-a}{b+a}}$ (Azizian et al., 2020b, Proposition 5). Further, with $c > 0$, the convergence rate in (20) improves, highlighting the contrast between vanilla bilinear games and their regularized counterpart.

Remark 3. *Notice that the optimal momentum $m$ in both Theorems 5 and 6 is positive. This is in contrast to Gidel et al. (2019), where the **gradient** method with negative momentum is studied. This difference elucidates the distinct dynamics of how momentum interacts with the **gradient** and the **extragradient** methods.*
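For reference, the optimal hyperparameters of Theorems 5 and 6 are straightforward to compute; a small sketch is given below (the function names and example values are ours). With $c \to 0$ in Case 3, the printed rate recovers the bilinear lower bound $\sqrt{(b-a)/(b+a)}$.

```python
# Sketch (ours) of the optimal MEG hyperparameters from Theorems 5 and 6.
import numpy as np

def meg_params_cross(mu, L, c):
    """Theorem 5: cross-shaped spectrum (Case 2). Returns (h, gamma, m)."""
    s, r = np.sqrt(4 * c ** 2 + (mu + L) ** 2), np.sqrt(4 * mu * L)
    return 16 * (mu + L) / (s + r) ** 2, 1 / (mu + L), ((s - r) / (s + r)) ** 2

def meg_params_shifted(a, b, c):
    """Theorem 6: shifted imaginary spectrum (Case 3). Returns (h, gamma, m)."""
    s, r = np.sqrt(c ** 2 + b ** 2), np.sqrt(c ** 2 + a ** 2)
    return 8 * c / (r + s) ** 2, 1 / (2 * c), ((s - r) / (s + r)) ** 2

_, _, m2 = meg_params_cross(mu=1.0, L=200.0, c=(200.0 - 1.0) / 2)
print("Case 2 asymptotic rate:", m2 ** 0.25)               # accelerated, cf. (19)

_, _, m3 = meg_params_shifted(a=1.0, b=10.0, c=1e-8)
print("Case 3 rate (c -> 0):", m3 ** 0.25, "vs", np.sqrt((10.0 - 1.0) / (10.0 + 1.0)))
```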
|
|
|
## 5 Comparison With Other Methods |
|
|
|
Having established MEG's asymptotic convergence rates for various spectrum models, we now compare it with other first-order methods, including GD, GDM, and EG. |
|
|
|
Comparison with GD and EG. Building upon the fixed-point iteration framework established by Polyak (1987), Azizian et al. (2020a) interpret both GD and EG as fixed-point iterations. Within this framework, iterates of a method are generated according to: |
|
|
|
$$w_{t+1}=F(w_{t}),\quad\forall t\geqslant0,\tag{21}$$
|
where $F : \mathbb{R}^d \to \mathbb{R}^d$ is an operator representing the method. However, analyzing this scheme in general settings poses challenges due to the potential nonlinearity of $F$. To address this, under conditions of twice differentiability of $F$ and proximity of $w$ to the stationary point $w^{\star}$, the analysis can be simplified by linearizing $F$ around $w^{\star}$:
|
|
|
$$F(w)\approx F(w^{\star})+\nabla F(w^{\star})(w-w^{\star}).$$ |
|
|
|
Then, for $w_0$ in a neighborhood of $w^{\star}$, one can obtain an asymptotic convergence rate of (21) by studying the spectral radius of the Jacobian at the solution: $\rho(\nabla F(w^{\star})) \leqslant \rho^{\star} < 1$. This implies that (21) locally converges linearly to $w^{\star}$ at the rate $O((\rho^{\star} + \varepsilon)^{t})$ for $\varepsilon \geqslant 0$. Further, if $F$ is linear, $\varepsilon = 0$ (Polyak, 1987).
|
|
|
The corresponding fixed-point operators $F_h^{\mathrm{GD}}$ and $F_h^{\mathrm{EG}}$ of GD and EG7 respectively are:
|
|
|
$$\mathrm{(GD)}\quad w_{t+1}=w_{t}-hv(w_{t})=F_{h}^{\mathrm{GD}}(w_{t}),\tag{22}$$
$$\mathrm{(EG)}\quad w_{t+1}=w_{t}-hv(w_{t}-hv(w_{t}))=F_{h}^{\mathrm{EG}}(w_{t}).\tag{23}$$
|
The local convergence rate can then be obtained by bounding the spectral radius of the Jacobian of these operators under certain assumptions. We summarize the relevant results below.

Theorem 7 (Azizian et al. (2020a); Gidel et al. (2019)). *Let $w^{\star}$ be a stationary point of $v$. Further, assume the eigenvalues of $\nabla v(w^{\star})$ all have positive real parts. Then, denoting $\mathcal{S}^{\star} := \mathrm{Sp}(\nabla v(w^{\star}))$,*

1. *For the gradient method in (22) with step size $h = \min_{\lambda \in \mathcal{S}^{\star}} \Re(1/\lambda)$, it satisfies:*8
|
|
|
$$\rho(\nabla F_{h}^{\mathrm{GD}}(w^{\star}))^{2}\leqslant1-\min_{\lambda\in\mathcal{S}^{\star}}\Re\left(\frac{1}{\lambda}\right)\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda).\tag{24}$$
|
7Azizian et al. (2020a) assumes that EG uses the same step size h for both the main and the extrapolation steps. |
|
|
|
8Note that the spectral radius $\rho$ is squared; asymptotically this makes little difference, since $\sqrt{1-x} \leqslant 1 - x/2$.
|
|
|
|
|
|
2. *For the extragradient method in (23) with step size $h = (4 \sup_{\lambda \in \mathcal{S}^{\star}} |\lambda|)^{-1}$, it satisfies:*

$$\rho(\nabla F_{h}^{\mathrm{EG}}(w^{\star}))^{2}\leqslant1-\frac{1}{4}\left(\frac{\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda)}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|}+\frac{1}{16}\frac{\min_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}\right).\tag{25}$$
|
|
|
We can determine the convergence rates of GD and EG by using Theorem 7, since all three of our spectrum models in (12), (14), and (15) meet the condition that the eigenvalues of $\nabla v(w^{\star})$ have positive real parts. The following corollary summarizes this result.
|
|
|
Corollary 1. *With the conditions in Theorem 7, for each case of the Jacobian spectrum $\mathcal{S}_1^{\star}$, $\mathcal{S}_2^{\star}$, and $\mathcal{S}_3^{\star}$, the gradient method in (22) and the extragradient method in (23) satisfy the following:*

- Case 1: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}_1^{\star} = [\mu_1, L_1] \cup [\mu_2, L_2] \subset \mathbb{R}_+$:

$$\rho(\nabla F_h^{\mathrm{GD}}(w^{\star}))^2 \leqslant 1 - \frac{\mu_1}{L_2}, \quad\text{and}\quad \rho(\nabla F_h^{\mathrm{EG}}(w^{\star}))^2 \leqslant 1 - \frac{1}{4}\left(\frac{\mu_1}{L_2} + \frac{\mu_1^2}{16 L_2^2}\right).\tag{26}$$

- Case 2: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}_2^{\star} = [\mu, L] \cup \{z \in \mathbb{C} : \Re(z) = c' > 0,\ \Im(z) \in [-c, c]\}$:

$$\rho(\nabla F_h^{\mathrm{GD}}(w^{\star}))^2 \leqslant \begin{cases} 1 - \dfrac{2\mu}{4c^2/(L-\mu) + (L-\mu)}, & \text{if } c \geqslant \sqrt{\tfrac{L^2 - \mu^2}{4}},\\[2mm] 1 - \dfrac{\mu}{L}, & \text{otherwise}. \end{cases}\tag{27}$$

$$\rho(\nabla F_h^{\mathrm{EG}}(w^{\star}))^2 \leqslant \begin{cases} 1 - \dfrac{1}{4}\left(\dfrac{\mu}{\sqrt{c^2 + ((L-\mu)/2)^2}} + \dfrac{\mu^2}{16\big(c^2 + ((L-\mu)/2)^2\big)}\right), & \text{if } c \geqslant \sqrt{\tfrac{3L^2 + 2L\mu - \mu^2}{4}},\\[2mm] 1 - \dfrac{1}{4}\left(\dfrac{\mu}{L} + \dfrac{\mu^2}{16 L^2}\right), & \text{otherwise}. \end{cases}$$

- Case 3: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}_3^{\star} = [c + ai, c + bi] \cup [c - ai, c - bi] \subset \mathbb{C}_+$:

$$\rho(\nabla F_h^{\mathrm{GD}}(w^{\star}))^2 \leqslant 1 - \frac{c^2}{c^2 + b^2}, \quad\text{and}\quad \rho(\nabla F_h^{\mathrm{EG}}(w^{\star}))^2 \leqslant 1 - \frac{1}{4}\left(\frac{c}{\sqrt{c^2 + b^2}} + \frac{c^2 + a^2}{16(c^2 + b^2)}\right).\tag{28}$$
|
In Case 1, we see from (26) that both GD and EG have convergence rates $1 - O(\mu_1/L_2) = 1 - O(\tau)$. MEG, on the other hand, has an accelerated convergence rate of $1 - O(\sqrt{\tau})$, as well as an additional constant improvement by a factor of $\sqrt{1-R^2}$, as we showed in (18). Moving on to Case 2, we showed in (19) that MEG enjoys an accelerated convergence rate of $1 - O(\sqrt{\mu/L})$ as long as $c = O(L)$. However, both GD and EG in (27) have non-accelerated convergence under the same condition. Lastly, for Case 3, we showed in (20) that MEG achieves an asymptotic rate that matches the known lower bound for bilinear games, $\sqrt{\tfrac{b-a}{b+a}}$, when $c = 0$; further, the rate of MEG improves if $c > 0$. On the contrary, GD and EG suffer from slower rates, as shown in (28).
|
|
|
Comparison with GDM. We now compare the convergence rate of MEG with that of GDM, which iterates as in (5). In Azizian et al. (2020b), it was shown that GD is the optimal method for games where the Jacobian eigenvalues are within a *disc* in the complex plane. This suggests that acceleration is not possible for this type of problem.9 On the other hand, it is well known that GDM achieves an accelerated convergence rate for strongly convex (quadratic) minimization, where the eigenvalues of the Hessian lie on a (strictly positive) real line segment (Polyak, 1987). Hence, Azizian et al. (2020b) study the intermediate case, where the Jacobian eigenvalues are within an ellipse, which can be thought of as the real segment $[\mu, L]$ perturbed by $\epsilon$ in an elliptic way. That is, they consider the spectral shape:10
|
|
|
$$K_{\epsilon}=\left\{z\in\mathbb{C}:\left(\frac{\Re z-(\mu+L)/2}{(L-\mu)/2}\right)^{2}+\left(\frac{\Im z}{\epsilon}\right)^{2}\leq1\right\}.$$
|
Similarly to GD and EG above, in Azizian et al. (2020b), GDM is interpreted as a fixed-point iteration:11

$$w_{t+1} = w_{t} - hv(w_{t}) + m(w_{t} - w_{t-1}) = F^{\mathrm{GDM}}(w_{t}, w_{t-1}).\tag{29}$$
|
To study the convergence rate of GDM, we use the following theorem from Azizian et al. (2020b): |
|
|
|
9Yet, one can consider the case where, e.g., a cross-shape is contained in a disc. Then, by knowing more fine-grained structure of the Jacobian spectrum, MEG can have faster convergence in (19). |
|
|
|
10A visual illustration of this ellipse can be found in Azizian et al. (2020b, Figure 2). |
|
|
|
11As GDM updates wt+1 using both wt and wt−1, Azizian et al. (2020b) uses an augmented fixed point operator; see Lemma 2 in that work for details. |
|
|
|
|
|
|
Theorem 8 (Azizian et al. (2020b)). *Define $\epsilon(\mu, L)$ as $\epsilon(\mu, L)/L = (\mu/L)^{\theta} = \tau^{\theta}$ with $\theta > 0$, and let $a \wedge b = \min(a, b)$. If $\mathrm{Sp}(\nabla F^{\mathrm{GDM}}(w^{\star}, w^{\star})) \subset K_{\epsilon}$, then as $\tau \to 0$, it satisfies:*
|
|
|
$$\rho(\nabla F^{GDM}(w^{*},w^{*}))\leqslant\begin{cases}1-2\sqrt{\tau}+O\left(\tau^{\theta\wedge1}\right),&\text{if}\ \ \theta>\frac{1}{2}\\ 1-2(\sqrt{2}-1)\sqrt{\tau}+O\left(\tau\right),&\text{if}\ \ \theta=\frac{1}{2}\\ 1-\tau^{1-\theta}+O\left(\tau^{1\wedge(2-3\theta)}\right),&\text{if}\ \ \theta<\frac{1}{2},\end{cases}\tag{30}$$ |
|
|
|
*where the hyperparameters $h$ and $m$ are functions of $\mu$, $L$, and $\epsilon$ only.*
|
For Case 1, GDM converges at the rate $1 - 2\sqrt{\tau} + O(\tau)$ (i.e., with $\theta = 1$ above), which is always slower than the rate of MEG in (18) by the factor $\sqrt{1-R^2}$. For Case 2, we see from Theorem 8 that GDM achieves an accelerated rate, i.e., $1 - O(\sqrt{\tau})$, only up to $\theta = \frac{1}{2}$. In other words, the biggest elliptic perturbation $\epsilon$ for which GDM permits the accelerated rate is $\epsilon = \sqrt{\mu L}$.12 We interpret Theorem 8 for games with the cross-shaped Jacobian spectrum in (14) and the shifted imaginary spectrum in (15) in the following corollary.
|
|
|
Corollary 2. *Consider the gradient method with momentum, interpreted as a fixed-point iteration as in (29). For games with the cross-shaped Jacobian spectrum in (14) with $c = \frac{L-\mu}{2}$, GDM cannot achieve an accelerated rate when $\frac{L-\mu}{2} = c > \epsilon = \sqrt{\mu L}$. Since $L > \mu$, this further implies $\frac{L}{\mu} > \sqrt{5}$. That is, when the condition number exceeds $\sqrt{5} \approx 2.236$, GDM cannot achieve an accelerated convergence rate. On the contrary, as we showed in (19), MEG can converge at an accelerated rate in the ill-conditioned regime.*

The convergence rate of GDM for Case 3 cannot be determined from Theorem 8, as this theorem assumes the spectrum model of the real line segment $[\mu, L]$ with an $\epsilon$ perturbation (along the imaginary axis), while $\mathcal{S}_3^{\star}$ in (15) has a fixed real component. Instead, we utilize the link function of GDM in (7) to show that it is unlikely for GDM to stay in the robust region $\xi^{-1}([-1,1])$.
|
|
|
Proposition 2. *Consider solving (1) for games where the Jacobian has a shifted imaginary spectrum in (15), using the gradient method with momentum in (5). For any complex number $z = p + qi \in \mathbb{C}_+$, if $\frac{2(1+m)}{h} < p$, then GDM cannot stay in the robust region, i.e., $|\xi(z)| > 1$.*
|
|
|
Note that the condition $\frac{2(1+m)}{h} < p$ is hard to avoid even for small $p$, considering $h$ is usually a small value.
|
|
|
## 6 Local Convergence For Non-Affine Vector Fields |
|
|
|
The optimal hyperparameters of MEG for each spectrum model and the associated convergence rate we obtained in Section 4 are attainable when the vector field is affine. A natural question is, then, what can we say about the convergence rate of MEG when the vector field is not affine? To that end, we provide the local convergence of MEG by restarting the momentum, as detailed below. Let us consider the operator G representing the MEG in (4) such that: |
|
|
|
$$[w_{t+1}, w_{t}] = G([w_{t}, w_{t-1}]) \quad\text{and}\quad G([w^{\star},w^{\star}])=[w^{\star},w^{\star}].$$
|
|
|
In addition, we assume that $w_1 = w_0 - \frac{h}{1+m} v(w_0 - \gamma v(w_0))$, in order to induce the residual polynomials from Theorem 1; see also its proof and Algorithm 1 in the appendix. Now let us consider the following algorithm:
|
|
|
$$[w_{tk+i+1},w_{tk+i}]=G\big{(}[w_{tk+i},w_{tk+i-1}]\big{)}\quad\text{for}\quad1\leqslant i\leqslant k-1,\quad\text{and then}\tag{31}$$ $$w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}v\big{(}w_{(t+1)k}-\gamma v\big{(}w_{(t+1)k}\big{)}\big{)}.$$ |
|
|
|
In other words, we repeat MEG for $k$ steps, and then restart the momentum at $[w_{(t+1)k+1}, w_{(t+1)k}]$. The local convergence of the restarted MEG is established in the next theorem.
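A minimal sketch of the restarted scheme (31) is given below; the vector field, the hyperparameters, and the restart length $k$ are placeholders of ours.

```python
# Sketch (ours) of restarted MEG (31): k-step blocks, momentum restarted at each block boundary.
import numpy as np

def restarted_meg(v, w0, h, gamma, m, k, num_restarts):
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(num_restarts):
        # Restart step: damped step size h/(1+m) and no momentum term, matching the
        # initialization that induces the residual polynomials of Theorem 1.
        w_prev = w
        w = w - (h / (1 + m)) * v(w - gamma * v(w))
        for _ in range(k - 1):                       # k - 1 plain MEG steps (4)
            w_next = w - h * v(w - gamma * v(w)) + m * (w - w_prev)
            w_prev, w = w, w_next
    return w
```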
|
|
|
12Observe that $(\mu/L)^{1/2} = \epsilon(\mu, L)/L \implies \epsilon(\mu, L) = \sqrt{\mu L}$.

Theorem 9 (Local convergence). *Let $G : \mathbb{R}^{2d} \to \mathbb{R}^{2d}$ be the continuously differentiable operator representing the momentum extragradient method (MEG) in (4). Let $w^{\star}$ be a stationary point. Let $w_t$ denote the output of MEG, which enjoys a convergence rate of the form $\|w_t - w^{\star}\| = C(1-\varphi)^{t}(t+1)\|w_0 - w^{\star}\|$ for some $0 < \varphi < 1$ when the vector field is affine. Further, consider restarting the momentum of MEG after running $k$ steps, as in (31). Then, for each $\varepsilon > 0$, there exist $k > 0$ and $\delta > 0$ such that, for all initializations $w_0$ satisfying $\|w_0 - w^{\star}\| \leqslant \delta$, the restarted MEG satisfies:*
|
|
|
$$\|w_{t}-w^{\star}\|=O((1-\varphi+\varepsilon)^{t})\|w_{0}-w^{\star}\|.$$ |
|
|
|
## 7 Experiments |
|
|
|
![12_Image_0.Png](12_Image_0.Png) |
|
|
|
Figure 3: *Illustration of the game Jacobian spectra and the performance of the different algorithms considered.* The Jacobian spectrum in the first plot matches $\mathcal{S}_2^{\star}$ in (14) precisely, while that in the third plot inexactly follows $\mathcal{S}_2^{\star}$. The second (fourth) plot shows the performance of different algorithms for solving quadratic games in (16) with the Jacobian spectrum following the first (third) plot.
|
|
|
In this section, we perform numerical experiments to optimize a game whose Jacobian has a cross-shaped spectrum as in (14). We focus on this spectrum as it may be the most challenging case, involving both real and complex eigenvalues (c.f., Theorem 3). To test robustness, we consider two cases: one where the Jacobian spectrum exactly follows $\mathcal{S}_2^{\star}$ in (14), and an inexact case. We illustrate them in Figure 3.
|
|
|
We focus on two-player quadratic games, where player 1 controls $x \in \mathbb{R}^{d_1}$ and player 2 controls $y \in \mathbb{R}^{d_2}$, with loss functions as in (16). In our setting, the corresponding vector field in (17) satisfies $M_{12} = -M_{21}^{\top}$, but $S_1$ and $S_2$ can be nonzero symmetric matrices. Further, the Jacobian $\nabla v = A$ has the cross-shaped eigenvalue structure in (14), with $c = \frac{L-\mu}{2}$ (c.f., Proposition 1, Case 2). For the problem constants, we use $\mu = 1$ and $L = 200$. The optimum $[x^{\star}\ y^{\star}]^{\top} = w^{\star} \in \mathbb{R}^{200}$ is generated using the standard normal distribution. For simplicity, we assume $b = [b_1\ b_2]^{\top} = [0\ 0]^{\top}$. For the algorithms, we compare GD in (22), GDM in (5), EG in (23), and MEG in (4). All algorithms are initialized with 0. We plot the experimental results in Figure 3.
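A condensed sketch of this setup is given below; it is our own simplification (small dimensions, a block-diagonal Jacobian with the same cross-shaped spectrum instead of the rotated construction, and only GD and MEG shown), with MEG using the Theorem 5 hyperparameters and GD the Theorem 7 step size.

```python
# Condensed sketch (ours) of the Section 7 experiment: cross-shaped quadratic game, GD vs. MEG.
import numpy as np
rng = np.random.default_rng(0)

mu, L, n_real, n_cplx = 1.0, 200.0, 50, 50
a, c = (mu + L) / 2, (L - mu) / 2                     # cross center and half-height

A = np.zeros((n_real + 2 * n_cplx, n_real + 2 * n_cplx))
A[:n_real, :n_real] = np.diag(np.linspace(mu, L, n_real))        # real eigenvalues in [mu, L]
for i, y in enumerate(np.linspace(c / n_cplx, c, n_cplx)):        # conjugate pairs a +/- y*i
    j = n_real + 2 * i
    A[j:j + 2, j:j + 2] = [[a, y], [-y, a]]

w_star = rng.standard_normal(A.shape[0])
v = lambda w: A @ (w - w_star)                                    # affine vector field, v(w_star) = 0

# Theorem 5 hyperparameters for MEG; Theorem 7 step size for GD.
s, r = np.sqrt(4 * c ** 2 + (mu + L) ** 2), np.sqrt(4 * mu * L)
h_meg, g_meg, m_meg = 16 * (mu + L) / (s + r) ** 2, 1 / (mu + L), ((s - r) / (s + r)) ** 2
h_gd = min(1 / L, a / (a ** 2 + c ** 2))                          # min over the spectrum of Re(1/lambda)

w_gd = np.zeros_like(w_star)
w_meg, w_prev = np.zeros_like(w_star), np.zeros_like(w_star)
for _ in range(2000):
    w_gd = w_gd - h_gd * v(w_gd)
    w_next = w_meg - h_meg * v(w_meg - g_meg * v(w_meg)) + m_meg * (w_meg - w_prev)
    w_prev, w_meg = w_meg, w_next

print("GD  distance to w*:", np.linalg.norm(w_gd - w_star))
print("MEG distance to w*:", np.linalg.norm(w_meg - w_star))
```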
|
|
|
For MEG (optimal), we set the hyperparameters using Theorem 5. For GD (theory) and EG (theory), we set the hyperparameters using Theorem 7, both for the exact and the inexact settings. For GDM (grid search), we perform a grid search over $h^{\mathrm{GDM}}$ and $m^{\mathrm{GDM}}$, and choose the best-performing values, as Theorem 8 does not give a specific form for the hyperparameter setup. Specifically, we consider $0.005 \leqslant h^{\mathrm{GDM}} \leqslant 0.015$ with $10^{-3}$ increments, and $0.01 \leqslant m^{\mathrm{GDM}} \leqslant 0.99$ with $10^{-2}$ increments. In addition, as Theorem 7 might be conservative, we conduct grid searches for GD and EG as well. For GD (grid search), we use the same setup as for $h^{\mathrm{GDM}}$. For EG (grid search), we use $0.001 \leqslant h^{\mathrm{EG}} \leqslant 0.05$ with $10^{-4}$ increments.
|
|
|
There are several remarks to make. First, although the third plot in Figure 3 does not exactly follow the spectrum model in (14), MEG still works well with the optimal hyperparameters from Theorem 5. As expected, MEG (optimal) required more iterations in the inexact case compared to the exact case. Second, compared to the other algorithms, MEG (optimal) indeed exhibits a significantly faster rate of convergence, even against methods tuned by grid search, supporting our theoretical findings in Section 4. Third, while EG (theory) is slower than GD (theory), which confirms Corollary 1, EG (grid search) can be tuned to converge faster. Lastly, even though the best performance of GDM (grid search) is obtained through grid search, one can see that GD (grid search) obtains a slightly faster convergence rate than GDM (grid search), confirming Corollary 2.
|
|
|
## 8 Conclusion |
|
|
|
In the study of differentiable games, finding stationary points efficiently is crucial. This work analyzes the momentum extragradient method, revealing three distinct convergence modes dependent on the Jacobian eigenvalue distribution. Through a polynomial-based analysis, we derive optimal hyperparameters for each mode, achieving accelerated asymptotic convergence rates. We compared the obtained rates with other first-order methods and showed that these methods do not achieve the accelerated convergence rate.
|
|
|
Notably, our initial analysis for affine vector fields extends to guarantee local convergence rates on twice-differentiable vector fields. Numerical experiments on quadratic games validate our theoretical findings.
|
|
|
## Acknowledgments |
|
|
|
The authors would like to thank Fangshuo Liao, Baptiste Goujaud, Damien Scieur, Miri Son, and Giorgio Young for their useful discussions and feedback. This work is supported by NSF FET: Small No. 1907936, NSF MLWiNS CNS No. 2003137 (in collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF No. 2008555, Rice InterDisciplinary Excellence Award (IDEA), and the Canada CIFAR AI Chairs program. |
|
|
|
## References |
|
|
|
Yossi Arjevani and Ohad Shamir. On the iteration complexity of oblivious first-order optimization algorithms. |
|
|
|
In *International Conference on Machine Learning*. PMLR, 2016. |
|
|
|
Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In *International Conference on* Artificial Intelligence and Statistics. PMLR, 2020a. |
|
|
|
Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020b. |
|
|
|
David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In *International Conference on Machine Learning*. PMLR, |
|
2018. |
|
|
|
Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, and Simon Lacoste-Julien. A closer look at the optimization landscapes of generative adversarial networks. In International Conference on Learning Representations, 2020. |
|
|
|
Raphaël Berthier, Francis Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. *SIAM Journal on Mathematics of Data Science*, 2020. |
|
|
|
Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, Valentin Samokhin, Sebastian U Stich, and Alexander Gasnikov. Decentralized local stochastic extra-gradient for variational inequalities. *arXiv* preprint arXiv:2106.08315, 2021. |
|
|
|
Theodore S Chihara. *An introduction to orthogonal polynomials*. Courier Corporation, 2011. |
|
|
|
Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. *Advances in neural information processing systems*, 31, 2018. |
|
|
|
Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. *SIAM Journal on Computing*, 2009. |
|
|
|
Carles Domingo-Enrich, Fabian Pedregosa, and Damien Scieur. Average-case acceleration for bilinear games and normal matrices. In *International Conference on Learning Representations*, 2021. |
|
|
|
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2021. |
|
|
|
Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. *arXiv preprint arXiv:1802.10551*, 2018. |
|
|
|
Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019. |
|
|
|
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing* Systems, volume 27, 2014. |
|
|
|
Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: O (1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022. |
|
|
|
Eduard Gorbunov, Adrien Taylor, Samuel Horváth, and Gauthier Gidel. Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity. In International Conference on Machine Learning. PMLR, 2023. |
|
|
|
Baptiste Goujaud and Fabian Pedregosa. Cyclical step-sizes, 2022. URL http://fa.bianp.net/blog/2022/cyclical/.
|
|
|
Baptiste Goujaud, Damien Scieur, Aymeric Dieuleveut, Adrien B Taylor, and Fabian Pedregosa. Super-acceleration with cyclical step-sizes. In *International Conference on Artificial Intelligence and Statistics*. PMLR, 2022.
|
|
|
Magnus R Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. *Journal of Research of the National Bureau of Standards*, 49(6):409, 1952.
|
|
|
Andrew J Hetzel, Jay S Liew, and Kent E Morrison. The probability that a matrix of integers is diagonalizable. *The American Mathematical Monthly*, 114(6):491–499, 2007.



Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. *Advances in Neural Information Processing Systems*, 32, 2019.
|
|
|
Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. *Advances in Neural* Information Processing Systems, 33, 2020. |
|
|
|
Junhyung Lyle Kim, Gauthier Gidel, Anastasios Kyrillidis, and Fabian Pedregosa. Momentum extragradient is optimal for games with cross-shaped spectrum. In *OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop)*, 2022.
|
|
|
Galina M Korpelevich. The extragradient method for finding saddle points and other problems. *Matecon*, 1976.
|
|
|
Peter Lancaster and Hanafi K Farahat. Norms on direct sums and tensor products. *Mathematics of Computation*, 1972.
|
|
|
Alistair Letcher, David Balduzzi, Sébastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. *The Journal of Machine Learning Research*, 2019. |
|
|
|
Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael I Jordan. On the convergence of stochastic extragradient for bilinear games with restarted iteration averaging. *arXiv preprint arXiv:2107.00464*, 2021.
|
|
|
Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 907–915. PMLR, 2019.
|
|
|
Mingrui Liu, Wei Zhang, Youssef Mroueh, Xiaodong Cui, Jarret Ross, Tianbao Yang, and Payel Das. A decentralized parallel algorithm for training generative adversarial nets. *Advances in Neural Information Processing Systems*, 2020.
|
|
|
Jonathan P. Lorraine, David Acuna, Paul Vicol, and David Duvenaud. Complex momentum for optimization in games. In *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, 2022. |
|
|
|
Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In *International conference on machine learning*. PMLR, 2018. |
|
|
|
Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020. |
|
|
|
Renato DC Monteiro and Benar Fux Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. *SIAM Journal on Optimization*, 20(6):2755–2787, 2010. |
|
|
|
Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, et al. Nash learning from human feedback. *arXiv preprint arXiv:2312.00886*, 2023. |
|
|
|
Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. *SIAM Journal on Optimization*, 2004.
|
|
|
Samet Oymak. Provable super-convergence with a large cyclical learning rate. *IEEE Signal Processing* Letters, 28, 2021. |
|
|
|
Balamurugan Palaniappan and Francis Bach. Stochastic variance reduction methods for saddle-point problems. *Advances in Neural Information Processing Systems*, 2016. |
|
|
|
Vardan Papyan. Traces of class/cross-class structure pervade deep learning spectra. The Journal of Machine Learning Research, 21, 2020. |
|
|
|
Fabian Pedregosa. Momentum: when Chebyshev meets Chebyshev, 2020. URL http://fa.bianp.net/blog/2020/momentum/.
|
|
|
Fabian Pedregosa and Damien Scieur. Acceleration through spectral density estimation. In Proceedings of the 37th International Conference on Machine Learning. PMLR, November 2020. |
|
|
|
David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945, 2016. |
|
|
|
Boris T Polyak. *Introduction to Optimization*. Optimization Software, Inc., Publications Division, New York, 1987.
|
|
|
Ernest K Ryu, Kun Yuan, and Wotao Yin. ODE analysis of stochastic gradient methods with optimism and anchoring for minimax problems. *arXiv preprint arXiv:1905.10899*, 2019. |
|
|
|
Yoav Shoham and Kevin Leyton-Brown. *Multiagent systems: Algorithmic, game-theoretic, and logical foundations*. Cambridge University Press, 2008. |
|
|
|
Mikhail V Solodov and Benar F Svaiter. A hybrid approximate extragradient–proximal point algorithm using the enlargement of a maximal monotone operator. *Set-Valued Analysis*, 7(4):323–345, 1999. |
|
|
|
Paul Tseng. On linear convergence of iterative methods for the variational inequality problem. *Journal of Computational and Applied Mathematics*, 1995.
|
|
|
Jian Zhang and Ioannis Mitliagkas. Yellowfin and the art of momentum tuning. Proceedings of Machine Learning and Systems, 1, 2019. |
|
|
|
## A Missing Proofs In Section 3

## A.1 Proof Of Lemma 1
|
|
|
Proof of Lemma 1 can be found for example in Azizian et al. (2020b, Section B). |
|
|
|
To obtain the residual polynomials of MEG, w1 has to be set slightly differently from the rest of the iterates, as we write in the pseudocode below: |
|
Algorithm 1: Momentum extragradient (MEG) method
Input: Initialization $w_0$; hyperparameters $h, \gamma, m$.
Set: $w_1 = w_0 - \frac{h}{1+m} v(w_0 - \gamma v(w_0))$.
for $t = 1, 2, \ldots$ do
    $w_{t+1} = w_t - h v(w_t - \gamma v(w_t)) + m(w_t - w_{t-1})$
end
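
For concreteness, the following is a minimal NumPy sketch of Algorithm 1 applied to an affine vector field $v(w) = A(w - w^{\star})$; the matrix, the solution, and the hyperparameter values are illustrative placeholders rather than the tuned choices derived later in the paper.

```python
import numpy as np

def meg(v, w0, h, gamma, m, n_iters=100):
    """Momentum extragradient (Algorithm 1); the first iterate uses the scaled step."""
    w_prev = w0
    w = w0 - h / (1 + m) * v(w0 - gamma * v(w0))   # w_1
    for _ in range(1, n_iters):
        w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
    return w

# Illustrative affine vector field v(w) = A (w - w_star); A has eigenvalues 1 and 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
w_star = np.array([0.5, -1.0])
v = lambda w: A @ (w - w_star)

w_T = meg(v, w0=np.zeros(2), h=0.4, gamma=0.2, m=0.1)
print(np.linalg.norm(w_T - w_star))  # should be close to 0 for these step sizes
```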
|
|
|
## Derivation Of The First Part |
|
|
|
Proof. We want to find the residual polynomial $\tilde{P}_t(A)$ of the extragradient method with momentum (MEG) in (4).
|
|
|
That is, we want to find |
|
|
|
$$w_{t}-w^{\star}=\tilde{P}_{t}(A)(w_{0}-w^{\star}), \tag{32}$$

where $\{w_t\}_{t\geqslant 0}$ are the iterates generated by MEG; such a representation is possible by Lemma 1, as MEG is a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b). We now prove this by induction. To do so, we will use the following properties. First, since we are looking for a stationary point, it holds that $v(w^{\star}) = 0$. Further, as $v$ is linear by the assumption of Lemma 1, it holds that $v(w) = A(w - w^{\star})$.
|
|
|
Base case. For $t = 0$, $\tilde{P}_0(A)$ is a degree-zero polynomial, and hence equals $I_d$, which denotes the identity matrix. Thus, $w_0 - w^{\star} = I_d(w_0 - w^{\star})$ holds true.
|
|
|
For completeness, we also prove the case $t = 1$. In that case, observe that MEG proceeds as $w_1 = w_0 - \frac{h}{1+m} v(w_0 - \gamma v(w_0))$. Subtracting $w^{\star}$ on both sides, we have:
|
|
|
$$\begin{aligned}
w_1 - w^{\star} &= w_0 - w^{\star} - \tfrac{h}{1+m} v(w_0 - \gamma v(w_0)) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m} v(w_0 - \gamma A(w_0 - w^{\star})) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m} A(w_0 - \gamma A(w_0 - w^{\star}) - w^{\star}) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m} A(w_0 - w^{\star}) + \tfrac{h\gamma}{1+m} A^2(w_0 - w^{\star}) \\
&= \left(I_d - \tfrac{h}{1+m} A + \tfrac{h\gamma}{1+m} A^2\right)(w_0 - w^{\star}) \\
&= \left(I_d - \tfrac{h}{1+m} A(I_d - \gamma A)\right)(w_0 - w^{\star}) = \tilde{P}_1(A)(w_0 - w^{\star}).
\end{aligned}$$
|
Induction step. As the induction hypothesis, assume $\tilde{P}_t$ satisfies (32). We want to prove this holds for $t + 1$. We have:
|
|
|
$$w_{t+1}=w_{t}-hv(w_{t}-\gamma v(w_{t}))+m(w_{t}-w_{t-1})$$ $$=w_{t}-hv(w_{t}-\gamma A(w_{t}-w^{\star}))+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(w_{t}-\gamma A(w_{t}-w^{\star})-w^{\star})+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(w_{t}-w^{\star})+h\gamma A^{2}(w_{t}-w^{\star})+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(I_{d}-\gamma A)(w_{t}-w^{\star})+m(w_{t}-w_{t-1}).$$ |
|
|
|
Subtracting $w^{\star}$ on both sides, we have:
|
|
|
$$\begin{aligned}
w_{t+1} - w^{\star} &= w_t - w^{\star} - hA(I_d - \gamma A)(w_t - w^{\star}) + m(w_t - w_{t-1}) \\
&= \left(I_d - hA(I_d - \gamma A)\right)(w_t - w^{\star}) + m\left(w_t - w^{\star} - (w_{t-1} - w^{\star})\right) \\
&\overset{(32)}{=} \left(I_d - hA(I_d - \gamma A)\right)\tilde{P}_t(A)(w_0 - w^{\star}) + m\left(\tilde{P}_t(A)(w_0 - w^{\star}) - \tilde{P}_{t-1}(A)(w_0 - w^{\star})\right) \\
&= \left(I_d + mI_d - hA(I_d - \gamma A)\right)\tilde{P}_t(A)(w_0 - w^{\star}) - m\tilde{P}_{t-1}(A)(w_0 - w^{\star}) \\
&= \tilde{P}_{t+1}(A)(w_0 - w^{\star}),
\end{aligned}$$
|
where in the third equality, we used the induction hypothesis in (32). |
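
As a sanity check of the representation (32), the short sketch below (with illustrative matrix, solution, and hyperparameters) builds $\tilde{P}_t(A)$ from the recursion just derived and compares it against the actual MEG iterates.

```python
import numpy as np

# Illustrative affine field v(w) = A (w - w_star) and hyperparameters.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
w_star = np.array([0.5, -1.0])
v = lambda w: A @ (w - w_star)
h, gamma, m = 0.4, 0.2, 0.1
I = np.eye(2)

# Residual polynomials P_tilde_t(A), built from the recursion derived above.
M = h * A @ (I - gamma * A)                 # h A (I - gamma A)
P = [I, I - M / (1 + m)]
for t in range(1, 20):
    P.append(((1 + m) * I - M) @ P[t] - m * P[t - 1])

# MEG iterates from Algorithm 1.
w0 = np.zeros(2)
ws = [w0, w0 - h / (1 + m) * v(w0 - gamma * v(w0))]
for t in range(1, 20):
    ws.append(ws[t] - h * v(ws[t] - gamma * v(ws[t])) + m * (ws[t] - ws[t - 1]))

# w_t - w_star should equal P_tilde_t(A)(w_0 - w_star) for every t.
print(max(np.linalg.norm(ws[t] - w_star - P[t] @ (w0 - w_star)) for t in range(21)))
```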
|
|
|
## Derivation Of The Second Part In (6) |
|
|
|
Proof. We show $P_t = \tilde{P}_t$ for all $t$ via induction.
|
|
|
Base case. For t = 0, by the definition of Chebyshev polynomials of the first and the second kinds, we have T0(λ) = U0(λ) = 1. Thus, |
|
|
|
$$P_{0}(\lambda)=m^{0}\left(\frac{2m}{1+m}T_{0}(\sigma(\lambda))+\frac{1-m}{1+m}U_{0}(\sigma(\lambda))\right)$$ $$=\frac{2m}{1+m}+\frac{1-m}{1+m}=1=\tilde{P}_{0}(\lambda).$$ |
|
|
|
Again, for completeness, we also prove the case $t = 1$. In that case, by the definition of Chebyshev polynomials of the first and the second kinds, we have $T_1(\lambda) = \lambda$ and $U_1(\lambda) = 2\lambda$. Therefore,
|
|
|
$$\begin{aligned}
P_{1}(\lambda)&=m^{1/2}\left(\frac{2m}{1+m}T_{1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{1}(\sigma(\lambda))\right)\\
&=m^{1/2}\left(\frac{2m}{1+m}\,\sigma(\lambda)+\frac{1-m}{1+m}\cdot2\cdot\sigma(\lambda)\right)\\
&=m^{1/2}\left(\frac{2\sigma(\lambda)}{1+m}\right)\\
&=1-\frac{h\lambda(1-\gamma\lambda)}{1+m}=\tilde{P}_{1}(\lambda).
\end{aligned}$$
|
|
|
Induction step. As the induction hypothesis, assume that $P_s = \tilde{P}_s$ holds for all $s \leqslant t$. In this step, we show that the same holds for $t + 1$.
|
|
|
$$\begin{aligned}
P_{t+1}(\lambda) &= m^{(t+1)/2}\left(\frac{2m}{1+m}T_{t+1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t+1}(\sigma(\lambda))\right)\\
&= m^{(t+1)/2}\left(\frac{2m}{1+m}\big(2\sigma(\lambda)T_{t}(\sigma(\lambda))-T_{t-1}(\sigma(\lambda))\big)+\frac{1-m}{1+m}\big(2\sigma(\lambda)U_{t}(\sigma(\lambda))-U_{t-1}(\sigma(\lambda))\big)\right)\\
&= 2\sigma(\lambda)\cdot m^{1/2}\cdot\underbrace{m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right)}_{P_{t}(\lambda)}-m\cdot\underbrace{m^{(t-1)/2}\left(\frac{2m}{1+m}T_{t-1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t-1}(\sigma(\lambda))\right)}_{P_{t-1}(\lambda)}\\
&= 2\sigma(\lambda)\cdot\sqrt{m}\cdot\tilde{P}_{t}(\lambda)-m\cdot\tilde{P}_{t-1}(\lambda)\\
&= (1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda),
\end{aligned}$$

where in the second-to-last equality we use the induction hypothesis; the final expression is exactly the recursion satisfied by $\tilde{P}_{t+1}(\lambda)$. $\square$
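
The identity can also be checked numerically; the sketch below uses illustrative hyperparameters and computes the Chebyshev polynomials by their three-term recurrences, comparing the closed form $P_t$ against the recursion defining $\tilde{P}_t$.

```python
import numpy as np

h, gamma, m, lam = 0.4, 0.2, 0.1, 1.7      # illustrative values
q = h * lam * (1 - gamma * lam)            # h * lambda * (1 - gamma * lambda)
sigma = (1 + m - q) / (2 * np.sqrt(m))     # the link function sigma(lambda)

def cheb(kind, t, x):
    """Chebyshev polynomials T_t (kind=1) and U_t (kind=2) via three-term recurrences."""
    prev, curr = 1.0, (x if kind == 1 else 2 * x)
    if t == 0:
        return prev
    for _ in range(t - 1):
        prev, curr = curr, 2 * x * curr - prev
    return curr

def P_closed(t):
    return m ** (t / 2) * (2 * m / (1 + m) * cheb(1, t, sigma)
                           + (1 - m) / (1 + m) * cheb(2, t, sigma))

# Residual polynomial from the MEG recursion (first part of the derivation).
P_rec = [1.0, 1 - q / (1 + m)]
for t in range(1, 10):
    P_rec.append((1 + m - q) * P_rec[t] - m * P_rec[t - 1])

print(max(abs(P_closed(t) - P_rec[t]) for t in range(11)))   # ~1e-16
```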
|
|
|
## A.3 Proof Of Lemma 2 |
|
|
|
Proof of Lemma 2 can be found in Goujaud & Pedregosa (2022). |
|
|
|
Proof. We first recall that using (3), we can upper bound the worst-case convergence rate as: |
|
|
|
$$\sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}(\lambda)|=\sup_{\lambda\in\mathcal{S}^{\star}}\left|m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right)\right|\leqslant m^{t/2}\left(\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|U_{t}(\sigma(\lambda))|\right). \tag{33}$$
|
Now, denote $\bar{\sigma} := \sup_{\lambda\in\mathcal{S}^{\star}}|\sigma(\lambda; h, \gamma, m)|$. For the first case, if $\bar{\sigma} \leqslant 1$, both $T_t(x)$ and $U_t(x)$ are bounded as in Lemma 2. Thus, we have
|
|
|
$$(33)\overset{(8)}{\leqslant}m^{t/2}\left(\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\right)\leqslant m^{t/2}(t+1)\implies\limsup_{t\to\infty}\left(m^{t/2}(t+1)\right)^{\frac{1}{2t}}=\sqrt[4]{m}.$$
|
|
|
For the second case, we use the following expressions of Chebyshev polynomials: |
|
|
|
$$T_{n}(x)={\frac{\left(x-{\sqrt{x^{2}-1}}\right)^{n}+\left(x+{\sqrt{x^{2}-1}}\right)^{n}}{2}},\quad{\mathrm{and}}$$ $$U_{n}(x)={\frac{\left(x+{\sqrt{x^{2}-1}}\right)^{n+1}-\left(x-{\sqrt{x^{2}-1}}\right)^{n+1}}{2{\sqrt{x^{2}-1}}}}.$$ |
|
|
|
Therefore, in the second case where $\bar{\sigma} > 1$, both $T_n(x)$ and $U_n(x)$ grow at rate $\left(x+\sqrt{x^{2}-1}\right)^{n}$.
|
|
|
Hence, we have: |
|
|
|
$$(33)\leqslant O\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)\implies\limsup_{t\to\infty}\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)^{\frac{1}{2t}}=\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}.$$
|
Finally, in order for MEG to converge in the second case, we need: |
|
|
|
$$\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}<1$$ |
|
|
|
which is equivalent to |
|
|
|
$$\bar{\sigma}<\frac{\sqrt{m}(m+1)}{2m}=\frac{m+1}{2\sqrt{m}}.$$
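
For completeness, assuming $0 < m < 1$ and $\bar{\sigma} > 1$ as in this second case, the equivalence can be seen as follows:

$$\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}<1 \iff \bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}<\frac{1}{\sqrt{m}} \iff \bar{\sigma}<\frac{m+1}{2\sqrt{m}},$$

since $x\mapsto x+\sqrt{x^{2}-1}$ is increasing on $[1,\infty)$ and takes the value $\frac{1}{\sqrt{m}}$ exactly at $x=\frac{m+1}{2\sqrt{m}}$, because $\sqrt{\left(\frac{m+1}{2\sqrt{m}}\right)^{2}-1}=\frac{1-m}{2\sqrt{m}}$.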
|
|
|
## A.5 Derivation Of Extreme Points Of Robust Region In (11) |
|
|
|
We first write a general formula for inverting a quadratic function. For $f(x) = ax^{2} + bx + c$, its inverse is given by:
|
|
|
$$f(x)=a x^{2}+b x+c:=y$$ $$f^{-1}(y)={\frac{-b\pm{\sqrt{b^{2}-4a(c-y)}}}{2a}},$$ |
|
|
|
with some abuse of notation (i.e., $f^{-1}$ above is not a function).
|
|
|
Applying the above to the link function of MEG in (6), we get |
|
|
|
$$\sigma^{-1}(y)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{1+m}{h\gamma}+\frac{2\sqrt{m}}{h\gamma}\cdot y}.$$ |
|
|
|
With this formula, we can plug in 1 and −1 to get: |
|
|
|
$$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\quad\mathrm{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.$$ |
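
As an illustration, the small helper below evaluates these two expressions for a given $(h, \gamma, m)$; the numeric values are placeholders, and for them one endpoint pair turns out real while the other is complex (the cross-shaped case).

```python
import numpy as np

def robust_region_endpoints(h, gamma, m):
    """sigma^{-1}(1) and sigma^{-1}(-1) for MEG, using the expressions above;
    complex outputs signal that the corresponding pair of endpoints is complex."""
    center = 1 / (2 * gamma)
    inner = center ** 2 - (1 - np.sqrt(m)) ** 2 / (h * gamma)   # under sigma^{-1}(1)
    outer = center ** 2 - (1 + np.sqrt(m)) ** 2 / (h * gamma)   # under sigma^{-1}(-1)
    s_in, s_out = np.sqrt(complex(inner)), np.sqrt(complex(outer))
    return (center - s_in, center + s_in), (center - s_out, center + s_out)

# Illustrative values: sigma^{-1}(1) is real while sigma^{-1}(-1) is complex here.
print(robust_region_endpoints(h=0.5, gamma=0.25, m=0.1))
```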
|
|
|
Proof. We analyze each case separately. |
|
|
|
Case 1: There are two square roots: $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}$. The second one is real if:

$$\frac{1}{4\gamma^{2}}\geqslant\frac{(1+\sqrt{m})^{2}}{h\gamma}\implies\frac{h\gamma}{4\gamma^{2}}=\frac{h}{4\gamma}\geqslant(1+\sqrt{m})^{2},$$

which implies the first is real as well, since $(1+\sqrt{m})^{2}\geqslant(1-\sqrt{m})^{2}$.
|
|
|
Case 3: There are two square roots: $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}$. The first one is complex if:

$$\frac{1}{4\gamma^{2}}<\frac{(1-\sqrt{m})^{2}}{h\gamma}\implies\frac{h\gamma}{4\gamma^{2}}=\frac{h}{4\gamma}<(1-\sqrt{m})^{2},$$

which implies the second is complex as well, since $(1+\sqrt{m})^{2}\geqslant(1-\sqrt{m})^{2}$.
|
|
|
Case 2: This case follows automatically from the above two cases. |
|
|
|
## A.7 Proof Of Proposition 1 |
|
|
|
Proof. Define $D_3 = \mathrm{diag}(a, \ldots, a)$ of dimensions $d \times d$. Let us prove that if there exist orthonormal matrices $U, V$ and matrices $D_1, D_2$ with non-zero coefficients only on the diagonal such that (with a slight abuse of notation)
|
|
|
$$S_{1}=U\mathrm{diag}(D_{3},D_{1})U^{\top},S_{2}=V D_{3}V^{\top},\quad\mathrm{and}\quad B=U D_{2}V^{\top},$$ |
|
|
|
then the spectrum of $A$ is cross-shaped. In that case, we have
|
|
|
$$A=\begin{bmatrix} U\,\mathrm{diag}(D_{3},D_{1})\,U^{\top} & UD_{2}V^{\top} \\ -VD_{2}^{\top}U^{\top} & VD_{3}V^{\top} \end{bmatrix}
=\begin{bmatrix} U & 0 \\ 0 & V \end{bmatrix}\begin{bmatrix} \mathrm{diag}(D_{3},D_{1}) & D_{2} \\ -D_{2}^{\top} & D_{3} \end{bmatrix}\begin{bmatrix} U & 0 \\ 0 & V \end{bmatrix}^{\top}.$$
|
|
|
Now, by considering the basis $W = \left((U_{1},0),(0,V_{1}),\ldots,(U_{d_{v}},0),(0,V_{d_{v}}),(U_{d_{v}+1},0),\ldots,(U_{d_{u}},0)\right)$, we have that $A$ can be block-diagonalized in that basis as
|
|
|
$$A=W\mathrm{diag}\left(\begin{bmatrix}a&[D_{2}]_{11}\\ -[D_{2}]_{11}&a\end{bmatrix},\ldots,\begin{bmatrix}a&[D_{2}]_{d_{v}d_{v}}\\ -[D_{2}]_{d_{v}d_{v}}&a\end{bmatrix},[D_{1}]_{1},\ldots,[D_{1}]_{d_{u}-d_{v}}\right)W^{\top}. \tag{34}$$

Now, notice that

$$\mathrm{Sp}\left(\begin{bmatrix}a&-b\\ b&a\end{bmatrix}\right)=\{a\pm bi\}, \tag{35}$$
|
|
|
since the associated characteristic polynomial of the above matrix is: |
|
|
|
$$(a-\lambda)^{2}+b^{2}=0\implies a-\lambda=\pm bi\implies\lambda=a\pm bi.$$
|
|
|
Hence, using (35) in the formulation of A in (34), we have that the spectrum of A is cross-shaped. |
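
A small numerical instance of this construction (with $U = V = I_d$ and placeholder diagonal values) is sketched below; the eigenvalues of the assembled matrix indeed form a cross shape.

```python
import numpy as np

# A small instance of the construction in the proof (U = V = identity for simplicity):
# S1 = diag(a, a, 4), S2 = diag(a, a), and D2 has nonzeros only on its diagonal.
a = 1.0
S1 = np.diag([a, a, 4.0])
S2 = np.diag([a, a])
B = np.array([[2.0, 0.0],
              [0.0, 0.5],
              [0.0, 0.0]])
A = np.block([[S1, B],
              [-B.T, S2]])

print(np.round(np.linalg.eigvals(A), 3))
# expected: {a ± 2i, a ± 0.5i, 4}, i.e., a cross-shaped spectrum
```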
|
|
|
## B Missing Proofs In Section 4

## B.1 Proof Of Theorem 4
|
|
|
Proof. We write the conditions required for Theorem 5 below: |
|
|
|
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu_{1}, \tag{36}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=L_{1}, \tag{37}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=\mu_{2}, \quad\text{and} \tag{38}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L_{2}. \tag{39}$$
|
|
|
By adding (37) and (38) (or equivalently by adding (36) and (39)), we get
|
|
|
$$\gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}}. \tag{40}$$
|
|
|
From (36), we have: |
|
|
|
$$\begin{aligned}
\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}&=\mu_{1}\\
\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}&=\left(\frac{1}{2\gamma}-\mu_{1}\right)^{2}\\
\frac{(1-\sqrt{m})^{2}}{h}&=\mu_{1}(1-\gamma\mu_{1})\\
h&=\frac{(1-\sqrt{m})^{2}}{\mu_{1}(1-\gamma\mu_{1})}=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}.
\end{aligned} \tag{41}$$
|
|
|
Similarly, from (38), we have: |
|
|
|
$$\begin{aligned}
\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}&=\mu_{2}\\
\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}&=\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}\\
\left(\frac{\mu_{2}+L_{1}}{2}\right)^{2}-\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}&=\mu_{2}L_{1}=\frac{(1+\sqrt{m})^{2}}{h\gamma}.
\end{aligned} \tag{42}$$
|
|
|
Combining (41) and (42), and solving for $m$, we get:

$$\mu_{2}L_{1}(1-\sqrt{m})^{2}=\mu_{1}L_{2}(1+\sqrt{m})^{2}$$
$$m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}\overset{(13)}{=}\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}. \tag{43}$$
|
Finally, plugging (43) back into (41), we get:
|
|
|
$$\begin{aligned}
h&=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}\\
&=\frac{4\mu_{1}L_{2}}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}\cdot\frac{\mu_{1}+L_{2}}{\mu_{1}L_{2}}\\
&=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}.
\end{aligned}$$
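
The formulas above can be checked numerically; in the sketch below the interval endpoints are illustrative values chosen to satisfy $\mu_1 + L_2 = \mu_2 + L_1$, as required by (40).

```python
import numpy as np

# Illustrative two-interval spectrum [mu1, L1] U [mu2, L2] with mu1 + L2 = mu2 + L1.
mu1, L1, mu2, L2 = 1.0, 2.0, 4.0, 5.0

gamma = 1 / (mu1 + L2)
m = ((np.sqrt(mu2 * L1) - np.sqrt(mu1 * L2)) / (np.sqrt(mu2 * L1) + np.sqrt(mu1 * L2))) ** 2
h = 4 * (mu1 + L2) / (np.sqrt(mu2 * L1) + np.sqrt(mu1 * L2)) ** 2

def sigma_inv(y):
    # Inverse of the MEG link function, cf. Appendix A.5.
    root = np.sqrt(1 / (4 * gamma**2) - (1 + m) / (h * gamma) + 2 * np.sqrt(m) * y / (h * gamma))
    return 1 / (2 * gamma) - root, 1 / (2 * gamma) + root

print(sigma_inv(1.0))   # expected: (mu1, L2) = (1.0, 5.0)
print(sigma_inv(-1.0))  # expected: (L1, mu2) = (2.0, 4.0)
```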
|
Proof. We write the conditions required for Theorem 5 below:

$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu, \tag{44}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L, \quad\text{and} \tag{45}$$
$$\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}=c. \tag{46}$$

First, by adding (44) and (45), we get:

$$\frac{1}{\gamma}=\mu+L\implies\gamma=\frac{1}{\mu+L}. \tag{47}$$
|
|
|
Plugging (47) back into (44), we have: |
|
|
|
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu$$ $$\frac{\mu+L}{2}-\mu=\sqrt{\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}}$$ $$\left(\frac{L-\mu}{2}\right)^{2}=\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}$$ $$\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}=\left(\frac{\mu+L}{2}\right)^{2}-\left(\frac{L-\mu}{2}\right)^{2}=\mu L$$ $$h=\frac{(1-\sqrt{m})^{2}(\mu+L)}{\mu L}.\tag{48}$$ |
|
|
|
Plugging (47) and (48) into (46), we have: |
|
|
|
$$\begin{aligned}
\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}} &= c \\
\sqrt{\frac{(1+\sqrt{m})^{2}\mu L}{(1-\sqrt{m})^{2}}-\left(\frac{\mu+L}{2}\right)^{2}} &= c \\
\frac{(1+\sqrt{m})^{2}\mu L}{(1-\sqrt{m})^{2}} &= c^{2}+\left(\frac{\mu+L}{2}\right)^{2}=\frac{4c^{2}+(\mu+L)^{2}}{4} \\
\frac{(1+\sqrt{m})^{2}}{(1-\sqrt{m})^{2}} &= \frac{4c^{2}+(\mu+L)^{2}}{4\mu L} \\
(1+\sqrt{m})\sqrt{4\mu L} &= (1-\sqrt{m})\sqrt{4c^{2}+(\mu+L)^{2}} \\
\sqrt{m}\left(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}\right) &= \sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L} \\
\sqrt{m} &= \frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.
\end{aligned} \tag{49}$$
|
Finally, to simplify (48) further, from (49), we have: |
|
|
|
$$1-\sqrt{m}=\frac{4\sqrt{\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.$$ |
|
|
|
Hence, from (48), |
|
|
|
$$h=\frac{(\mu+L)(1-\sqrt{m})^{2}}{\mu L}=\frac{\frac{16\mu L(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}}}{\mu L}=\frac{16(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}}. \tag{50}$$
|
|
|
Proof. We write the conditions required for (6) below: |
|
|
|
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c+bi, \tag{51}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c+ai, \tag{52}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c-ai, \quad\text{and} \tag{53}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c-bi. \tag{54}$$

First, we can see from all cases that the optimal $\gamma$ is

$$\gamma=\frac{1}{2c}. \tag{55}$$
|
(51) and (54) equivalently imply |
|
|
|
$$\begin{aligned}
\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}} &= b \\
(1+\sqrt{m})^{2} &= h\gamma b^{2}+\frac{h}{4\gamma}=\frac{h(c^{2}+b^{2})}{2c} \\
h &= \frac{2c(1+\sqrt{m})^{2}}{c^{2}+b^{2}}.
\end{aligned} \tag{56}$$
|
|
|
Similarly, (52) and (53) imply |
|
|
|
$$\begin{aligned}
\sqrt{\frac{(1-\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}} &= a \\
\frac{(1-\sqrt{m})^{2}}{h\gamma} &= a^{2}+\frac{1}{4\gamma^{2}}=a^{2}+c^{2} \\
\frac{(1-\sqrt{m})^{2}(c^{2}+b^{2})}{(1+\sqrt{m})^{2}} &= a^{2}+c^{2} \\
(1-\sqrt{m})\sqrt{c^{2}+b^{2}} &= (1+\sqrt{m})\sqrt{c^{2}+a^{2}} \\
\sqrt{m} &= \frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}=1-\frac{2\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}.
\end{aligned} \tag{57}$$
|
Plugging (57) into (56), we get
|
|
|
$$h={\frac{2c(1+{\sqrt{m}})^{2}}{c^{2}+b^{2}}}={\frac{8c}{({\sqrt{c^{2}+b^{2}}}+{\sqrt{c^{2}+a^{2}}})^{2}}}.$$ |
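
As a quick check of these expressions (with placeholder values for $a$, $b$, $c$), the link function evaluated at the extreme eigenvalues should return $\mp 1$:

```python
import numpy as np

# Illustrative complex-conjugate spectrum {c ± ai, ..., c ± bi} with a < b.
a, b, c = 0.5, 2.0, 1.0

gamma = 1 / (2 * c)
sqrt_m = (np.sqrt(c**2 + b**2) - np.sqrt(c**2 + a**2)) / (np.sqrt(c**2 + b**2) + np.sqrt(c**2 + a**2))
m = sqrt_m ** 2
h = 8 * c / (np.sqrt(c**2 + b**2) + np.sqrt(c**2 + a**2)) ** 2

def sigma(lam):
    # MEG link function from (6).
    return (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))

print(sigma(c + 1j * b))  # expected: approximately -1 (outer extreme)
print(sigma(c + 1j * a))  # expected: approximately +1 (inner extreme)
```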
|
|
|
## C Missing Proofs In Section 5

## C.1 Proof Of Corollary 1
|
|
|
Proof. To compute the convergence rates of GD and EG from Theorem 7 applied to each spectrum model in (12), (14), and (15), we need to compute $\min_{\lambda\in\mathcal{S}^{\star}}\Re(1/\lambda)$ and $\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda)$ for GD. Similarly for EG, we additionally need to compute $\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|$, $\min_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}$, and $\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}$.
|
Case 1: It's straightforward to compute |
|
|
|
$$\operatorname*{min}_{\lambda\in{\mathcal{S}}_{1}^{*}}\Re(1/\lambda)=1/L_{2},\quad\mathrm{and}\quad\operatorname*{min}_{\lambda\in{\mathcal{S}}_{1}^{*}}\Re(\lambda)=\mu_{1}$$ |
|
|
|
Thus, GD for Case 1 has the rate |
|
|
|
$$1-{\frac{\mu_{1}}{L_{2}}}=1-\tau.$$ |
|
|
|
For EG, it's also simple to obtain |
|
|
|
$$\sup_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|=L_{2},\quad\min_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|^{2}=\mu_{1}^{2},\quad\text{and}\quad\sup_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|^{2}=L_{2}^{2}.$$
|
|
|
Thus, EG for Case 1 has the rate |
|
|
|
$$1-\frac{1}{4}\left(\frac{\mu_{1}}{L_{2}}+\frac{1}{16}\left(\frac{\mu_{1}}{L_{2}}\right)^{2}\right).$$ |
|
|
|
Case 2: For a complex number $z = p + qi \in \mathbb{C}$, we can compute $\Re(1/z)$ as:
|
|
|
$${\frac{1}{z}}={\frac{1}{p+q i}}={\frac{p-q i}{p^{2}+q^{2}}}={\frac{p}{p^{2}+q^{2}}}-{\frac{q}{p^{2}+q^{2}}}i\implies\Re\left({\frac{1}{z}}\right)={\frac{p}{p^{2}+q^{2}}}.$$ |
|
|
|
The four extreme points of the cross-shaped spectrum model in (14) are: |
|
|
|
$$\mu=\mu+0i,\quad L=L+0i,\quad\text{and}\quad\frac{L-\mu}{2}\pm ci.$$
|
|
|
Hence, $\Re(1/z)$ for each of the above points is:
|
|
|
$$\Re\left(\frac{1}{\mu}\right)=\frac{\mu}{\mu^{2}}=\frac{1}{\mu},$$ $$\Re\left(\frac{1}{L}\right)=\frac{L}{L^{2}}=\frac{1}{L},\quad\text{and}$$ $$\Re\left(\frac{1}{\frac{L-\mu}{2}\pm ci}\right)=\frac{\frac{L-\mu}{2}}{\left(\frac{L-\mu}{2}\right)^{2}+c^{2}}$$ $$=\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.$$ |
|
|
|
Therefore, since $\mu < L$ implies $\frac{1}{\mu} > \frac{1}{L}$, we only need to compare the last two values. Observe that:
|
|
|
$$\begin{array}{c}{{c>\sqrt{\frac{L^{2}-\mu^{2}}{4}}}}\\ {{4c^{2}>(L-\mu)(L+\mu)}}\\ {{4c^{2}>2L(L-\mu)-(L-\mu)^{2}}}\\ {{\frac{1}{L}>\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.}}\end{array}$$ |
|
|
|
Therefore, |
|
|
|
$$\operatorname*{min}_{\lambda\in S_{2}^{*}}\Re\left({\frac{1}{\lambda}}\right)={\begin{cases}{\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}}&{{\mathrm{if}}\quad c>{\sqrt{\frac{L^{2}-\mu^{2}}{4}}}}\\ {{\frac{1}{L}}}&{{\mathrm{otherwise.}}}\end{cases}}$$ |
|
|
|
For $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re(\lambda)$, it is straightforward from the definition that
|
|
|
$$\operatorname*{min}_{\lambda\in{\mathcal{S}}_{2}^{*}}\Re(\lambda)=\mu.$$ |
|
|
|
Thus, GD for Case 2 has the rate |
|
|
|
$$\begin{cases}1-{\frac{2\mu(L-\mu)}{4c^{2}+(L-\mu)^{2}}}&\quad{\mathrm{if}}\quad c>{\sqrt{\frac{L^{2}-\mu^{2}}{4}}}\\ 1-{\frac{\mu}{L}}&\quad{\mathrm{otherwise.}}\end{cases}$$ |
|
|
|
Similarly for EG, we need $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re(\lambda)$, which was computed above; additionally, we need to compute $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|$, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$, and $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$. For $z = p + qi \in \mathbb{C}$, $|z| = \sqrt{p^{2}+q^{2}}$. Hence, we have
|
|
|
$$|\mu+0i|=\mu,\quad|L+0i|=L,\quad{\mathrm{and}}\quad\left|{\frac{L-\mu}{2}}\pm c i\right|={\sqrt{c^{2}+\left({\frac{L-\mu}{2}}\right)^{2}}}\,.$$ |
|
|
|
Observe that: |
|
|
|
$$c>\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}}$$ $$c^{2}>L^{2}-\frac{L^{2}-2L\mu+\mu^{2}}{4}$$ $$c^{2}+\left(\frac{L-\mu}{2}\right)^{2}>L^{2}$$ |
|
|
|
Thus, for $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|$, we have
|
|
|
$$\operatorname*{sup}_{\lambda\in{\mathcal{S}}_{2}^{*}}|\lambda|={\begin{cases}{\sqrt{c^{2}+\left({\frac{L-\mu}{2}}\right)^{2}}}&{{\mathrm{if}}\quad c>{\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}}}}\\ L&{{\mathrm{otherwise,}}}\end{cases}}$$ |
|
|
|
from which $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$ can also be obtained. Lastly, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}=\mu^{2}$, as we know $\mu<L$ and $\frac{L-\mu}{2}$ is the center of $[\mu,L]$.
|
|
|
Combining all three, we get that the rate of EG for Case 2 is
|
|
|
$$\begin{cases}1-\frac{1}{4}\left(\frac{\mu}{\sqrt{c^{2}+\left(\frac{L-\mu}{2}\right)^{2}}}+\frac{\mu^{2}}{16\left(c^{2}+\left(\frac{L-\mu}{2}\right)^{2}\right)}\right)&\text{if }c\geqslant\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}},\\ 1-\frac{1}{4}\left(\frac{\mu}{L}+\frac{\mu^{2}}{16L^{2}}\right)&\text{otherwise.}\end{cases}$$
|
Case 3: Since (15) has a fixed real component, $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(\lambda) = c$.
|
|
|
For $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(1/\lambda)$, we can compare
|
|
|
$$\Re\left(\frac{1}{c+ai}\right)=\frac{c}{c^{2}+a^{2}}>\frac{c}{c^{2}+b^{2}}=\Re\left(\frac{1}{c+bi}\right),$$ since $a<b$ from (15). Thus, GD for Case 3 has the rate $$1-\frac{c^{2}}{c^{2}+b^{2}}.$$ |
|
|
|
For EG, it's also simple to obtain |
|
|
|
$$\sup_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|=\sqrt{c^{2}+b^{2}},\quad\min_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|^{2}=c^{2}+a^{2},\quad\text{and}\quad\sup_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|^{2}=c^{2}+b^{2}.$$
|
|
|
Thus, EG has the rate |
|
|
|
$$1-{\frac{1}{4}}\left({\frac{c}{\sqrt{c^{2}+b^{2}}}}+{\frac{1}{16}}{\frac{(c^{2}+a^{2})}{(c^{2}+b^{2})}}\right).$$ |
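
A small helper encoding the Case 3 rates stated above (the input values are illustrative placeholders):

```python
import numpy as np

def gd_eg_rates_case3(a, b, c):
    """Convergence rates of GD and EG for the Case 3 spectrum, as stated above."""
    gd = 1 - c**2 / (c**2 + b**2)
    eg = 1 - 0.25 * (c / np.sqrt(c**2 + b**2) + (c**2 + a**2) / (16 * (c**2 + b**2)))
    return gd, eg

print(gd_eg_rates_case3(a=0.5, b=2.0, c=1.0))  # illustrative values only
```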
|
|
|
## C.2 Proof Of Corollary 2 |
|
|
|
Proof. Per Theorem 8, the largest $\epsilon$ that permits acceleration for GDM is $\epsilon = \sqrt{\mu L}$. Therefore, in the special case of (14) we consider, i.e., when $c = \frac{L-\mu}{2}$, GDM *cannot* achieve acceleration if $\frac{L-\mu}{2} > \sqrt{\mu L}$. Hence, we have:
|
|
|
$$\begin{array}{c}{{\frac{L-\mu}{2}>\sqrt{\mu L}}}\\ {{L-\mu>2\sqrt{\mu L}}}\\ {{L^{2}+\mu^{2}>6\mu L>6\mu^{2}\ \ (\because L>\mu)}}\\ {{L>\sqrt{5}\mu.}}\end{array}$$ |
|
|
|
## C.3 Proof Of Proposition 2 |
|
|
|
Proof. For an arbitrary complex number p + qi with p > 0, and using the link function of GDM from (7), we have |
|
|
|
$$|\xi(p+qi)|=\sqrt{\left(\frac{1+m-hp}{2\sqrt{m}}\right)^{2}+\left(\frac{hq}{2\sqrt{m}}\right)^{2}}\leqslant1$$ $$\frac{(1+m-hp)^{2}+h^{2}q^{2}}{4m}\leqslant1$$ $$(1+m-hp)^{2}+h^{2}q^{2}\leqslant4m$$ $$(1-m)^{2}+hp(hp-2(1+m))+h^{2}q^{2}\leqslant0$$ $$\frac{(1-m)^{2}+h^{2}q^{2}}{hp}\leqslant2(1+m)-hp$$ |
|
|
|
|
Notice that the LHS is positive. Therefore, if the RHS is negative, the above inequality cannot hold; in other words, if $p > \frac{2(1+m)}{h}$, GDM cannot stay in the robust region. Moreover, even for small $p$ the LHS blows up, so the inequality is very hard to satisfy.
|
|
|
## D Missing Proofs In Section 6 |
|
|
|
Let us consider an affine vector field v(w) = Aw + b and its associated augmented MEG linear operator J: |
|
|
|
$$\begin{bmatrix} w_{t+1}-w^{\star} \\ w_{t}-w^{\star} \end{bmatrix} = J \begin{bmatrix} w_{t}-w^{\star} \\ w_{t-1}-w^{\star} \end{bmatrix} \quad\text{with}\quad J=\begin{bmatrix} (1+\beta)I_{d}-hA(I_{d}-\gamma A) & -\beta I_{d} \\ I_{d} & 0_{d} \end{bmatrix}, \tag{58}$$
|
where Id and 0d respectively stand for the identity and the null matrices. To show the local convergence of (restarted) MEG for non-affine vector fields in Theorem 9, we first establish the following lemma, which connects the augmented state and the non-augmented one. |
|
|
|
Lemma 3. *Let $P^{\mathrm{MEG}}_{t}$ be the residual polynomial associated with $t$ updates of MEG (c.f., Theorem 1). Let $J$ be defined as in (58). If $w_{1}=w_{0}-\frac{h}{1+m}v(w_{0}-\gamma v(w_{0}))$, we then have*
|
|
|
$$J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P_{t}^{\mathrm{MEG}}(A)(w_{1}-w^{\star})\\ P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star})\end{bmatrix}. \tag{59}$$
|
|
|
*Consequently, if we denote $z_{t+1} := [w_{t+1}, w_{t}]$ and $z_{*} := [w^{\star}, w^{\star}]$, we have*
|
|
|
$$\|z_{t+1}-z_{*}\|\leqslant C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|. \tag{60}$$
|
Proof. Let us express $J^{t}$ such that

$$J^{t}=\begin{bmatrix}P^{11}_{t}(A) & P^{12}_{t}(A)\\ P^{21}_{t}(A) & P^{22}_{t}(A)\end{bmatrix}, \quad\text{and}\quad J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P^{11}_{t}(A)(w_{0}-w^{\star})+P^{12}_{t}(A)(w_{0}-w^{\star})\\ P^{21}_{t}(A)(w_{0}-w^{\star})+P^{22}_{t}(A)(w_{0}-w^{\star})\end{bmatrix}. \tag{61}$$

By writing $J^{t+1}=JJ^{t}$ and using the block-matrix form of $J$ in (58), we get that for any $t \geqslant 0$,

$$P^{11}_{t+1}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))P^{11}_{t}(A)-\beta P^{21}_{t}(A), \qquad P^{21}_{t+1}(A)=P^{11}_{t}(A), \tag{62}$$
$$P^{12}_{t+1}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))P^{12}_{t}(A)-\beta P^{22}_{t}(A), \qquad P^{22}_{t+1}(A)=P^{12}_{t}(A). \tag{63}$$
|
Hence, we have that, |
|
|
|
$$P_{t+1}^{11}(A)\overset{(62)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{11}(A)-\beta P_{t-1}^{11}(A), \tag{64}$$
$$P_{t+1}^{12}(A)\overset{(63)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{12}(A)-\beta P_{t-1}^{12}(A). \tag{65}$$

We claim that

$$P_{t}^{11}(A)+P_{t}^{12}(A)=P_{t}^{\mathrm{MEG}}(A)\quad\text{for all}\quad t\geqslant0. \tag{66}$$
|
We prove this via induction. |
|
|
|
For the base case, using the fact that $w_{1} = w_{0} - \frac{h}{1+m} v(w_{0} - \gamma v(w_{0}))$, we have that
|
|
|
$$(P_{1}^{11}(A)+P_{1}^{12}(A))(w_{0}-w^{\star})=w_{1}-w^{\star}=\left(I_{d}-\tfrac{h}{1+m}A(I_{d}-\gamma A)\right)(w_{0}-w^{\star})=P_{1}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}),$$
$$(P_{0}^{11}(A)+P_{0}^{12}(A))(w_{0}-w^{\star})=(P_{1}^{21}(A)+P_{1}^{22}(A))(w_{0}-w^{\star})=I_{d}(w_{0}-w^{\star})=P_{0}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}).$$
|
To show the induction step, by adding (64) and (65), we get |
|
|
|
$$P_{t+1}^{11}(A)+P_{t+1}^{12}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))(P_{t}^{11}(A)+P_{t}^{12}(A))-\beta(P_{t-1}^{11}(A)+P_{t-1}^{12}(A))$$
$$\overset{(66)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{\mathrm{MEG}}(A)-\beta P_{t-1}^{\mathrm{MEG}}(A),$$
|
where in the last step we used the induction hypothesis. The right-hand side equals $P_{t+1}^{\mathrm{MEG}}(A)$ by the recurrence defining the MEG residual polynomials, so $P_{t+1}^{11}(A)+P_{t+1}^{12}(A)=P_{t+1}^{\mathrm{MEG}}(A)$.
|
|
|
Hence we have for any t ⩾ 0, |
|
|
|
$$(P_{t}^{11}(A)+P_{t}^{12}(A))(w_{0}-w^{\star})=P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}).$$ |
|
|
|
Therefore, going back to (61), we have:
|
|
|
$$\begin{aligned}
\begin{bmatrix} w_{t+1}-w^{\star} \\ w_{t}-w^{\star} \end{bmatrix}
&= J^{t}\begin{bmatrix} w_{1}-w^{\star} \\ w_{0}-w^{\star} \end{bmatrix}
= \begin{bmatrix} (P_{t}^{11}(A)+P_{t}^{12}(A))(w_{0}-w^{\star}) \\ (P_{t}^{21}(A)+P_{t}^{22}(A))(w_{0}-w^{\star}) \end{bmatrix}
\overset{(62),(63)}{=} \begin{bmatrix} (P_{t}^{11}(A)+P_{t}^{12}(A))(w_{0}-w^{\star}) \\ (P_{t-1}^{11}(A)+P_{t-1}^{12}(A))(w_{0}-w^{\star}) \end{bmatrix} \\
&\overset{(66)}{=} \begin{bmatrix} P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}) \\ P_{t-1}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}) \end{bmatrix}
\overset{\text{Thm. 1}}{=} \begin{bmatrix} P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}) \\ P_{t}^{\mathrm{MEG}}(A)(w_{-1}-w^{\star}) \end{bmatrix}
= \begin{bmatrix} P_{t}^{\mathrm{MEG}}(A) & 0 \\ 0 & P_{t}^{\mathrm{MEG}}(A) \end{bmatrix}\begin{bmatrix} w_{0}-w^{\star} \\ w_{-1}-w^{\star} \end{bmatrix}
= \left(P_{t}^{\mathrm{MEG}}(A)\otimes I_{2}\right)\begin{bmatrix} w_{0}-w^{\star} \\ w_{-1}-w^{\star} \end{bmatrix}, 
\end{aligned} \tag{67}$$
|
where we use the convention that w0 = w−1. Finally, using the fact that ∥A ⊗ B∥ = ∥A∥∥B∥ for ℓ2-operator norm (Lancaster & Farahat, 1972), we have |
|
|
|
$$\|z_{t+1}-z_{*}\|\leqslant\|P_{t}^{\mathrm{MEG}}(A)\|\|z_{0}-z_{*}\|\,\stackrel{(10)}{\leqslant}\,C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|.$$ |
|
|
|
Proof. We first recall the restarted MEG algorithm we consider in (31): |
|
|
|
$$[w_{tk+i+1},w_{tk+i}]=G([w_{tk+i},w_{tk+i-1}])\quad\text{for}\quad1\leqslant i\leqslant k-1,\quad\text{and then}$$
$$w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}v(w_{(t+1)k}-\gamma v(w_{(t+1)k})).$$
|
In other words, we repeat MEG for k steps, and then re-start the momentum at [w(t+1)k+1, w(t+1)k]. |
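
A minimal Python sketch of this restarted scheme is given below, assuming an affine vector field; the function name, matrix, and hyperparameter values are illustrative placeholders. Each cycle performs the scaled restart step followed by $k-1$ regular MEG steps.

```python
import numpy as np

def restarted_meg(v, w0, h, gamma, m, k, n_restarts):
    """Restarted MEG sketch: every k steps, the momentum is reset by the scaled first step."""
    w = w0
    for _ in range(n_restarts):
        w_prev = w
        w = w - h / (1 + m) * v(w - gamma * v(w))           # momentum restart step
        for _ in range(k - 1):                               # k - 1 regular MEG steps
            w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
    return w

# Usage on an affine field (placeholder values).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
w_star = np.array([0.5, -1.0])
v = lambda w: A @ (w - w_star)
print(np.linalg.norm(restarted_meg(v, np.zeros(2), 0.4, 0.2, 0.1, k=10, n_restarts=10) - w_star))
```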
|
|
|
We can analyze this method as follows, where we denote $z_{t} := [w_{t}, w_{t-1}]$ and $z_{*} = [w^{\star}, w^{\star}]$:
|
|
|
$$\begin{aligned}
\|z_{(t+1)k}-z_{*}\| &= \|G^{(k)}(z_{tk})-z_{*}\| \\
&= \|\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})\| \\
&\leqslant \|\nabla G^{(k)}(z_{*})(z_{tk}-z_{*})\| + \|(\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*}))(z_{tk}-z_{*})\| \\
&\overset{(60)}{\leqslant} C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\| + \|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\,\|z_{tk}-z_{*}\|,
\end{aligned} \tag{68}$$

where in the second line we use the Mean Value Theorem:
|
$$\exists\,\tilde{z}_{tk}\in[z_{tk},z_{*}]\quad\text{such that}\quad G^{(k)}(z_{tk})=G^{(k)}(z_{*})+\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})=z_{*}+\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})\quad\text{(since $z_{*}$ is the fixed point).}$$
|
In the fourth line we used the fact that $\nabla G^{(k)}(z_{*})(z_{tk}-z_{*})$ exactly corresponds to $k$ updates of MEG when the vector field is affine, as well as Lemma 3 to account for the augmented state.
|
|
|
Now let us consider $\varphi > \varepsilon > 0$ and $k$ large enough such that $C(k+1)(1-\varphi)^{k} \leqslant (1-\varphi+\frac{\varepsilon}{2})^{k}$. Since $\nabla G$ is assumed to be continuous, $\nabla G^{(k)}$ is continuous too. Therefore, there exists $\delta > 0$ such that $\|z_{tk}-z_{*}\| \leqslant \delta$ implies $\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\| \leqslant \varepsilon'$. In particular, choose $\varepsilon' = (1-\varphi+\varepsilon)^{k} - (1-\varphi+\frac{\varepsilon}{2})^{k} \sim \frac{k\varepsilon}{2(1-\varphi)}$.
|
|
|
Then, we have |
|
|
|
$$\|z_{(t+1)k}-z_{*}\|\leqslant C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\|+\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\,\|z_{tk}-z_{*}\|$$ $$\leqslant(1-\varphi+\tfrac{\varepsilon}{2})^{k}\|z_{tk}-z_{*}\|+\varepsilon^{\prime}\|z_{tk}-z_{*}\|$$ $$\leqslant(1-\varphi+\varepsilon)^{k}\|z_{tk}-z_{*}\|<\|z_{tk}-z_{*}\|<\|z_{0}-z_{*}\|.$$
|
|
|
From the above, we can conclude that for all $\varepsilon > 0$, there exist $k > 0$ and $\delta > 0$ such that, for all initializations satisfying $\|w_{0}-w^{\star}\| \leqslant \delta$, the restarted MEG described above satisfies:
|
|
|
$$\|w_{t}-w^{\star}\|=O((1-\varphi+\varepsilon)^{t})\|w_{0}-w^{\star}\|.$$ |
|
|