Inexact Alternating Direction Method Of Multipliers With Efficient Local Termination Criterion For Cross-Silo Federated Learning
Anonymous authors Paper under double-blind review
Abstract
Federated learning has attracted increasing attention in the machine learning community over the past five years. In this paper, we propose a new cross-silo federated learning algorithm with a fast convergence guarantee for machine learning models with nonsmooth regularizers. To solve this type of problem, we design an inexact federated alternating direction method of multipliers (ADMM). This method enables each agent to solve a strongly convex local problem. We introduce a new local termination criterion that can be quickly satisfied when using efficient solvers such as stochastic variance reduced gradient (SVRG). We prove that our method converges faster than existing methods. Moreover, we show that our proposed method has sequential convergence guarantees under the Kurdyka-Łojasiewicz (KL) assumption. We conduct experiments using both synthetic and real datasets to demonstrate the superiority of our new method over existing algorithms.
1 Introduction
Federated learning (FL) is an emerging research paradigm in which multiple agents collaborate to solve a machine learning problem. Cross-silo FL is an important subclass where the participating agents are pre-defined silos, such as organizations or institutions (e.g., hospitals and banks) (Kairouz et al., 2021a).
Typically, there are around 2-100 agents in this setting. Cross-silo federated learning finds significant applications in many domains such as medicine and healthcare, finance, and manufacturing (Nandury et al., 2021; Huang et al., 2022; Yang et al., 2019). In a cross-silo federated learning (FL) task, each agent possesses a specific portion of the data, which it uses to train its machine learning model locally. Once the local training is completed, all agents send their outputs to a central server. The server then aggregates these outputs and sends an update back to the participating agents. Most FL works focus on the following federated composite optimization (Kairouz et al., 2021b; McMahan et al., 2017b; Pathak & Wainwright, 2020):
$$\min_{x\in\mathbb{R}^{n}}\ \sum_{i=1}^{p} f_i(x) + g(x), \qquad (1)$$
where $p$ is the number of agents, each $f_i:\mathbb{R}^n\to\mathbb{R}$ is possibly nonconvex and $L_i$-smooth, and $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a proper closed convex regularizer. In machine learning applications, $f_i$ is the loss function of agent $i$'s local data set and $g$ can be the $\ell_1$-regularizer, the grouped $\ell_1$-regularizer, the nuclear-norm regularizer (for matrix variables) (Candès & Recht, 2009; Bao et al., 2022), the indicator function of a convex constraint (Yuan et al., 2021; Bao et al., 2022), etc. Problem (1) is called federated composite optimization in Yuan et al. (2021), where the federated dual averaging (FedDualAvg) was proposed as an early attempt to deal with the nonsmooth $g$. Bao et al. (2022) proposed a fast federated dual averaging for problem (1) with strongly convex $f$. Although FedAvg, FedProx, FedDualAvg, and their variants distribute tasks and aggregate local outputs in intuitive ways, they face limitations in both theory and practice.
Table 1: Comparison of the inner updates of federated splitting methods.

| Model | $f_i$ | $g$ | Local Termination Criterion | Assumptions on $\epsilon_i^t$ | Local Solver | Local Complexity |
|---|---|---|---|---|---|---|
| FedSplit (Pathak & Wainwright, 2020) | SC | 0 | $\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\| \le \epsilon_i^t$ | $\epsilon_i^t \le O(\epsilon)$ | GD | $\log(\epsilon^{-1})$ |
| FedPD (Zhang et al., 2021) | NC | 0 | $\mathbb{E}\|\nabla \mathcal{L}_i(x_i^{t+1})\|^2 \le \epsilon_i^t$ | $\epsilon_i^t \le O(\epsilon)$ | GD (SGD) | $\log(\epsilon^{-1})$ ($\epsilon^{-1}$) |
| FedDR (Tran-Dinh et al., 2021) | NC | NS | $\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\| \le \epsilon_i^t$ | $\frac{1}{p}\sum_{i=1}^{p}\sum_{t=0}^{T}\epsilon_i^t \le O(1)$ | - | - |
| | | | $\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\| \le r\|x_i^{t+1} - x_i^t\|$ | None | - | - |
| FedADMM1 (Gong et al., 2022) | NC | 0 | $\|\nabla \mathcal{L}_i(x_i^{t+1})\|^2 \le \epsilon_i^t$ | $\epsilon_i^t \le O(\epsilon)$ | - | - |
| FedADMM2 (Zhou & Li, 2022) | NC | 0 | $\|\nabla \mathcal{L}_i(x_i^{t+1})\|^2 \le \epsilon_i^t$ | $\epsilon_i^{t+1} \le \nu_i\epsilon_i^t$; $\nu_i \in [1/2, 1)$ | - | $\log[(\epsilon_i^{t+1})^{-1}]$ |
| FedADMM3 (Wang et al., 2022) | NC | NS | $\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\| \le \epsilon_i^t$ | $\frac{1}{p}\sum_{i=1}^{p}\sum_{t=0}^{T}\epsilon_i^t \le O(1)$ | - | - |
| FIAELT (Ours) | NC | NS | $\mathbb{E}_t\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2 \le r_i\|x_i^t - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2$ | None | SVRG | $O(1)$ |
Table 2: Comparison of the server updates of the federated splitting methods in Table 1. SC = strongly convex, NC = nonconvex, NS = nonsmooth. $\epsilon$ is the same as in Table 1.

| Model | $f_i$ | $g$ | Convergence (Gradient) | Convergence (Sequence) |
|---|---|---|---|---|
| FedSplit | SC | 0 | - | Linear |
| FedPD | NC | 0 | $O(T^{-1}) + \epsilon$ | - |
| FedDR | NC | NS | $O(T^{-1})$ | - |
| FedADMM1 | NC | 0 | $O(T^{-1}) + \epsilon$ | - |
| FedADMM2 | NC | 0 | $O(T^{-1})$ | - |
| FedADMM3 | NC | NS | $O(T^{-1})$ | - |
| FIAELT (Ours) | NC | NS | $O(T^{-1})$ | Linear when $\alpha \in (0, \frac{1}{2})$ |
For instance, McMahan et al. (2017a) demonstrated that FedAvg can diverge in certain scenarios. Even when FedAvg converges, as shown in Pathak & Wainwright (2020), the resulting fixed points are not necessarily stationary points of the original problem. Additionally, the analyses in Yuan et al. (2021); Li et al. (2020a); Reddi et al. (2021) often assume that the dissimilarity between agents is bounded, which may not hold in real-world applications. These shortcomings of existing methods motivate the exploration of federated splitting methods for solving (1). In general, the idea behind splitting methods in federated learning is to establish a connection between (1) and a constrained problem of the form:
$$\min_{X}\ \sum_{i=1}^{p} f_i(x_i) + g(x_1) \quad \text{s.t.}\quad x_1 = x_2 = \cdots = x_p, \qquad (2)$$
where $X = (x_1, x_2, \ldots, x_p)$.
Popular splitting methods in federated learning include FedSplit Pathak & Wainwright (2020), FedDR Tran-Dinh et al. (2021), FedPD Zhang et al. (2021), and ADMM-based federated learning methods Gong et al. (2022); Zhou & Li (2021); Zhang et al. (2021); Yue et al. (2021); Zhou & Li (2022). FedDR considers a nonzero regularizer $g$, while FedSplit, FedPD, and FedADMM deal with the unregularized case $g = 0$ and thus cannot be applied when regularizers are needed to induce sparse parameters Zou & Hastie (2005); Yuan et al. (2021) or low-rank matrices Candès & Recht (2009); Bao et al. (2022).
At each round $t$ of a federated splitting method, each agent needs to find $x_i^{t+1}$ that approximates the proximal operator of $f_i$ at the current point $\tilde{x}_i^t$ (denoted $\mathrm{Prox}_{f_i}(\tilde{x}_i^t)$) via a number of local updates with a certain termination criterion. However, the number of local updates (defined as the local complexity) required by existing criteria is either unexplored or tends to infinity as the tolerance $\epsilon$ becomes infinitesimal with a growing number $T$ of server updates, as shown in Table 1. Therefore, a more advanced criterion that requires only a known, constant number of more efficient local updates is highly desirable, which is an important goal of this work.
Moreover, existing federated splitting methods for nonconvex optimization with a nonsmooth regularizer $g$ focus only on the convergence rate of the gradient and ignore the convergence of the generated sequences to a desired critical point. Zhou & Li (2022); Yue et al. (2021) prove that accumulation points are critical points, but the convergence rate of the sequence is still unknown. Obtaining a sequential convergence rate for a nonsmooth regularizer $g \neq 0$ is another important goal of this work.
1.1 Our Contributions
To fulfill the above two goals, we propose a novel splitting method called Federated Inexact ADMM with Efficient Local Termination (FIAELT) for the nonconvex nonsmooth composite optimization problem (1) in the context of cross-silo federated learning, based on the equivalence between (1) and an $np$-dimensional constrained problem (4). Compared with existing works on federated splitting methods, our contributions are summarized as follows.
- For the local update of our algorithm, we propose a new criterion $\mathbb{E}_t^i\|x_i^{t+1} - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2 \le r_i\|x_i^t - \mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2$ (see Algorithm 1 for details), where the tolerance $r_i \in (0,1)$ does not need to be infinitesimal for a large number $T$ of communication rounds. Hence, our local complexity can be $O(1)$, which outperforms existing splitting methods with an unexplored or large number of local updates (see Table 1 for comparison).
At the same time, we keep the state-of-the-art gradient convergence rate O(1/T) in the server updates (see Table 2).
Furthermore, we demonstrate that FIAELT has sequential convergence properties in the deterministic case. Specifically, we prove that any accumulation point of the sequence generated at the server of FIAELT is a stationary point of (1). Moreover, we prove that FIAELT achieves global convergence under the Kurdyka-Łojasiewicz (KL) geometry, which covers a wide range of functions in practice. Specifically, the server updates and the outputs of the local servers converge within finitely many communications when the KL exponent $\alpha$ of the potential function is 0; these sequences converge linearly when $\alpha \in (0, \frac{1}{2})$ and sublinearly when $\alpha \in (\frac{1}{2}, 1)$. Our proposed new criterion plays a key role in this analysis. To the best of our knowledge, FIAELT is the first federated learning method with a sequential convergence rate in the nonconvex nonsmooth setting.
Finally, we conduct experiments on training fully-connected neural networks, comparing our method against existing splitting methods as well as other state-of-the-art federated methods. The experimental results show that our method is competitive and consistently outperforms the other approaches in terms of training loss, training accuracy, and testing accuracy, indicating the effectiveness of our proposed method on this task.
1.2 Related Work
The literature of federated learning is rich. In this work, we only focus on the splitting methods in federated learning. A comparison between our method and existing splitting methods is summarized in Table 1.
FedSplit was proposed in Pathak & Wainwright (2020). It implements the Peaceman-Rachford splitting method for (2). Pathak & Wainwright (2020) analyzed the proposed method in the case where $g = 0$ and $\sum_i f_i$ is strongly convex, and showed that when the error between the local output and $\mathrm{Prox}_{f_i}$ is under a threshold $\epsilon$, the sequence generated at the server by FedSplit converges linearly to an inexact solution of (1) up to an error determined by $\epsilon$. They also applied FedSplit to a strongly convex majorization of the original problem and, in this setting, showed a complexity of $\tilde{O}(1/\sqrt{\epsilon})$ to obtain an $\epsilon$-optimal function value. However, in the general convex setting, the analysis assumes that FedSplit computes $\mathrm{Prox}_{f_i}$ exactly, which is unrealistic when the local agents solve large-scale problems.
When $g = 0$, there are several works on federated ADMM, Zhang et al. (2021); Gong et al. (2022); Zhou & Li (2022); Elgabli et al. (2022). Gong et al. (2022) proposed FedADMM, which randomly selects agents to participate in each round. The $i$-th agent terminates its local iterations when the norm of the local gradient at the current iterate is under a threshold $\epsilon_i$. When there is an upper bound $\epsilon$ for $\{\epsilon_i\}$, they showed that FedADMM has a complexity of $O(\epsilon^{-1}) + O(\epsilon)$ to reach an $\epsilon$-surrogate stationary point. When the $f_i$'s are twice differentiable, ADMM is applied to design a second-order FL method in Elgabli et al. (2022). Zhou & Li (2022) proposed an inexact ADMM for federated learning problems. At round $t$, the $i$-th agent terminates the local updates when the norm of the local gradient is under a threshold $\epsilon_i^t$. They assume $\{\epsilon_i^t\}_t$ decreases exponentially, i.e., $\epsilon_i^{t+1} \le \nu_i\epsilon_i^t$ with $\nu_i \in [1/2, 1)$, and showed that the generated sequence accumulates at stationary points.
By further assuming that the accumulation point of the generated sequence is isolated, they showed that the generated sequence converges globally. In contrast, we do not assume the accumulation point of the generated sequence to be isolated when analyzing the sequential convergence of our method.
When $g \neq 0$, Tran-Dinh et al. (2021) proposed FedDR, which applies the Douglas-Rachford (DR) splitting algorithm to (2). They combined the DR method with randomized block-coordinate strategies and an asynchronous implementation, and estimated the complexity of FedDR under different termination criteria for the local updates. The termination criteria in Tran-Dinh et al. (2021) test whether the distance between the prox of $f_i$ and its approximation is bounded by a certain value; however, this distance cannot be checked in practice, especially when stochastic gradient methods are used for the local updates. Yue et al. (2021) also considered the case where $g \neq 0$; specifically, they considered the case where $g$ is a Bregman distance. Assuming the Hessians of the $f_i$'s in (1) are Lipschitz continuous, Yue et al. (2021) showed that any accumulation point of the generated sequence is a stationary point and that the proposed method has a complexity of $O(\epsilon^{-1})$ to reach an $\epsilon$-stationary point.
2 Preliminaries
In this paper, we denote R n the n-dimensional Euclidean space with inner product ⟨·, ·⟩ and Euclidean norm ∥ · ∥. We denote the set of all positive numbers as R++. We denote the distance from a point a to a set A as d(a, A). For a random variable ξ defined on a probability space (Ξ, Σ, P), we denote its expectation as Eξ.
Given an event A, the conditional expectation of ξ is denoted as E(ξ|A).
An extended-real-valued function f : R n → [−∞, ∞] is said to be proper if domf = {x ∈ R n : f(x) < ∞} is not empty and f never equals −∞. We say a proper function f is closed if it is lower semicontinuous. We define the indicator function of a closed set A as δA(x), which is zero when x ∈ A and ∞ otherwise.
We define the regular subdifferential of a proper function $f:\mathbb{R}^n\to[-\infty,\infty]$ at $x\in\mathrm{dom}f$ as $\hat{\partial}f(x) := \big\{\xi\in\mathbb{R}^n : \liminf_{z\to x,\, z\neq x}\frac{f(z)-f(x)-\langle\xi, z-x\rangle}{\|z-x\|}\ge 0\big\}$. The (limiting) subdifferential of $f$ at $x\in\mathrm{dom}f$ is defined as $\partial f(x) := \big\{\xi\in\mathbb{R}^n : \exists\, x^k\xrightarrow{f} x,\ \xi^k\to\xi \text{ with } \xi^k\in\hat{\partial}f(x^k)\ \forall k\big\}$, where $x^k\xrightarrow{f} x$ means both $x^k\to x$ and $f(x^k)\to f(x)$.
For $x\notin\mathrm{dom}f$, we define $\hat{\partial}f(x) = \partial f(x) = \emptyset$. We denote $\mathrm{dom}\,\partial f := \{x : \partial f(x)\neq\emptyset\}$. For a differentiable function $L:\mathbb{R}^m\times\mathbb{R}^n\to\mathbb{R}^l$, we denote by $\nabla_x L(x,y)$ and $\nabla_y L(x,y)$ the partial derivatives with respect to $x$ and $y$, respectively. We define the normal cone of a set $A$ at $x$ as $N_A(x) := \partial\delta_A(x)$. For a proper function $f:\mathbb{R}^n\to[-\infty,\infty]$, we denote the proximal operator of $f$ as $\mathrm{Prox}_{\alpha f}(x) = \operatorname{Argmin}_{z\in\mathbb{R}^n}\big\{f(z) + \frac{1}{2\alpha}\|z-x\|^2\big\}$.
Consider a problem $\min f + g$, where $f$ is a smooth function and $g$ is proper closed convex. We say $x$ is a stationary point of this problem when $0\in\nabla f(x) + \partial g(x)$. We say $x$ is an $\varepsilon$-stationary point if $d^2(0, \nabla f(x) + \partial g(x)) \le \varepsilon$.
We next introduce the KL property used in analyzing the sequential convergence. Let Ψa be defined as the set of concave functions ψ : [0, a) → [0, ∞) satisfying ψ(0) = 0, being continuously differentiable on (0, a), and satisfying ψ ′ > 0 on (0, a).
Definition 1 (Kurdyka-Łojasiewicz property and exponent). A proper closed function $f:\mathbb{R}^n\to(-\infty,\infty]$ is said to satisfy the Kurdyka-Łojasiewicz (KL) property at $\hat{x}\in\mathrm{dom}\,\partial f$ if there are $a\in(0,\infty]$, a neighborhood $V$ of $\hat{x}$, and a $\psi\in\Psi_a$ such that for any $x\in V$ with $f(\hat{x}) < f(x) < f(\hat{x}) + a$, it holds that $\psi'(f(x)-f(\hat{x}))\,\mathrm{dist}(0,\partial f(x)) \ge 1$. If $f$ satisfies the KL property at $\hat{x}\in\mathrm{dom}\,\partial f$ and $\psi$ can be chosen as $\psi(\nu) = a_0\nu^{1-\alpha}$ for some $a_0 > 0$ and $\alpha\in[0,1)$, then we say that $f$ satisfies the KL property at $\hat{x}$ with exponent $\alpha$. A proper closed function $f$ satisfying the KL property with exponent $\alpha\in[0,1)$ at every point in $\mathrm{dom}\,\partial f$ is called a KL function with exponent $\alpha$.
Functions satisfying the KL property include proper closed semi-algebraic functions and the quadratic loss function plus possibly nonconvex piecewise linear regularizers Attouch et al. (2010); Li & Pong (2018); Attouch et al. (2013); Zeng et al. (2021).
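To make Definition 1 concrete, here is a minimal worked example (our own illustration, not taken from the cited references): the quadratic $f(x) = \frac{1}{2}\|x\|^2$ is a KL function with exponent $\frac{1}{2}$. Taking $\hat{x} = 0$ and $\psi(\nu) = \sqrt{2\nu}$ (so $a_0 = \sqrt{2}$ and $\alpha = \frac{1}{2}$), for any $x \neq 0$ we have
$$\psi'\big(f(x) - f(\hat{x})\big)\,\mathrm{dist}\big(0, \partial f(x)\big) = \frac{1}{\sqrt{2\cdot\frac{1}{2}\|x\|^2}}\,\|x\| = 1 \ge 1,$$
so the inequality in Definition 1 holds on all of $\mathbb{R}^n\setminus\{0\}$.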
3 Federated Inexact Admm With Efficient Termination Criterion
We relate the problem (1) to (2). For (2), we view it as the following np-dimensional problem:
$$\min_{X}\ F(X) + G(X), \qquad (3)$$
where $X = (x_1, x_2, \ldots, x_p)$ with each $x_i\in\mathbb{R}^n$, $F(X) := \sum_{i=1}^{p} f_i(x_i)$ with the $f_i$'s in (1), and $G(X) := g(x_1) + \delta_{\mathcal{C}}(X)$ with $\mathcal{C} := \{X : x_1 = \cdots = x_p\}$ and $g$ in (1).
The following proposition establishes the relation between (3) and (1).
Proposition 1. If $X^* = (x_1^*, \ldots, x_p^*)$ is a stationary point of (3), then $x_1^*$ is a stationary point of (1). Furthermore, if $X = (x_1, \ldots, x_p)$ is an $\varepsilon$-stationary point of (3), then $x_1$ is a $p\varepsilon$-stationary point of (1).
Based on this relation, we consider ADMM to solve (3). Rewrite (3) as the following equivalent problem:
$$\min_{X,Y}\ F(X) + G(Y) \quad \text{s.t.}\quad X = Y. \qquad (4)$$
The augmented Lagrangian function of (4) is defined as
$$L_\beta(X, Y, Z) := F(X) + G(Y) + \langle Z, X - Y\rangle + \frac{\beta}{2}\|X - Y\|^2. \qquad (5)$$
Given a starting point $(X^0, Y^0, Z^0)\in\mathbb{R}^{np}\times\mathbb{R}^{np}\times\mathbb{R}^{np}$ and $\tau, \beta > 0$, the ADMM for (3) takes the form
$$X^{t+1} \in \operatorname{Argmin}_{X} L_\beta(X, Y^t, Z^t), \qquad Z^{t+1} = Z^t + \tau\beta(X^{t+1} - Y^t), \qquad Y^{t+1} \in \operatorname{Argmin}_{Y} L_\beta(X^{t+1}, Y, Z^{t+1}). \qquad (6)$$
Now we give an equivalent form of the third equation in (6) as follows.
Proposition 2. Consider (3). Let $\{(X^{t+1}, Y^{t+1}, Z^{t+1})\}$ be generated by (6). Suppose $\beta > \max_i L_i$. Then the solution of the problem in the third equation of (6) is $(y_1, \ldots, y_1)$ with $y_1 = \mathrm{Prox}_{\frac{1}{\beta p} g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_i^{t+1} + \frac{1}{\beta} z_i^{t+1})\big)$.
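As an illustration (our own worked example, assuming the choice $g = \lambda\|\cdot\|_1$ listed in Section 1), the server-side prox in Proposition 2 reduces to averaging the uploaded quantities followed by entrywise soft-thresholding:
$$y_1 = \mathrm{Prox}_{\frac{1}{\beta p} g}\Big(\frac{1}{p}\sum_{i=1}^{p}\big(x_i^{t+1} + \tfrac{1}{\beta} z_i^{t+1}\big)\Big), \qquad \Big[\mathrm{Prox}_{\frac{1}{\beta p}\lambda\|\cdot\|_1}(u)\Big]_j = \mathrm{sign}(u_j)\,\max\Big(|u_j| - \frac{\lambda}{\beta p},\, 0\Big).$$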
On the other hand, since $F(X)$ in (3) is separable, we can write $L_\beta(X, Y, Z)$ in (5) as $L_\beta(X, Y, Z) = \sum_{i=1}^{p} L_{\beta,i}(x_i, y_i, z_i)$, where, up to terms that do not depend on $x_i$, $L_{\beta,i}(x_i, y_i, z_i) = f_i(x_i) + \langle z_i, x_i - y_i\rangle + \frac{\beta}{2}\|x_i - y_i\|^2$.
Therefore, the first equation in (6) can be rewritten as $x_i^{t+1} = x_{i,*}^{t+1}$, where
$$x_{i,*}^{t+1} := \operatorname{argmin}_{x_i} L_{\beta,i}(x_i, y^t, z_i^t), \quad i = 1, \ldots, p. \qquad (7)$$
In practice, (7) cannot be solved exactly, since $f_i$ is usually a nonconvex loss function involving a large amount of training data. Hence, existing federated splitting methods solve (7) inexactly up to a certain local criterion. However, the computational complexities of the local updates required by these criteria are either unexplored or very large (see Table 1). To address this limitation, we propose the following criterion:
$$\mathbb{E}_t^i\|x_i^{t+1} - x_{i,*}^{t+1}\|^2 \le r_i\|x_i^t - x_{i,*}^{t+1}\|^2, \qquad (9)$$
where $\mathbb{E}_t^i$ denotes the conditional expectation given the past trajectory $\{(x_i^s, y^s, z_i^s) : s = 0, 1, \ldots, t\}$, and the tolerance $r_i\in(0,1)$ does not need to be arbitrarily small to ensure an $O(1)$ local complexity even with stochastic gradients, as will be shown in the convergence analysis.
Algorithm 1 Federated Inexact ADMM with Efficient Local Termination (FIAELT) for (1)
1: Input: $\beta, \tau > 0$, $r_i > 0$, $m_i\in\mathbb{N}_+$, $\eta_i > 0$; $(x_i^0, y_i^0, z_i^0)$ for agents $i = 1, \ldots, p$, and $\bar{x}^0 = \frac{1}{p}\sum_i x_i^0$, $\bar{z}^0 = \frac{1}{p}\sum_i z_i^0$.
2: for iteration $t = 0, 1, \ldots, T-1$ do
3:  for agent $i = 1, \ldots, p$ in parallel do
4:   Find $x_i^{t+1}$ that approximately solves
$$\min_{x_i} L_{\beta,i}(x_i, y^t, z_i^t), \qquad (8)$$
i.e., approximates $x_{i,\star}^{t+1}$ in (7), such that criterion (9) is satisfied. Upload $\Delta x_{i,t+1} = x_i^{t+1} - x_i^t$ and $\Delta z_{i,t+1} = \tau\beta(x_i^{t+1} - y_i^t)$ to the server.
5:  end for
6:  The server calculates $\bar{x}^{t+1} = \bar{x}^t + \frac{1}{p}\sum_i \Delta x_{i,t+1}$, $\bar{z}^{t+1} = \bar{z}^t + \frac{1}{p}\sum_{i=1}^{p}\Delta z_{i,t+1}$, and $y^{t+1} = \mathrm{Prox}_{\frac{1}{\beta p} g}(\bar{x}^{t+1} + \frac{1}{\beta}\bar{z}^{t+1})$, and broadcasts these variables to each agent.
7: end for

We propose Algorithm 1, which implements the ADMM rule (6) in a federated way, where $x_i^{t+1}$ inexactly solves (7) using stochastic gradient methods.
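To make the communication pattern concrete, the sketch below mirrors one round of Algorithm 1 in Python for the special case $g = \lambda\|\cdot\|_1$ (so the server prox is soft-thresholding). This is our own illustrative pseudocode, not the released implementation: `local_inexact_solve` is a hypothetical placeholder for the local SVRG solver of Proposition 3, and all variable names are ours.

```python
import numpy as np

def soft_threshold(u, thr):
    """Entrywise prox of thr*||.||_1; equals Prox_{(1/(beta*p)) g} when g = lam*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - thr, 0.0)

def fiaelt_round(x, z, x_bar, z_bar, y, local_inexact_solve, beta, tau, lam, p):
    """One communication round of Algorithm 1 (illustrative sketch, our notation).

    x, z         : length-p lists of local iterates x_i^t and dual variables z_i^t
    x_bar, z_bar : server-side running averages (1/p) sum_i x_i^t and (1/p) sum_i z_i^t
    y            : current server variable y^t, broadcast to every agent
    local_inexact_solve(i, y, z_i) : placeholder for the local solver of (8);
        it should return x_i^{t+1} satisfying the termination criterion (9).
    """
    dx_sum = np.zeros_like(y)
    dz_sum = np.zeros_like(y)
    for i in range(p):                       # agents run this loop body in parallel
        x_new = local_inexact_solve(i, y, z[i])
        dx = x_new - x[i]                    # Delta x_{i,t+1}, uploaded to the server
        dz = tau * beta * (x_new - y)        # Delta z_{i,t+1}, uploaded to the server
        x[i], z[i] = x_new, z[i] + dz        # z_i^{t+1} = z_i^t + tau*beta*(x_i^{t+1} - y^t)
        dx_sum += dx
        dz_sum += dz
    x_bar = x_bar + dx_sum / p               # server: x_bar^{t+1}
    z_bar = z_bar + dz_sum / p               # server: z_bar^{t+1}
    y_new = soft_threshold(x_bar + z_bar / beta, lam / (beta * p))  # y^{t+1} (Proposition 2)
    return x, z, x_bar, z_bar, y_new
```

Only the increments $\Delta x_{i,t+1}$ and $\Delta z_{i,t+1}$ need to be communicated; the server keeps the running averages and broadcasts $y^{t+1}$ back.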
When $\beta > L := \max_i L_i$, the local problem (8) amounts to minimizing a strongly convex smooth function with Lipschitz continuous gradient. Hence, using the stochastic method SVRG of Johnson & Zhang (2013), we obtain an $x_i^{t+1}$ that satisfies the following property.
Proposition 3. Consider (1). Set $\beta > L := \max_i L_i$. Let $\{(x_i^t, y_i^t, z_i^t)\}$ be generated by Algorithm 1. Apply SVRG of Johnson & Zhang (2013) with Option II, frequency $m_i$, learning rate $\eta_i$, and initialization $x_i^t$ to (8), with $m_i$ and $\eta_i$ chosen such that
$$\rho_i := \frac{1}{(\beta - L_i)\,\eta_i\,(1 - 2\eta_i(\beta + L_i))\,m_i} + \frac{2\eta_i(\beta + L_i)}{1 - 2\eta_i(\beta + L_i)} < 1.$$
Then criterion (9) is satisfied within at most $k_t^i = \log_{1/\rho_i}\frac{\beta + L_i}{r_i(\beta - L_i)}$ iterations of SVRG.
Remark 1. The above proposition shows that fixing any ri ∈ (0, 1), SVRG outputs an inexact solution of the local subproblem (8) within O(1) steps, independent of the number of communication rounds T. In contrast, the number of local updates required by other existing federated splitting methods is either unexplored or increases to infinity with T.
Remark 2. When (9) is required to hold deterministically, the subproblem remains the minimization of a strongly convex function. By well-known results, minimizing a strongly convex smooth function with plain gradient descent produces a linearly convergent sequence of iterates. Following the same argument as in the proof of Proposition 3, the local complexity is again of order $O(1)$.
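For concreteness, a minimal SVRG (Option II) loop for the local subproblem (8) could look as follows. This is our own sketch, not the authors' code: we assume $f_i$ is an empirical average of per-sample losses accessed through a hypothetical gradient oracle `grad_sample`, and we add the linear and quadratic terms of $L_{\beta,i}$ exactly while variance-reducing only the $f_i$ part.

```python
import numpy as np

def svrg_local(grad_sample, n_samples, x_init, y, z_i, beta, eta, m, n_epochs, rng=None):
    """SVRG (Option II of Johnson & Zhang, 2013) applied to the local subproblem (8),
        min_x  f_i(x) + <z_i, x - y> + (beta/2) * ||x - y||^2,
    with f_i(x) = (1/n_samples) * sum_j loss_j(x).  `grad_sample(j, x)` returns the
    gradient of loss_j at x.  Illustrative sketch; all names are ours, not the paper's code.
    """
    rng = rng or np.random.default_rng()

    def grad_fi(x):                         # full gradient of f_i
        g = np.zeros_like(x)
        for j in range(n_samples):
            g += grad_sample(j, x)
        return g / n_samples

    x_tilde = x_init.copy()                 # warm start at x_i^t (as in Proposition 3)
    for _ in range(n_epochs):               # n_epochs plays the role of k_t^i outer iterations
        mu = grad_fi(x_tilde)               # full gradient of f_i at the snapshot
        w = x_tilde.copy()
        inner = []
        for _ in range(m):                  # m = frequency m_i
            j = rng.integers(n_samples)
            # variance-reduced gradient of f_i, plus the exact gradient of the
            # linear and quadratic terms of L_{beta,i}
            g = grad_sample(j, w) - grad_sample(j, x_tilde) + mu + z_i + beta * (w - y)
            w = w - eta * g
            inner.append(w.copy())
        x_tilde = inner[rng.integers(m)]    # Option II: random inner iterate becomes the snapshot
    return x_tilde                          # returned as x_i^{t+1}
```

Under the settings of Corollary 2 ($\beta = 5L$, $m_i = 200$, $\eta_i = 1/(40L)$), running this loop for a fixed `n_epochs = 10` would correspond to the constant local budget discussed there.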
4 Convergence Analysis Of Algorithm 1
We analyze the convergence properties of the variables $X^t := [x_1^t; \ldots; x_p^t]$, $Y^t := [y_1^t; \ldots; y_p^t]$, $Z^t := [z_1^t; \ldots; z_p^t]$ generated by Algorithm 1. Throughout the paper, we also denote $L := \max_i L_i$, $r := \max_i r_i$, $X_*^{t+1} := [x_{1,*}^{t+1}; \ldots; x_{p,*}^{t+1}]$, and $W := \inf_X F(X) + \inf_Y G(Y) > -\infty$. First, the update rules of Algorithm 1 can be rewritten in terms of the stacked vectors $X^t, Y^t, Z^t$ as follows.
We first show the following property.
Proposition 4. The update rules in Algorithm 1 satisfy
$$\mathbb{E}_t\|X^{t+1} - X_*^{t+1}\|^2 \le r\,\|X^t - X_*^{t+1}\|^2, \qquad (11)$$
$$Z^{t+1} = Z^t + \tau\beta(X^{t+1} - Y^t), \qquad (12)$$
$$Y^{t+1} \in \operatorname{Argmin}_Y L_\beta(X^{t+1}, Y, Z^{t+1}). \qquad (13)$$
With Proposition 4, we can study $\{(X^t, Y^t, Z^t)\}$ to establish the convergence properties of Algorithm 1. For $\{(X^t, Y^t, Z^t)\}$, we have the following proposition, which is key to establishing our main convergence properties.
Proposition 5. Select hyperparameters $\beta \ge 5L$, $r_i \in (0, 0.01]$, $\tau \in [1/2, 1)$. Denote $\Gamma := \frac{1-\tau}{\tau}$, $\Theta := 2\beta^2 + 4L^2$, $\Lambda := 4L^2$, $\Upsilon := \frac{\Theta}{\tau\beta}\cdot\frac{4r}{1-2r}$, and $\delta := \frac{1}{4}(\beta - L) - 2\Upsilon$. Define
$$H(X, Y, Z, X', Z') := L_\beta(X, Y, Z) + \frac{\Gamma}{\tau\beta}\|Z - Z'\|^2 + \Upsilon\|X - X'\|^2$$
and $H_{t+1} := \mathbb{E}H(X^{t+1}, Y^{t+1}, Z^{t+1}, X^t, Z^t)$. Then for $t \ge 1$, it holds that $\delta \ge 0.1L$ and
$$H_{t+1} \le H_t - \delta\,\mathbb{E}\|X^t - X^{t+1}\|^2 - \frac{\beta}{2}\,\mathbb{E}\|Y^{t+1} - Y^t\|^2. \qquad (14)$$
Hence, the sequence {Ht} converges to some H∗ ≥ W.
Thanks to Proposition 5, we have the following property with respect to the successive changes.
Corollary 1. Consider (1) and let $(X^t, Y^t, Z^t)$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Then $\lim_t \mathbb{E}\|X^t - X^{t+1}\|^2 = \lim_t \mathbb{E}\|Y^{t+1} - Y^t\|^2 = \lim_t \mathbb{E}\|Z^{t+1} - Z^t\|^2 = \lim_t \mathbb{E}\|Y^t - X^t\|^2 = 0$.
Remark 3. Corollary 1, together with Propositions 1 and 4, shows that the expected successive changes of $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ generated by Algorithm 1 also converge to 0.
Based on Proposition 5, {(Xt, Y t, Zt)} has the following convergence property.
Theorem 1. Select hyper-parameters as in Proposition 5 and let $H_*$ be defined as in Proposition 5. Then
$$\sum_{t=0}^{T}\mathbb{E}\,d^{2}\big(0,\nabla F(Y^{t+1})+\partial G(Y^{t+1})\big)\le D\big(\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+\|X^{0}-Y^{0}\|^{2}\big)+D\big(L_{\beta}(X^{0},Y^{0},Z^{0})-H_{*}\big),$$
where $D$ is a constant determined by
$$D_1 := \frac{2\Gamma + \Theta\frac{8r}{1-2r} + 2}{\min\{\delta, \frac{1}{2}\beta\}},\qquad D_2 := (1+\Gamma)\frac{3(r+1)}{(L-\beta)^2} + D_1\frac{4}{(L-\beta)^2}\Big(\frac{L+\beta+1}{2} + 2\tau\beta(\Gamma+1) + \Upsilon + \frac{(L-\beta)^2}{8}\Big),\qquad D_3 := \max\{3,\; D_1\cdot 2\tau\beta(\Gamma+1)\},$$
with $\Gamma$, $\Upsilon$ and $\Theta$ defined in Proposition 5. Combining Theorem 1 with Proposition 1 and Proposition 3, we immediately obtain the following convergence rate of Algorithm 1.
Corollary 2. Select hyperparameters β = 5L, ri = 0.005, τ = 1/2 in Algorithm 1. Then the following convergence rate holds.
$$\sum_{t=0}^{T}\mathbb{E}\,d^{2}\Big(0,\sum_{i=1}^{p}\nabla f_{i}(y^{t+1})+\partial g(y^{t+1})\Big)\le pD\big(\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+\|X^{0}-Y^{0}\|^{2}\big)+pD\big(L_{\beta}(X^{0},Y^{0},Z^{0})-W\big),$$
where $D$ is the constant defined in Theorem 1. Furthermore, criterion (9) can be satisfied by running 10 iterations of SVRG (Johnson & Zhang, 2013) with Option II with frequency $m_i = 200$, learning rate $\eta_i = \frac{1}{40L}$, and initialization $x_i^t$ for (8).
Remark 4. Corollary 2 indicates that, compared with existing federated methods, we retain the same state-of-the-art convergence rate $O(1/T)$, with $T$ being the number of communication rounds, while only $O(1)$ local update steps are required for the local subproblem (8).
Figure 1: Results on Synthetic-{(0,0), (0.5, 0.5), (1,1)} dataset.
4.1 Sequential Convergence In The Deterministic Case
In this section, we further investigate the convergence of the sequence {(Xt, Y t, Zt)} generated by Algorithm 1 when (9) holds deterministically, i.e., holds without the expectation. We first show the properties of the set of accumulation points of {(Xt, Y t, Zt, Xt−1, Zt−1)}.
Proposition 6. Consider (1) and let {(Xt, Y t, Zt)} be generated by Algorithm 1 with (9) holding deterministically. Suppose assumptions in Proposition 5 hold. Suppose {(Xt, Y t, Zt)} is bounded. Then any accumulation point of {Y t} is a stationary point of (3).
Combining Proposition 6 with Proposition 1 and Proposition 2, we immediately have the subsequential convergence of the sequence generated by FIAELT.
Corollary 3. Let $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ be generated by Algorithm 1 with (9) holding deterministically. Let $(X^t, Y^t, Z^t)$ be defined as in Proposition 4. Suppose the assumptions in Proposition 6 hold. Then any accumulation point of $\{y^t\}$ is a stationary point of (1).
Next, we present the convergence rate of $(X^t, Y^t, Z^t)$.
Theorem 2. Consider (1) and Algorithm 1 with (9) holding deterministically. Let $(X^t, Y^t, Z^t)$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Let $H$ be defined as in Proposition 5 and suppose $H$ is a KL function with exponent $\alpha\in[0,1)$. Then $\{(X^t, Y^t, Z^t)\}$ converges globally. Denoting $(X^*, Y^*, Z^*) := \lim_t (X^t, Y^t, Z^t)$ and $d_s^t := \|(X^t, Y^t, Z^t) - (X^*, Y^*, Z^*)\|$, the following hold. If $\alpha = 0$, then $\{d_s^t\}$ converges finitely. If $\alpha\in(0, \frac{1}{2}]$, then there exist $b > 0$, $t_1\in\mathbb{N}$ and $\rho_1\in(0,1)$ such that $d_s^t \le b\rho_1^t$ for $t \ge t_1$. If $\alpha\in(\frac{1}{2}, 1)$, then there exist $t_2$ and $c > 0$ such that $d_s^t \le c\,t^{-\frac{1}{4\alpha - 2}}$ for $t \ge t_2$.
Remark 5. Proposition 3 and Theorem 2 jointly show that the local outputs $\{x_i^t\}_t$ and the server updates $y^t$ achieve global linear convergence towards a stationary point of (1) when the Kurdyka-Łojasiewicz (KL) exponent of the function $H$ is $\frac{1}{2}$. The precise determination of the KL exponent of $H$ is interconnected with the investigation of error bounds, which is beyond the scope of the present paper. Interested readers are referred to Attouch et al. (2010); Li & Pong (2018); Attouch et al. (2013); Zeng et al. (2021) for deeper insights.

Figure 4: Results of our algorithm on the FEMNIST dataset with different learning rates (ℓ1-norm regularizer).
5 Experimental Results
To evaluate the performance of our proposed FIAELT algorithm, we conduct experiments on both real and synthetic datasets. When $g = 0$ in (1), we compare our algorithm with FedDR Tran-Dinh et al. (2021), FedPD Zhang et al. (2021), FedAvg McMahan et al. (2017b), and FedADMM Zhou & Li (2022). When $g = \lambda\|\cdot\|_1$ for some $\lambda\in\mathbb{R}_{++}$, we compare our algorithm with FedMid Yuan et al. (2021), FedDualAvg Yuan et al. (2021), and FedDR. Following FedDR Tran-Dinh et al. (2021), we choose a neural network as our model; the details are deferred to the supplementary materials. For FedDR and FedPD, we use the code provided in Tran-Dinh et al. (2021), and we re-implement FedADMM based on it. All experiments are run on a Linux-based server with 8×A6000 GPUs with 48GB memory each. In accordance with the theoretical analysis, we sample all clients to perform updates in each communication round of our algorithm. We tune hyper-parameters carefully and show the best results for each algorithm. For evaluation metrics, we use training loss, training accuracy, and test accuracy. Our code is available at https://anonymous.4open.science/r/FIAELT_TMLR-D6C7/.

Results on synthetic datasets. Following the data generation process of Li et al. (2020a); Tran-Dinh et al. (2021), we generate three datasets: synthetic-{(0,0), (0.5,0.5), (1,1)}. All agents perform updates at each communication round. Our algorithm is compared on synthetic datasets in both iid and non-iid settings. The performance of the five algorithms on the non-iid synthetic datasets is shown in Figure 1. Our algorithm achieves better results than FedPD, FedADMM, FedAvg, and FedDR on all three synthetic datasets.

Results on the FEMNIST dataset. FEMNIST Cohen et al. (2017); Caldas et al. (2018) is a more complex, federated extension of MNIST. It has 62 classes (26 upper-case and 26 lower-case letters, 10 digits) and the data is distributed over 200 devices. Figure 2 depicts the results of all five algorithms on FEMNIST. As it shows, FIAELT achieves training accuracy and loss comparable to FedDR. In comparison with FedADMM, FedPD, and FedAvg, FIAELT yields a significant improvement in both training accuracy and loss. Our algorithm also attains much better test accuracy than the other four algorithms.
Results with the ℓ1 norm. Following FedDR Tran-Dinh et al. (2021), we also consider the composite setting with $g(x) := 0.01\|x\|_1$ and verify our algorithm with different learning rates and numbers of local SGD epochs. We conduct the experiment on the FEMNIST dataset and show the results in Figure 3. As the training loss and training accuracy show, FIAELT is competitive with FedDR and outperforms FedDualAvg and FedMid. In addition, in testing accuracy, FIAELT outperforms all the other methods. Figure 5 shows how different learning rates affect the performance of FIAELT on the FEMNIST dataset.
6 Conclusion
In this paper, we propose a federated inexact ADMM with a new local termination criterion. This criterion is efficient and can be satisfied within a number of iterations unrelated to the number of communication rounds, in particular when stochastic gradient methods are used as the local solver. Our new method attains the best-known complexity while having efficient local updates. Additionally, we prove that the proposed method has sequential convergence guarantees in the deterministic case: under KL assumptions, the whole generated sequence converges sublinearly, linearly, or even finitely. Our experiments demonstrate that the proposed method consistently outperforms state-of-the-art methods, especially in terms of testing accuracy.
References
Hédy Attouch and Jérôme Bolte. On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program., 116(1-2):5–16, 2009.
Hédy Attouch, Jérôme Bolte, Patrick Redont, and Antoine Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the kurdyka-lojasiewicz inequality. Math. Oper. Res., 35(2):438–457, 2010.
Hédy Attouch, Jérôme Bolte, and Benar Fux Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized gauss-seidel methods.
Math. Program., 137(1-2):91–129, 2013.
Yajie Bao, Michael Crawshaw, Shan Luo, and Mingrui Liu. Fast composite optimization and statistical recovery in federated learning. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, 2022.
Jérôme Bolte, Shoham Sabach, and Marc Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program., 146(1-2):459–494, 2014.
Jonathan M. Borwein, Guoyin Li, and Matthew K. Tam. Convergence rate analysis for averaged fixed point iterations in common fixed point problems. SIAM J. Optim., 27(1):1–33, 2017.
Sebastian Caldas, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. CoRR, abs/1812.01097, 2018.
Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Found.
Comput. Math., 9(6):717–772, 2009.
Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In 2017 international joint conference on neural networks (IJCNN), pp. 2921–2926.
IEEE, 2017.
Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, Ketan Rajawat, Mehdi Bennis, and Vaneet Aggarwal.
Fednew: A communication-efficient and privacy-preserving newton-type method for federated learning.
In International Conference on Machine Learning, ICML 2022, 17-23 July , Baltimore, Maryland, USA, 2022.
Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya Zhang, and Qi Tian. Fedskip: Combatting statistical heterogeneity with federated skip aggregation. In Xingquan Zhu, Sanjay Ranka, My T.
Thai, Takashi Washio, and Xindong Wu (eds.), IEEE International Conference on Data Mining, ICDM, Orlando, FL, USA, November 28 - Dec. 1, pp. 131–140. IEEE, 2022.
Yonghai Gong, Yichuan Li, and Nikolaos M. Freris. Fedadmm: A robust federated deep learning framework with adaptivity to system heterogeneity. In 38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12,, 2022.
Chao Huang, Jianwei Huang, and Xin Liu. Cross-silo federated learning: Challenges and opportunities.
CoRR, abs/2206.12949, 2022.
Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction.
In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 315–323, 2013.
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. Found. Trends Mach. Learn., 14 (1-2):1–210, 2021a.
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. Found. Trends Mach. Learn., 14 (1-2):1–210, 2021b.
Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Paolo Frasconi, Niels Landwehr, Giuseppe Manco, and Jilles Vreeken (eds.), Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2016, Riva del Garda, Italy, September 19-23, Proceedings, Part I, 2016.
Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. SCAFFOLD: stochastic controlled averaging for federated learning. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July, Virtual Event, 2020.
Guoyin Li and Ting Kei Pong. Douglas-rachford splitting for nonconvex optimization with application to nonconvex feasibility problems. Math. Program., 159(1-2):371–401, 2016.
Guoyin Li and Ting Kei Pong. Calculus of the exponent of kurdyka-łojasiewicz inequality and its applications to linear convergence of first-order methods. Found. Comput. Math., 18(5):1199–1232, 2018.
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Inderjit S. Dhillon, Dimitris S. Papailiopoulos, and Vivienne Sze (eds.), Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020a.
Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. In 8th International Conference on Learning Representations, ICLR, Addis Ababa, Ethiopia, April 26-30, 2020b.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas.
Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April, Fort Lauderdale, FL, USA, 2017a.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas.
Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April, Fort Lauderdale, FL, USA, 2017b.
Kishore Nandury, Anand Mohan, and Frederick Weber. Cross-silo federated training in the cloud with diversity scaling and semi-supervised learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pp. 3085–3089. IEEE, 2021.
Reese Pathak and Martin J. Wainwright. FedSplit: an algorithmic framework for fast federated optimization.
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020.
Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
R. Tyrrell Rockafellar and Roger J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften. Springer, 1998.
Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, and Lam M. Nguyen. FedDR - randomized douglasrachford splitting algorithms for nonconvex federated composite optimization. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021.
Han Wang, Siddartha Marella, and James Anderson. Fedadmm: A federated primal-dual algorithm allowing partial participation. In 2022 IEEE 61st Conference on Decision and Control (CDC), pp. 287–294. IEEE, 2022.
Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol., 10(2):12:1–12:19, 2019.
| Dataset | Size (Input × FC layer × Output) |
|---|---|
| Synthetic | 60 × 32 × 10 |
| MNIST | 784 × 128 × 10 |
| FEMNIST | 784 × 128 × 26 |
Table 3: The details of the neural networks in our numerical experiments.
Figure 5: Results of our algorithm on the FEMNIST dataset with different learning rates (ℓ1-norm regularizer).

Honglin Yuan, Manzil Zaheer, and Sashank J. Reddi. Federated composite optimization. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July, 2021.
Sheng Yue, Ju Ren, Jiang Xin, Sen Lin, and Junshan Zhang. Inexact-admm based federated meta-learning for fast and continual edge learning. In MobiHoc '21: The Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Shanghai, China, 26-29 July, 2021.
Liaoyuan Zeng, Peiran Yu, and Ting Kei Pong. Analysis and algorithms for some compressed sensing models based on L1/L2 minimization. SIAM J. Optim., 31(2):1576–1603, 2021.
Xinwei Zhang, Mingyi Hong, Sairaj V. Dhople, Wotao Yin, and Yang Liu. Fedpd: A federated learning framework with adaptivity to non-iid data. IEEE Trans. Signal Process., 69:6055–6070, 2021.
Shenglong Zhou and Geoffrey Ye Li. Communication-efficient admm-based federated learning. CoRR, abs/2110.15318, 2021. URL https://arxiv.org/abs/2110.15318.
Shenglong Zhou and Geoffrey Ye Li. Federated learning via inexact ADMM. CoRR, abs/2204.10607, 2022.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. R. Statist. Soc. B, 67(2):301–320, 2005.
A Supplement For Experiment
The details of the training models. For all datasets, we use neural networks with only fully-connected (FC) layers as training models. The sizes of the models are shown in Table 3. Our code is available at https://anonymous.4open.science/r/FIAELT-8CC5/.

Hyperparameter choice. The learning rates are 0.012 for the synthetic datasets and 0.009 for FEMNIST. For FedPD, FedDR, and FedProx, we follow Tran-Dinh et al. (2021) to select the hyper-parameters, including µ for FedProx, η for FedPD, and η, α for FedDR. For FedMid Yuan et al. (2021) and FedDualAvg Yuan et al. (2021), we also select the hyper-parameters that work best for plotting the performance comparison.

Additional results with different learning rates. Figure 5 shows how different learning rates affect the performance of FIAELT on the FEMNIST dataset.
Figure 6: Results on Synthetic-{(0,0), (0.5, 0.5), (1,1)} dataset.
Figure 7: Results on FEMNIST dataset.
A.1 Additional Results Comparing FIAELT With Non-ADMM Based FL Algorithms
We compare our method with FedAvg Li et al. (2020b), SCAFFOLD Karimireddy et al. (2020), and FedSkip Fan et al. (2022).

Results on synthetic datasets. Following the data generation process of Li et al. (2020a); Tran-Dinh et al. (2021), we generate three datasets: synthetic-{(0,0), (0.5,0.5), (1,1)}. All agents perform updates at each communication round. Our algorithm is compared on synthetic datasets in both iid and non-iid settings. The performance of the four algorithms on the non-iid synthetic datasets is shown in Figure 6. Our algorithm achieves better results than FedAvg, SCAFFOLD, and FedSkip on all three synthetic datasets.

Results on the FEMNIST dataset. FEMNIST Cohen et al. (2017); Caldas et al. (2018) is a more complex, federated extension of MNIST. It has 62 classes (26 upper-case and 26 lower-case letters, 10 digits) and the data is distributed over 200 devices. Figure 7 depicts the results of all four algorithms on FEMNIST. As it shows, compared with the other three methods, FIAELT yields a significant improvement in both training accuracy and loss. Our algorithm also attains much better test accuracy than the other three algorithms.
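As an illustration of the model shapes reported in Table 3, a minimal fully-connected network of the FEMNIST size could be constructed as below (our own sketch; the activation function is an assumption, since only the layer sizes are reported in this paper).

```python
import torch.nn as nn

# Minimal sketch of the fully-connected models in Table 3 (our illustration).
# sizes: (input_dim, hidden_dim, output_dim), e.g. (784, 128, 26) for FEMNIST.
def make_fc_model(input_dim, hidden_dim, output_dim):
    return nn.Sequential(
        nn.Linear(input_dim, hidden_dim),
        nn.ReLU(),                      # activation assumed; the paper only reports layer sizes
        nn.Linear(hidden_dim, output_dim),
    )

femnist_model = make_fc_model(784, 128, 26)
```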
B Convergence Analysis Of Algorithm 1
Proposition 1. If $X^* = (x_1^*, \ldots, x_p^*)$ is a stationary point of (3), then $x_1^*$ is a stationary point of (1). Furthermore, if $X = (x_1, \ldots, x_p)$ is an $\varepsilon$-stationary point of (3), then $x_1$ is a $p\varepsilon$-stationary point of (1).
Proof. Note that
$$\mathcal{C} = \{(x_1, \ldots, x_p) : x_1 - x_2 = 0,\ x_2 - x_3 = 0,\ \ldots,\ x_{p-1} - x_p = 0\}.$$
Using Theorem 6.14 of Rockafellar & Wets (1998), we have
where 1 is the vector in R p whose coordinates are all one.
This together with Corollary 10.9 and Proposition 10.5 of Rockafellar & Wets (1998) shows that for any $Y\in\mathrm{dom}\,\partial G$, $\partial G(Y)$ admits the representation (17). Suppose $Y^* = (y_1^*, \ldots, y_p^*)$ is a stationary point of (3). Then $Y^*\in\mathrm{dom}\,\partial G\subseteq\mathrm{dom}\,G$. Thus, $y_1^* = \cdots = y_p^*$.
In addition, it holds that
$$0 \in \nabla F(Y^{*}) + \partial G(Y^{*}) = \big(\nabla f_{1}(y_{1}^{*}), \ldots, \nabla f_{p}(y_{1}^{*})\big) + \big(\partial g(y_{1}^{*}), 0, \ldots, 0\big) + \Big\{\sum_{i=1}^{p-1}\big(0, \ldots, 0, \underbrace{\lambda_{i}}_{i\text{-th block}}, -\lambda_{i}, 0, \ldots, 0\big) : \lambda_{i} \in \mathbb{R}^{n}\Big\},$$
where the second equality uses (17) together with Exercise 8.8 and Proposition 10.5 of Rockafellar & Wets (1998). The above relation is equivalent to
...
Substituting $\lambda_1$ in (19) using the remaining equalities in the above relation, we have that
$$0 \in \sum_{i=1}^{p}\nabla f_i(y_1^*) + \partial g(y_1^*).$$
Thus $y_1^*$ is a stationary point of (1).
Now, suppose $Y = (y_1, \ldots, y_p)$ is an $\varepsilon$-stationary point of (3). Then $Y\in\mathrm{dom}\,\partial G\subseteq\mathrm{dom}\,G$. Thus, $y_1 = \cdots = y_p$ and
$$d^{2}\big(0, \nabla F(Y) + \partial G(Y)\big) \le \varepsilon. \qquad (20)$$
Using (17) and Proposition 10.5 of Rockafellar & Wets (1998), we have that
$$\begin{aligned} d^{2}\big(0, \nabla F(Y) + \partial G(Y)\big) &= \min_{\xi\in\partial g(y_{1}),\,\lambda_{i}\in\mathbb{R}^{n}} \Big\{\|\nabla f_{1}(y_{1}) + \xi + \lambda_{1}\|^{2} + \sum_{i=2}^{p-1}\|\nabla f_{i}(y_{1}) + \lambda_{i} - \lambda_{i-1}\|^{2} + \|\nabla f_{p}(y_{1}) - \lambda_{p-1}\|^{2}\Big\} \qquad (21)\\ &\ge \min_{\xi\in\partial g(y_{1}),\,\lambda_{i}\in\mathbb{R}^{n}} \frac{1}{p}\Big\|\sum_{i}\nabla f_{i}(y_{1}) + \xi\Big\|^{2} = \min_{\xi\in\partial g(y_{1})} \frac{1}{p}\Big\|\sum_{i}\nabla f_{i}(y_{1}) + \xi\Big\|^{2} = \frac{1}{p}\, d^{2}\Big(0, \sum_{i}\nabla f_{i}(y_{1}) + \partial g(y_{1})\Big). \end{aligned}$$
This together with (20) shows that $y_{1}$ is a $p\varepsilon$-stationary point.
B.1 Proofs Of Proposition 2
The problem in the update of $Y^{t+1}$ in (6) is the constrained problem
$$\min_{Y}\ g(y_1) + \langle Z^{t+1}, X^{t+1} - Y\rangle + \frac{\beta}{2}\|X^{t+1} - Y\|^2 \quad \text{s.t.}\quad y_2 = y_3 = \cdots = y_p = y_1. \qquad (22)$$
Since $\beta > L$, the objective in the above problem is strongly convex. Thus, there exists a unique solution $(y_1, y_2, \ldots, y_p)$ to (22). Denote the Lagrange multipliers for the above problem as $W = (w_2, \ldots, w_p)$. Then the Karush-Kuhn-Tucker conditions for the above problem are
$$0 \in \partial g(y_1) - z_1^{t+1} - \beta(x_1^{t+1} - y_1) - \sum_{i=2}^{p} w_i, \qquad (23)$$
$$0 = -z_i^{t+1} + w_i - \beta(x_i^{t+1} - y_i), \quad i = 2, \ldots, p, \qquad (24)$$
$$y_i = y_1, \quad i = 2, \ldots, p. \qquad (25)$$
Combining (24) with (25) gives
$$\sum_{i=2}^{p} w_i = \beta\sum_{i=2}^{p}(x_i^{t+1} - y_i) + \sum_{i=2}^{p} z_i^{t+1} = \beta\sum_{i=2}^{p} x_i^{t+1} - (p-1)\beta y_1 + \sum_{i=2}^{p} z_i^{t+1}.$$
This together with (23) shows that
$$0 \in \partial g(y_1) - \sum_{i=1}^{p} z_i^{t+1} - \beta\sum_{i=1}^{p} x_i^{t+1} + p\beta y_1,$$
which is equivalent to
$$0 \in \frac{1}{\beta p}\,\partial g(y_1) + y_1 - \frac{1}{p}\sum_{i=1}^{p}\Big(x_i^{t+1} + \frac{1}{\beta} z_i^{t+1}\Big).$$
This implies that $y_1 \in \mathrm{Prox}_{\frac{1}{\beta p} g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_i^{t+1} + \frac{1}{\beta} z_i^{t+1})\big)$. Recalling (25), we deduce that the solution of the problem in the third equation of (6) is $(y_1, \ldots, y_1)$ with this $y_1$.
Proposition 3. Consider (1). Set $\beta > L := \max_i L_i$. Let $\{(x_i^t, y_i^t, z_i^t)\}$ be generated by Algorithm 1. Apply SVRG of Johnson & Zhang (2013) with Option II, frequency $m_i$, learning rate $\eta_i$, and initialization $x_i^t$ to (8), with $m_i$ and $\eta_i$ chosen such that
$$\rho_i := \frac{1}{(\beta - L_i)\,\eta_i\,(1 - 2\eta_i(\beta + L_i))\,m_i} + \frac{2\eta_i(\beta + L_i)}{1 - 2\eta_i(\beta + L_i)} < 1.$$
Then criterion (9) is satisfied within at most $k_t^i = \log_{1/\rho_i}\frac{\beta + L_i}{r_i(\beta - L_i)}$ iterations of SVRG.
Proof. Note that $L_{\beta,i}(x, y_i^t, z_i^t)$ is strongly convex with modulus $\beta - L_i$ and $\nabla_x L_{\beta,i}(x, y_i^t, z_i^t)$ is Lipschitz continuous with modulus $L_i + \beta$. Let $\rho_i := \frac{1}{(\beta - L_i)\eta_i(1 - 2\eta_i(\beta + L_i))m_i} + \frac{2\eta_i(\beta + L_i)}{1 - 2\eta_i(\beta + L_i)}$, where $m_i$ and $\eta_i$ are the frequency and learning rate in SVRG, respectively. Using Theorem 1 of Johnson & Zhang (2013), after $k_t^i$ outer iterations of SVRG we have
$$\mathbb{E}_t^i\big[L_{\beta,i}(x_i^{t+1}, y^t, z_i^t) - L_{\beta,i}(x_{i,\star}^{t+1}, y^t, z_i^t)\big] \le \rho_i^{k_t^i}\big[L_{\beta,i}(x_i^t, y^t, z_i^t) - L_{\beta,i}(x_{i,\star}^{t+1}, y^t, z_i^t)\big]. \qquad (26)$$
Combining this with the strong convexity of $L_{\beta,i}(\cdot, y^t, z_i^t)$ and the Lipschitz continuity of $\nabla_x L_{\beta,i}(\cdot, y^t, z_i^t)$, we have that
$$\mathbb{E}_t^i\|x_i^{t+1} - x_{i,\star}^{t+1}\|^2 \le \frac{\beta + L_i}{\beta - L_i}\,\rho_i^{k_t^i}\,\|x_i^t - x_{i,\star}^{t+1}\|^2 \le r_i\|x_i^t - x_{i,\star}^{t+1}\|^2, \qquad (27)$$
where the second inequality is based on $\frac{\beta + L_i}{\beta - L_i}\rho_i^{k_t^i} \le r_i$. This completes the proof.
C Proof For Convergence Analysis
To prove the results in the section on convergence analysis of Algorithm 1, we first present the following well-known fact for strongly convex functions; see, e.g., Theorem 2 in Karimi et al. (2016).
Proposition 7. Let $f:\mathbb{R}^n\to\mathbb{R}$ be a strongly convex function with modulus $\mu$. Suppose in addition that $f$ is smooth and has Lipschitz continuous gradient with modulus $L$. Then there exists a unique minimizer $x^*$ of $f$, and it holds that
Proposition 2. Consider (3). Let $\{(X^{t+1}, Y^{t+1}, Z^{t+1})\}$ be generated by (6). Suppose $\beta > \max_i L_i$. Then the solution of the problem in the third equation of (6) is $(y_1, \ldots, y_1)$ with $y_1 = \mathrm{Prox}_{\frac{1}{\beta p} g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_i^{t+1} + \frac{1}{\beta} z_i^{t+1})\big)$.
We next prove Proposition 4. The second and third relations in Proposition 4 are obvious. We only need to show that $X^t$ satisfies (11). Using (9) and the definition $r = \max_i r_i$, we have
$$\mathbb{E}_t^i\|x_i^{t+1} - x_{i,*}^{t+1}\|^2 \le r_i\|x_i^t - x_{i,*}^{t+1}\|^2 \le r\|x_i^t - x_{i,*}^{t+1}\|^2;$$
summing over $i = 1, \ldots, p$, we obtain (11).
C.1 Details And Proofs Of Proposition 5
Before proving Proposition 5, we first present several properties of the problem
$$\min_{X} L_\beta(X, Y^t, Z^t), \qquad (28)$$
where $Y^t$ and $Z^t$ are defined as in Proposition 4.
Proposition 8. Consider (1). Let $(X^t, Y^t, Z^t)$ be defined as in Proposition 4. Let $\beta \ge \sum_i L_i$. Denote $X_\star^{t+1} := \operatorname{argmin}_X L_\beta(X, Y^t, Z^t)$.¹ Then the following statements hold:
(i) Denote $e^{t+1} := X^{t+1} - X_\star^{t+1}$. Then there exists $\xi^{t+1}\in\partial G(Y^{t+1})$ such that (29) and (30) hold.
(ii) It holds that
$$Z^{t+1} = (1-\tau)Z^t + \beta\tau e^{t+1} + \tau\nabla F(X_\star^{t+1}). \qquad (31)$$
(iii) Let $r = \max_i r_i$. It holds that
$$\mathbb{E}\|e^{t}\|^2 \le \frac{2r}{1-2r}\,\mathbb{E}\|X^{t} - X^{t-1}\|^2. \qquad (32)$$
¹The existence and uniqueness of $X_\star^{t+1}$ are thanks to $\beta \ge \max_i L_i$ and Proposition 7.
Proof. (i) follows from the first-order optimality conditions of (28) and (13). Combining (29) with (12), we have that
$$Z^{t+1} = (1-\tau)Z^t + \beta\tau e^{t+1} + \tau\nabla F(X_\star^{t+1}),$$
which is (31).
Now, we bound $\mathbb{E}\|e^t\|^2$. Denote $e_i^t := x_i^t - x_{i,*}^t$. Then using (27), we have that
$$\mathbb{E}_{t-1}\|e_i^t\|^2 \le r_i\|x_i^{t-1} - x_{i,*}^t\|^2 \le 2r_i\big(\|x_i^t - x_i^{t-1}\|^2 + \|e_i^t\|^2\big).$$
Here $c_i' := \frac{\beta + L_i}{\beta - L_i}$, and we denote $c' := \max_i c_i'$, $\rho := \max_i\rho_i$, $k_t := \min_i k_t^i$, and $r := \max_i r_i$. Summing both sides of the above inequality over $i = 1, \ldots, p$, we obtain that
$$\mathbb{E}_{t-1}\|e^t\|^2 \le 2r\big(\|X^t - X^{t-1}\|^2 + \|e^t\|^2\big).$$
Taking expectation on both sides over all the randomness and rearranging the above inequality, we obtain (32).
Now, we are ready to prove Proposition 5.
Proposition 5. Select hyperparameters $\beta \ge 5L$, $r_i \in (0, 0.01]$, $\tau \in [1/2, 1)$. Denote $\Gamma := \frac{1-\tau}{\tau}$, $\Theta := 2\beta^2 + 4L^2$, $\Lambda := 4L^2$, $\Upsilon := \frac{\Theta}{\tau\beta}\cdot\frac{4r}{1-2r}$, and $\delta := \frac{1}{4}(\beta - L) - 2\Upsilon$. Define
$$H(X, Y, Z, X', Z') := L_\beta(X, Y, Z) + \frac{\Gamma}{\tau\beta}\|Z - Z'\|^2 + \Upsilon\|X - X'\|^2$$
and $H_{t+1} := \mathbb{E}H(X^{t+1}, Y^{t+1}, Z^{t+1}, X^t, Z^t)$. Then for $t \ge 1$, it holds that $\delta \ge 0.1L$ and
$$H_{t+1} \le H_t - \delta\,\mathbb{E}\|X^t - X^{t+1}\|^2 - \frac{\beta}{2}\,\mathbb{E}\|Y^{t+1} - Y^t\|^2. \qquad (14)$$
Hence, the sequence {Ht} converges to some H∗ ≥ W.
Proof. Note that
$$\begin{aligned}
\mathbb{E}_t L_\beta(X^{t+1}, Y^t, Z^t) - L_\beta(X^t, Y^t, Z^t) &= \mathbb{E}_t\big[L_\beta(X^{t+1}, Y^t, Z^t) - L_\beta(X_\star^{t+1}, Y^t, Z^t)\big] + L_\beta(X_\star^{t+1}, Y^t, Z^t) - L_\beta(X^t, Y^t, Z^t)\\
&\le \rho^{k_t}\big(L_\beta(X^t, Y^t, Z^t) - L_\beta(X_\star^{t+1}, Y^t, Z^t)\big) + L_\beta(X_\star^{t+1}, Y^t, Z^t) - L_\beta(X^t, Y^t, Z^t)\\
&\le \rho^{k_t}\big(L_\beta(X^t, Y^t, Z^t) - L_\beta(X_\star^{t+1}, Y^t, Z^t)\big) - \frac{\beta - L}{2}\|X^t - X_\star^{t+1}\|^2\\
&\le \rho^{k_t}\big(L_\beta(X^t, Y^t, Z^t) - L_\beta(X_\star^{t+1}, Y^t, Z^t)\big) - \frac{\beta - L}{4}\mathbb{E}_t\|X^t - X^{t+1}\|^2 + \frac{\beta - L}{2}\mathbb{E}_t\|X^{t+1} - X_\star^{t+1}\|^2\\
&\le \rho^{k_t}\frac{\beta + L}{2}\|X^t - X_\star^{t+1}\|^2 - \frac{\beta - L}{4}\mathbb{E}_t\|X^t - X^{t+1}\|^2 + \frac{\beta - L}{2}\mathbb{E}_t\|e^{t+1}\|^2, \qquad (33)
\end{aligned}$$
where the first inequality makes use of (26), the second inequality is because $L_\beta(X, Y^t, Z^t)$ is strongly convex with modulus $\beta - \max_i L_i$ and $X_\star^{t+1}$ is the minimizer of $\min_X L_\beta(X, Y^t, Z^t)$, the third inequality uses Young's inequality, and the last inequality uses the Lipschitz continuity of $\nabla_X L_\beta(X, Y^t, Z^t)$.
Using the fact that $\|X^t - X_\star^{t+1}\|^2 \le 2\mathbb{E}_t\|X^t - X^{t+1}\|^2 + 2\mathbb{E}_t\|e^{t+1}\|^2$, (33) can be further passed to
$$\begin{aligned}
\mathbb{E}_t L_\beta(X^{t+1}, Y^t, Z^t) - L_\beta(X^t, Y^t, Z^t) &\le 2\rho^{k_t}\frac{\beta + L}{2}\mathbb{E}_t\|X^t - X^{t+1}\|^2 + 2\rho^{k_t}\frac{\beta + L}{2}\mathbb{E}_t\|e^{t+1}\|^2 - \frac{\beta - L}{4}\mathbb{E}_t\|X^t - X^{t+1}\|^2 + \frac{\beta - L}{2}\mathbb{E}_t\|e^{t+1}\|^2\\
&= \Big(2\rho^{k_t}\frac{\beta + L}{2} - \frac{\beta - L}{4}\Big)\mathbb{E}_t\|X^t - X^{t+1}\|^2 + \Big(2\rho^{k_t}\frac{\beta + L}{2} + \frac{\beta - L}{2}\Big)\mathbb{E}_t\|e^{t+1}\|^2 \qquad (34)\\
&\le \Big(2\rho^{k_t}\frac{\beta + L}{2} - \frac{\beta - L}{4} + \Big(2\rho^{k_t}\frac{\beta + L}{2} + \frac{\beta - L}{2}\Big)\frac{2r}{1-2r}\Big)\mathbb{E}_t\|X^t - X^{t+1}\|^2\\
&= \Big(\frac{\rho^{k_t}}{1-2r}(\beta + L) - \Big(\frac{1}{4} - \frac{r}{1-2r}\Big)(\beta - L)\Big)\mathbb{E}_t\|X^t - X^{t+1}\|^2,
\end{aligned}$$
where the second inequality uses (32). Next, using (12), we have
When τ ∈ (0, 1), combining (31) and the convexity of ∥ · ∥2, we have that
where the second inequality uses the Young's inequality for product, and the last inequality uses the Lipschitz continuity of ∇F. Rearranging the above inequality, we have that
∥Z t+1 − Z t∥ 2 ≤ 1 − τ τ ∥Z t − Z t−1∥ 2 − ∥Z t+1 − Z t∥ 2+ 2β 2∥e t+1 − e t∥ 2 + 2L 2∥Xt+1 ⋆ − Xt⋆∥ 2 ≤ 1 − τ τ ∥Z t − Z t−1∥ 2 − ∥Z t+1 − Z t∥ 2+ 2β 2∥e t+1 − e t∥ 2 (36) + 2L 2(1 + κ 2)∥Xt+1 − Xt∥ 2 + (1 + κ −2)∥e t+1 − e t∥ 2 = 1 − τ τ ∥Z t − Z t−1∥ 2 − ∥Z t+1 − Z t∥ 2+2β 2 + 4L 2∥e t+1 − e t∥ 2 + 4L 2∥Xt+1 − Xt∥ 2, where κ > 0 and the last inequality uses the definition of e t+1 and Young's inequality for products.
Using the definitions of $\Gamma$, $\Theta$ and $\Lambda$, (36) becomes
$$\|Z^{t+1}-Z^{t}\|^{2} \le \Gamma\big(\|Z^{t}-Z^{t-1}\|^{2}-\|Z^{t+1}-Z^{t}\|^{2}\big) + \Theta\|e^{t+1}-e^{t}\|^{2} + \Lambda\|X^{t+1}-X^{t}\|^{2}. \qquad (37)$$
Now, combining (34), (35) and (37), we obtain that
EtLβ(Xt+1, Y t, Zt+1) ≤ Lβ(Xt, Y t, Zt) + ρ kt 1 − 2r (β + L) − 1 4 −r 1 − 2r (β − L) Et∥Xt − Xt+1∥ 2 + Γ τ β ∥Z t−1 − Z t∥ 2 − Et∥Z t+1 − Z t∥ 2+ Θ τ β Et∥e t − e t+1∥ 2 + Λ τ β Et Xt − Xt+1 2 = Lβ(Xt, Y t, Zt) + ρ kt 1 − 2r (β + L) − 1 4 −r 1 − 2r (β − L) ∥Xt − Xt+1∥ 2 + Γ τ β ∥Z t−1 − Z t∥ 2 − Et∥Z t+1 − Z t∥ 2+ Θ τ β Et∥e t − e t+1∥ 2. Taking expectations with respect to X t, the above inequality implies
with respect to $\lambda^{t}$, the above inequality implies $$\begin{split}&\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})\ &+\left(\frac{\rho^{t_{k}}}{1-2r}(\beta+L)-\left(\frac{1}{4}-\frac{r}{1-2r}\right)(\beta-L)\right)\mathbb{E}|X^{t}-X^{t+1}|^{2}\ &+\frac{\Gamma}{r\beta}\left(\mathbb{E}|Z^{t-1}-Z^{t}|^{2}-\mathbb{E}|Z^{t+1}-Z^{t}|^{2}\right)+\frac{\Theta}{r\beta}\mathbb{E}|e^{t}-e^{t+1}|^{2}.\end{split}\tag{38}$$ (39) we obtain that Combining (32) with (38), we obtain that
$$\mathbb{E}L_\beta(X^{t+1}, Y^t, Z^{t+1}) \le \mathbb{E}L_\beta(X^t, Y^t, Z^t) + \Big(\frac{\rho^{k_t}}{1-2r}(\beta+L) - \Big(\frac{1}{4} - \frac{r}{1-2r}\Big)(\beta-L)\Big)\mathbb{E}\|X^t - X^{t+1}\|^2 + \frac{\Gamma}{\tau\beta}\Big(\mathbb{E}\|Z^{t-1} - Z^t\|^2 - \mathbb{E}\|Z^{t+1} - Z^t\|^2\Big) + \frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^t - X^{t-1}\|^2 + \frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^t - X^{t+1}\|^2. \qquad (39)$$
Recall that $k_t = \min_i k_t^i$, $L = \max_i L_i$, $\rho = \max_i \rho_i$, $r = \max_i r_i$, and each $k_t^i$ satisfies $\frac{\beta + L}{\beta - L}\rho^{k_t^i} \le r_i$. This implies
This together with (39) shows that
$$\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\frac{1}{4}(\beta-L)\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\underbrace{\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}}_{\Upsilon}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}+\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}. \qquad (40)$$
Finally, using the definitions of $\delta$ and $\Upsilon$, (40) further implies
$$\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\delta\mathbb{E}|X^{t}-X^{t+1}|^{2}$$ $$+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}|Z^{t-1}-Z^{t}|^{2}-\mathbb{E}|Z^{t+1}-Z^{t}|^{2}\right)$$ $$+\Upsilon\left(\mathbb{E}|X^{t}-X^{t-1}|^{2}-\mathbb{E}|X^{t+1}-X^{t}|^{2}\right).$$
Next, noting that Y t+1 is the minimizer of (13) which is β-strongly convex, it holds that
Summing (42) and (41), we have that
$$\mathbb{E}L_{\beta}(X^{t+1},Y^{t+1},Z^{t+1}) \le \mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\delta\,\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\Upsilon\left(\mathbb{E}\|X^{t}-X^{t-1}\|^{2}-\mathbb{E}\|X^{t+1}-X^{t}\|^{2}\right)-\frac{\beta}{2}\,\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.$$
Rearranging the above inequality and recalling the definition of $H(X,Y,Z,X',Z')$, we have that
$$\mathbb{E}H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}) \le \mathbb{E}H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})-\delta\,\mathbb{E}\|X^{t}-X^{t+1}\|^{2}-\frac{\beta}{2}\,\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.$$
Now we prove that $\{H_t\}$ is convergent. Inequality (14) implies that $\{H_t\}$ is nonincreasing. Since $F$ and $G$ are bounded from below, we denote $W = \inf F + \inf G$. Now we show that $H_t \ge W$ for all $t$. Suppose to the contrary that there exists $t_0$ such that $H_{t_0} < W$. Since (14) implies $\{H_t\}$ is nonincreasing, it holds that
Thus
On the other hand, using (41), for t ≥ 1, it holds that
Ht − W ≥ EH(Xt+1, Y t+1, Zt+1, Xt, Zt) − W (a) ≥ ELβ(Xt+1, Y t, Zt+1) − W ≥ EF(Xt+1) + G(Y t) + Xt+1 − Y t, Zt+1− W ≥ EXt+1 − Y t, Zt+1 (b) =1 τ β EZ t+1 − Z t, Zt+1=1 τ β E∥Z t+1∥ 2 − E∥Z t∥ 2 + E∥Z t+1 − Z t∥ 2 ≥1 τ β (E∥Z t+1∥ 2 − E∥Z t∥ 2). where (a) makes use of the definition of Ht and Lβ, (b) uses (12). Summing the above inequality from t = 0 to T and take T to the infinity, we have that
which contradicts with (43). Therefore, Ht is bounded from below. This together with (14) gives that {Ht} is convergent.
C.2 Details And Proofs Of Corollary 1
Thanks to Proposition 5, we have the following properties with respect to the successive changes.
Corollary 4. Consider (1) and let (Xt, Y t, Zt) be defined as in Proposition 4. Suppose assumptions in Proposition 5 hold. Then the following statements hold.
(i) It holds that
and
$$+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-W}{\min{\delta,\frac{\beta}{2}}},$$
where $C := 2\tau\beta(\Gamma+1)\|X^{0} - Y^{0}\|^{2} + \frac{4}{(L-\beta)^{2}}\Big(\frac{L+\beta+1}{2} + 2\tau\beta(\Gamma+1) + \Upsilon + \frac{(L-\beta)^{2}}{8}\Big)\|\nabla_{X}L_{\beta}(X^{0}, Y^{0}, Z^{0})\|^{2},$
with Θ and Γ being defined as in Proposition 5.
$$\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{T}\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*},$$
Rearranging the above inequality, we have that
(40), we have $H_{T}\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C$ $-\delta\sum_{t=1}^{T}\mathbb{E}|X^{t}-X^{t+1}|^{2}-\frac{\beta}{2}\sum_{t=1}^{T}\mathbb{E}|Y^{t+1}-Y^{t}|^{2}$ $\leq H_{1}-\delta\sum_{t=1}^{T-1}\mathbb{E}|X^{t}-X^{t+1}|^{2}-\frac{\beta}{2}\sum_{t=1}^{T-1}\mathbb{E}|Y^{t+1}-Y^{t}|^{2}$.
Now we bound H1. Note that
$$\begin{aligned}
H_1 &= \mathbb{E}L_\beta(X^1, Y^1, Z^1) + \frac{\Gamma}{\tau\beta}\mathbb{E}\|Z^1 - Z^0\|^2 + \Upsilon\,\mathbb{E}\|X^1 - X^0\|^2\\
&\overset{(i)}{\le} \mathbb{E}L_\beta(X^1, Y^0, Z^1) + \frac{\Gamma}{\tau\beta}\mathbb{E}\|Z^1 - Z^0\|^2 + \Upsilon\,\mathbb{E}\|X^1 - X^0\|^2\\
&\overset{(ii)}{\le} \mathbb{E}L_\beta(X^1, Y^0, Z^0) + \frac{\Gamma + 1}{\tau\beta}\mathbb{E}\|Z^1 - Z^0\|^2 + \Upsilon\,\mathbb{E}\|X^1 - X^0\|^2\\
&\overset{(iii)}{\le} \mathbb{E}\Big[L_\beta(X^0, Y^0, Z^0) + \nabla_X L_\beta(X^0, Y^0, Z^0)^\top(X^1 - X^0) + \frac{L+\beta}{2}\|X^1 - X^0\|^2\Big] + \tau\beta(\Gamma+1)\mathbb{E}\|X^1 - Y^0\|^2 + \Upsilon\,\mathbb{E}\|X^1 - X^0\|^2\\
&\le L_\beta(X^0, Y^0, Z^0) + \frac{1}{2}\|\nabla_X L_\beta(X^0, Y^0, Z^0)\|^2 + 2\tau\beta(\Gamma+1)\|X^0 - Y^0\|^2 + \Big(\frac{L+\beta+1}{2} + 2\tau\beta(\Gamma+1) + \Upsilon\Big)\mathbb{E}\|X^1 - X^0\|^2\\
&\overset{(iv)}{\le} L_\beta(X^0, Y^0, Z^0) + 2\tau\beta(\Gamma+1)\|X^0 - Y^0\|^2 + \frac{4}{(L-\beta)^2}\Big(\frac{L+\beta+1}{2} + 2\tau\beta(\Gamma+1) + \Upsilon + \frac{(L-\beta)^2}{8}\Big)\|\nabla_X L_\beta(X^0, Y^0, Z^0)\|^2, \qquad (48)
\end{aligned}$$
where (i) uses (42), (ii) uses (35), (iii) uses the property that $L_\beta(\cdot, Y, Z)$ is $(L+\beta)$-smooth in its first argument, and (iv) uses the following inequality:
$$\mathbb{E}\|X^1 - X^0\|^2 \le 2\mathbb{E}\|X^1 - X_*^1\|^2 + 2\mathbb{E}\|X^0 - X_*^1\|^2 \le 4\mathbb{E}\|X^0 - X_*^1\|^2 \le \frac{4}{(L-\beta)^2}\|\nabla_X L_\beta(X^0, Y^0, Z^0)\|^2.$$
Thus, summing (47) and (48), we have
$$\leq H_{1}-\delta\sum_{t=1}^{T-1}\mathbb{E}|X^{t}-X^{t+1}|^{2}-\frac{\beta}{2}\sum_{t=1}^{T-1}\mathbb{E}|Y^{t+1}-Y^{t}|^{2}$$
Proof. Summing (14) from t = 1 to T, it holds that
$$\lim_{t}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}=\lim_{t}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}=\lim_{t}\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}=\lim_{t}\mathbb{E}\|Y^{t}-X^{t}\|^{2}=0.\tag{46}$$
(ii) It holds that
where the second inequality is because $\{H_{t}\}$ is nonincreasing and convergent. This implies (44).
Taking T in the above inequality to infinity, we deduce that
where the last inequality is because $\{H_{t}\}$ is convergent. Therefore, $\{\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\}$ and $\{\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}\}$ are summable and
In addition, summing (37) from t = 1 to T, we have that
$$\begin{aligned}
\sum_{t=0}^{T}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+\Theta\sum_{t=1}^{T}\mathbb{E}\|e^{t}-e^{t+1}\|^{2}+\Gamma\sum_{t=1}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\Theta\frac{2r}{1-2r}\sum_{t=1}^{T}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}+\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\sum_{t=0}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\sum_{t=0}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}},
\end{aligned}\tag{51}$$
where the second inequality uses (32). Recalling the definition of $Z^{1}$, we have that
$$\mathbb{E}\|Z^{0}-Z^{1}\|^{2}\leq 3r\|X^{0}-X_{\star}^{1}\|^{2}+3\|X_{\star}^{1}-X^{0}\|^{2}+3\|X_{\star}^{1}-Y^{0}\|^{2}\leq\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}.$$
This together with (51) gives
$$\sum_{t=0}^{T}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}\leq(1+\Gamma)\left(\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\right)+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}}.$$
Taking T in the above inequality to infinity, we deduce that $\{\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}\}$ is summable, and using (12), we have that
This together with (50) gives that
C.3 Details And Proofs Of Theorem 1
Here, we prove Theorem 1.
Theorem 3. Consider (1). Let $\{(x_{1}^{t},\ldots,x_{p}^{t},y^{t},z_{1}^{t},\ldots,z_{p}^{t})\}$ be generated by Algorithm 1. Let $(X^{t},Y^{t},Z^{t})$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Then the following statements hold.
(i) There exists E > 0 such that
$$\|\nabla F(Y^{t+1})+\xi^{t+1}\|\leq E\left(\|X^{t+1}-X^{t}\|+\|Z^{t+1}-Z^{t}\|+\|Y^{t}-Y^{t+1}\|\right).\tag{53}$$
(ii) It holds that
$$\begin{aligned}
\frac{1}{T+1}\sum_{t=0}^{T}\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}&\leq\frac{R}{T+1}\left((1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\right)\\
&\quad+\frac{R}{T+1}\left(2\Gamma+\Theta\frac{8r}{1-2r}+2\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}},
\end{aligned}$$
where Γ and Θ are defined in Proposition 5, $H_{*}$ and C are defined in Proposition 5 and Corollary 4, respectively, and $R:=\max\left\{3(L+\beta)^{2}\frac{2r}{1-2r},\left(\frac{L}{\tau\beta}+1\right)^{2},(L+\beta)^{2}\right\}$.
Proof. Using (29), it holds that
Summing this with (30), we have that
$$0=\nabla F(Y^{t+1})+\xi^{t+1}+\nabla F(X_{\star}^{t+1})-\nabla F(Y^{t+1})+Z^{t}-Z^{t+1}+\beta(X_{\star}^{t+1}-X^{t+1})-\beta(Y^{t+1}-Y^{t}).$$
This implies that
$$\begin{aligned}
\|\nabla F(Y^{t+1})+\xi^{t+1}\|&\leq\|\nabla F(X_{\star}^{t+1})-\nabla F(Y^{t+1})\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|+\beta\|Y^{t+1}-Y^{t}\|\\
&\leq L\|X_{\star}^{t+1}-Y^{t+1}\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|+\beta\|Y^{t+1}-Y^{t}\|\\
&\leq L\|X_{\star}^{t+1}-X^{t+1}\|+L\|X^{t+1}-Y^{t}\|+(L+\beta)\|Y^{t}-Y^{t+1}\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|\\
&=(L+\beta)\|X_{\star}^{t+1}-X^{t+1}\|+\left(\frac{L}{\tau\beta}+1\right)\|Z^{t+1}-Z^{t}\|+(L+\beta)\|Y^{t}-Y^{t+1}\|,
\end{aligned}\tag{54}$$
where the last equality uses (12). Using (32), we have that $\mathbb{E}\|X_{\star}^{t+1}-X^{t+1}\|^{2}\leq\sqrt{\frac{2r}{1-2r}}\,\mathbb{E}\|X^{t+1}-X^{t}\|^{2}$.
Using this, (54) can be further passed to
$$\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}\leq(L+\beta)\sqrt{\frac{2r}{1-2r}}\,3\,\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\left(\frac{L}{\tau\beta}+1\right)3\,\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}+(L+\beta)\,3\,\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}.$$
This together with the Cauchy–Schwarz inequality gives
$$\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}\leq3(L+\beta)^{2}\frac{2r}{1-2r}\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\left(\frac{L}{\tau\beta}+1\right)^{2}\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}+(L+\beta)^{2}\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}.$$
This proves (53). Summing the above inequality from t = 0 to T, it holds that
$$\begin{aligned}
\sum_{t=0}^{T}\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}&\leq 3(L+\beta)^{2}\frac{2r}{1-2r}\sum_{t=0}^{T}\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\left(\frac{L}{\tau\beta}+1\right)^{2}\sum_{t=0}^{T}\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}+(L+\beta)^{2}\sum_{t=0}^{T}\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}\\
&\leq\max\left\{3(L+\beta)^{2}\frac{2r}{1-2r},\left(\frac{L}{\tau\beta}+1\right)^{2},(L+\beta)^{2}\right\}\cdot\sum_{t=0}^{T}\left(\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}+\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)\\
&\leq\max\left\{3(L+\beta)^{2}\frac{2r}{1-2r},\left(\frac{L}{\tau\beta}+1\right)^{2},(L+\beta)^{2}\right\}\cdot\Bigg((1+\Gamma)\left(\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\right)\\
&\qquad+\left(2\Gamma+\Theta\frac{8r}{1-2r}+2\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}}\Bigg),
\end{aligned}$$
where $C:=2\tau\beta(\Gamma+1)\|X^{0}-Y^{0}\|^{2}+\frac{4}{(L-\beta)^{2}}\left(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}\right)\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}$ and the last inequality uses (44) and (45). Dividing both sides by T + 1 and recalling that $\xi^{t+1}\in\partial G(Y^{t+1})$, we have the conclusion. Grouping the constants of $\|X^{0}-Y^{0}\|^{2}$, $\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}$ and $L_{\beta}(X^{0},Y^{0},Z^{0})$, we have that
$$\leq D\left(|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})|^{2}+|X^{0}-Y^{0}|^{2}+L_{\beta}(X^{0},Y^{0},Z^{0})-W\right),$$
where D collects the grouped constants; in particular, it involves $R$, $\max\{3,D_{1}\cdot 2\tau\beta(\Gamma+1)\}$ and $D_{2}$, with
$$D_{1}:=\frac{2\Gamma+\Theta\frac{8r}{1-2r}+2}{\min\{\delta,\frac{1}{2}\beta\}},\qquad D_{2}:=(1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}.$$
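As an illustration of the averaged stationarity bound in Theorem 3, the sketch below (again a toy, single-machine splitting rather than Algorithm 1; all data, constants, and step sizes are assumptions of this illustration) recovers a subgradient $\xi^{t+1}\in\partial G(Y^{t+1})$ from the optimality condition of the y-step and reports the running average of $\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}$, which by the theorem should decay at rate at least $O(1/T)$.

```python
# Illustration of the averaged stationarity measure in Theorem 3 on a toy
# single-machine splitting (an assumption of this sketch; not Algorithm 1):
#   min 0.5*||Ax - b||^2 + lam*||x||_1  written as  min F(x) + G(y) s.t. x = y.
# From the y-step optimality, xi = z + beta*(x - y) is a subgradient of G at y,
# so ||gradF(y) + xi|| is the stationarity residual whose running average the
# theorem bounds.  Data, step sizes, and iteration counts are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 60, 20, 0.1
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
L = np.linalg.eigvalsh(A.T @ A).max()
beta, tau = 4.0 * L, 1.0

gradF = lambda v: A.T @ (A @ v - b)
prox_G = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

x, y, z = np.zeros(d), np.zeros(d), np.zeros(d)
running_sum = 0.0
for t in range(1000):
    for _ in range(5):                            # inexact x-step
        x = x - (1.0 / (L + beta)) * (gradF(x) + z + beta * (x - y))
    z = z + tau * beta * (x - y)                  # dual step (assumed order)
    y = prox_G(x + z / beta, lam / beta)          # exact y-step
    xi = z + beta * (x - y)                       # subgradient of G at y (y-step optimality)
    running_sum += np.sum((gradF(y) + xi) ** 2)
    if (t + 1) in (10, 100, 1000):
        print(f"T = {t + 1:4d}   average squared residual = {running_sum / (t + 1):.3e}")
```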
C.3.1 Proofs Of Proposition 6 And Corollary 3
We provide the detailed version of Proposition 6 as follows.
Proposition 9. Consider (1). Let $\{(x_{1}^{t},\ldots,x_{p}^{t},y^{t},z_{1}^{t},\ldots,z_{p}^{t})\}$ be generated by Algorithm 1. Let $(X^{t},Y^{t},Z^{t})$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Suppose $\{(X^{t},Y^{t},Z^{t})\}$ is bounded and denote the set of accumulation points of $\{(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})\}$ as Ω. The following statements hold: (i) $\lim_{t}d((X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}),\Omega)=0$.
(ii) Any accumulation point of {Y t} is a stationary point of (1).
(iii) H ≡ H∗ on Ω.
Proof. For (i), let $Y^{*}$ be an accumulation point of $\{Y^{t}\}$ with $Y^{t_{i}}\to Y^{*}$. Using (29) and (30), there exists $\xi^{t_{i}}\in\partial G(Y^{t_{i}})$ such that
$$0=\nabla F(Y^{t_{i}})+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+Z^{t_{i}-1}+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1}).$$
The above relations show that
$$\begin{aligned}
0&=\nabla F(Y^{t_{i}})+\xi^{t_{i}}+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+Z^{t_{i}-1}-Z^{t_{i}}+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})-\beta(X^{t_{i}}-Y^{t_{i}})\\
&=\nabla F(Y^{t_{i}})+\xi^{t_{i}}+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+\tau\beta(X^{t_{i}}-Y^{t_{i}-1})+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})-\beta(X^{t_{i}}-Y^{t_{i}}),
\end{aligned}\tag{58}$$
where the equality makes use of (12). Now we show that $\lim_{i}\|X_{\star}^{t_{i}}-X^{t_{i}}\|=0$. Using Proposition 7 and (11), we have that
Since limt ∥Xt − Xt−1∥ = 0, we have that
$$\lim_{i}\|X_{\star}^{t_{i}}-X^{t_{i}}\|=0.\tag{59}$$
Next, we show that $\lim_{t}\|X^{t}-Y^{t-1}\|=0$. Using (12), it holds that
$$\begin{aligned}
\|Z^{t}-Z^{t-1}\|^{2}&\leq\Gamma\left(\|Z^{t-2}-Z^{t-1}\|^{2}-\|Z^{t}-Z^{t-1}\|^{2}\right)+\Theta\|e^{t-1}-e^{t}\|^{2}+\Lambda\|X^{t-1}-X^{t}\|^{2}\\
&\leq\Gamma\left(\|Z^{t-2}-Z^{t-1}\|^{2}-\|Z^{t}-Z^{t-1}\|^{2}\right)+\Theta\frac{4r}{1-2r}\|X^{t-1}-X^{t-2}\|^{2}+\left(\Lambda+\frac{4r}{1-2r}\right)\|X^{t-1}-X^{t}\|^{2},
\end{aligned}$$
where the first inequality uses (37) and the second inequality is due to (32). Summing the above inequality from t = 2 to T, we have that
$$\begin{aligned}
\sum_{t=2}^{T}\|Z^{t}-Z^{t-1}\|^{2}&\leq\Gamma\left(\|Z^{0}-Z^{1}\|^{2}-\|Z^{T}-Z^{T-1}\|^{2}\right)+\Theta\frac{4r}{1-2r}\sum_{t=2}^{T}\|X^{t-1}-X^{t-2}\|^{2}+\left(\Lambda+\frac{4r}{1-2r}\right)\sum_{t=2}^{T}\|X^{t-1}-X^{t}\|^{2}\\
&\leq\Gamma\|Z^{0}-Z^{1}\|^{2}+\Theta\frac{4r}{1-2r}\sum_{t=2}^{T}\|X^{t-1}-X^{t-2}\|^{2}+\left(\Lambda+\frac{4r}{1-2r}\right)\sum_{t=2}^{T}\|X^{t-1}-X^{t}\|^{2}.
\end{aligned}$$
Taking T in the above inequality to infinity and recalling that $\{\|X^{t-1}-X^{t}\|^{2}\}$ is summable, we deduce that $\sum_{t}\|Z^{t}-Z^{t-1}\|^{2}<\infty$. This together with (12) shows that
$$\lim_{t}\|X^{t}-Y^{t-1}\|=0.\tag{60}$$
Next, we show that $\lim_{t}\|Y^{t}-Y^{t-1}\|=0$. Using (12) again, we have that
This together with the fact that limt ∥Xt −Xt−1∥ = limt ∥Z t −Z t−1∥ = 0 implies that limt ∥Y t −Y t−1∥ = 0.
Since Y ti → Y ∗, combining (59), (60) and (46), we have that
$$\lim_{i}Y^{t_{i}-1}=\lim_{i}X^{t_{i}}=\lim_{i}X_{\star}^{t_{i}}=\lim_{i}Y^{t_{i}}=Y^{*}.$$
This together with the continuity of ∇F, the closedness of ∂G and (58) shows that
$$0\in\nabla F(Y^{*})+\partial G(Y^{*}).$$
This completes the proof.
Now we prove (ii). Fix any $(X^{*},Y^{*},Z^{*},\bar{X}^{*},\bar{Z}^{*})\in\Omega$. Then there exists $\{t_{i}\}_{i}$ such that $(X^{t_{i}},Y^{t_{i}},Z^{t_{i}},X^{t_{i}-1},Z^{t_{i}-1})$ converges to $(X^{*},Y^{*},Z^{*},\bar{X}^{*},\bar{Z}^{*})$. Thanks to Proposition 5 (ii), we know that
$$H_{*}=\lim_{i}H(X^{t_{i}},Y^{t_{i}},Z^{t_{i}},X^{t_{i}-1},Z^{t_{i}-1})\tag{61}$$
and
Since $Y^{t_{i}}$ is the minimizer of (13), it holds that
Taking $i$ in the above inequality to infinity, we have that
$$\limsup_{i}\left(G(Y^{t_{i}})+\langle X^{t_{i}}-Y^{t_{i}},Z^{t_{i}}\rangle+\frac{\beta}{2}\|X^{t_{i}}-Y^{t_{i}}\|^{2}\right)\leq G(Y^{*})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}.$$
This together with the closedness of G shows that $\lim_{i}G(Y^{t_{i}})=G(Y^{*})$. Combining this with the continuity of F, Corollary 4 (ii) and (61) gives that
$$H_{*}=\lim_{i}H(X^{t_{i}},Y^{t_{i}},Z^{t_{i}},X^{t_{i}-1},Z^{t_{i}-1})=F(X^{*})+G(Y^{*})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}=H(X^{*},Y^{*},Z^{*},\bar{X}^{*},\bar{Z}^{*}),$$
where the second equality uses (62).
Corollary 3. Let $\{(x_{1}^{t},\ldots,x_{p}^{t},y^{t},z_{1}^{t},\ldots,z_{p}^{t})\}$ be generated by Algorithm 1 with (9) holding deterministically. Let $(X^{t},Y^{t},Z^{t})$ be defined as in Proposition 4. Suppose the assumptions in Proposition 6 hold. Then any accumulation point of $\{y^{t}\}$ is a stationary point of (1).
Proof. From Proposition 2, we know that $Y^{t}=(y^{t},\ldots,y^{t})$ for any t. Let $y^{*}$ be any accumulation point of $\{y^{t}\}$. Then $Y^{*}=(y^{*},\ldots,y^{*})$ is an accumulation point of $\{Y^{t}\}$. Proposition 6 shows that $Y^{*}$ is a stationary point of (3). Applying Proposition 1, we deduce that $y^{*}$ is a stationary point of (1).
C.3.2 Details And Proofs For Theorem 2
To show the global convergence of the generated sequence, we first need to bound the subdifferential $\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})$.
Lemma 1. Consider (1). Let $\{(x_{1}^{t},\ldots,x_{p}^{t},y^{t},z_{1}^{t},\ldots,z_{p}^{t})\}$ be generated by Algorithm 1. Let $(X^{t},Y^{t},Z^{t})$ be defined as in Proposition 4. Suppose (9) is satisfied deterministically (i.e., without the expectation). Suppose the assumptions in Proposition 5 hold. Then there exists D > 0 such that
Proof. Using Exercise 8.8, Proposition 10.5 and Corollary 10.9 of Rockafellar & Wets (1998), it holds that
Thus,
$$\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})\supseteq\begin{pmatrix}\nabla F(X^{t+1})+Z^{t+1}+\beta(X^{t+1}-Y^{t+1})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\[2pt] \partial G(Y^{t+1})-Z^{t+1}-\beta(X^{t+1}-Y^{t+1})\\[2pt] X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\\[2pt] -\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\[2pt] -\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\end{pmatrix}\tag{63}$$
$$\supseteq\begin{pmatrix}\nabla F(X^{t+1})+Z^{t+1}+\beta(X^{t+1}-Y^{t+1})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\[2pt] 0\\[2pt] X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\\[2pt] -\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\[2pt] -\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\end{pmatrix},$$
where the second inclusion follows from (30).
Now, we bound each coordinate on the right-hand side of the relation. For the first one, we denote $A^{t+1}:=\nabla F(X^{t+1})+Z^{t+1}+\beta(X^{t+1}-Y^{t+1})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})$. Using (29), we have that
Thus, we deduce that $d^{2}(0,A^{t+1})$ is bounded above by
$$4(L+\beta)^{2}\|X^{t+1}-X_{\star}^{t+1}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{256r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2},\tag{64}$$
where we also make use of the Lipschitz continuity of ∇F.
For the third coordinate in (63), using (12), it holds that
$$\left\|X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\right\|^{2}\leq2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}.$$
Combining this with (63) and (64), we deduce that
$$\begin{aligned}
d^{2}(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))&\leq 4(L+\beta)^{2}\|X^{t+1}-X_{\star}^{t+1}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}\\
&\quad+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{256r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}\\
&\quad+\frac{\Theta^{2}}{\tau^{2}\beta^{2}}\frac{256r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+\frac{4\Gamma^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}.
\end{aligned}\tag{65}$$
Note that using (32), we have that
Combining (65) with (66), we have that
$$\begin{aligned}
d^{2}(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))&\leq 4(L+\beta)^{2}\frac{2r}{1-2r}\|X^{t+1}-X^{t}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}\\
&\quad+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{256r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}\\
&\quad+\frac{\Theta^{2}}{\tau^{2}\beta^{2}}\frac{256r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+\frac{4\Gamma^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}\\
&\leq D^{\prime}\left(\|X^{t+1}-X^{t}\|^{2}+\|Y^{t}-Y^{t+1}\|^{2}+\|Z^{t+1}-Z^{t}\|^{2}\right),
\end{aligned}$$
where $D^{\prime}$ is the maximum of the coefficients of $\|X^{t+1}-X^{t}\|^{2}$, $\|Y^{t}-Y^{t+1}\|^{2}$ and $\|Z^{t+1}-Z^{t}\|^{2}$ on the right-hand side of the above inequality. Finally, using the fact that $\sum_{i=1}^{3}a_{i}^{2}\leq\left(\sum_{i=1}^{3}a_{i}\right)^{2}$ for any $a_{1},a_{2},a_{3}\geq0$, the above inequality can be further passed to
Taking the square root on both sides of the above inequality, we obtain the conclusion.
Now we are ready to prove Theorem 2. In fact, we have already shown the key properties that will be needed: Proposition 5, Corollary 4, Proposition 9 and Lemma 1. The remaining steps are routine. We follow the proofs in Borwein et al. (2017); Bolte et al. (2014); Li & Pong (2016) and include them only for completeness.
Theorem 2. Consider (1) and Algorithm 1 with (9) holding deterministically. Let $(X^{t},Y^{t},Z^{t})$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Let H be defined as in Proposition 5 and suppose H is a KL function with exponent $\alpha\in[0,1)$. Then $\{(X^{t},Y^{t},Z^{t})\}$ converges globally. Denoting $(X^{*},Y^{*},Z^{*}):=\lim_{t}(X^{t},Y^{t},Z^{t})$ and $d_{s}^{t}:=\|(X^{t},Y^{t},Z^{t})-(X^{*},Y^{*},Z^{*})\|$, the following hold. If $\alpha=0$, then $\{d_{s}^{t}\}$ converges finitely. If $\alpha\in(0,\frac{1}{2}]$, then there exist $b>0$, $t_{1}\in\mathbb{N}$ and $\rho_{1}\in(0,1)$ such that $d_{s}^{t}\leq b\rho_{1}^{t}$ for $t\geq t_{1}$. If $\alpha\in(\frac{1}{2},1)$, then there exist $t_{2}$ and $c>0$ such that $d_{s}^{t}\leq ct^{-\frac{1}{4\alpha-2}}$ for $t\geq t_{2}$.
Proof. We first show that $\{(X^{t},Y^{t},Z^{t})\}$ is convergent. Suppose first that there exists $t_{0}$ such that $H_{t_{0}}=H_{*}$. Since $\{H_{t}\}$ is nonincreasing thanks to (14), we deduce that $H_{t}=H_{*}$ for all $t\geq t_{0}$. Using (14) again, we have that for all $t\geq t_{0}$ it holds that $X^{t}=X^{t-1}=\cdots=X^{t_{0}-1}$ and $Y^{t}=Y^{t-1}=\cdots=Y^{t_{0}}$. Recalling from (46) that $\lim_{t}(X^{t}-Y^{t})=0$, we have that $Y^{t_{0}}=X^{t_{0}-1}$. Thus, $X^{t+1}-Y^{t}=X^{t_{0}-1}-Y^{t_{0}}=0$ for all $t\geq t_{0}$. Combining this with (12), we deduce that $Z^{t+1}=Z^{t}=\cdots=Z^{t_{0}}$ for all $t\geq t_{0}$. Therefore, when there exists $t_{0}$ such that $H_{t_{0}}=H_{*}$, $\{(X^{t},Y^{t},Z^{t})\}$ converges finitely.
Next, we consider the case where $H_{t}>H_{*}$ for all t. Thanks to Proposition 9 (iii) and Lemma 6 of Bolte et al. (2014), there exist $r>0$, $a>0$ and $\psi\in\Psi_{a}$ such that
when $d((X,Y,Z,X^{\prime},Z^{\prime}),\Omega)\leq r$ and $H_{*}<H(X,Y,Z,X^{\prime},Z^{\prime})<H_{*}+a$. Thanks to Corollary 4 and Theorem 5, we know that there exists $t_{1}$ such that when $t>t_{1}$, $d((X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}),\Omega)\leq r$ and $H_{*}<H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})<H_{*}+a$. Thus, it holds that
$$\psi^{\prime}(H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})-H_{*})\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\geq 1.\tag{67}$$
Recalling (14), we have that
$$\begin{aligned}
\delta\|X^{t+1}-X^{t}\|^{2}+\frac{\beta}{2}\|Y^{t+1}-Y^{t}\|^{2}&\leq H_{t}-H_{t+1}\\
&\leq\psi^{\prime}(H_{t}-H_{*})\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\left(H_{t}-H_{t+1}\right)\\
&\leq\Delta_{\psi}^{t+1}\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})),
\end{aligned}\tag{68}$$
where $\Delta_{\psi}^{t+1}:=\psi(H_{t}-H_{*})-\psi(H_{t+1}-H_{*})$, the second inequality uses (67) and the last inequality uses the concavity of $\psi$. Using Lemma 1, we have from (68) that
$$\begin{aligned}
\frac{1}{2}\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|\right)^{2}&\leq\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|^{2}+\|Y^{t+1}-Y^{t}\|^{2}\right)\\
&\leq\delta\|X^{t+1}-X^{t}\|^{2}+\frac{\beta}{2}\|Y^{t+1}-Y^{t}\|^{2}\\
&\leq\Delta_{\psi}^{t+1}D\left(\|X^{t}-X^{t-1}\|+\|Y^{t}-Y^{t-1}\|+\|Z^{t}-Z^{t-1}\|\right),
\end{aligned}\tag{69}$$
where the first inequality uses the fact that $\frac{1}{2}(a+b)^{2}\leq a^{2}+b^{2}$ for any $a,b\in\mathbb{R}$. Now we bound $\|Z^{t}-Z^{t-1}\|$. Using (31), we have that
$$\begin{aligned}
\|Z^{t+1}-Z^{t}\|&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+\beta\tau\|e^{t+1}-e^{t}\|+\tau\|\nabla F(X_{\star}^{t+1})-\nabla F(X_{\star}^{t})\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+\beta\tau\|e^{t+1}-e^{t}\|+\tau L\|X_{\star}^{t+1}-X_{\star}^{t}\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+(\beta+L)\tau\|e^{t+1}-e^{t}\|+\tau L\|X^{t+1}-X^{t}\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+(\beta+L)\tau\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\tau L\|X^{t+1}-X^{t}\|,
\end{aligned}$$
where the third inequality uses the definition of $e^{t}$ and the last inequality uses (32). Rearranging the above inequality, it holds that
$$\|Z^{t}-Z^{t-1}\|\leq\frac{1+|1-\tau|}{1-|1-\tau|}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t+1}-Z^{t}\|\right)-\|Z^{t+1}-Z^{t}\|+\frac{2}{1-|1-\tau|}(\beta+L)\tau\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\frac{2}{1-|1-\tau|}\tau L\|X^{t+1}-X^{t}\|.$$
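For completeness, this rearrangement can be verified directly. Write $q:=|1-\tau|$, $s:=\|Z^{t}-Z^{t-1}\|$, $s^{\prime}:=\|Z^{t+1}-Z^{t}\|$, and let $a$ denote the sum of the two $X$-difference terms in the display preceding the rearrangement (shorthand introduced only for this remark). That display reads $s^{\prime}\leq qs+a$, which, upon expanding both sides and cancelling, is equivalent to
$$(1-q)(s+s^{\prime})\leq(1+q)(s-s^{\prime})+2a.$$
Dividing by $1-q>0$ (i.e., assuming $|1-\tau|<1$) gives $s+s^{\prime}\leq\frac{1+q}{1-q}(s-s^{\prime})+\frac{2a}{1-q}$, which is the rearranged bound above after moving $s^{\prime}$ to the right-hand side.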
Plugging this bound into (69), we have that
$$\begin{aligned}
\frac{1}{2}\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|\right)^{2}&\leq\Delta_{\psi}^{t+1}D\left(\|X^{t}-X^{t-1}\|+\|Y^{t}-Y^{t-1}\|\right)\\
&\quad+\Delta_{\psi}^{t+1}D\left(\frac{1+|1-\tau|}{1-|1-\tau|}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\right)-\|Z^{t}-Z^{t+1}\|\right)\\
&\quad+\Delta_{\psi}^{t+1}D\left(\frac{2(\beta+L)\tau}{1-|1-\tau|}\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\frac{2\tau L}{1-|1-\tau|}\|X^{t+1}-X^{t}\|\right)\\
&\leq\Delta_{\psi}^{t+1}DD_{1}\left(\Delta_{t}^{1}+\Delta_{t}^{2}\right),
\end{aligned}$$
where
Rearranging the above inequality and taking the square root on both sides, we obtain that
where the second inequality uses the fact that $\sqrt{ab}\leq\frac{1}{2}(a+b)$ for any $a,b>0$. Recalling the definitions of $\Delta_{t}^{1}$ and $\Delta_{t}^{2}$, and rearranging the above inequality, we have that
$$\begin{aligned}
\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|&\leq\sqrt{\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}\left(\Delta_{t}^{1}+\Delta_{t}^{2}\right)}\\
&\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\|X^{t}-X^{t-1}\|+\|X^{t+1}-X^{t}\|+\|Y^{t}-Y^{t-1}\|\right)\\
&\quad+\frac{1}{4}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\right)-\|Z^{t}-Z^{t+1}\|.
\end{aligned}$$
Further rearranging the above inequality, we have
Then, denoting $\Delta_{t+1}:=\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|+D_{2}\|Z^{t+1}-Z^{t}\|$, (70) can be further passed to
$$\frac{1}{4}\Delta_{t+1}\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\Delta_{t}-\Delta_{t+1}\right).$$
Summing the above inequality from t = t1 + 1 to T, we have that
where the last inequality uses the fact that $\psi>0$. Taking T in the above inequality to infinity, we see that $\sum_{t=t_{1}+1}^{\infty}\Delta_{t+1}<\infty$. Thus $\{(X^{t},Y^{t},Z^{t})\}$ is convergent.
Next, we show the convergence rate of the generated sequence. Denote the limit of $(X^{t},Y^{t},Z^{t})$ as $(X^{*},Y^{*},Z^{*})$. Define $S_{t}=\sum_{i=t+1}^{\infty}\Delta_{i}$. Noting that $\|X^{*}-X^{t}\|+\|Y^{*}-Y^{t}\|+\|Z^{t}-Z^{*}\|\leq\sum_{i=t+1}^{\infty}\Delta_{i}=S_{t}$, it suffices to show the convergence rate of $S_{t}$. Using (71), there exists $D_{2}>0$ such that
$$S_{t}\leq D_{2}\psi(H_{t}-H_{*})+\Delta_{t}=D_{2}\psi(H_{t}-H_{*})+\left(S_{t-1}-S_{t}\right).$$
Now we bound $\psi(H_{t}-H_{*})$. From the KL assumption, $\psi(w)=cw^{1-\theta}$ for some $c>0$. Thanks to Theorem 5 (ii) and (14), from the KL inequality it holds that
$$d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\geq\frac{1}{c(1-\theta)}\left(H_{t}-H_{*}\right)^{\theta}.\tag{73}$$
Combining this with Lemma 1, we have that
This is equivalent to
Using this, (72) can be further passed to
$$S_{t}\leq D_{3}\left(S_{t-1}-S_{t}\right)^{\frac{1-\theta}{\theta}}+\left(S_{t-1}-S_{t}\right),\tag{74}$$
where $D_{3}:=D_{2}c\left(c(1-\theta)D\right)^{\frac{1-\theta}{\theta}}$. Now we claim the following.
1. When $\theta=0$, $\{(X^{t},Y^{t},Z^{t})\}$ converges finitely.
2. When $\theta\in(0,\frac{1}{2}]$, there exist $a>0$ and $\rho_{1}\in(0,1)$ such that $S_{t}\leq a\rho_{1}^{t}$.
3. When $\theta\in(\frac{1}{2},1)$, there exists $c>0$ such that $S_{t}\leq ct^{-\frac{1-\theta}{2\theta-1}}$ for large t.
When $\theta=0$, we claim that there exists $t$ such that $H_{t}=H_{*}$. Suppose to the contrary that $H_{t}>H_{*}$ for all t. Then, for large t, (73) holds, i.e., $d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\geq\frac{1}{c(1-\theta)}>0$. However, thanks to Lemma 1 and Corollary 4, we know that $\lim_{t}d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))=0$, a contradiction. Therefore, there exists $t$ such that $H_{t}=H_{*}$. From the argument at the beginning of this proof, we see that $\{(X^{t},Y^{t},Z^{t})\}$ converges finitely.
When $\theta\in(0,\frac{1}{2}]$, we have $\frac{1-\theta}{\theta}\geq1$. Thanks to Corollary 4, we know that there exists $t_{2}$ such that $S_{t-1}-S_{t}<1$ for $t>t_{2}$. Thus, (74) can be further passed to $S_{t}\leq D_{3}(S_{t-1}-S_{t})+(S_{t-1}-S_{t})$. This implies that $S_{t}\leq\frac{D_{3}+1}{D_{3}+2}S_{t-1}$.
Thus there exist $a>0$ and $\rho_{1}\in(0,1)$ such that $S_{t}\leq a\rho_{1}^{t}$.
When $\theta\in(\frac{1}{2},1)$, it holds that $\frac{1-\theta}{\theta}<1$. From the last case, we know that $S_{t-1}-S_{t}<1$ when $t>t_{2}$. Using (74), we have that $S_{t}\leq D_{3}(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}}+(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}}=(D_{3}+1)(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}}$. This implies that
With this inequality, following the arguments in the proof of Theorem 2 of Attouch & Bolte (2009), starting from Equation (13) therein, there exists $c>0$ such that $S_{t}\leq ct^{-\frac{1-\theta}{2\theta-1}}$ for large t. Thus, $\{S_{t}\}$ converges sublinearly.
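To visualize the three regimes, the following sketch iterates the reduced recursion with equality (the worst case of (74) after the reductions above) and reports the decay of $S_{t}$ for one exponent below and one above $\frac{1}{2}$; the constant $D_{3}$, the initial value $S_{0}$, and the horizon are arbitrary choices made only for this illustration.

```python
# Numerical illustration (not part of the proof) of the KL regimes: we iterate
# the worst case of the reduced recursion with equality,
#     S_t = (D3 + 1) * (S_{t-1} - S_t) ** ((1 - theta) / theta),
# solving for S_t by bisection, and report the decay of S_t.  The constant D3,
# the starting value S_0, and the horizon are arbitrary choices for this sketch.
def next_S(S_prev, theta, D3=1.0):
    """Solve S = (D3 + 1) * (S_prev - S) ** ((1 - theta) / theta) for S in [0, S_prev]."""
    p = (1.0 - theta) / theta
    lo, hi = 0.0, S_prev
    for _ in range(80):                 # bisection; the right-hand side decreases in S
        mid = 0.5 * (lo + hi)
        if mid <= (D3 + 1.0) * (S_prev - mid) ** p:
            lo = mid                    # mid is still below the fixed point
        else:
            hi = mid
    return lo

for theta in (0.4, 0.75):
    regime = "at least linear" if theta <= 0.5 else "sublinear ~ t^(-(1-theta)/(2*theta-1))"
    print(f"theta = {theta}: expected regime is {regime}")
    S = 1.0
    for t in range(1, 10001):
        S = next_S(S, theta)
        if t in (10, 100, 1000, 10000):
            print(f"    t = {t:6d}   S_t = {S:.3e}")
```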