# Linear Speedup In Personalized Collaborative Learning

Anonymous authors. Paper under double-blind review.

## Abstract

Collaborative training can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other users who are potentially different) against its variance (due to the limited amount of data on any single user). In this work, we formalize the personalized collaborative learning problem as the stochastic optimization of a task 0 while given access to N related but different tasks 1, . . . , N. We give convergence guarantees for two algorithms in this setting: a popular collaboration method known as *weighted gradient averaging*, and a novel *bias correction* method. We explore conditions under which we can achieve linear speedup w.r.t. the number of auxiliary tasks N. Finally, we empirically study their performance, confirming our theoretical insights.
## 1 Introduction

Collaborative learning is a setup where agents/users/clients collaborate in the hope of better performance (faster convergence, smaller inference time, or better generalization) than each agent would achieve working alone. Federated learning and auxiliary learning are two examples of collaborative learning. In federated learning, multiple users train a machine learning model on their combined datasets (Kairouz et al., 2019). Collaboration vastly increases the amount of data available for training. However, the other users may be heterogeneous, i.e., they may have datasets and objectives which do not match those of the considered user. Combining data from such heterogeneous users can significantly hamper performance, sometimes yielding worse results than training alone (Yu et al., 2020).

Training alone and training on combined data represent two extremes, with the former having no bias but high variance and the latter having low variance but high bias. Alternatively, personalized collaborative learning algorithms (where each user only cares about its own performance) (Wang et al., 2019; Mansour et al., 2020) attempt to find 'in-between' models that trade off some bias against variance. In the best case, we can use the data from the N other users to reduce our variance by a factor of N (called *linear speedup*) while simultaneously not incurring any bias.

In auxiliary learning, the goal is to train one main task by combining it with auxiliary tasks that are related in some sense to the main task. In this sense, auxiliary learning can be seen as more general than personalization in federated learning.

In this work, we explore, from a purely theoretical lens, under what conditions a given agent can benefit from personalized collaborative learning. We consider an idealized scenario where the goal is optimizing a fixed user's stochastic function $f_0(x)$, while also having access to stochastic gradients of $N$ other collaborators $\{f_1(x), \ldots, f_N(x)\}$. We also neglect communication issues: the users can all be on the same server, for example, or the collaborators can be treated as auxiliary functions. In the latter case, one important question is how much we can benefit from such auxiliary "information" available to us (perhaps for free). We start with the simple strategy of **weighted gradient averaging** (WGA), which uses a weighted average of the gradient estimates as a pseudo-gradient and then takes an SGD step. We show that while there exist scenarios where this simple strategy suffices, it can also incur significant bias introduced by the collaborators. This motivates our main method of **bias correction** (BC), which uses past observed gradients to estimate and correct for these biases. We show that our proposed solution resolves the bias problems of WGA. Furthermore, we obtain a linear speedup in the number of agents that satisfy a mild dissimilarity constraint.

**Contributions.** Our main contributions include:

- Formalizing the collaborative stochastic optimization problem where an agent is required to minimize their objective by collaborating with other agents, in contrast to traditional federated learning.
- Proving convergence rates for *weighted gradient averaging*, and proposing and analyzing a novel *bias correction* algorithm.
- Showing that with the correct choice of hyper-parameters and under a mild condition on the dissimilarity between agents, bias correction enjoys a linear speedup in the number of (relatively similar) collaborators, with variance decreasing as the number of collaborators increases (and a bias going to zero in the number of steps).
## 2 Related Work

**Federated and decentralized learning.** Federated learning (FL) (Konecny et al., 2016; McMahan et al., 2017; Mohri et al., 2019) denotes a machine learning setting where a global set of training data is distributed over multiple users (also called agents or clients). These users form a 'federation' to train a global model on the union of all users' data. The training is coordinated by a central server, and each user's local data never leaves its device of origin. Owing to data locality and privacy awareness, FL has become prominent for privacy-preserving machine learning (Kairouz et al., 2019; Li et al., 2020a; Wang et al., 2021). Our setting is different because we learn the objective of one specific user, not the union of users. Decentralized learning refers to the analogous, more general setting without a central server, where users communicate peer-to-peer during training; see e.g. (Nedic, 2020).

**Personalization.** Due to device and data heterogeneity, a 'one model fits all' approach leads to poor accuracy on individual users. Instead, we need to learn personalized models for each user. Prominent approaches include performing additional local adaptation or fine-tuning (Wang et al., 2019; Fallah et al., 2020), or weighted averaging between a global model and a locally trained model (Mansour et al., 2020; Deng et al., 2020; Hanzely & Richtárik, 2020). Collins et al. (2020) and Khodak et al. (2019) investigate how such local fine-tuning can improve performance in some simple settings if the users' optima are close to each other. In another highly relevant line of work, Maurer et al. (2016); Tripuraneni et al. (2020); Koshy Thekumparampil et al. (2021); Feng et al. (2021) show how a shared representation can be leveraged to efficiently transfer knowledge between different tasks (and users). Li et al. (2020c); Mohri et al. (2019); Yu et al. (2020) investigate how FL distributes accuracy across users and show that personalization gives a more equitable distribution. We refer to (Kulkarni et al., 2020) for a broader survey of personalization methods. Unlike most of the above works, we consider the perspective of a single agent/user. Further, while our weighted gradient averaging is closely related to weighted model averaging, the bias correction method is novel and is directly motivated by our theory. Finally, while several of the above works (e.g. Mansour et al., 2020; Deng et al., 2020) also provide theoretical guarantees, they use a statistical learning theory viewpoint whereas we use a stochastic optimization lens.

Perhaps the works closest to ours are (Donahue & Kleinberg, 2020) and (Grimberg et al., 2021), both of which study model averaging. The former uses game theory to investigate whether self-interested players have an incentive to join an FL task; this is true as long as users achieve significantly better performance when training together than when training alone. Their work further highlights the importance of understanding when personalization can improve performance. More recently, Grimberg et al. (2021) consider weighted model averaging of two users for mean estimation in 1D. Both of these works study only toy settings with restrictive assumptions. Our results are more general and include non-convex optimization.

More recently, there was an attempt to formalize a new selfish variant of federated learning (Ruichen et al., 2022): a setting where we only care about the performance of a subset of internal clients while using/collaborating with external clients. This setting is a particular case of the one considered here (by taking client 0 to be the average of the internal clients). Also, Mestoukirdi et al. (2021) propose a user-centric formulation of federated learning that can be seen as a particular case of our weighted gradient averaging scheme; they further show empirically that communication-load problems can be overcome by clustering agents. These last two works lack rigorous theory to back their results.

**Auxiliary learning.** More generally, there is a framework that combines a main task with auxiliary tasks in order to improve the performance of the main task; this is usually done by minimizing a weighted linear combination of the main loss with the auxiliary losses, which is similar to WGA. Most works in this area lack convergence guarantees. The auxiliary tasks, or collaborators as we call them, can be seen as (hopefully more informative) priors, as in (Baifeng et al.). Works that use this idea include (Xingyu et al.) for reinforcement learning (which also optimizes the collaboration weights) and (Aviv et al.), which considers a general combination (not necessarily linear) implemented by a neural network and performs optimization via implicit differentiation. All these works are based on approximations and lack convergence guarantees, as stated before. In this work, we propose a simpler model (constant collaboration weights) but rigorously analyze the convergence of all algorithms, which has not been done before.

Lately, Chayti & Karimireddy (2022) took inspiration from stochastic variance reduction methods (Johnson & Zhang, 2013) to propose a way to perform optimization with access to auxiliary information, with theoretical guarantees; however, their work does not achieve the linear speedup in the number of collaborators that we obtain here.

**Control variates.** There is some similarity between our bias correction method and other control variate methods such as SCAFFOLD (Karimireddy et al., 2019); however, the local vs. global objectives as well as the resulting updates are different. Also, we use an exponential moving average whereas other control variates mainly use an SVRG-like correction (for a detailed discussion see Appendix A.2).
## 3 Setup And Assumptions

In this section, we formalize personalized collaborative optimization and discuss our assumptions.

## 3.1 Personalized Collaborative Stochastic Optimization

We model collaborative optimization as an environment where $N+1$ users, denoted $0, \ldots, N$, can interact with each other. Each user $k$ only has access to its own objective $f_k(x) := \mathbb{E}_{\xi^{(k)}}[f_k(x; \xi^{(k)})]$ (e.g. a loss function evaluated on their own data), where $\xi^{(k)}$ is a random variable from which we can sample without necessarily knowing its distribution (this covers the online optimization setting as well as optimizing over finite training datasets). The users can collaborate by sharing (stochastic) gradients that they compute on their private loss function $f_k$ at a shared input parameter $x$.

We formalize the personalized collaborative stochastic optimization problem as solving for user 0's goal:

$$\min_{x\in\mathbb{R}^{d}}\ f_{0}(x)\ , \tag{1}$$

by exchanging gradients with the other users. This exchange of information between the main user '0' and their collaborators can be done in many ways. In this work, to solve problem (1), user 0 updates their state $x_t$ using different variants of a gradient estimate $g(x_t)$ and step size $\eta_t$:

$$x_{t+1}=x_{t}-\eta_{t}\,g(x_{t})\ . \tag{2}$$

As illustrated in Algorithm 1, each collaborator $k$ computes an unbiased local gradient estimate $g_k(x_t) := \nabla_x f_k(x_t; \xi_t^{(k)})$ of $\nabla_x f_k(x_t)$ at $x_t$, and shares it with the main user 0. Using these helper gradients as well as its own gradient, user 0 then forms the final $g(x_t)$ and takes an update step.

**Algorithm 1 (Collaborative Stochastic Optimization).**
Require: collaborators $k = 0, \ldots, N$; $x_0$; $\eta_t$; $T$.
- For $t = 0, \ldots, T-1$:
  - For all users $k = 0, \ldots, N$ in parallel: sample $\xi_t^{(k)}$ and compute $g_k(x_t) := \nabla_x f_k(x_t; \xi_t^{(k)})$.
  - Aggregation on user 0: form $g(x_t)$ using the received $\{g_k(x_{t'})\}_{t' \leq t,\ k=0,\ldots,N}$.
  - Update $x_{t+1} = x_t - \eta_t\, g(x_t)$.
- Return $x_T$.

The simplest baseline to consider (henceforth called the 'Alone' method) is the case where user 0 ignores the collaborators and decides to work alone by setting $g(x_t) = g_0(x_t)$. In general, $g(x_t)$ can be formed in several different ways, using current as well as past gradients.
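To make the template concrete, the following is a minimal Python sketch of the loop in Algorithm 1. The names (`collaborative_sgd`, `grad_oracles`, `aggregate`) are our own illustrative choices, not from the paper, and the snippet is a sketch rather than the authors' implementation:

```python
import numpy as np

def collaborative_sgd(grad_oracles, x0, eta, T, aggregate):
    """Sketch of Algorithm 1: grad_oracles[k](x) returns the stochastic
    gradient g_k(x) of user k; aggregate() forms g(x_t) on user 0."""
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        grads = [g(x) for g in grad_oracles]  # every user computes g_k(x_t) in parallel
        x = x - eta * aggregate(grads, t)     # user 0 aggregates and takes an SGD step
    return x

# The 'Alone' baseline simply ignores the collaborators: g(x_t) = g_0(x_t).
alone = lambda grads, t: grads[0]
```

The different methods analyzed below (WGA and BC) correspond to different choices of the `aggregate` rule in this loop.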
## 3.2 Assumptions

**Notation.** For each user $k$, we denote by $x_k^\star$ a stationary point of $f_k$, and by $f_k^\star$ its corresponding value. We denote the gradient noise by $n_k(x, \xi) := g_k(x) - \nabla_x f_k(x)$.

We make the following common assumptions:

**A1 (Smoothness).** $f_0$ is $L$-smooth, i.e., $\forall\, x, y \in \mathbb{R}^d$:

$$\|\nabla_{x}f_{0}(x)-\nabla_{x}f_{0}(y)\|\leq L\|y-x\|\,.$$

**A2 ($\mu$-PL).** $f_0$ satisfies the $\mu$-PL condition, i.e., $\forall\, x \in \mathbb{R}^d$:

$$\|\nabla_{x}f_{0}(x)\|^{2}\geq 2\mu\big(f_{0}(x)-f_{0}^{\star}\big)\,.$$

And for each agent $k \in \{0, \ldots, N\}$:

**A3 ($\delta$-Bounded Hessian Dissimilarity, or $\delta$-BHD).** $\forall\, x \in \mathbb{R}^d$:

$$\|\nabla^{2}_{x}f_{k}(x)-\nabla^{2}_{x}f_{0}(x)\|\leq\delta\,.$$

**A4 (Gradient Similarity).** $\exists\, m, \zeta_k^2 \geq 0$ s.t. $\forall\, x \in \mathbb{R}^d$:

$$\|\nabla_{x}f_{k}(x)-\nabla_{x}f_{0}(x)\|^{2}\leq m\|\nabla_{x}f_{0}(x)\|^{2}+\zeta_{k}^{2}\,.$$

**A5 (Bounded Variance).** $\exists\, \sigma_k^2 \geq 0$ s.t. $\forall\, x \in \mathbb{R}^d$:

$$\mathbb{E}[\|n_{k}(x,\xi_{t}^{(k)})\|^{2}]\leq\sigma_{k}^{2}\,.$$

A1 is a very generic assumption. A2 is not assumed in the general non-convex case; it is only used, instead of convexity, in the µ-PL cases of our theorems. A3 is implied by smoothness, is equivalent (up to multiplying $\delta$ by a constant) to (Karimireddy et al., 2020, Assumption A2), and appears for quadratic functions in (Shamir et al., 2014; Reddi et al., 2016; Karimireddy et al., 2019). A4 is also very generic and coincides with (Ajalloeian & Stich, 2020, Assumption 4). Similar assumptions to bound the bias appeared in (Bertsekas & Tsitsiklis, 2000, though they require vanishing bias), in (Bertsekas, 2002, pp. 38–39), and more recently in (Karimireddy et al., 2020; 2019; Deng et al., 2020). A5 can be relaxed to allow an additional unbounded variance term which grows with the norm of the estimated gradient. Convergence results under this relaxed assumption are provided in the supplementary material; our main conclusions are maintained in this generalized case.

**Hessian dissimilarity $\delta$:** We note that Hessian dissimilarity as in A3 with $\delta = 2L$ is directly implied by $L$-smoothness of the users. In practice, if users are similar (and not adversarial) we expect $\delta \ll L$.

**Bias parameters $m$ and $\zeta^2$:** To showcase the intuition behind the bias parameters $m$ and $\zeta$, we can limit ourselves to the case of one collaborator '1'. The parameter $\zeta$ quantifies translation between $f_0$ and $f_1$, while $m$ quantifies scaling. To be more precise, if we were collaborating with a translated copy, i.e. $f_1(x) \equiv f_0(x) + a^\top x + b$, then $\zeta^2 = \|a\|^2$ and $m = 0$. If we were collaborating with a scaled copy, i.e. $f_1(x) = s f_0(x)$, then $m = (1-s)^2$ and $\zeta = 0$. Even more simply, $m$ determines whether the bias is bounded or not: $m = 0$ means the bias can be bounded independently of $x$, which is the simplest case. The constant bias term $\zeta^2$ also quantifies how different the two collaborators' goals are; this can be seen from the approximation $\zeta^2 \approx \|\nabla_x f_1(x_0^\star)\|^2 \propto f_1(x_0^\star) - f_1(x_1^\star)$, i.e., how distant the two stationary points are that they would have found by ignoring each other. In particular, $\zeta^2 = 0$ corresponds to the case of $f_0$ and $f_1$ sharing the same optimum.
## 4 Weighted Gradient Averaging

As a first basic algorithm, we introduce *weighted gradient averaging* (WGA) and analyze its convergence in the non-convex case and under the µ-PL condition. We show that for the special case of collaborative mean estimation, where every user has its own distribution, we exactly recover the existing theoretical results of Grimberg et al. (2021). While recovering analogous main ideas, our results are more general, applying to any smooth stochastic optimization problem in arbitrary dimensions with multiple collaborators.

**WGA Algorithm.** As illustrated in Algorithm 2, at each time step $t$, using the current state $x_t$, each collaborator $k = 1, \ldots, N$ computes $g_k(x_t)$, an unbiased local gradient estimate of $\nabla_x f_k(x_t)$, and sends it to user 0. Then, using these gradient estimates and the collaboration weights $\alpha_t \in [0,1]$ and $\{\tau_k\}$ with $\tau_k \geq 0$, $\sum_k \tau_k = 1$, the main user 0 forms

$$g(x_{t}):=(1-\alpha_{t})\,g_{0}(x_{t})+\alpha_{t}\sum_{k=1}^{N}\tau_{k}\,g_{k}(x_{t})\ ,$$

and performs an SGD step with the obtained gradient estimate $g(x_t)$, reaching the new state $x_{t+1} = x_t - \eta_t\, g(x_t)$.

**Algorithm 2 (WGA variant of Algorithm 1).**
Require: $x_0$; $\eta_t$; $\alpha_t$; $\{\tau_k\}_{k=1}^N$; $T$. Proceed as Algorithm 1, with the aggregation on user 0 given by $g(x_t) := (1-\alpha_t)\, g_0(x_t) + \alpha_t \sum_{k=1}^{N} \tau_k\, g_k(x_t)$.

**Remark.** WGA is SGD applied to the "modified" function $x \mapsto (1-\alpha_t) f_0(x) + \alpha_t \sum_{k=1}^{N} \tau_k f_k(x)$.
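Concretely, the WGA rule slots into the generic loop sketched in Section 3.1; `make_wga` below is a hypothetical helper name of ours, shown only to illustrate the aggregation:

```python
def make_wga(alpha, tau):
    """WGA aggregation: g(x_t) = (1 - alpha) g_0(x_t) + alpha * sum_k tau_k g_k(x_t)."""
    def aggregate(grads, t):
        helper = sum(t_k * g_k for t_k, g_k in zip(tau, grads[1:]))
        return (1 - alpha) * grads[0] + alpha * helper
    return aggregate

# Example: uniform weights tau_k = 1/N over N collaborators.
# wga = make_wga(alpha=0.5, tau=[1.0 / N] * N)
```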
We now analyze precisely the convergence rate of Algorithm 2 under heterogeneous data across the users, in the non-convex and µ-PL cases, in the following Theorem 4.1.

**Theorem 4.1 (Convergence of WGA).** *Under Assumptions A1, A4, A5, Algorithm 2 after $T$ rounds, for constant collaboration weight $\alpha_t := \alpha < 1/\sqrt{m}$ and constant step-size $\eta_t := \eta$, satisfies the following convergence bounds (where $F_t := \mathbb{E}[f_0(x_t)] - f_0^\star$):*

*Non-convex case. For $\eta = \min\Big(\frac{1}{L}, \sqrt{\frac{2F_0}{L\tilde{\sigma}^2(\alpha) T}}\Big)$:*

$$\frac{1-\alpha^{2}m}{2T}\sum_{t=0}^{T-1}\mathbb{E}\big[\|\nabla f_{0}(x_{t})\|^{2}\big]=\mathcal{O}\bigg(\frac{LF_{0}}{T}+\sqrt{\frac{LF_{0}\tilde{\sigma}^{2}(\alpha)}{T}}+\alpha^{2}\zeta^{2}\bigg)\,.$$

*µ-PL case. If in addition A2 holds, then for the choice $\eta = \min\Big(\frac{1}{L}, \frac{\log(\max(1,\,\frac{2\mu F_{0}T}{3L\tilde{\sigma}^{2}(\alpha)}))}{(1-\alpha^{2}m)\mu T}\Big)$:*

$$F_{T}=\tilde{\mathcal{O}}\bigg(F_{0}\exp\big(-\frac{\mu T}{L}\big)+\frac{L\tilde{\sigma}(\alpha)^{2}}{\mu^{2}T(1-\alpha^{2}m)^{2}}+\frac{\alpha^{2}\zeta^{2}}{\mu(1-\alpha^{2}m)}\bigg)\,,$$

*where $\tilde{\mathcal{O}}$ suppresses $\log(T)$ factors and we defined $\tilde{\sigma}^{2}(\alpha) := (1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}\sum_{k=1}^{N}\tau_{k}^{2}\sigma_{k}^{2}$ and $\zeta^{2} := \sum_{k=1}^{N}\tau_{k}\zeta_{k}^{2}$.*

Similar to (Karimi et al., 2016, Theorem 4), we can get rid of the logarithmic factors in the µ-PL case by choosing a decreasing step size.

**Bias-variance trade-off.** Crucially, the collaborative variance $\tilde{\sigma}^2(\alpha)$ is smaller than the individual variance $\sigma_0^2$ of user 0's gradient estimates; however, this decrease in variance is accompanied by an additional bias term $\mathcal{O}(\alpha^2\zeta^2)$. Hence we have established a bias-variance trade-off, which motivates the proper choice of the collaboration weight $\alpha$.

**Choice of $\{\tau_k\}_{k=1}^N$.** The best choice of $\{\tau_k\}_{k=1}^N$ is given by a constrained quadratic programming problem (see App. C.2). However, as $T \to \infty$, this best choice of the weights is completely dictated by the bias term: we have $\tau_k \propto \mathbb{1}\{k = \arg\min_l \zeta_l^2\}$, i.e., the best we can do is collaborate with the agents with the smallest bias.
**Application of WGA to collaborative mean estimation.** Weighted gradient averaging generalizes the model averaging problem studied in (Donahue & Kleinberg, 2020; Grimberg et al., 2021). We show how to recover their results here.

Suppose we want to estimate the mean $\mu_0$ of real stochastic samples $\{z_0^{(0)}, \ldots, z_0^{(T)}\}$ with $\mathbb{E}[z_0^{(t)}] = \mu_0$. Consider $\min_x f_0(x) := \frac{1}{2}(x-\mu_0)^2$, with unbiased stochastic gradients given as $\nabla f(x; z_0^t) = (x - z_0^t)$. Similarly, we define our collaborator $f_1(x) := \frac{1}{2}(x-\mu_1)^2$ with a different mean $\mu_1$ and its stochastic gradients. We have that $f_0$ is 1-PL and 1-smooth, $\zeta^2 = (\mu_1-\mu_0)^2$, and $m = 0$. Let us also use the starting point $x_0 = z_0^0$ to get $\mathbb{E}[F_0] \leq \sigma_0^2$. Plugging these values into Theorem 4.1, we get that

$$\mathbb{E}(x_{T}-\mu_{0})^{2}\leq\tilde{\mathcal{O}}\bigg(\sigma_{0}^{2}\exp{(-T)}+\frac{\tilde{\sigma}(\alpha)^{2}}{T}+\alpha^{2}(\mu_{0}-\mu_{1})^{2}\bigg)\,.$$

Note that $T$ here represents the number of stochastic samples of $\mu_0$ we use. Compare this with (Grimberg et al., 2021), who show a rate of $\mathcal{O}\big(\frac{\tilde{\sigma}(\alpha)^{2}}{T}+\alpha^{2}(\mu_{0}-\mu_{1})^{2}\big)$. Thus, we recover their results for a large enough $T$ and ignoring logarithmic factors. These logarithmic factors can be avoided by using a decreasing step size (see Appendix C.2).
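For illustration, a short simulation of this mean-estimation setting shows the iterate settling near the predicted bias floor $\alpha^2(\mu_0-\mu_1)^2$; the values $\mu_0 = 0$, $\mu_1 = 2$, $\sigma = 1$, $\alpha = 0.5$ below are our own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, mu1, sigma = 0.0, 2.0, 1.0      # assumed illustrative values
alpha, eta, T = 0.5, 0.1, 20_000

x = rng.normal(mu0, sigma)           # starting point x_0 = z_0^(0)
for t in range(T):
    g0 = x - rng.normal(mu0, sigma)  # stochastic gradient of f_0(x) = (x - mu0)^2 / 2
    g1 = x - rng.normal(mu1, sigma)  # collaborator's gradient with a shifted mean
    x -= eta * ((1 - alpha) * g0 + alpha * g1)

# The squared error hovers near the bias floor alpha^2 (mu0 - mu1)^2 = 1.0.
print((x - mu0) ** 2)
```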
**Speedup over training alone.** Due to the bias-variance trade-off in Theorem 4.1, the best choice of $\alpha$ is

$$\alpha_{\text{opt}}=\underset{\alpha\in(0,\frac{1}{\sqrt{m}})}{\operatorname{arg\,min}}\ \frac{L\tilde{\sigma}(\alpha)^2}{\mu^2T(1-\alpha^2m)^2}+\frac{\alpha^2\zeta^2}{\mu(1-\alpha^2m)}\,.$$

We show that a linear speedup can only be obtained if $m = 0$ and $\zeta^2 = 0$, which means $f_k \equiv f_0$ (collaboration with $N$ copies); in this case the inverse of the speedup is given by $1-\alpha_{\text{opt}} = \frac{1}{N+1}$. However, when the functions are minimized at the same point ($\zeta^2 = 0$) but with unbounded bias ($m > 0$), the collaboration weight $\alpha$ is bounded by $\frac{1}{\sqrt{m}}$ due to the term $1-\alpha^2 m$ in the denominator, which leads to a speedup relative to training alone that is sub-linear (see Figure 6). In the case where $\zeta^2 > 0$, the speedup gained from weighted averaging is further limited: in fact, as $T \to \infty$ we have $\alpha_{\text{opt}} \to 0$, making the gain vanish. Intuitively, WGA controls the bias introduced by using gradient estimates from the collaborators by down-weighting them. While this may reduce the bias in a single round, the bias keeps accumulating over multiple rounds. Thus, the benefit of WGA diminishes with increasing $T$. In the next section, we see how to directly remove this bias.
## 5 Bias Correction

In Section 4, bias was identified as the major problem limiting the performance of WGA. We therefore propose a bias correction algorithm that directly tackles this issue. Our strategy consists of estimating the bias between the gradients of $f_0$ and its collaborators $\{f_k\}_{k=1}^N$ using past gradients; this bias is then subtracted from the current gradient estimates of each collaborator. We first demonstrate the utility of such bias correction assuming access to some ideal bias oracle. Then, we show how to use an exponential moving average of past gradients to approximate the oracle.

**Algorithm 3 (Bias correction variant of Algorithm 1).**
Require: $x_0$; $\eta_t$; $\alpha_t$; $\beta_t$; $T$; $c_0 = b_0$. Proceed as Algorithm 1, with the aggregation on user 0 given by:
- $g_{\text{avg}} := \sum_{k=1}^{N} \tau_k\, g_k(x_t)$
- $g(x_t) := (1-\alpha_t)\, g_0(x_t) + \alpha_t (g_{\text{avg}} - c_t)$  (update)
- $b_t := g_{\text{avg}} - g_0(x_t)$  (observed bias)
- $c_{t+1} := (1-\beta_t)\, c_t + \beta_t\, b_t$  (next bias estimate)

**BC Algorithm.** As usual, at each time $t$, each user $k = 0, \ldots, N$ computes their own local gradient estimate $g_k(x_t)$. Then, as illustrated in Algorithm 3, user 0 uses $c_t$, an estimate of the bias $c_t \approx \sum_{k=1}^{N} \tau_k \nabla f_k(x_t) - \nabla f_0(x_t)$, together with the collaboration weight $\alpha_t$, to form

$$g(x_{t}):=(1-\alpha_{t})\,g_{0}(x_{t})+\alpha_{t}\Big(\sum_{k=1}^{N}\tau_{k}\,g_{k}(x_{t})-c_{t}\Big)\ .$$

User 0 then updates their parameters using this pseudo-gradient as $x_{t+1} = x_t - \eta_t\, g(x_t)$. We next discuss how to compute the estimate $c_t$.
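In the same style as the earlier WGA snippet, a minimal sketch of Algorithm 3's aggregation follows; the helper name `make_bias_corrected` is ours, and the snippet is an illustration of the stated updates, not the authors' code:

```python
import numpy as np

def make_bias_corrected(alpha, tau, beta, dim):
    """Sketch of Algorithm 3: subtract an EMA estimate c_t of the gradient bias."""
    c = np.zeros(dim)  # initial bias estimate c_0
    def aggregate(grads, t):
        nonlocal c
        g_avg = sum(t_k * g_k for t_k, g_k in zip(tau, grads[1:]))
        g = (1 - alpha) * grads[0] + alpha * (g_avg - c)  # bias-corrected pseudo-gradient
        b = g_avg - grads[0]                              # observed bias b_t
        c = (1 - beta) * c + beta * b                     # EMA update: next estimate c_{t+1}
        return g
    return aggregate
```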
## 5.1 Using A Bias Oracle

As a warm-up, let us suppose we have access to an oracle that gives a noisy unbiased estimate of the true bias:

$$c_{\text{oracle},t}=\sum_{k=1}^{N}\tau_{k}\nabla_{x}f_{k}(x_{t})-\nabla_{x}f_{0}(x_{t})+n_{\text{oracle},t}\,.$$

The quantity $n_{\text{oracle},t}$ is the noise of the oracle and is independent of the gradient estimates. Using this, we have that the update satisfies

$$\mathbb{E}\Big[\sum_{k=1}^{N}\tau_{k}\,g_{k}(x_{t})-c_{\text{oracle},t}\Big]=\nabla f_{0}(x_{t})\,.$$

Hence, this becomes similar to the case where $\zeta^2 = 0$ and $m = 0$ in WGA, enabling linear speedup. Theorem 5.1 formalizes this intuition.

**Theorem 5.1 (Convergence given a bias oracle).** *Under Assumption A1, using an ideal oracle of the mean bias $c_{\text{oracle},t}$ with variance $\mathbb{E}[\|n_{\text{oracle},t}\|^2] = v^2/N$ (i.e., $v^2$ is the variance of the bias oracle associated with each collaborator), for constant collaboration weight $\alpha_t := \alpha$ and constant step-size $\eta_t := \eta$, we have the following:*

*Non-convex case. For $\eta = \min\Big(\frac{1}{L}, \sqrt{\frac{2F_0}{L\tilde{\sigma}^2(\alpha) T}}\Big)$:*

$$\frac{1}{2T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla f_{0}(x_{t})\|^{2}]=\mathcal{O}\bigg(\frac{LF_{0}}{T}+\sqrt{\frac{LF_{0}\tilde{\sigma}^{2}(\alpha)}{T}}\bigg)\,.$$

*µ-PL case. If in addition A2 holds, then for the choice $\eta = \min\Big(\frac{1}{L}, \frac{\log(\max(1,\frac{2\mu F_{0}T}{3L\tilde{\sigma}(\alpha)^{2}}))}{\mu T}\Big)$:*

$$F_{T}=\tilde{\mathcal{O}}\left(F_{0}\exp\Big(-\frac{\mu T}{L}\Big)+\frac{L\tilde{\sigma}(\alpha)^{2}}{\mu^{2}T}\right)\,,$$

*where $\tilde{\sigma}^{2}(\alpha)=(1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}(\sigma_{a}^{2}+\frac{v^{2}}{N})$ and $\sigma_{a}^{2}=\sum_{k=1}^{N}\tau_{k}^{2}\sigma_{k}^{2}$.*

**Choice of the weights $\tau_k$.** We choose these weights so as to minimize $\tilde{\sigma}^2(\alpha)$; it is easy to show that there is a choice such that $\sigma_a^2 \leq \frac{\sum_{k=1}^{N}\sigma_k^2}{N^2}$. To simplify the discussion, we suppose $\frac{\sum_{k=1}^{N}\sigma_k^2}{N} = \sigma_0^2$ and replace $\sigma_a^2$ by $\sigma_0^2/N$.

**Speedup over training alone.** First, note that the rate of Theorem 5.1 with $v^2 = 0$ matches Theorem 4.1 with $m = 0$ and $\zeta^2 = 0$. We examine two cases.

- If $\sigma_0^2 > 0$: In this case, we choose

$$\alpha_{\text{opt}}\in\mathop{\arg\min}_{\alpha}\,\tilde{\sigma}^2(\alpha)=\frac{N}{N+1+\frac{v^2}{\sigma_0^2}}\,,$$

giving $\tilde{\sigma}^{2}(\alpha_{\text{opt}})=\sigma_{0}^{2}\,\frac{1+\frac{v^{2}}{\sigma_{0}^{2}}}{N+1+\frac{v^{2}}{\sigma_{0}^{2}}}$. For $N$ large enough ($N \geq \frac{v^2}{\sigma_0^2}+1$), this simplifies to $\tilde{\sigma}^2(\alpha_{\text{opt}}) = \mathcal{O}\big(\frac{\sigma_0^2}{N}\big)$, and we obtain a convergence rate of $\mathcal{O}\big(\sqrt{\frac{\sigma_0^2}{NT}}\big)$ in the general non-convex case and $\mathcal{O}\big(\frac{\sigma_0^2}{\mu^2 NT}\big)$ under the µ-PL inequality. Thus, we achieve linear speedup.

- If $\sigma_0^2 = 0$: the baseline here is gradient descent. If $v^2 \neq 0$, then both the non-convex and µ-PL convergence rates are slower than GD; the best choice of collaboration weight here is $\alpha = 0$.
## 5.2 Approximating The Oracle Using EMA

The previous discussion shows that, given access to a bias oracle, bias correction gives a significant speedup even when the bias is large, i.e., when $m$ and $\zeta^2$ are large. Algorithm 3 shows how we can use an exponential moving average (EMA) of past gradients to estimate this bias without an oracle:

$$c_{t+1}:=(1-\beta_{t})\,c_{t}+\beta_{t}\Big(\sum_{k=1}^{N}\tau_{k}\,g_{k}(x_{t})-g_{0}(x_{t})\Big)\,.$$

Intuitively, this averages over approximately $1/\beta$ past independent stochastic bias estimates, reducing the variance of $c_t$. We next examine the effect of replacing our bias oracle with such a $c_t$.
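As a quick sanity check of this intuition, under the simplifying assumption of i.i.d. bias estimates at a fixed point $x$, the stationary variance of the EMA is $\beta\sigma^2/(2-\beta)$, i.e., it behaves like an average over roughly $1/\beta$ samples (up to a factor of 2). The snippet below verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma, steps = 0.01, 1.0, 200_000

c, tail = 0.0, []
for t in range(steps):
    b = rng.normal(0.0, sigma)        # i.i.d. stand-in for the noisy bias estimate b_t
    c = (1 - beta) * c + beta * b     # the EMA update of Algorithm 3
    if t > steps // 2:                # discard burn-in
        tail.append(c)

print(np.var(tail))                   # ~ beta * sigma^2 / (2 - beta) ≈ 0.005
print(beta * sigma**2 / (2 - beta))   # closed-form stationary variance
```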
**Theorem 5.2 (Convergence of bias correction).** *Under Assumptions A1 and A3–A5, Algorithm 3 with constant collaboration weight $\alpha_t := \alpha$ and constant step-size $\eta_t := \eta \leq \min\big(\frac{1}{L}, \frac{1}{6\alpha^2\delta^2}\big)$ satisfies the following:*

*Non-convex case. For $\beta_t = \min\Big(1, \big(\frac{10\delta^2(\tilde{\zeta}^2/T+\sigma_0^2+\sigma_a^2)}{\sigma_0^2+\sigma_a^2}\big)^{1/3}\eta^{2/3}\Big)$ we have:*

$$\frac{1}{4T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla f_{0}(x_{t})\|^{2}]\leq\frac{F_{0}}{\eta T}+\frac{4\alpha^{2}E_{0}}{\beta T}+12\alpha^{2}\big((\sigma_{0}^{2}+\sigma_{a}^{2})(\tilde{\zeta}^{2}/T+\sigma_{0}^{2}+\sigma_{a}^{2})\big)^{1/3}(\delta\eta)^{2/3}+\frac{L\sigma^{2}(\alpha)}{2}\eta+10\alpha^{2}\delta^{2}\sigma^{2}(\alpha)\eta^{2}\,.$$

*µ-PL case. For $\beta_t = \min\big(1, (10\delta^2)^{1/3}\eta^{2/3}\big)$ we have:*

$$F_{T}\leq\Big(1-\frac{\mu\eta}{2}\Big)^{T}\Phi_{0}+\frac{L\sigma^{2}(\alpha)}{\mu}\eta+24\alpha^{2}\big(\sigma_{0}^{2}+\sigma_{a}^{2}\big)^{2/3}(\delta\eta)^{2/3}/\mu\,,$$

*where $\Phi_{0} = F_{0} + \frac{2\alpha^{2}\eta}{\beta}E_{0} + \frac{10\alpha^{2}\eta}{\beta^{2}}\tilde{\zeta}^{2}$, $F_t = \mathbb{E}[f_0(x_t)] - f_0^\star$, $E_{0} = \mathbb{E}[\|c_{0}-\nabla f_{1}(x_{0})+\nabla f_{0}(x_{0})\|^{2}]$, $\sigma^{2}(\alpha) := (1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}\sigma_{a}^{2}$, $\tilde{\zeta}^{2} := 2(1+m)(\mathbb{E}[\|\nabla f_{0}(x_{0})\|^{2}]+2\zeta^{2})$, $\sigma_{a}^{2}=\sum_{k=1}^{N}\tau_{k}^{2}\sigma_{k}^{2}$, and $\zeta^{2}=\sum_{k=1}^{N}\tau_{k}\zeta_{k}^{2}$.*
**Discussion:**

- **Significance of the terms.** In the non-convex inequality of Theorem 5.2, the first term measures how fast the initial condition is forgotten, the second term measures how the initial bias estimate affects the optimization, and the third term measures the effect of having used noisy (and past-dependent) estimates of the bias.

- **Bias correction works.** We see that $\tilde{\zeta}^2$ is divided by $T$, which means that our bias correction strategy indeed succeeds in correcting the bias $\zeta^2$. However, using the EMA adds the term $12\alpha^{2}\big((\sigma_{0}^{2}+\sigma_{a}^{2})(\tilde{\zeta}^{2}/T+\sigma_{0}^{2}+\sigma_{a}^{2})\big)^{1/3}(\delta\eta)^{2/3}$, which is greater than the noise term $\frac{L\sigma^2(\alpha)}{2}\eta$ unless we limit ourselves to collaborators with small Hessian dissimilarity $\delta$.

- **Condition on the dissimilarity $\delta$.** Theorem 5.2 shows that to gain from the collaboration (i.e., to do better than training alone) we need $(\delta\eta)^{2/3} \ll \eta$. If we fix $T$, the optimal $\eta$ in the non-convex case is of order $\frac{1}{\sqrt{T}}$, so we would need $\delta^2 = o\big(\frac{1}{\sqrt{T}}\big)$; in the µ-PL case the optimal $\eta$ scales as $\frac{1}{T}$, so that we need $\delta^2 = o\big(\frac{1}{T}\big)$ in this case.

**Remark.** The condition on the similarity parameter $\delta$ is reasonable since, in particular, it eliminates adversarial agents, which would have a large $\delta$.

**Choice of the weights $\tau_k$.** We show (see C.4) that as $T \to \infty$ the best choice of these weights is completely dictated by the variance term $\sigma_a^2$. In particular, there is always a choice such that $\sigma_a^2 \leq \frac{\sum_{k=1}^{N}\sigma_k^2}{N^2}$. This means that $\sigma_a^2$ scales as $1/N$; for simplicity's sake we suppose $\frac{\sum_{k=1}^{N}\sigma_k^2}{N} \leq \sigma_0^2$ and replace $\sigma_a^2$ by $\sigma_0^2/N$.

**Corollary 5.3 (Linear speedup of BC).** *For $\sigma_0^2 > 0$ and a fixed horizon $T$, suppose that we have a mechanism to select collaborators with $\delta^2 = o\big(\frac{1}{\sqrt{T}}\big)$ in the non-convex case and $\delta^2 = o\big(\frac{1}{T}\big)$ in the µ-PL case. Then there is an appropriate choice of the weights $\alpha, \{\tau_k\}$ for which, in leading order of $T$, we have:*

*Non-convex case. For $\eta = \min\big(\frac{1}{L}, \frac{1}{6\alpha^2\delta^2}, \sqrt{\frac{2F_0}{L\sigma^2(\alpha)T}}\big)$:*

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla f_{0}(x_{t})\|^{2}]=\mathcal{O}\bigg(\sqrt{\frac{L F_{0}\sigma_{0}^{2}}{(N+1)T}}\bigg)\,.$$

*µ-PL case. For $\eta = \min\big(\frac{1}{L}, \frac{1}{6\alpha^2\delta^2}, \frac{\log(\max(2,\frac{2\mu\Phi_{0}T}{3L\sigma^{2}(\alpha)}))}{\mu T}\big)$:*

$$F_{T}=\tilde{\mathcal{O}}\Big(\Phi_{0}\frac{L\sigma_{0}^{2}}{\mu^{2}(N+1)T}\Big)\,.$$

**Remark.** It is not hard to see that the quantities $\zeta$ and $\delta$ are "perpendicular" in the sense that $\delta$ can be small while $\zeta$ is very big. For example, we can take $f_0(x) = \frac{1}{2}x^2$ and $f_1(x) = \frac{1+\delta}{2}\big(x - \frac{\zeta}{1+\delta}\big)^2$. Corollary 5.3 means that we can benefit optimally from all agents that have a small $\delta$, irrespective of their bias $\zeta^2$.

**Conclusion: BC "partially" solves the problems of WGA.** From the above discussion, we see that BC solves the problems WGA had with the bias parameters $m$ and $\zeta^2$. First, there is no dependence on the heterogeneity parameter $m$; in particular, the collaboration weight $\alpha$ can range freely in the interval $[0,1]$. Second, with BC, the bias $\zeta^2$ does not accumulate with time. However, we only benefit optimally from our EMA approach when the dissimilarity $\delta$ between the collaborators is small, $\delta^2 = o\big(\frac{1}{\sqrt{T}}\big)$ (no free lunch).
## 6 Experiments

To validate our theory, we consider the noisy quadratic model, i.e., optimizing a function of the type $f_0(x) := \frac{1}{2}(x - m_0^\star)^\top A_0 (x - m_0^\star)$, with $m_0^\star \sim \mathcal{N}(x_0^\star, \Sigma_0)$.

While simple, this model can serve as an illustrative test for our theory and is often used to test machine learning and federated learning algorithms (Schaul et al., 2013; Wu et al., 2018; Martens & Grosse, 2015; Zhang et al., 2019). One common simplification is to consider both $A_0$ and $\Sigma_0$ to be diagonal (or co-diagonalizable). This assumption makes it possible to optimize the function $f_0$ over each of its dimensions independently, so it suffices to consider a noisy quadratic model in 1D: optimizing $f_0(x) := \frac{1}{2}a_0\big(x - x_0^\star + \frac{\xi_0}{a_0}\big)^2$ with $\xi_0 \sim \mathcal{N}(0, \sigma^2)$, by collaborating with $f_{\text{avg}}(x) := \frac{1}{2}a_1\big(x - x_1^\star + \frac{\xi_1}{a_1}\big)^2$ with $\xi_1 \sim \mathcal{N}(0, \frac{\sigma^2}{N})$. Here, $N$ is the number of collaborators, $\delta = \|a_0 - a_1\|$, and $\zeta^2 = \|a_1(x_1^\star - x_0^\star)\|^2$. The quantity $f_{0,\text{test}} = \frac{1}{2}a_0(x - x_0^\star)^2$ can be interpreted as a test loss (called simply "loss" in the plots). In our plots we use by default $\delta = 1$, $\zeta = 4$ and $\sigma = 10$.
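As an illustrative sketch of this 1D experiment, the following is our own minimal reimplementation under the stated defaults (the values $a_0 = 1$, $a_1 = 2$, and the starting point are assumptions of ours, not a reproduction of the paper's exact code):

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1 = 1.0, 2.0                     # Hessians, so delta = |a0 - a1| = 1
x0_star, zeta, sigma = 0.0, 4.0, 10.0
x1_star = x0_star + zeta / a1         # chosen so that zeta^2 = (a1 (x1* - x0*))^2
N, eta, beta, T = 10, 5e-4, 1e-4, 200_000
alpha = N / (N + 1)

def g0(x):      # stochastic gradient of f_0
    return a0 * (x - x0_star) + rng.normal(0.0, sigma)

def g_avg(x):   # averaged collaborator gradient, noise reduced by 1/N
    return a1 * (x - x1_star) + rng.normal(0.0, sigma / np.sqrt(N))

x, c = 1.0, 0.0
for t in range(T):
    gm, gh = g0(x), g_avg(x)
    g = (1 - alpha) * gm + alpha * (gh - c)  # BC pseudo-gradient
    c = (1 - beta) * c + beta * (gh - gm)    # EMA bias estimate
    x -= eta * g

print(0.5 * a0 * (x - x0_star) ** 2)         # f_{0,test}, the 'loss' of the plots
```

Setting `alpha = 0` recovers the 'Alone' baseline, and dropping the `- c` correction (with `c` frozen at 0) recovers WGA.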
**Convergence speed.** Figure 1 shows convergence curves of the three competing algorithms we have discussed: working alone, weighted gradient averaging (WGA), and bias correction (BC). In particular, we see that BC reaches a lower error level than both other algorithms. This confirms our theory that BC reduces the bias, enabling it to reach a lower error level. The initial increase in the loss is also characteristic of BC and occurs because, during the initial stages, our EMA estimate of the bias is quite poor. Eventually, the bias estimate improves and we get fast convergence.

**Dependence on data heterogeneity.** Figure 2 shows how the bias parameter $\zeta^2$ influences the performance of BC. As predicted by the theory, BC always converges to the same error level, uninfluenced by $\zeta^2$; the bias only affects the time horizon needed for convergence. In contrast, WGA is strongly influenced by the bias, as we see in Figure 3. In fact, the convergence error level of WGA is directly proportional to $\alpha^2\zeta^2$, meaning that we would need to set $\alpha = 0$ (i.e., train alone) to ensure low error. This demonstrates that the bias-correcting technique employed by BC indeed succeeds, validating our theory.

![9_image_0.png](9_image_0.png)

Figure 1: Comparing bias correction (orange) to WGA (green) and training alone (blue). BC achieves a lower loss than training alone or using WGA. The step sizes were tuned for training alone and WGA, but not for BC.

![9_image_1.png](9_image_1.png)

Figure 2: Effect of the bias $\zeta$ on the convergence of BC for a fixed choice of step-size $\eta = 10^{-4}$, BC weight $\beta = 10^{-4}$, and collaboration weight $\alpha = \frac{N}{N+1}$, where $N = 10$. We can see that $\zeta$ influences the time needed for convergence, but eventually all curves converge to the same error level.

**Dependence on the number of collaborators.** Figure 4 shows how the number of collaborators $N$ influences the convergence of BC for a relatively big $\delta = 1$. We see that increasing $N$ does have a positive effect on BC and decreases the error level to which it converges. However, the benefit saturates quickly: while there is a substantial improvement from $N = 1$ to $N = 10$, further increases yield only negligible improvement.

![10_image_0.png](10_image_0.png)

Figure 3: Effect of the bias $\zeta$ on the convergence of WGA for a fixed choice of step-size $\eta = 5 \times 10^{-4}$, collaboration weight $\alpha = 10^{-3}$, and $N = 10$. We can see that the bigger $\zeta$ is, the bigger the final loss will be. In fact, WGA can only converge up to $\mathcal{O}(\alpha^2\zeta^2)$.

![10_image_1.png](10_image_1.png)

Figure 4: Effect of the number of collaborators $N$ on the convergence of BC for a fixed choice of step-size $\eta = 5 \times 10^{-4}$, BC weight $\beta = 10^{-4}$, collaboration weights $\alpha = \frac{N}{N+1}$, and $\delta = 1$ (not very small). We can see that increasing $N$ does improve the level to which BC converges, due to the smaller variance with larger $N$. We expect the saturation to result from using a big $\delta$, since our theory only predicts linear speedup in $N$ for very small $\delta$.
## 7 Limitations And Extensions

**Bias and generalization.** We have proposed a strategy to correct for the "gradient" bias between the main agent and its collaborators, but in doing so we have put a lot of faith in the "quality" of the main agent's gradients. In the case where the main agent has a very limited dataset, however, some bias might be beneficial to make up for the lack of data.

**Bias correction in deep learning.** In this work we have employed the idea of gradient bias correction with SGD. Our methods can also be extended to other optimizers such as momentum or Adam. A larger empirical exploration of such algorithms, as well as more real-world deep learning experiments, would be valuable but is out of scope for our more theoretical work.

**Adding local steps.** Currently, the users communicate with each other after every gradient computation. This is a problem for federated learning (which is not the aim of this paper). More communication-efficient schemes can be developed by instead allowing multiple local steps before communication, as in FedAvg (McMahan et al., 2017). Similarly, extending our algorithms to allow personalization for all users, instead of focusing only on user 0, would improve practicality in the federated learning setting.

**Fine-grained measures of similarity.** Our algorithms, as well as our assumptions, use static global measures of dissimilarity. Time-varying adaptive weighting strategies, such as cosine similarity between gradients, may further improve our algorithms. Using individual user-level similarities, as in (Grimberg et al., 2020), would also be a fruitful extension. Similarity-based user selection rules are also closely related to Byzantine-robust learning, where they are used to exclude malicious participants (Blanchard et al., 2017; Baruch et al., 2019; Karimireddy et al., 2021).
## 8 Conclusion

In this work, we have introduced the collaborative stochastic optimization framework, where one "main" user collaborates with a set of willing-to-help collaborators. We considered the simplest method to solve this problem: SGD with *weighted gradient averaging*. We discussed in detail the limitations of this idea, which arise mainly from the bias introduced by the collaboration. To solve this bias problem, we proposed a second algorithm, *bias correction*. We showed that our bias correction algorithm manages to remove the effect of this bias and, under appropriate choices of its parameters, leads to a linear speedup as we increase the number of collaborators.
## References

Ahmad Ajalloeian and Sebastian U. Stich. On the convergence of SGD with biased gradients. arXiv:2008.00051 [cs.LG], 2020.

Navon Aviv, Achituve Idan, Maron Haggai, Chechik Gal, and Fetay Ethan. Auxiliary learning by implicit differentiation. In ICLR, 2021. URL https://arxiv.org/pdf/2007.02693.pdf.

Shi Baifeng, Hoffman Judy, Saenko Kate, Darrell Trevor, and Xu Huijuan. Auxiliary task reweighting for minimum-data learning. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. URL https://arxiv.org/pdf/2010.08244.pdf.

Moran Baruch, Gilad Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning. arXiv:1902.06156, 2019.

Martin Beaussart, Felix Grimberg, Mary-Anne Hartley, and Martin Jaggi. Waffle: Weighted averaging for personalized federated learning. In NeurIPS 2021 Workshop on New Frontiers in Federated Learning, 2021.

Dimitri Bertsekas. Nonlinear Programming. Athena Scientific, 2002.

Dimitri P. Bertsekas and John N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.

Peva Blanchard, El Mahdi Mhamdi, Rachid Guerraoui, and Julien Stainer. Byzantine-tolerant machine learning. arXiv:1703.02757, 2017.

El Mahdi Chayti and Sai Praneeth Karimireddy. Optimization with access to auxiliary information. arXiv:2206.00395 [cs.LG], 2022. URL https://arxiv.org/abs/2206.00395.

Liam Collins, Aryan Mokhtari, and Sanjay Shakkottai. Why does MAML outperform ERM? An optimization perspective. arXiv:2010.14672, 2020.

Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex SGD. arXiv:1905.10018 [cs.LG], 2019.

A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS 27, pp. 1646–1654, 2014.

Y. Deng, M. M. Kamani, and M. Mahdavi. Adaptive personalized federated learning. arXiv:2003.13461 [cs, stat], 2020.

K. Donahue and J. Kleinberg. Model-sharing games: Analyzing federated learning under voluntary participation. arXiv:2010.00753, 2020.

Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning: A meta-learning approach. arXiv:2002.07948, 2020.

Zhili Feng, Shaobo Han, and Simon S. Du. Provable adaptation across multiway domains via representation learning. arXiv:2106.06657, 2021.

Felix Grimberg, Mary-Anne Hartley, Martin Jaggi, and Sai Praneeth Karimireddy. Weight erosion: An update aggregation scheme for personalized collaborative machine learning. In MICCAI Workshop on Distributed and Collaborative Learning, pp. 160–169, 2020.

Felix Grimberg, Mary-Anne Hartley, Sai Praneeth Karimireddy, and Martin Jaggi. Optimal model averaging: Towards personalized collaborative learning. In ICML Workshop on Federated Learning for User Privacy and Data Confidentiality, 2021. URL https://fl-icml.github.io/2021/papers/FL-ICML21_paper_56.pdf.

Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. arXiv:2002.05516 [cs.LG], 2020.

Filip Hanzely, Boxin Zhao, and Mladen Kolar. Personalized federated learning: A unified framework and universal optimization techniques. arXiv:2102.09743 [cs.LG], 2021.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NeurIPS, 2013.

P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, R. G. L. D'Oliveira, S. E. Rouayheb, D. Evans, J. Gardner, Z. Garrett, A. Gascón, B. Ghazi, P. B. Gibbons, M. Gruteser, Z. Harchaoui, C. He, L. He, Z. Huo, B. Hutchinson, J. Hsu, M. Jaggi, T. Javidi, G. Joshi, M. Khodak, J. Konecný, A. Korolova, F. Koushanfar, S. Koyejo, T. Lepoint, Y. Liu, P. Mittal, M. Mohri, R. Nock, A. Özgür, R. Pagh, M. Raykova, H. Qi, D. Ramage, R. Raskar, D. Song, W. Song, S. U. Stich, Z. Sun, A. T. Suresh, F. Tramèr, P. Vepakomma, J. Wang, L. Xiong, Z. Xu, Q. Yang, F. X. Yu, H. Yu, and S. Zhao. Advances and open problems in federated learning. arXiv:1912.04977 [cs, stat], 2019.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In ECML - European Conference on Machine Learning and Knowledge Discovery in Databases - Volume 9851, pp. 795–811, 2016.

Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. arXiv:1910.06378v4 [cs.LG], 2019.

Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in federated learning. arXiv:2008.03606 [cs.LG], 2020.

Sai Praneeth Karimireddy, Lie He, and Martin Jaggi. Learning from history for Byzantine robust optimization. In International Conference on Machine Learning, pp. 5311–5319. PMLR, 2021.

Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Adaptive gradient-based meta-learning methods. arXiv:1906.02717, 2019.

Jakub Konecny, H. Brendan McMahan, Daniel Ramage, and Peter Richtarik. Federated optimization: Distributed machine learning for on-device intelligence. arXiv:1610.02527, 2016.

Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, and Sewoong Oh. Sample efficient linear meta-learning by alternating minimization. arXiv e-prints, 2021.

Viraj Kulkarni, Milind Kulkarni, and Aniruddha Pant. Survey of personalization techniques for federated learning. arXiv:2003.08673 [cs.LG], 2020.

T. Li, A. K. Sahu, A. Talwalkar, and V. Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3):50–60, 2020a.

Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. arXiv:2012.04221 [cs.LG], 2020b.

Tian Li, Maziar Sanjabi, Ahmad Beirami, and Virginia Smith. Fair resource allocation in federated learning. In ICLR - International Conference on Learning Representations, 2020c.

Y. Mansour, M. Mohri, and A. T. Suresh. Three approaches for personalization with applications to federated learning. arXiv:2002.10619 [cs, stat], 2020.

James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pp. 2408–2417, 2015.

Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research, 17(81):1–32, 2016.

B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of AISTATS, pp. 1273–1282, 2017.

Mohamad Mestoukirdi, Matteo Zecchin, David Gesbert, Qianrui Li, and Nicolas Gresset. User-centric federated learning. arXiv:2110.09869 [cs.LG], 2021.

M. Mohri, G. Sivek, and A. T. Suresh. Agnostic federated learning. arXiv:1902.00146, 2019.

A. Nedic. Distributed gradient methods for convex machine learning problems in networks: Distributed optimization. IEEE Signal Processing Magazine, 37(3):92–101, 2020.

Sashank J. Reddi, Jakub Konečný, Peter Richtárik, Barnabás Póczós, and Alex Smola. AIDE: Fast and communication efficient distributed optimization. arXiv:1608.06879, 2016.

Luo Ruichen, Hu Shoubo, and Yu Lequan. Rethinking client reweighting for selfish federated learning. In Submitted to The Tenth International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=qfGcsAGhFbc.

Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In International Conference on Machine Learning, pp. 343–351, 2013.

M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388 [math.OC], 2013.

Ohad Shamir, Nati Srebro, and Tong Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning, pp. 1000–1008. PMLR, 2014.

Nilesh Tripuraneni, Michael I. Jordan, and Chi Jin. On the theory of transfer learning: The importance of task diversity. arXiv:2006.11650, 2020.

Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization. arXiv:2107.06917, 2021.

K. Wang, R. Mathews, C. Kiddon, H. Eichner, F. Beaufays, and D. Ramage. Federated evaluation of on-device personalization. arXiv:1910.10252, 2019.

Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. arXiv:1803.02021, 2018.

Lin Xingyu, Singh Baweja Harjatin, Kantor George, and Held David. Adaptive auxiliary task weighting for reinforcement learning. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. URL https://openreview.net/pdf?id=rkxQFESx8S.

Tao Yu, Eugene Bagdasaryan, and Vitaly Shmatikov. Salvaging federated learning by local adaptation. arXiv:2002.04758, 2020.

Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, and Roger Grosse. Which algorithmic choices matter at which batch sizes? Insights from a noisy quadratic model. arXiv:1907.04164, 2019.

Michael Zhang, Karan Sapra, Sanja Fidler, Serena Yeung, and Jose M. Alvarez. Personalized federated learning with first order model optimization. In ICLR, 2021.
## A More Related Work And Discussion A.1 Related Work
In personalized Federated Learning, a prominent approach consists in interpolating between a local and a global model. Hanzely & Richtárik (2020) propose a consensus-like regularization to perform such an interpolation; however, they only prove convergence of the global model. In a second work, Hanzely et al. (2021) study the problem of optimizing an objective that has both local and global parameters, and they propose an SVRG-like algorithm to reduce the variance of the local gradient estimates. We reiterate that our goal differs from that of Federated Learning: we care about the performance of one particular agent, and the bias we face is inherent to collaborating with different agents; it is not a result of using local steps as in Federated Learning. Moreover, the main goal of our bias correction method is to reduce this bias; the reduction in variance is a by-product of averaging and of further using an exponential moving average to reduce the variance of our bias estimates. Li et al. (2020b) discuss variance trade-offs for point estimation and linear regression problems; our results are more general from this perspective.

In the personalized optimization setting, two very recent empirical works propose rules to learn collaboration weights. Beaussart et al. (2021) modify Scaffold (Karimireddy et al., 2019) to use Euclidean distances between the updates of different agents to derive a heuristic for weight selection; their method uses both local and global control variates, though without a decay mechanism. Zhang et al. (2021), on the other hand, use an idea from meta-learning to learn the collaboration weights through a first-order approximation of the objective with respect to these weights. While demonstrating practical performance on deep learning tasks, neither of the two methods comes with convergence guarantees. Our approach, in contrast, chooses collaboration weights that achieve provable convergence as well as a speedup with the number of workers.
## A.2 Comparison With Other Control Variate Techniques

Control variates have been used extensively in variance reduction techniques such as SVRG (Johnson & Zhang, 2013), SAGA (Defazio et al., 2014), SAG (Schmidt et al., 2013), and MVR (Cutkosky & Orabona, 2019). The main idea is the following: given an unbiased gradient estimate $\mathbf{g}(\mathbf{x})$ at $\mathbf{x}$, to reduce its variance we replace $\mathbf{g}(\mathbf{x})$ by $\mathbf{g}(\mathbf{x}) - \mathbf{y} + \mathbb{E}[\mathbf{y}]$, where $\mathbf{y}$ is a random variable that correlates positively with $\mathbf{g}(\mathbf{x})$. This holds for all the methods cited above except for SAG, which does not keep the new gradient estimate unbiased. The same idea is used in Federated Learning to correct for the bias introduced by the use of local steps. In SCAFFOLD (Karimireddy et al., 2019), for example, $\mathbf{g}(\mathbf{x})$ is the $i$th client's gradient estimate at the current local model $\mathbf{x}$, $\mathbf{y}$ is the gradient estimate of the same client at the last received server model $\mathbf{z}$, and $\mathbb{E}[\mathbf{y}]$ is the clients' average true gradient at $\mathbf{z}$. The bias of this new gradient estimate is thus $\nabla f_i(\mathbf{x}) - \nabla f(\mathbf{x}) - \nabla f_i(\mathbf{z}) + \nabla f(\mathbf{z})$; by assuming $f_i - f$ has a Hessian bounded by $\delta$, it is easy to see that the norm of the bias is bounded by $\delta\|\mathbf{x} - \mathbf{z}\|$, and all that is left is to efficiently bound the norm $\|\mathbf{x} - \mathbf{z}\|$.
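To make this mechanism concrete, here is a minimal numerical sketch of the generic construction $\mathbf{g}(\mathbf{x}) - \mathbf{y} + \mathbb{E}[\mathbf{y}]$ (our illustration, not taken from any of the cited implementations): the control variate shares a noise component with the raw estimate, so subtracting it and adding back its mean preserves unbiasedness while shrinking the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared noise component makes the control variate y positively
# correlated with the raw estimate g.
shared = rng.normal(size=100_000)
g = 1.0 + shared + 0.3 * rng.normal(size=100_000)  # unbiased: E[g] = 1
y = 2.0 + shared                                   # control variate: E[y] = 2

g_cv = g - y + 2.0  # g - y + E[y]: mean preserved, variance reduced

print(np.mean(g), np.var(g))        # approx 1.0 and 1.09
print(np.mean(g_cv), np.var(g_cv))  # approx 1.0 and 0.09
```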
In our case, the bias does not come from local steps but is a result of collaborating with potentially different agents. We note here that our goal is different from that of Federated Learning, which aims to train the average model, whereas we train one model by collaborating with other agents and we only care about local performance. Again, the bias is inherent to the collaboration; our solution to reduce it is to estimate the future bias from past observed biases and then subtract this estimate from the current gradient estimate.
For simplicity, let us discuss the case $\beta = 1$, which means we only use the last observed bias to estimate the current bias. In this case, the bias of our corrected gradient is $\alpha(\nabla f_1(\mathbf{x}_t) - \nabla f_0(\mathbf{x}_t) - \nabla f_1(\mathbf{x}_{t-1}) + \nabla f_0(\mathbf{x}_{t-1}))$. Using the bounded Hessian dissimilarity assumption, it is easy to see that the norm of this quantity is bounded by $\alpha\delta\|\mathbf{x}_t - \mathbf{x}_{t-1}\|$; if we can efficiently bound this quantity, the convergence proof is easy. It turns out that using only the last observed bias as a bias estimate incurs the same additional variance in all steps; to solve this, we propose using an exponential moving average of all past observed biases.
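A minimal sketch of one such correction step (illustrative NumPy-style pseudocode under our notation, not the exact experimental implementation; `grad0` and `grad1` stand for the two agents' stochastic gradient oracles):

```python
def bias_corrected_step(x, grad0, grad1, c, eta, alpha, beta):
    """One step of bias-corrected collaborative SGD (sketch).

    c is the running estimate of the gradient bias grad f1 - grad f0;
    beta = 1 keeps only the last observed bias, as discussed above."""
    g0, g1 = grad0(x), grad1(x)
    g = (1 - alpha) * g0 + alpha * (g1 - c)  # corrected weighted gradient
    c = (1 - beta) * c + beta * (g1 - g0)    # EMA update of the bias estimate
    return x - eta * g, c
```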
One important point about these approaches, which rely on bounded Hessian dissimilarity and lead to bias bounds of the form above, is that they make it possible to control the bias of the model indirectly by controlling the step size.
## B Relaxing Noise Assumptions

We start by relaxing our assumptions about the noise. In general, we can make the following assumptions:

**First relaxation of A5 (Bounded variance)** For each agent $k \in \{0, \dots, N\}$, $\exists\, M_k, \sigma_k^2 \geq 0$ s.t. $\forall \mathbf{x} \in \mathbb{R}^d$:

$$\mathbb{E}[\|\mathbf{n}_{k}(\mathbf{x},\xi_{t}^{(k)})\|^{2}]\leq M_{k}\|\nabla_{\mathbf{x}}f_{k}(\mathbf{x})\|^{2}+\sigma_{k}^{2}\,.$$

The quantity $\sigma_k^2$ is the variance of collaborator $k$'s gradient estimates once agent $k$ has converged to a stationary point. We use this new assumption together with the gradient dissimilarity assumption:

**A4 (Gradient Similarity)** $\exists\, m, \zeta_k^2 \geq 0$ s.t. $\forall \mathbf{x} \in \mathbb{R}^d$:

$$\|\nabla_{\mathbf{x}}f_{k}(\mathbf{x})-\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}\leq m\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\zeta_{k}^{2}\,.$$
Now, if we denote by $f_{\mathrm{avg}} = \sum_{k=1}^{N} \tau_k f_k$ the weighted average objective and by $\mathbf{n}_{\mathrm{avg}}$ the noise of its gradient estimate, we have:

$$\begin{array}{rl}
\mathbb{E}[\|\mathbf{n}_{\mathrm{avg}}(\mathbf{x},\xi_{t}^{(1\ldots N)})\|^{2}] &\leq M_{\mathrm{avg}}m^{\prime}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}/N+\tilde{\sigma}_{\mathrm{avg}}^{2}/N\,,\\
\tilde{\sigma}_{\mathrm{avg}}^{2} &= N\sum_{k=1}^{N}\tau_{k}^{2}\tilde{\sigma}_{k}^{2}\,,\\
\tilde{\sigma}_{k}^{2} &= \sigma_{k}^{2}+2M_{k}\zeta_{k}^{2}\,,\\
M_{\mathrm{avg}} &= 2N\sum_{k=1}^{N}\tau_{k}^{2}M_{k}\,,\\
m^{\prime} &= 2(1+m)\,.
\end{array}$$

The quantity $\tilde{\sigma}_{\mathrm{avg}}^2$ measures the average variance of the collaborators' gradient estimates, this time when agent "0" has converged to a stationary point. The term $M_k\zeta_k^2$ is the variance resulting from collaborator $k$ being biased with respect to agent 0 and thus converging to a different minimizer. One can argue that when the Hessian dissimilarity parameter $\delta = 0$, i.e., each collaborator $f_k$ is a translated copy of $f_0$, the noise is not changed from its original level by the translation (adding a constant to a random variable does not change its variance), and thus $M_k\zeta_k^2$ should be replaced by a quantity proportional to the parameter $\delta$. This motivates the final form of our assumption:

**Final form of A5 (Bounded variance)** $\exists\, \sigma_k^2, D_k^2 \geq 0$ s.t. $\forall \mathbf{x} \in \mathbb{R}^d$:

$$\begin{array}{rl}
\mathbb{E}[\|\mathbf{n}_{\mathrm{avg}}(\mathbf{x},\xi_{t}^{(1\ldots N)})\|^{2}] &\leq M_{\mathrm{avg}}m^{\prime}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}/N+\tilde{\sigma}_{\mathrm{avg}}^{2}/N\,,\\
\tilde{\sigma}_{\mathrm{avg}}^{2} &= N\sum_{k=1}^{N}\tau_{k}^{2}\tilde{\sigma}_{k}^{2}\,,\\
\tilde{\sigma}_{k}^{2} &= \sigma_{k}^{2}+2\delta M_{k}D_{k}^{2}\,,\\
M_{\mathrm{avg}} &= 2N\sum_{k=1}^{N}\tau_{k}^{2}M_{k}\,,\\
m^{\prime} &= 2(1+m)\,.
\end{array}$$

$D_k^2$ is a constant that can be interpreted as a diameter of the parameter space for agent $k$.
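The bookkeeping behind these aggregate constants is elementary arithmetic; the following helper (hypothetical, for illustration only) computes them from per-agent constants under either form of the assumption:

```python
import numpy as np

def aggregate_noise_constants(tau, sigma2, M, m, zeta2=None, delta=None, D2=None):
    """Aggregate per-agent noise constants into (sigma2_avg_tilde, M_avg, m_prime).

    If delta and D2 are given, uses the final form sigma~_k^2 = sigma_k^2
    + 2*delta*M_k*D_k^2; otherwise the first relaxation with zeta_k^2."""
    tau, sigma2, M = (np.asarray(a, dtype=float) for a in (tau, sigma2, M))
    N = len(tau)
    if delta is not None:
        sigma2_tilde = sigma2 + 2.0 * delta * M * np.asarray(D2, dtype=float)
    else:
        sigma2_tilde = sigma2 + 2.0 * M * np.asarray(zeta2, dtype=float)
    sigma2_avg_tilde = N * np.sum(tau**2 * sigma2_tilde)
    M_avg = 2.0 * N * np.sum(tau**2 * M)
    m_prime = 2.0 * (1.0 + m)
    return sigma2_avg_tilde, M_avg, m_prime
```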
We note that we can still safely use the other forms of this assumption without affecting the proofs: we can always replace $\delta D_k^2$ by $\zeta_k^2$ in our next result if the reader is not convinced by the dependence of the noise on $\delta$, and we can replace $\tilde{\sigma}_{\mathrm{avg}}^2$ by $\sigma_{\mathrm{avg}}^2 = N\sum_{k=1}^{N}\tau_k^2\sigma_k^2$ if we do not want the noise of the collaborators once agent "0" has converged to depend on their bias.

We will carry out the proofs only for $N = 1$ and without taking into account the dependence of the noise of agent "1" on its bias with respect to agent "0". To be explicit, for a collaboration with one agent "1" we make the following assumption on the noise:

$$\left\{\begin{array}{ll}
\mathbb{E}[\|\mathbf{n}_{0}(\mathbf{x},\xi_{t}^{(0)})\|^{2}] &\leq M_{0}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\sigma_{0}^{2}\,,\\
\mathbb{E}[\|\mathbf{n}_{1}(\mathbf{x},\xi_{t}^{(1)})\|^{2}] &\leq M_{1}m\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\sigma_{1}^{2}\,.
\end{array}\right.$$

This does not make us lose any generality, since we can replace $M_1$ by $M_{\mathrm{avg}}/N$ and $\sigma_1^2$ by $\sigma_{\mathrm{avg}}^2/N$ or $\tilde{\sigma}_{\mathrm{avg}}^2/N$. Furthermore, we would also need to replace $\zeta^2$ by $\sum_{k=1}^{N}\tau_k\zeta_k^2$.
## C Missing Proofs

## C.1 SGD With Biased Gradients

If we are optimizing an $L$-smooth function $f_0$ on $\mathbb{R}^d$ using SGD iterations $\mathbf{x}_{t+1} = \mathbf{x}_t - \eta_t\mathbf{g}(\mathbf{x}_t)$ with a gradient that can be written in the form

$$\mathbf{g}(\mathbf{x}_{t})=\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\underbrace{\mathbf{b}(\mathbf{x}_{t})}_{\text{bias}}+\underbrace{\mathbf{n}_{t}}_{\text{noise}}$$

then, denoting $F_t = \mathbb{E}[f_0(\mathbf{x}_t)] - f_0^\star$, we have for $\eta_t \leq 1/L$:

$$F_{t+1}-F_{t}\leq\frac{\eta}{2}\left(-\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\|\mathbf{b}(\mathbf{x}_{t})\|^{2}\right)+\frac{L\eta^{2}}{2}\mathbb{E}[\|\mathbf{n}_{t}\|^{2}]\tag{3}$$
*Proof.* Using the $L$-smoothness of $f_0$ we have:

$$\begin{aligned}
\mathbb{E}[f_{0}(\mathbf{x}_{t+1})]-f_{0}(\mathbf{x}_{t}) &\leq \langle\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t}),\mathbb{E}[\mathbf{x}_{t+1}-\mathbf{x}_{t}]\rangle+\frac{L}{2}\mathbb{E}_{\xi_{t}}[\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}]\\
&= -\eta\langle\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t}),\mathbb{E}[\mathbf{g}(\mathbf{x}_{t})]\rangle+\frac{L}{2}\eta^{2}\mathbb{E}_{\xi_{t}}[\|\mathbf{g}(\mathbf{x}_{t})\|^{2}]\\
&= -\eta\langle\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t}),\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\mathbf{b}(\mathbf{x}_{t})\rangle+\frac{L}{2}\eta^{2}\left(\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\mathbf{b}(\mathbf{x}_{t})\|^{2}+\mathbb{E}[\|\mathbf{n}_{t}\|^{2}]\right)
\end{aligned}$$

Using $L\eta \leq 1$:

$$\begin{aligned}
\mathbb{E}[f_{0}(\mathbf{x}_{t+1})]-f_{0}(\mathbf{x}_{t}) &\leq \frac{\eta}{2}\left(-2\langle\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t}),\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\mathbf{b}(\mathbf{x}_{t})\rangle+\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\mathbf{b}(\mathbf{x}_{t})\|^{2}\right)+\frac{L\eta^{2}}{2}\mathbb{E}[\|\mathbf{n}_{t}\|^{2}]\\
&= \frac{\eta}{2}\left(-\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\|\mathbf{b}(\mathbf{x}_{t})\|^{2}\right)+\frac{L\eta^{2}}{2}\mathbb{E}[\|\mathbf{n}_{t}\|^{2}]
\end{aligned}$$

Taking an overall expectation yields the desired result. $\square$

All of the proofs will use this inequality as a starting point.
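As a quick numerical illustration of this inequality (our own sanity check, not part of the proofs): running SGD with a constant bias $b$ on the quadratic $f_0(\mathbf{x}) = \tfrac12\|\mathbf{x}\|^2$ drives $\|\nabla f_0(\mathbf{x})\|^2$ down to roughly $\|\mathbf{b}\|^2$, the floor that (3) predicts, plus an $O(\eta)$ noise term.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, b, sigma = 0.05, 0.1, 0.5   # step size, per-coordinate bias, noise level
x = np.full(10, 5.0)

for _ in range(2000):
    grad = x  # gradient of f0(x) = 0.5 * ||x||^2, so L = 1
    g = grad + b + sigma * rng.normal(size=x.shape)  # biased, noisy gradient
    x -= eta * g

# ||grad f0(x)||^2 settles near ||b||^2 = d * b^2 (plus an O(eta) noise term).
print(np.sum(x**2), x.size * b**2)
```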
## C.2 Proof Of Theorem 4.1

In this section, we present the detailed proof of Theorem 4.1, i.e., the convergence of WGA in both the non-convex and the µ-PL case.

We denote by $\mathbf{n}(\mathbf{x},\xi_t) = (1-\alpha)\mathbf{n}_0(\mathbf{x},\xi_t^{(0)}) + \alpha\,\mathbf{n}_1(\mathbf{x},\xi_t^{(1)})$ the noise of the weighted gradient average.
**Bounding the average noise.** Using Assumption A5 (Bounded noise), we can bound the noise in the following way:

$$\mathbb{E}_{\xi}[\|\mathbf{n}(\mathbf{x},\xi_{t})\|^{2}]\leq M(\alpha)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\tilde{\sigma}^{2}(\alpha)\,,$$

where $\tilde{\sigma}^2(\alpha) := (1-\alpha)^2\sigma_0^2 + \alpha^2\sigma_1^2$ and $M(\alpha) := (1-\alpha)^2 M_0 + \alpha^2 M_1 m \leq M = M_0 + M_1 m$.

*Proof.*

$$\begin{aligned}
\mathbb{E}_{\xi}[\|\mathbf{n}(\mathbf{x},\xi_{t})\|^{2}] &= (1-\alpha)^{2}\mathbb{E}_{\xi_{t}^{(0)}}[\|\mathbf{n}_{0}(\mathbf{x},\xi_{t}^{(0)})\|^{2}]+\alpha^{2}\mathbb{E}_{\xi_{t}^{(1)}}[\|\mathbf{n}_{1}(\mathbf{x},\xi_{t}^{(1)})\|^{2}]\\
&\leq (1-\alpha)^{2}\{M_{0}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\sigma_{0}^{2}\}+\alpha^{2}\{M_{1}m\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\sigma_{1}^{2}\}\\
&\leq M(\alpha)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\tilde{\sigma}^{2}(\alpha)\,. \qquad\square
\end{aligned}$$
**Main inequality.** Now, denoting $F_t = \mathbb{E}[f_0(\mathbf{x}_t)] - f_0^\star$, for $\eta \leq 1/L$ we have:

$$F_{t+1}-F_{t}\leq\frac{\eta}{2}(-1+\alpha^{2}m+LM\eta)\,\mathbb{E}\big[\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}\big]+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)$$

*Proof.* With the $L$-smoothness of $f_0$ and $\eta L \leq 1$, we can use (3) with $\mathbf{b}(\mathbf{x}_t) = \alpha(\nabla_{\mathbf{x}}f_1(\mathbf{x}_t) - \nabla_{\mathbf{x}}f_0(\mathbf{x}_t))$ and $\mathbf{n}_t = \mathbf{n}(\mathbf{x},\xi_t)$; the Bounded Gradient Dissimilarity assumption (A4) lets us upper-bound the term $\|\mathbf{b}(\mathbf{x}_t)\|^2 \leq \alpha^2(m\|\nabla_{\mathbf{x}}f_0(\mathbf{x})\|^2 + \zeta^2)$.

$$\begin{aligned}
\mathbb{E}_{\xi_{t}}[f_{0}(\mathbf{x}_{t+1})]-f_{0}(\mathbf{x}_{t}) &\leq \frac{\eta}{2}\left(-\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\alpha^{2}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})-\nabla_{\mathbf{x}}f_{1}(\mathbf{x}_{t})\|^{2}\right)+\frac{L\eta^{2}}{2}\left(M\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\tilde{\sigma}^{2}(\alpha)\right)\\
&\leq \frac{\eta}{2}(-1+\alpha^{2}m+LM\eta)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)
\end{aligned}$$

All that is left is to take an overall expectation. $\square$
|
741 |
+
Now if M = M0 + mM1 ̸= 0, then we choose η ≤
|
742 |
+
1−α 2m 2LM which gives
|
743 |
+
|
744 |
+
$$F_{t+1}-F_{t}\leq-\frac{\eta}{4}(1-\alpha^{2}m)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\hat{\sigma}^{2}(\alpha)$$
|
745 |
+
|
746 |
+
And if M = 0, then we get
|
747 |
+
|
748 |
+
$$F_{t+1}-F_{t}\leq-\frac{\eta}{2}(1-\alpha^{2}m)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\hat{\sigma}^{2}(\alpha)$$
|
749 |
+
|
750 |
+
We combine these two inequalities into one:
|
751 |
+
|
752 |
+
$$F_{t+1}-F_{t}\leq-\frac{\eta}{c}(1-\alpha^{2}m)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\delta^{2}(\alpha)\tag{4}$$ at to 2 if $M=0$ and equal to 4 otherwise. This constant is not very important since
|
753 |
+
The constant c is equal to 2 if M = 0 and equal to 4 otherwise. This constant is not very important since we can always choose the step-size η small enough to make c close to 1.
**Remark.** We need $1 - \alpha^2 m \geq 0$, i.e., $\alpha \leq 1/\sqrt{m}$, if this bound is to guarantee any convergence.
**Non-convex case of Theorem 4.1.** To prove the non-convex result, it suffices to rearrange the terms in (4), sum for $t = 0$ to $t = T-1$, and divide by $T$. This manipulation gives:

$$\frac{(1-\alpha^{2}m)}{cT}\sum_{t=0}^{T-1}\mathbb{E}\big[\|\nabla f_{0}(\mathbf{x}_{t})\|^{2}\big]\leq\frac{1}{\eta T}\sum_{t=0}^{T-1}(F_{t}-F_{t+1})+\frac{L\eta}{2}\tilde{\sigma}^{2}(\alpha)+\frac{1}{2}\alpha^{2}\zeta^{2}\leq\frac{F_{0}}{\eta T}+\frac{L\eta}{2}\tilde{\sigma}^{2}(\alpha)+\frac{1}{2}\alpha^{2}\zeta^{2}$$

This is true for all $\eta \leq \eta_{\max} := \min\left(\frac{1}{L},\frac{1-\alpha^{2}m}{2LM}\right)$. Choosing $\eta = \min\left(\eta_{\max},\sqrt{\frac{2F_{0}}{L\tilde{\sigma}^{2}(\alpha)T}}\right)$ leads to the following result:

$$\frac{1-\alpha^{2}m}{cT}\sum_{t=0}^{T-1}\mathbb{E}\big[\|\nabla f_{0}(\mathbf{x}_{t})\|^{2}\big]\leq\frac{F_{0}}{\eta_{\max}T}+\sqrt{\frac{2LF_{0}\tilde{\sigma}^{2}(\alpha)}{T}}+\frac{1}{2}\alpha^{2}\zeta^{2}\,.$$
**µ-PL case of Theorem 4.1.** To prove the µ-PL result, we start from (4) and use Assumption A2, i.e., $f_0$ satisfies the µ-PL condition $\|\nabla_{\mathbf{x}}f_0(\mathbf{x})\|^2 \geq 2\mu(f_0(\mathbf{x}) - f_0^\star)$ for all $\mathbf{x} \in \mathbb{R}^d$. This yields:

$$F_{t+1}\leq\Big(1-\frac{2\mu\eta}{c}(1-\alpha^{2}m)\Big)F_{t}+\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\tag{5}$$
Repeating (5) recursively, we get:

$$\begin{aligned}
F_{T} &\leq \Big(1-\frac{2\mu\eta}{c}(1-\alpha^{2}m)\Big)^{T}F_{0}+\Big(\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\Big)\sum_{i=0}^{T-1}\Big(1-\frac{2\mu\eta}{c}(1-\alpha^{2}m)\Big)^{i}\\
&\leq \Big(1-\frac{2\mu\eta}{c}(1-\alpha^{2}m)\Big)^{T}F_{0}+\Big(\frac{\eta\alpha^{2}}{2}\zeta^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\Big)\frac{c}{2\mu\eta(1-\alpha^{2}m)}\\
&= \Big(1-\frac{2\mu\eta}{c}(1-\alpha^{2}m)\Big)^{T}F_{0}+\frac{c\alpha^{2}}{4\mu(1-\alpha^{2}m)}\zeta^{2}+\frac{cL\eta}{4\mu(1-\alpha^{2}m)}\tilde{\sigma}^{2}(\alpha)
\end{aligned}$$

Choosing $2(1-\alpha^{2}m)\eta/c = \min\left(\eta_{\max},\frac{\log(\max(2,\frac{2\mu F_{0}T}{3L\tilde{\sigma}(\alpha)^{2}}))}{2\mu T}\right)$ we get:

$$F_{T}=\tilde{\mathcal{O}}\left(F_{0}\exp\big(-\mu\eta_{\max}T\big)+\frac{L\tilde{\sigma}(\alpha)^{2}}{\mu^{2}T(1-\alpha^{2}m)^{2}}+\frac{\alpha^{2}}{\mu(1-\alpha^{2}m)}\zeta^{2}\right).$$

This concludes the proof of Theorem 4.1 in the µ-PL case.
In the article, we argued that we can get rid of the logarithmic factors hidden in the notation $\tilde{\mathcal{O}}$. We now show how to do so in the µ-PL case.
**µ-PL with a decreasing step size.** Starting from (5), we choose a step size $\eta_t$ such that $1 - \frac{2\mu\eta_{t}}{c}(1-\alpha^{2}m) = \frac{t^{2}}{(t+1)^{2}}$, which means $\eta_{t} = \frac{c(2t+1)}{2\mu(1-\alpha^{2}m)(t+1)^{2}}$. This choice transforms (5) into

$$(t+1)^{2}F_{t+1}\leq t^{2}F_{t}+\frac{c(2t+1)\alpha^{2}}{4\mu(1-\alpha^{2}m)}\zeta^{2}+\frac{c^{2}L(2t+1)^{2}}{8\mu^{2}(1-\alpha^{2}m)^{2}(t+1)^{2}}\tilde{\sigma}^{2}(\alpha)$$
Summing the last inequality for $t = 0$ to $t = T-1$, and using the facts $\sum_{t=0}^{T-1}(2t+1) = T^{2}$ and $2t+1 \leq 2(t+1)$, we get:

$$T^{2}F_{T}\leq\frac{cT^{2}\alpha^{2}}{4\mu(1-\alpha^{2}m)}\zeta^{2}+\frac{c^{2}LT}{2\mu^{2}(1-\alpha^{2}m)^{2}}\tilde{\sigma}^{2}(\alpha)$$

Dividing by $T^{2}$:

$$F_{T}\leq\frac{c\alpha^{2}}{4\mu(1-\alpha^{2}m)}\zeta^{2}+\frac{c^{2}L}{2\mu^{2}(1-\alpha^{2}m)^{2}T}\tilde{\sigma}^{2}(\alpha)$$

This is indeed the same rate, but without any hidden logarithmic factors in $T$.
To be rigorous, we need to make sure that our decreasing step size verifies $\eta_t \leq \eta_{\max}$. This means we cannot sum starting from $t = 0$; instead, we need to start from $t = t_0$ such that $\eta_{t_0} \leq \eta_{\max}$ is verified. Doing this leads to

$$F_{T}\leq\frac{c\alpha^{2}}{4\mu(1-\alpha^{2}m)}\zeta^{2}+\frac{c^{2}L}{2\mu^{2}(1-\alpha^{2}m)^{2}T}\tilde{\sigma}^{2}(\alpha)+\frac{t_{0}^{2}F_{t_{0}}}{T^{2}}$$
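A direct transcription of this step-size schedule, including the cap at $\eta_{\max}$ (a sketch; $c$, $\mu$, $\alpha$, $m$ and $\eta_{\max}$ as defined above):

```python
def step_size(t, c, mu, alpha, m, eta_max):
    """eta_t = c(2t+1) / (2 mu (1 - alpha^2 m) (t+1)^2), capped at eta_max.

    The summation in the proof starts at the first t0 for which the cap
    is inactive, i.e. step_size(t0, ...) < eta_max."""
    eta_t = c * (2 * t + 1) / (2 * mu * (1 - alpha**2 * m) * (t + 1) ** 2)
    return min(eta_t, eta_max)
```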
In the general case, where we are collaborating with $N$ agents using the weights $\{\tau_k\}_{k=1}^{N}$, it suffices, as discussed before, to replace $M_1$ by $M_{\mathrm{avg}}/N$ and $\sigma_1^2$ by $\sigma_{\mathrm{avg}}^2/N$.
**Choice of the weights $\{\tau_k\}_{k=1}^{N}$.** Based on the µ-PL bound, the best choice of the weights $\{\tau_k\}_{k=1}^{N}$ is given by the following constrained quadratic programming problem:

$$\min_{\tau_{1}\geq0,\ldots,\tau_{N}\geq0,\;\sum_{j}\tau_{j}=1}\;\sum_{k=1}^{N}\frac{L}{\mu T(1-\alpha^{2}m)}\tau_{k}^{2}\sigma_{k}^{2}+\tau_{k}\zeta_{k}^{2}\,.$$
As $T \to \infty$, the program becomes that of minimizing the average bias, i.e.,

$$\min_{\tau_{1}\geq0,\ldots,\tau_{N}\geq0,\;\sum_{j}\tau_{j}=1}\;\sum_{k=1}^{N}\tau_{k}\zeta_{k}^{2}\,.$$
![20_image_0.png](20_image_0.png)

Figure 5: Collaborative training speedup factor $1/(1-\alpha_{\mathrm{opt}})$, indicated as color, as a function of the number of collaborators $N$ (y-axis) and $\frac{L\sigma_{0}^{2}}{\mu T\zeta^{2}}$ (x-axis) for $m = 0$. The bigger $N$ is, and the smaller the cumulative bias $T\zeta^{2}$ is relative to $\sigma_{0}^{2}$, the bigger the resulting speedup from collaboration.
The solution to this problem is easy: only the agents with the smallest bias get a non-zero weight. However, for finite $T$, the term $\sum_{k=1}^{N}\tau_{k}^{2}\sigma_{k}^{2}$ also plays a role, and the weights should be chosen to minimize it too. What is important is that, as expected, the smaller $\zeta_{k}^{2}$ and $\sigma_{k}^{2}$ are, the bigger the weight given to agent $k$ (one numerical way to obtain these weights is sketched below).
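The finite-$T$ program above is a small simplex-constrained quadratic program; here is one illustrative way to solve it numerically with SciPy's generic solver (our sketch, not the paper's code):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(sigma2, zeta2, L, mu, T, alpha, m):
    """Minimize sum_k [L/(mu T (1 - alpha^2 m)) tau_k^2 sigma_k^2 + tau_k zeta_k^2]
    over the probability simplex."""
    sigma2 = np.asarray(sigma2, dtype=float)
    zeta2 = np.asarray(zeta2, dtype=float)
    N = len(sigma2)
    coef = L / (mu * T * (1 - alpha**2 * m))

    res = minimize(
        lambda tau: np.sum(coef * tau**2 * sigma2 + tau * zeta2),
        np.full(N, 1.0 / N),                     # start from uniform weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * N,                 # tau_k >= 0
        constraints=[{"type": "eq", "fun": lambda tau: np.sum(tau) - 1.0}],
    )
    return res.x
```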
To study the effect of $N$ on the convergence rate, we pick a middle ground where $\sigma_{k}^{2} = \sigma_{0}^{2}$ and $\zeta_{k} = \zeta$ for all agents $k$.

**Choice of the collaboration weight $\alpha$.** The collaboration weight $\alpha$ is chosen as follows:

$$\alpha\in\operatorname*{arg\,min}_{\alpha\in(0,1/\sqrt{m})}\frac{L\tilde{\sigma}(\alpha)^{2}}{\mu^{2}T(1-\alpha^{2}m)^{2}}+\frac{\alpha^{2}}{\mu(1-\alpha^{2}m)}\zeta^{2}$$
For $m = 0$, which means the bias is bounded, we have $\alpha_{\mathrm{opt}} = \big(1 + \frac{1}{N} + \frac{\mu\zeta^{2}T}{L\sigma_{0}^{2}}\big)^{-1}$ and we obtain a speedup $F_{T} = \tilde{\mathcal{O}}\big(\frac{L\sigma_{0}^{2}}{2\mu^{2}T}(1-\alpha_{\mathrm{opt}})\big)$. The speedup factor $1/(1-\alpha_{\mathrm{opt}})$ is illustrated in Figure 5.
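The closed form for $\alpha_{\mathrm{opt}}$ is cheap to evaluate; the following sketch reproduces the speedup factor plotted in Figure 5:

```python
def speedup_factor(N, L, sigma0_sq, mu, T, zeta_sq):
    """1/(1 - alpha_opt) for m = 0, with
    alpha_opt = (1 + 1/N + mu * zeta^2 * T / (L * sigma0^2))^-1."""
    alpha_opt = 1.0 / (1.0 + 1.0 / N + mu * zeta_sq * T / (L * sigma0_sq))
    return 1.0 / (1.0 - alpha_opt)

# With zero bias, the speedup is exactly N + 1 (here 11).
print(speedup_factor(N=10, L=1.0, sigma0_sq=1.0, mu=0.1, T=1000, zeta_sq=0.0))
```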
We note in particular that for $\zeta^{2} = 0$ the speedup is linear, and only in this case do we get such a speedup. Now, if $m \neq 0$, then even in the favorable case $\zeta^{2} = 0$, Figure 6 shows how much we deviate from the linear speedup (obtained for $m = 0$) as $m$ moves away from zero.
![21_image_0.png](21_image_0.png)

Figure 6: Effect of $m$ (which controls the non-constant noise and is related to scaling) on the speedup of WGA when $\zeta = 0$ (avg converges to the same point as agent 0). The dashed line represents the linear speedup $N+1 \mapsto N+1$ encountered for $m = 0$ ($N+1$ is the total number of agents, including agent 0). We notice that as $m$ grows, the speedup becomes more and more sub-linear.

## C.3 Proof Of Theorem 5.1
We will use a bias oracle on only one agent. The bias oracle gives an independent noisy estimate of the true gradient bias between agent 1 and agent 0: it is given by $\mathbf{c}_{t,\mathrm{oracle}} = \nabla_{\mathbf{x}}f_{1}(\mathbf{x}) - \nabla_{\mathbf{x}}f_{0}(\mathbf{x}) + \mathbf{n}_{t,\mathrm{oracle}}$, where $\mathbf{n}_{t,\mathrm{oracle}}$ is an independent noise of variance $v^{2}$. Using such an oracle means we are working with an unbiased estimate of $\nabla_{\mathbf{x}}f_{0}(\mathbf{x})$ with a variance equal to $\tilde{\sigma}^{2}(\alpha) = (1-\alpha)^{2}\sigma_{0}^{2} + \alpha^{2}(\sigma_{1}^{2} + v^{2})$.
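A quick Monte Carlo check of this variance formula (our illustration; the three noise sources are independent by assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, sigma0, sigma1, v = 0.5, 1.0, 2.0, 0.5
n = 200_000

n0 = sigma0 * rng.normal(size=n)   # noise of g0
n1 = sigma1 * rng.normal(size=n)   # noise of g1
no = v * rng.normal(size=n)        # noise of the bias oracle

noise = (1 - alpha) * n0 + alpha * (n1 - no)  # noise of the corrected estimate
print(np.var(noise))                                               # empirical
print((1 - alpha)**2 * sigma0**2 + alpha**2 * (sigma1**2 + v**2))  # 1.3125
```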
Now, using (3) and $L\eta \leq 1$, we get:

$$\begin{aligned}
\mathbb{E}_{\xi_{t}}[f_{0}(\mathbf{x}_{t+1})]-f_{0}(\mathbf{x}_{t}) &\leq -\frac{\eta}{2}\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{L\eta^{2}}{2}\left(M\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x})\|^{2}+\tilde{\sigma}^{2}(\alpha)\right)\\
&\leq \frac{\eta}{2}(-1+LM\eta)\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)
\end{aligned}$$

For $\eta \leq \frac{1}{2ML}$ we get:

$$F_{t+1}-F_{t}\leq-\frac{\eta}{c}\mathbb{E}\big[\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}\big]+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\tag{6}$$

where the constant $c = 2$ if $M = 0$ and $c = 4$ otherwise.
**Non-convex case of Theorem 5.1.** We rearrange the terms in (6), sum for $t = 0$ to $t = T-1$, and divide by $T$; we get, for all $\eta \leq \eta_{\max} := \min\big(\frac{1}{L},\frac{1}{2ML}\big)$:

$$\frac{1}{cT}\sum_{t=0}^{T-1}\mathbb{E}\Big[\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}\Big]\leq\frac{F_{0}}{\eta T}+\frac{L\eta}{2}\tilde{\sigma}^{2}(\alpha)\,.$$
Choosing $\eta = \min\left(\eta_{\max},\sqrt{\frac{2F_{0}}{L\tilde{\sigma}^{2}(\alpha)T}}\right)$, we get:

$$\frac{1}{cT}\sum_{t=0}^{T-1}\mathbb{E}\Big[\|\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})\|^{2}\Big]\leq\frac{F_{0}}{\eta_{\max}T}+\sqrt{\frac{2LF_{0}\tilde{\sigma}^{2}(\alpha)}{T}}\,.$$
**µ-PL case of Theorem 5.1.** We use Assumption A2 to get, for all $\eta \leq \eta_{\max} = \min\big(\frac{1}{L},\frac{1}{2ML}\big)$:

$$F_{t+1}\leq\Big(1-\frac{2\eta\mu}{c}\Big)F_{t}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\,.\tag{7}$$
A recurrence on (7) yields:

$$F_{T}\leq\Big(1-\frac{2\eta\mu}{c}\Big)^{T}F_{0}+\frac{L\eta^{2}}{2}\tilde{\sigma}^{2}(\alpha)\sum_{i=0}^{T-1}\Big(1-\frac{2\eta\mu}{c}\Big)^{i}\leq\Big(1-\frac{2\eta\mu}{c}\Big)^{T}F_{0}+\frac{cL\eta}{4\mu}\tilde{\sigma}^{2}(\alpha)$$

All that is left is to set $2\eta/c = \min\left(\eta_{\max},\frac{\log(\max(2,\frac{2\mu F_{0}T}{3L\tilde{\sigma}(\alpha)^{2}}))}{2\mu T}\right)$ to get:

$$F_{T}=\tilde{\mathcal{O}}\left(F_{0}\exp\big(-\mu\eta_{\max}T\big)+\frac{L\tilde{\sigma}(\alpha)^{2}}{\mu^{2}T}\right).$$
## C.4 Proof Of Theorem 5.2

The gradient estimator used in our bias correction algorithm, $\mathbf{g}(\mathbf{x}_t) := (1-\alpha_t)\mathbf{g}_0(\mathbf{x}_t) + \alpha_t(\mathbf{g}_1(\mathbf{x}_t) - \mathbf{c}_t)$, can be decomposed into a bias term and a noise term in the following way:

$$\mathbf{g}(\mathbf{x}_{t}):=\nabla_{\mathbf{x}}f_{0}(\mathbf{x}_{t})+\underbrace{\alpha\,\mathbb{E}[\mathbf{b}_{t}-\mathbf{c}_{t}]}_{\text{bias}}+\underbrace{\mathbf{n}_{t,\text{total}}}_{\text{noise}}$$

where $\mathbf{b}_t = \mathbf{g}_1(\mathbf{x}_t) - \mathbf{g}_0(\mathbf{x}_t)$ is the observed stochastic gradient bias at time $t$. Using the $L$-smoothness of $f_0$ and $\eta < 1/L$, (3) would give us the following inequality:

$$F_{t+1}-F_{t}\leq\frac{\eta}{2}\left(-\mathbb{E}\Big[\|\nabla f_{0}(\mathbf{x}_{t})\|^{2}\Big]+\alpha^{2}\mathbb{E}\Big[\|\mathbb{E}[\mathbf{b}_{t}-\mathbf{c}_{t}]\|^{2}\Big]\right)+\frac{L\eta^{2}}{2}\mathbb{E}\Big[\|\mathbf{n}_{t,\text{total}}\|^{2}\Big]\,.$$
However, due to the dependence of $\mathbf{c}_t$ on the past, **this is not true**. For this reason, we use a different proof strategy. We have:

$$\mathbf{g}(\mathbf{x}^{t})=(1-\alpha)\mathbf{g}_{0}(\mathbf{x}^{t})+\alpha\mathbf{g}_{1}(\mathbf{x}^{t})-\alpha\mathbf{c}^{t}$$

where

$$\mathbf{c}^{t}=(1-\beta)\mathbf{c}^{t-1}+\beta(\mathbf{g}_{1}(\mathbf{x}^{t-1})-\mathbf{g}_{0}(\mathbf{x}^{t-1}))$$
**Descent lemma.** Using the $L$-smoothness of $f_0$ we have:

$$f_{0}(\mathbf{x}^{t+1})-f_{0}(\mathbf{x}^{t})\leq-\eta\langle\nabla f_{0}(\mathbf{x}^{t}),\mathbf{g}(\mathbf{x}^{t})\rangle+\frac{L\eta^{2}}{2}\|\mathbf{g}(\mathbf{x}^{t})\|_{2}^{2}$$
Due to the dependence of $\mathbf{x}^t$ on $\mathbf{c}^t$, we cannot take the expectation inside the inner product. However, if we condition on the past (this conditional expectation will be denoted $\mathbb{E}_t$), then $\mathbf{c}^t$ is constant and we have:

$$\mathbb{E}_{t}\langle\nabla f_{0}(\mathbf{x}^{t}),\mathbf{g}(\mathbf{x}^{t})\rangle=\langle\nabla f_{0}(\mathbf{x}^{t}),(1-\alpha)\nabla f_{0}(\mathbf{x}^{t})+\alpha\nabla f_{1}(\mathbf{x}^{t})-\alpha\mathbf{c}^{t}\rangle$$

and

$$\mathbb{E}_{t}\|\mathbf{g}(\mathbf{x}^{t})\|_{2}^{2}=\underbrace{\sigma^{2}(\alpha)}_{=(1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}\sigma_{1}^{2}}+\|(1-\alpha)\nabla f_{0}(\mathbf{x}^{t})+\alpha\nabla f_{1}(\mathbf{x}^{t})-\alpha\mathbf{c}^{t}\|_{2}^{2}$$
So
|
904 |
+
|
905 |
+
$$\mathbb{E}_{t}f_{0}(\mathbf{x}^{t+1})-f_{0}(\mathbf{x}^{t})\leq-\eta(\nabla f_{0}(\mathbf{x}^{t}),(1-\alpha)\nabla f_{0}(\mathbf{x}^{t})+\alpha\nabla f_{1}(\mathbf{x}^{t})-\alpha\mathbf{c}^{t})$$ $$+\frac{L\eta^{2}}{2}(\sigma^{2}(\alpha)+\|(1-\alpha)\nabla f_{0}(\mathbf{x}^{t})+\alpha\nabla f_{1}(\mathbf{x}^{t})-\alpha\mathbf{c}^{t}\|_{2}^{2})$$ $$\leq\frac{\eta}{2}\big{(}-\|\nabla f_{0}(\mathbf{x}^{t})\|_{2}^{2}+\alpha^{2}\|\nabla f_{1}(\mathbf{x}^{t})-\nabla f_{0}(\mathbf{x}^{t})-\mathbf{c}^{t}\|_{2}^{2}\big{)}$$ $$+\frac{L\eta^{2}}{2}\sigma^{2}(\alpha)$$
|
906 |
+
|
907 |
+
Where we have used above ηL ≤ 1 and the identity −2⟨a + b, a⟩ + ∥a + b∥
|
908 |
+
2 2 = ∥b∥
|
909 |
+
2 2 − ∥a∥
|
910 |
+
2 2
|
911 |
+
.
So

$$\begin{aligned}
\mathbb{E}[f_{0}(\mathbf{x}^{t+1})]-\mathbb{E}[f_{0}(\mathbf{x}^{t})] &\leq \frac{\eta}{2}\left(-\mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{t})\|_{2}^{2}]+\alpha^{2}\mathbb{E}[\|\nabla f_{1}(\mathbf{x}^{t})-\nabla f_{0}(\mathbf{x}^{t})-\mathbf{c}^{t}\|_{2}^{2}]\right)+\frac{L\eta^{2}}{2}\sigma^{2}(\alpha)\\
&\leq -\frac{\eta}{2}\mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{t})\|_{2}^{2}]+\frac{L\eta^{2}}{2}\sigma^{2}(\alpha)+\alpha^{2}\eta\,\mathbb{E}[\|\nabla f_{1}(\mathbf{x}^{t})-\nabla f_{0}(\mathbf{x}^{t})-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|_{2}^{2}]\\
&\quad+\alpha^{2}\eta\,\mathbb{E}[\|\mathbf{c}^{t}-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|_{2}^{2}]
\end{aligned}$$
Using the δ-BHD assumption, we have:

$$\mathbb{E}[\|\nabla f_{1}(\mathbf{x}^{t})-\nabla f_{0}(\mathbf{x}^{t})-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|_{2}^{2}]\leq\delta^{2}\mathbb{E}[\|\mathbf{x}^{t}-\mathbf{x}^{t-1}\|_{2}^{2}]:=\delta^{2}\Delta^{t}$$
We will use the notation $E_{c}^{t} = \mathbb{E}[\|\mathbf{c}^{t}-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|_{2}^{2}]$, $G^{t} = \mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{t})\|_{2}^{2}]$ and $F_{t} = \mathbb{E}[f_{0}(\mathbf{x}^{t})]-f_{0}^{\star}$.

All in all, we have:

$$F_{t+1}-F_{t}\leq\frac{-\eta}{2}G^{t}+\frac{L\eta^{2}}{2}\sigma^{2}(\alpha)+\alpha^{2}\delta^{2}\eta\Delta^{t}+\alpha^{2}\eta E_{c}^{t}\tag{8}$$
**Bounding $\Delta^t$.** We also show that:

$$\Delta^{t}\leq\eta^{2}\big(\sigma^{2}(\alpha)+3G^{t-1}+3\alpha^{2}\delta^{2}\Delta^{t-1}+3\alpha^{2}E_{c}^{t-1}\big)\tag{9}$$
*Proof.*

$$\begin{aligned}
\Delta^{t} &= \mathbb{E}[\|\mathbf{x}^{t}-\mathbf{x}^{t-1}\|_{2}^{2}] = \eta^{2}\mathbb{E}[\|\mathbf{g}(\mathbf{x}^{t-1})\|_{2}^{2}]\\
&= \eta^{2}\left(\sigma^{2}(\alpha)+\mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{t-1})+\alpha(\nabla f_{1}(\mathbf{x}^{t-1})-\nabla f_{0}(\mathbf{x}^{t-1})-\mathbf{c}^{t-1})\|_{2}^{2}]\right)\\
&\leq \eta^{2}\Big(\sigma^{2}(\alpha)+3\mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{t-1})\|_{2}^{2}]+3\alpha^{2}\mathbb{E}[\|\nabla f_{1}(\mathbf{x}^{t-1})-\nabla f_{0}(\mathbf{x}^{t-1})-\nabla f_{1}(\mathbf{x}^{t-2})+\nabla f_{0}(\mathbf{x}^{t-2})\|_{2}^{2}]\\
&\qquad+3\alpha^{2}\mathbb{E}[\|\mathbf{c}^{t-1}-\nabla f_{1}(\mathbf{x}^{t-2})+\nabla f_{0}(\mathbf{x}^{t-2})\|_{2}^{2}]\Big)\\
&\leq \eta^{2}\big(\sigma^{2}(\alpha)+3G^{t-1}+3\alpha^{2}\delta^{2}\Delta^{t-1}+3\alpha^{2}E_{c}^{t-1}\big) \qquad\square
\end{aligned}$$
**Bounding the momentum error $E_c^t$.** Using the recursive definition of $\mathbf{c}^t$, it is easy to prove:

$$E_{c}^{t}\leq(1-\beta)E_{c}^{t-1}+\frac{2\delta^{2}}{\beta}\Delta^{t-1}+\beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2})\tag{10}$$
*Proof.*

$$\begin{aligned}
E_{c}^{t} &= \mathbb{E}[\|\mathbf{c}^{t}-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|^{2}]\\
&= \mathbb{E}[\|(1-\beta)\mathbf{c}^{t-1}+\beta(\mathbf{g}_{1}(\mathbf{x}^{t-1})-\mathbf{g}_{0}(\mathbf{x}^{t-1}))-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|^{2}]\\
&= \beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2})+(1-\beta)^{2}\mathbb{E}[\|\mathbf{c}^{t-1}-\nabla f_{1}(\mathbf{x}^{t-1})+\nabla f_{0}(\mathbf{x}^{t-1})\|^{2}]\\
&\leq \beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2})+(1-\beta)^{2}\Big(1+\frac{\beta}{2}\Big)E_{c}^{t-1}+(1-\beta)^{2}\Big(1+\frac{2}{\beta}\Big)\mathbb{E}[\|\nabla f_{1}(\mathbf{x}^{t-1})-\nabla f_{0}(\mathbf{x}^{t-1})-\nabla f_{1}(\mathbf{x}^{t-2})+\nabla f_{0}(\mathbf{x}^{t-2})\|^{2}]\\
&\leq \beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2})+(1-\beta)^{2}\Big(1+\frac{\beta}{2}\Big)E_{c}^{t-1}+(1-\beta)^{2}\Big(1+\frac{2}{\beta}\Big)\delta^{2}\mathbb{E}[\|\mathbf{x}^{t-1}-\mathbf{x}^{t-2}\|_{2}^{2}]\\
&\leq (1-\beta)E_{c}^{t-1}+\frac{2\delta^{2}}{\beta}\Delta^{t-1}+\beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2}) \qquad\square
\end{aligned}$$
**Non-convex case.** Combining Inequalities (8), (9) and (10), we prove that for $\eta \leq 1/(6\alpha^{2}\delta^{2})$:

$$\Phi_{t+1}-\Phi_{t}\leq\frac{L\sigma^{2}(\alpha)}{2}\eta^{2}+\frac{10\alpha^{2}\delta^{2}\sigma^{2}(\alpha)}{\beta^{2}}\eta^{3}+2\alpha^{2}\beta\eta(\sigma_{0}^{2}+\sigma_{1}^{2})-\frac{\eta}{4}G^{t}\tag{11}$$
for the potential $\Phi_{t} = F_{t}+\frac{2\alpha^{2}\eta}{\beta}E_{c}^{t}+\frac{10\alpha^{2}\delta^{2}\eta}{\beta^{2}}\Delta^{t}$.
By summing Inequality (11) from $t = 0$ to $T-1$, and by noting that $\Delta^{0}\leq\eta^{2}\big(2\zeta^{2}+2(1+m)\mathbb{E}[\|\nabla f_{0}(\mathbf{x}^{0})\|^{2}]\big):=\eta^{2}\tilde{\zeta}^{2}$, we get:

$$\frac{1}{4T}\sum_{t=0}^{T-1}G^{t}\leq\frac{F_{0}}{\eta T}+\frac{2\alpha^{2}}{\beta T}E_{c}^{0}+\frac{L\sigma^{2}(\alpha)}{2}\eta+\frac{10\alpha^{2}\delta^{2}\eta^{2}}{\beta^{2}}(\tilde{\zeta}^{2}/T+\sigma^{2}(\alpha))+2\alpha^{2}\beta(\sigma_{0}^{2}+\sigma_{1}^{2})$$
At this level, we choose $\beta\in\operatorname*{arg\,min}_{\beta\in[0,1]}\frac{10\alpha^{2}\delta^{2}\eta^{2}}{\beta^{2}}(\tilde{\zeta}^{2}/T+\sigma^{2}(\alpha))+2\alpha^{2}\beta(\sigma_{0}^{2}+\sigma_{1}^{2})$, which means choosing $\beta=\min\Big(1,\big(\frac{10\delta^{2}(\tilde{\zeta}^{2}/T+\sigma^{2}(\alpha))}{\sigma_{0}^{2}+\sigma_{1}^{2}}\big)^{1/3}\eta^{2/3}\Big)$. This choice gives the inequality in Theorem 5.2:

$$\frac{1}{4T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla f_{0}(\mathbf{x}_{t})\|^{2}]\leq\frac{F_{0}}{\eta T}+\frac{4\alpha^{2}E_{0}}{\beta T}+12\alpha^{2}\big((\sigma_{0}^{2}+\sigma_{a}^{2})(\tilde{\zeta}^{2}/T+\sigma^{2}(\alpha))\big)^{1/3}(\delta\eta)^{2/3}+\frac{L\sigma^{2}(\alpha)}{2}\eta+10\alpha^{2}\delta^{2}\sigma^{2}(\alpha)\eta^{2}\,.$$
The term $\frac{4\alpha^{2}E_{0}}{\beta T}$ has a smaller magnitude than the term $\frac{F_{0}}{\eta T}$ (because $\lim_{\eta\to0}\eta/\beta=0$). Furthermore, using a batch $S$ times larger for estimating the first bias means that $E_{0}\leq(\sigma_{0}^{2}+\sigma_{a}^{2})/S$.
**µ-PL case.** For the µ-PL case, we use the fact that $2\mu(f_{0}(\mathbf{x})-f_{0}^{\star})\leq\|\nabla f_{0}(\mathbf{x})\|_{2}^{2}\leq2L(f_{0}(\mathbf{x})-f_{0}^{\star})$, which is equivalent (in our notation) to $2\mu F_{t}\leq G^{t}\leq2LF_{t}$.
Combining this with Inequalities (8), (9) and (10), we get:

$$\begin{array}{l}{{F_{t+1}\leq(1-\eta\mu)F_{t}+\frac{L\eta^{2}}{2}\sigma^{2}(\alpha)+\alpha^{2}\delta^{2}\eta\Delta^{t}+\alpha^{2}\eta E_{c}^{t}}}\\ {{\Delta^{t}\leq\eta^{2}\big(\sigma^{2}(\alpha)+6LF_{t-1}+3\alpha^{2}\delta^{2}\Delta^{t-1}+3\alpha^{2}E_{c}^{t-1}\big)}}\\ {{E_{c}^{t}\leq(1-\beta)E_{c}^{t-1}+\frac{2\delta^{2}}{\beta}\Delta^{t-1}+\beta^{2}(\sigma_{0}^{2}+\sigma_{1}^{2})}}\end{array}$$
Combining these three inequalities, we get:

$$\Phi_{t+1}\leq\Big(1-\frac{\mu\eta}{2}\Big)\Phi_{t}+\frac{L\sigma^{2}(\alpha)}{2}\eta^{2}+\frac{10\alpha^{2}\delta^{2}\sigma^{2}(\alpha)}{\beta^{2}}\eta^{3}+2\alpha^{2}\beta\eta(\sigma_{0}^{2}+\sigma_{1}^{2})\tag{12}$$
for the same potential as in the non-convex case. Iterating this inequality gives:

$$\Phi_{T}\leq\Big(1-\frac{\mu\eta}{2}\Big)^{T}\Phi_{0}+\frac{L\sigma^{2}(\alpha)}{\mu}\eta+\frac{20\alpha^{2}\delta^{2}\sigma^{2}(\alpha)}{\mu\beta^{2}}\eta^{2}+\frac{4\alpha^{2}\beta}{\mu}(\sigma_{0}^{2}+\sigma_{1}^{2})\tag{13}$$
At this point, we choose the $\beta$ that optimizes the right-hand side of the previous inequality, obtaining $\beta=\min\Big(1,\big(\frac{10\delta^{2}\sigma^{2}(\alpha)}{\sigma_{0}^{2}+\sigma_{1}^{2}}\big)^{1/3}\eta^{2/3}\Big)$. We then get

$$\Phi_{T}\leq\Big(1-\frac{\mu\eta}{2}\Big)^{T}\Phi_{0}+\frac{L\sigma^{2}(\alpha)}{\mu}\eta+24\alpha^{2}\big((\sigma_{0}^{2}+\sigma_{a}^{2})\sigma^{2}(\alpha)\big)^{1/3}(\delta\eta)^{2/3}/\mu\tag{14}$$
To beat training alone, we would need $(\delta\eta)^{2/3}\ll\eta$, which means $\delta^{2}\ll\eta$. As $\eta$ is of order $\frac{1}{T}$ in the µ-PL case, this means we need $\delta^{2}=o(\frac{1}{T})$ to beat training alone.
Choosing $\eta=\min\Big(\eta_{\max},\frac{\log(\max(2,\frac{2\mu\Phi_{0}T}{3L\sigma^{2}(\alpha)}))}{\mu T}\Big)$ we get:

$$\Phi_{T}\in\tilde{\mathcal{O}}\Big(\Phi_{0}\exp\big(-\mu\eta_{\max}T/2\big)+\frac{L\sigma^{2}(\alpha)}{\mu^{2}T}+24\alpha^{2}\big((\sigma_{0}^{2}+\sigma_{a}^{2})\sigma^{2}(\alpha)\big)^{1/3}\delta^{2/3}/(\mu^{5/3}T^{2/3})\Big)$$
**Choices of the weights.** The optimal choices of the weights $\alpha$ and $\tau_k$ are obtained by minimizing the right-hand side of the above inequality; this gives a quadratic problem that needs to be solved under the constraints $\sum_{k=1}^{N}\tau_{k}=1$ and $\tau_{k}\geq0$. As $T$ goes to $\infty$, the bias $\zeta^{2}$ disappears and this choice is fully dictated by the variance. In fact, we can simply minimize the variance $\sigma^{2}(\alpha)=(1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}\sum_{k}\tau_{k}^{2}\sigma_{k}^{2}$.
**Proof of Corollary 5.3.** Now suppose $\delta^{2}=o(\frac{1}{\sqrt{T}})$, for example $\delta^{2}=\frac{\delta_{0}^{2}}{T^{3a+1/2}}$ for some $a>0$. Then, by choosing $\eta=\min\Big(1/L,\;1/(6\alpha^{2}\delta^{2}),\;\sqrt{\frac{2F_{0}}{L\sigma^{2}(\alpha)T}}\Big)$ we get:
$$\begin{aligned}
\frac{1}{4T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla f_{0}(\mathbf{x}_{t})\|^{2}] &\leq 3\sqrt{\frac{LF_{0}\sigma^{2}(\alpha)}{T}}+12\alpha^{2}\Big(\frac{\sigma_{0}^{2}+\sigma_{a}^{2}}{\sigma^{2}(\alpha)}(\tilde{\zeta}^{2}/T+\sigma^{2}(\alpha))\frac{2\delta_{0}^{2}F_{0}}{L}\Big)^{1/3}\frac{1}{T^{1/2+a}}\\
&\quad+4\alpha^{2}E_{0}\Big(\frac{L(\sigma_{0}^{2}+\sigma_{a}^{2})}{10\alpha^{2}F_{0}}\Big)^{1/3}\frac{1}{T^{2/3}}+\frac{(L+\alpha^{2}\delta^{2}+\alpha^{2}\delta^{2}/L)F_{0}+4\alpha^{2}E_{0}}{T}
\end{aligned}$$
We can choose $\alpha$ and the weights $\tau_k$ so as to optimize $\sigma^{2}(\alpha)=(1-\alpha)^{2}\sigma_{0}^{2}+\alpha^{2}\sum_{k}\tau_{k}^{2}\sigma_{k}^{2}$, but we can simply choose $\alpha=\frac{N}{N+1}$ and $\tau_{k}=\frac{1}{N}$; this guarantees that $\sigma^{2}(\alpha)=\frac{\sigma_{\mathrm{avg}}^{2}}{N}$ for $\sigma_{\mathrm{avg}}^{2}=\frac{\sum_{k=0}^{N}\sigma_{k}^{2}}{N}$ the average variance. This choice of the weights implies that the dominant order in $T$ enjoys a linear speedup in $N$, which is the statement of Corollary 5.3.
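A short sanity check of this choice (our illustration): with $\alpha = N/(N+1)$, $\tau_k = 1/N$ and equal per-agent variances, $\sigma^2(\alpha)$ shrinks linearly with the number of agents.

```python
import numpy as np

def sigma2_of_alpha(alpha, sigma0_sq, tau, sigma_sq):
    """sigma^2(alpha) = (1 - alpha)^2 sigma0^2 + alpha^2 sum_k tau_k^2 sigma_k^2."""
    tau, sigma_sq = np.asarray(tau), np.asarray(sigma_sq)
    return (1 - alpha)**2 * sigma0_sq + alpha**2 * np.sum(tau**2 * sigma_sq)

N = 20
print(sigma2_of_alpha(N / (N + 1), 1.0, np.full(N, 1.0 / N), np.ones(N)))
# approx 1/(N+1): the variance shrinks linearly with the number of agents
```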
## D Code

The code for our experiments can be found at https://anonymous.4open.science/r/LinSpeedUpCode-F695.