RedTachyon committed
Commit: e1e998b
Parent(s): 4d5bd37

Upload folder using huggingface_hub
Browse files

- ptZiZAli6D/10_image_0.png +3 -0
- ptZiZAli6D/11_image_0.png +3 -0
- ptZiZAli6D/13_image_0.png +3 -0
- ptZiZAli6D/1_image_0.png +3 -0
- ptZiZAli6D/21_image_0.png +3 -0
- ptZiZAli6D/21_image_1.png +3 -0
- ptZiZAli6D/22_image_0.png +3 -0
- ptZiZAli6D/22_image_1.png +3 -0
- ptZiZAli6D/23_image_0.png +3 -0
- ptZiZAli6D/23_image_1.png +3 -0
- ptZiZAli6D/23_image_2.png +3 -0
- ptZiZAli6D/24_image_0.png +3 -0
- ptZiZAli6D/24_image_1.png +3 -0
- ptZiZAli6D/2_image_0.png +3 -0
- ptZiZAli6D/3_image_0.png +3 -0
- ptZiZAli6D/5_image_0.png +3 -0
- ptZiZAli6D/8_image_0.png +3 -0
- ptZiZAli6D/8_image_1.png +3 -0
- ptZiZAli6D/ptZiZAli6D.md +742 -0
- ptZiZAli6D/ptZiZAli6D_meta.json +25 -0
ptZiZAli6D/ptZiZAli6D.md
ADDED
@@ -0,0 +1,742 @@
# MANDERA: Malicious Node Detection in Federated Learning via Ranking

Anonymous authors. Paper under double-blind review.

## Abstract

Byzantine attacks aim to hinder the deployment of federated learning algorithms by sending malicious gradients that degrade the model. Although benign and Byzantine gradients are distributed differently, identifying the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its own distribution, and (2) the benign and malicious gradients are mixed together, so two-sample test methods cannot be applied directly. To address these issues, we propose MANDERA, which is theoretically guaranteed to efficiently detect all malicious gradients under Byzantine attacks with no prior knowledge or history about the number of attacked nodes. More specifically, we propose to transfer the original updating gradient space into a ranking matrix. Under this transformation, the scales of the different gradient dimensions become identical in the ranking space, and the high-dimensional benign and malicious gradients can be easily separated. The effectiveness of MANDERA is further confirmed by experimentation on *four* Byzantine attack implementations (Gaussian, Zero Gradient, Sign Flipping, Shifted Mean), compared with state-of-the-art defences. The experiments cover both IID and non-IID datasets.

## 1 Introduction

Federated Learning (FL) is a decentralized learning framework that allows multiple participating nodes to learn on a local collection of training data. The updating gradient values of each node are sent to a global coordinator for aggregation. The global model collectively learns from the individual nodes by aggregating the gradient updates before relaying the updated global model back to the participating nodes. The aggregation over multiple nodes allows the model to learn from a larger dataset, resulting in greater performance than models learning only on their local subset of data. FL presents two key advantages: (1) increased privacy for the contributing node, as local data is not communicated to the global coordinator, and (2) a reduction in computation by the global node, as the computation is offloaded to the contributing nodes.

However, FL is vulnerable to various attacks, including data poisoning attacks (Tolpegin et al., 2020) and Byzantine attacks (Lamport et al., 2019). Malicious actors in the collaborative process may seek to poison the performance of the global model, to reduce the output performance of the model (Chen et al., 2017; Baruch et al., 2019; Fang et al., 2020; Tolpegin et al., 2020), or to embed hidden back-doors within the model (Bagdasaryan et al., 2020). Byzantine attacks aim to devastate the performance of the global model by manipulating the gradient values; the manipulated gradients are sent from malicious nodes which are unknown to the global node. Byzantine attacks can result in a global model which produces an undesirable outcome (Lamport et al., 2019).

Researchers seek to defend FL from the negative impacts of these attacks, either by identifying the malicious nodes or by making the global model more robust to such attacks. In this paper, we focus on identifying the malicious nodes so as to exclude them from the aggregation step and thereby mitigate their impact. Most existing methods rely on the gradient values to determine whether a node is malicious, for example Blanchard et al. (2017); Yin et al. (2018); Guerraoui et al. (2018); Li et al. (2020); Fang et al. (2020); Cao et al. (2020); Wu et al. (2020b); Xie et al. (2019; 2020); Cao et al. (2021) and So et al. (2021). All the above methods are effective in certain scenarios.

![1_image_0.png](1_image_0.png)

Figure 1: Patterns of nodes in gradient space and ranking space respectively under mean shift attacks. The columns of the figure represent the number of malicious nodes among 100 nodes: 10, 20 and 30.

There is a lack of theoretical guarantees in the literature for detecting all the malicious nodes. Although extreme malicious gradients can be excluded by the above approaches, some malicious nodes could be mis-classified as benign and vice versa. The challenges in the community are caused by two phenomena: [F1] the gradient values of benign nodes and malicious nodes are often non-distinguishable; and [F2] the gradient matrix is always high-dimensional (a large number of columns) and each dimension follows its own distribution. Phenomenon [F1] indicates that it is not reliable to detect malicious nodes using only a single column of the gradient matrix, while phenomenon [F2] hinders us from using all the columns, because doing so requires a principled way to accommodate a large number of columns with considerably different distributions.

In this paper, we propose to resolve these critical challenges from a novel perspective. Instead of working on the node updates directly, we extract information about malicious nodes indirectly by transforming the node updates from numeric gradient values to the ranking space. Compared to the original numeric gradient values, whose distribution is difficult to model, the rankings are much easier to handle both theoretically and practically. Moreover, as rankings are scale-free, we no longer need to worry about scale differences across dimensions. We prove, under mild conditions, that the first two moments of the transformed ranking vectors carry the key information to detect malicious nodes under Byzantine attacks. Based on these theoretical results, a highly efficient method called MANDERA is proposed to separate the malicious nodes from the benign ones by clustering all local nodes into two groups based on their ranking vectors. Figure 1 gives an illustrative motivation for our method, demonstrating the behaviors of malicious and benign nodes under mean shift attacks: the malicious and benign nodes are not distinguishable in the gradient space due to the challenges mentioned above, while they are well separated in the ranking space.

The contributions of this work are as follows: (1) we propose the first algorithm leveraging the ranking space of model updates to detect malicious nodes (Figure 2); (2) we provide a theoretical guarantee for the detection of malicious nodes based on the ranking space under Byzantine attacks; (3) our method does not assume knowledge of the number of malicious nodes, which is required in the learning process of most prior methods; (4) we experimentally demonstrate the effectiveness and robustness of our defense against Byzantine attacks, including the Gaussian attack (GA), Sign Flipping attack (SF), Zero Gradient attack (ZG) and Mean Shift attack (MS); and (5) we provide an experimental comparison between MANDERA and a collection of robust aggregation techniques.

Related works. In the literature, there has been a collection of efforts on defending against Byzantine attacks. Blanchard et al. (2017) propose a defense referred to as Krum that treats local nodes whose update vector is too far away from the aggregated barycenter as malicious and precludes them from the downstream aggregation. Guerraoui et al. (2018) propose Bulyan, a process that performs aggregation on subsets of node updates (by iteratively leaving each node out) to find a set of nodes with the most aligned updates given an aggregation rule. Cao et al. (2020) maintain a trusted model and dataset on which submitted node updates may be bootstrapped, weighting each node's update in the aggregation step based on its cosine similarity to the trusted update. Xie et al. (2019) compute a *Stochastic Descendant Score* (SDS) based on the estimated descendant of the loss function and the magnitude of the update submitted to the global node, and only include a predefined number of nodes with the highest SDS in the aggregation.

On the other hand, Chen et al. (2021) propose a zero-knowledge approach to detect and remove malicious nodes by solving a weighted clustering problem. The resulting clusters update the model individually, and accuracy against a validation set is checked; all nodes in a cluster with a significant negative accuracy impact are rejected and removed from the aggregation step.

![2_image_0.png](2_image_0.png)

Figure 2: An overview of MANDERA.

## 2 Defense Against Byzantine Attacks Via Ranking

In this section, notations are first introduced and an algorithm to detect malicious nodes is proposed.

## 2.1 Notations

Suppose there are $n$ local nodes in the federated learning framework, where $n_1$ nodes are benign nodes whose indices are denoted by $\mathcal{I}_b$ and the other $n_0 = n - n_1$ nodes are malicious nodes whose indices are denoted by $\mathcal{I}_m$. The training model is denoted by $f(\theta, D)$, where $\theta \in \mathbb{R}^{p\times 1}$ is a $p$-dimensional parameter vector and $D$ is a data matrix. Denote the message matrix received by the central server from all local nodes as $M \in \mathbb{R}^{n\times p}$, where $M_{i,:}$ denotes the message received from node $i$. For a benign node $i$, let $D_i$ be the data matrix on it with $N_i$ as the sample size; we have $M_{i,:} = \frac{\partial f(\theta, D_i)}{\partial \theta}\big|_{\theta=\theta^*}$, where $\theta^*$ is the parameter value from the global model. In the rest of the paper, we suppress $\frac{\partial f(\theta, D_i)}{\partial \theta}\big|_{\theta=\theta^*}$ to $\frac{\partial f(\theta, D_i)}{\partial \theta}$ for simplicity. A malicious node $j \in \mathcal{I}_m$, however, tends to attack the learning system by manipulating $M_{j,:}$ in some way. Hereinafter, we denote $N^* = \min(\{N_i\}_{i\in\mathcal{I}_b})$ as the minimal sample size of the benign nodes.

Given a vector of real numbers $a \in \mathbb{R}^{n\times 1}$, define its ranking vector as $b = \mathrm{Rank}(a) \in \mathrm{perm}\{1, \cdots, n\}$, where the ranking operator $\mathrm{Rank}$ maps the vector $a$ to an element of the permutation space $\mathrm{perm}\{1, \cdots, n\}$, the set of all permutations of $\{1, \cdots, n\}$. For example, $\mathrm{Rank}(1.1, -2, 3.2) = (2, 3, 1)$: values are ranked from largest to smallest. We adopt average ranking when there are ties. With the $\mathrm{Rank}$ operator, we can transfer the message matrix $M$ to a ranking matrix $R$ by replacing each column $M_{:,j}$ with the corresponding ranking vector $R_{:,j} = \mathrm{Rank}(M_{:,j})$. Further, define

$$e_{i}\triangleq{\frac{1}{p}}\sum_{j=1}^{p}\mathbf{R}_{i,j}\qquad{\mathrm{and}}\qquad v_{i}\triangleq{\frac{1}{p}}\sum_{j=1}^{p}(\mathbf{R}_{i,j}-e_{i})^{2}$$

to be the mean and variance of $\mathbf{R}_{i,:}$, respectively. As shown in later subsections, we can judge whether node $i$ is malicious based on $(e_i, v_i)$ under various attack types. In the following, we first highlight the behavior of the benign nodes, and then discuss the behavior of malicious nodes and their difference from the benign nodes under Byzantine attacks.

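To make the transformation concrete, below is a minimal sketch of the *Rank* operator and the feature computation, assuming NumPy and SciPy are available (the function name and API are our illustration, not reference code from the paper):

```python
import numpy as np
from scipy.stats import rankdata

def ranking_features(M: np.ndarray):
    """Map an (n x p) message matrix to per-node ranking features (e_i, s_i).

    Each column of M is replaced by its ranking vector (the largest value
    receives rank 1; ties receive the average rank), then the mean and the
    standard deviation of each node's row of rankings are returned.
    """
    # rankdata assigns rank 1 to the smallest value, so negate M to rank
    # from largest to smallest as in the paper's Rank operator.
    R = rankdata(-M, method="average", axis=0)  # shape (n, p)
    e = R.mean(axis=1)  # e_i: mean ranking of node i
    s = R.std(axis=1)   # s_i = sqrt(v_i): std of node i's rankings
    return e, s
```
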
## 2.2 Behaviors Of Nodes Under Byzantine Attacks

Byzantine attacks aim to devastate the global model by manipulating the gradient values of some local nodes. For a general Byzantine attack, we assume that the gradient vectors of benign nodes and malicious nodes follow two different distributions $G$ and $F$. We would expect systematic differences in their behavior patterns in the ranking matrix $R$, based on which malicious node detection can be achieved. Theorem 2.1 demonstrates the concrete behaviors of benign nodes and malicious nodes under general Byzantine attacks.

![3_image_0.png](3_image_0.png)

Figure 3: The scatter plots of $(e_i, s_i)$ for the 100 nodes under four types of attack as illustrative examples, demonstrating ranking mean and standard deviation from the 1st epoch of training for the FASHION-MNIST dataset. The four attacks are the Gaussian attack (GA), Zero Gradient attack (ZG), Sign Flipping attack (SF) and Mean Shift attack (MS).

**Theorem 2.1** (Behavior under Byzantine attacks). *For a general Byzantine attack, assume that the gradient values from benign nodes and malicious nodes follow two distributions $G(\cdot)$ and $F(\cdot)$ respectively (both $G$ and $F$ are $p$-dimensional). We have*

$$\begin{array}{r c l}{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}e_{i}}}&{{=}}&{{\bar{\mu}_{b}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{\mu}_{m}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\\ {{}}&{{}}&{{}}\\ {{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}v_{i}}}&{{=}}&{{\bar{s}_{b}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{s}_{m}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\end{array}$$

*where $(\bar{\mu}_b, \bar{s}_b^2)$ and $(\bar{\mu}_m, \bar{s}_m^2)$ are highly non-linear functions of $G(\cdot)$ and $F(\cdot)$ whose concrete form is detailed in Appendix A, and "a.s." abbreviates "almost surely".*

The proof can be found in Appendix A. If the attackers can access the exact distribution $G$, which is very rare, an obvious strategy to evade defense is to let $F = G$; in this case, the attack has no impact on the global model. More often, the attackers have little information about $G$, and it is then a rare event for the attackers to design a distribution $F$ satisfying $(\bar{\mu}_b, \bar{s}_b^2) = (\bar{\mu}_m, \bar{s}_m^2)$ for the malicious nodes to follow. In fact, most popular Byzantine attacks never try to make such an effort at all. Thus, the malicious nodes and the benign nodes are distinguishable with respect to their feature vectors $\{(e_i, v_i)\}_{1\le i\le n}$, because $(e_i, v_i)$ reaches different limits for benign and malicious nodes. Considering that the standard deviation $s_i = \sqrt{v_i}$ is typically of a similar scale to $e_i$, hereinafter we employ $(e_i, s_i)$, instead of $(e_i, v_i)$, as the feature vector of node $i$ for malicious node detection.

Figure 3 illustrates typical scatter plots of $(e_i, s_i)$ for benign and malicious nodes under the four Byzantine attacks, i.e., GA, SF, ZG and MS. Malicious nodes and benign nodes are well separated in these scatter plots, indicating that a proper clustering algorithm will distinguish the two groups. We note that both $s_i$ and $e_i$ are informative for malicious node detection, since in some cases (e.g., under Gaussian attacks) it is difficult to distinguish malicious nodes from benign ones based on $e_i$ only.

## 2.3 Algorithm For Malicious Node Detection Under Byzantine Attacks

Theorem 2.1 implies that, under general Byzantine attacks, the feature vector $(e_i, s_i)$ of node $i$ converges to two different limits for benign and malicious nodes, respectively. Thus, for a real dataset where the $N_i$'s and $p$ are all finite but reasonably large, the scatter plot of $\{(e_i, s_i)\}_{1\le i\le n}$ demonstrates a clustering structure: one cluster for the benign nodes and the other for the malicious nodes.

## Algorithm 1 MANDERA

Input: The message matrix $M$.

1: Convert the message matrix $M$ to the ranking matrix $R$ by applying the *Rank* operator.

2: Compute the mean and standard deviation of the rows of $R$, i.e., $\{(e_i, s_i)\}_{1\le i\le n}$.

3: Run the K-means clustering algorithm on $\{(e_i, s_i)\}_{1\le i\le n}$ with $K = 2$, and predict the set of benign nodes as the larger cluster, denoted by $\hat{\mathcal{I}}_b$.

Output: The predicted benign node set $\hat{\mathcal{I}}_b$.

Based on this intuition, we propose *MAlicious Node DEtection via RAnking* (MANDERA) to detect the malicious nodes; its workflow is detailed in Algorithm 1, and a minimal implementation sketch is given below. MANDERA can be applied to either a single epoch or multiple epochs. In single-epoch mode, the input $M$ is the message matrix received in a single epoch; in multiple-epoch mode, $M$ is the column-concatenation of the message matrices from multiple epochs. By default, the experiments below all use a single epoch to detect the malicious nodes.

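The following sketch of Algorithm 1 assumes NumPy, SciPy and scikit-learn; it mirrors the ranking-feature sketch of Section 2.1, and all names are our own illustration rather than the paper's reference code:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.cluster import KMeans

def mandera(M: np.ndarray) -> np.ndarray:
    """Return the indices of the predicted benign nodes for an (n x p)
    message matrix M, following the three steps of Algorithm 1."""
    # Step 1: gradient space -> ranking space (largest value gets rank 1).
    R = rankdata(-M, method="average", axis=0)
    # Step 2: per-node ranking mean and standard deviation.
    features = np.column_stack([R.mean(axis=1), R.std(axis=1)])
    # Step 3: two-cluster K-means; the larger cluster is predicted benign.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    benign_label = np.argmax(np.bincount(labels))
    return np.where(labels == benign_label)[0]
```

The server can then aggregate only the predicted benign updates, e.g. `m_hat = M[mandera(M)].mean(axis=0)`, which is exactly the aggregated message analyzed in Theorem 2.2 below.
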
The predicted benign node set $\hat{\mathcal{I}}_b$ obtained by MANDERA naturally leads to an aggregated message $\hat{m}_{b,:} = \frac{1}{\#(\hat{\mathcal{I}}_b)}\sum_{i\in\hat{\mathcal{I}}_b} M_{i,:}$. Theorem 2.2 shows that $\hat{\mathcal{I}}_b$ and $\hat{m}_b$ lead to consistent estimations of $\mathcal{I}_b$ and $m_b = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b} M_{i,:}$ respectively, indicating that MANDERA enjoys the *robustness guarantee* of Steinhardt (2018) for Byzantine attacks.

**Theorem 2.2** (Robustness guarantee). *Under Byzantine attacks, we have:*

$$\operatorname*{lim}_{N^{\star},p\to\infty}\mathbb{P}(\hat{\mathcal{I}}_{b}=\mathcal{I}_{b})=1,\ \operatorname*{lim}_{N^{\star},p\to\infty}\mathbb{E}||\hat{\mathbf{m}}_{b,:}-\mathbf{m}_{b,:}||_{2}=0.$$

The proof of Theorem 2.2 can be found in Appendix B. As $\mathbb{E}(\hat{m}_{b,:}) = m_{b,:}$, MANDERA obviously satisfies the $(\alpha, f)$-Byzantine Resilience condition, which is used in Blanchard et al. (2017) and Guerraoui et al. (2018) to measure the robustness of their estimators.

## 3 Theoretical Analysis For Specific Byzantine Attacks

Theorem 2.1 provides general guidance about the behavior of nodes under Byzantine attacks. In this section, we examine the behavior under specific attacks, including Gaussian attacks, zero gradient attacks, sign flipping attacks and mean shift attacks.

As the behavior of benign nodes does not depend on the type of Byzantine attack, we can study the statistical properties of $(e_i, v_i)$ for a benign node $i \in \mathcal{I}_b$ before specifying a concrete attack type. For any benign node $i$, the message generated for the $j$-th parameter is $M_{i,j} = \frac{1}{N_i}\sum_{l=1}^{N_i} \frac{\partial f(\theta, D_{i,l})}{\partial \theta_j}$, where $D_{i,l}$ denotes the $l$-th sample on it. Throughout this paper, we assume that the $D_{i,l}$'s are independent and identically distributed (IID) samples drawn from a data distribution $\mathcal{D}$.

**Lemma 3.1.** *Under the IID data assumption, further denote $\mu_j = \mathbb{E}\big[\frac{\partial f(\theta, D_{i,l})}{\partial \theta_j}\big]$ and $\sigma_j^2 = \mathrm{Var}\big[\frac{\partial f(\theta, D_{i,l})}{\partial \theta_j}\big] < \infty$. With $N_i$ going to infinity, for $\forall\, j \in \{1, \cdots, p\}$, we have $M_{i,j} \to \mu_j$ almost surely (a.s.) and $M_{i,j} \to_d N\big(\mu_j, \sigma_j^2/N_i\big)$.*

Lemma 3.1 can be proved using Kolmogorov's Strong Law of Large Numbers (KSLLN) and the Central Limit Theorem. In the rest of this section, we derive the detailed forms of $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}_b^2$ and $\bar{s}_m^2$, as defined in Theorem 2.1, under four specific Byzantine attacks.

## 3.1 Gaussian Attack

**Definition 3.2** (Gaussian attack). *In a Gaussian attack, the attacker generates malicious gradient values as follows: $\{M_{i,:}\}_{i\in\mathcal{I}_m} \sim \mathcal{MVN}(m_{b,:}, \Sigma)$, where $m_{b,:} = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b} M_{i,:}$ is the mean vector of the Gaussian distribution and $\Sigma$ is a covariance matrix determined by the attacker.*

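As a concrete illustration, a Gaussian attack with diagonal covariance can be simulated along the following lines; this is a sketch under the paper's notation, where the helper name, the seeding, and the default `sigma2 = 30.0` (taken from the experimental setting $\Sigma = 30I$ in Section 4) are our choices:

```python
import numpy as np

def gaussian_attack(M: np.ndarray, malicious: np.ndarray,
                    sigma2: float = 30.0, seed: int = 0) -> np.ndarray:
    """Overwrite the malicious rows of M with draws from MVN(m_b, sigma2*I),
    where m_b is the mean of the benign updates (Definition 3.2)."""
    rng = np.random.default_rng(seed)
    benign = np.setdiff1d(np.arange(M.shape[0]), malicious)
    m_b = M[benign].mean(axis=0)
    M = M.copy()
    M[malicious] = rng.normal(loc=m_b, scale=np.sqrt(sigma2),
                              size=(len(malicious), M.shape[1]))
    return M
```
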
![5_image_0.png](5_image_0.png)

Figure 4: Independence test for 100,000 column pairs randomly chosen from a message matrix $M$ generated from the FASHION-MNIST data.

Considering that $M_{i,j} \to \mu_j$ a.s. as $N_i$ goes to infinity for all $i \in \mathcal{I}_b$, it is straightforward to see from Definition 3.2 that $\lim_{N^*\to\infty} m_{b,j} = \mu_j$ a.s., and that the distribution of $M_{i,j}$ for each $i \in \mathcal{I}_m$ converges to a Gaussian distribution centered at $\mu_j$. Based on this fact, the limiting behavior of the feature vector $(e_i, v_i)$ can be established for both benign and malicious nodes. Theorem 3.3 summarizes the results, with the proof detailed in Appendix C.

**Theorem 3.3** (Behavior under Gaussian attacks). *Assuming $\{R_{:,j}\}_{1\le j\le p}$ are independent of each other, under the Gaussian attack the behaviors of benign and malicious nodes are as follows:*

$$\bar{\mu}_{b}=\bar{\mu}_{m}=\frac{n+1}{2},\quad\bar{s}_{b}^{2}=\frac{1}{p}\sum_{j=1}^{p}s_{b,j}^{2},\quad\bar{s}_{m}^{2}=\frac{1}{p}\sum_{j=1}^{p}s_{m,j}^{2},$$

*where $s_{b,j}^2$ and $s_{m,j}^2$ are both complex functions of $n_0$, $n_1$, $\sigma_j^2$, $\Sigma_{j,j}$ and $N^*$ whose concrete form is detailed in Appendix C.*

Considering that $\bar{s}_b^2 = \bar{s}_m^2$ holds if and only if the $\Sigma_{j,j}$'s fall into a lower-dimensional manifold whose measure is zero under the Lebesgue measure, we have $P(\bar{s}_b^2 = \bar{s}_m^2) = 0$ if the attacker specifies the Gaussian variances $\Sigma_{j,j}$ arbitrarily. Thus, Theorem 3.3 in fact suggests that the benign nodes and the malicious nodes differ in the value of $v_i$, and therefore provides a guideline for detecting the malicious nodes. Although we do need $N^*$ and $p$ to go to infinity to obtain the theoretical results in Theorem 3.3, in practice the malicious node detection algorithm based on the theorem typically works very well when $N^*$ and $p$ are reasonably large and the $N_i$'s are not dramatically far away from each other.

The independent ranking assumption in Theorem 3.3, which assumes that $\{R_{:,j}\}_{1\le j\le p}$ are independent of each other, may look restrictive. In fact, it is a mild condition that is easily satisfied in practice, for the following reasons. First, for a benign node $i \in \mathcal{I}_b$, $M_{i,j}$ and $M_{i,k}$ are often nearly independent, as the correlation between two model parameters $\theta_j$ and $\theta_k$ is typically very weak in a large deep neural network with a huge number of parameters. To verify this statement, we ran independence tests on 100,000 column pairs randomly chosen from the message matrix $M$ generated from the FASHION-MNIST data. The distribution of the p-values of these tests, shown as a histogram in Figure 4, is very close to uniform, indicating that $M_{i,j}$ and $M_{i,k}$ are indeed nearly independent in practice. Second, even if some $M_{:,j}$ and $M_{:,k}$ show a strong correlation, the magnitude of the correlation is greatly reduced by the transformation from $M$ to $R$, as the final ranking $R_{i,j}$ also depends on many other factors.

In fact, the independent ranking assumption could be relaxed to an uncorrelated ranking assumption, which assumes only that the rankings are uncorrelated with each other. Adopting this weaker assumption changes the convergence type in our theorems from almost-sure convergence to convergence in probability. A sketch of the independence check is given below.

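The check described above can be sketched as follows; the paper does not name the specific test used, so a Pearson correlation test via SciPy is assumed here purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

def independence_pvalues(M: np.ndarray, n_pairs: int = 100_000,
                         seed: int = 0) -> np.ndarray:
    """p-values of correlation tests on randomly chosen column pairs of M.

    If the columns are (nearly) independent, the p-values should look
    roughly uniform on [0, 1], as in the histogram of Figure 4.
    """
    rng = np.random.default_rng(seed)
    _, p = M.shape
    pvals = []
    for _ in range(n_pairs):
        j, k = rng.choice(p, size=2, replace=False)
        _, pv = pearsonr(M[:, j], M[:, k])
        pvals.append(pv)
    return np.array(pvals)
```
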
## 3.2 Sign Flipping Attack

**Definition 3.4** (Sign flipping attack). *A sign flipping attack generates the gradient values of malicious nodes by flipping the sign of the average of all the benign nodes' gradients at each epoch, i.e., specifying $M_{i,:} = -r\,m_{b,:}$ for any $i \in \mathcal{I}_m$, where $r > 0$ and $m_{b,:} = \frac{1}{n_1}\sum_{k\in\mathcal{I}_b} M_{k,:}$.*

Based on the above definition, the update message of a malicious node $i$ under the sign flipping attack is $M_{i,:} = -r\,m_{b,:} = -\frac{r}{n_1}\sum_{k\in\mathcal{I}_b} M_{k,:}$. Theorem 3.5 summarizes the behavior of malicious nodes and benign nodes respectively, with the detailed proof provided in Appendix D.

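A minimal simulation of this attack, under the same conventions as the earlier Gaussian sketch (the helper name and the default `r = 3.0`, taken from the experimental setting in Section 4, are our choices):

```python
import numpy as np

def sign_flipping_attack(M: np.ndarray, malicious: np.ndarray,
                         r: float = 3.0) -> np.ndarray:
    """Replace each malicious row with -r times the mean benign update
    (Definition 3.4)."""
    benign = np.setdiff1d(np.arange(M.shape[0]), malicious)
    M = M.copy()
    M[malicious] = -r * M[benign].mean(axis=0)
    return M
```
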
**Theorem 3.5** (Behavior under sign flipping attacks). *With the same assumption as posed in Theorem 3.3, under the sign flipping attack the behaviors of benign and malicious nodes are as follows:*

$$\begin{array}{l l}{{\bar{\mu}_{b}=\frac{n+n_{0}+1}{2}-n_{0}\rho,}}&{{\bar{\mu}_{m}=n_{1}\rho+\frac{n_{0}+1}{2},}}\\ {{\bar{s}_{b}^{2}=\rho S_{[1,n_{1}]}^{2}+(1-\rho)S_{[n_{0}+1,n]}^{2}-(\bar{\mu}_{b})^{2},}}\\ {{\bar{s}_{m}^{2}=\rho S_{[n_{1}+1,n]}^{2}+(1-\rho)S_{[1,n_{0}]}^{2}-(\bar{\mu}_{m})^{2},}}\end{array}$$

*where $\rho = \lim_{p\to\infty}\frac{\sum_{j=1}^{p}\mathbb{I}(\mu_j>0)}{p}$, which depends on $n_0$ and $n_1$, and $S^2_{[a,b]} = \frac{1}{b-a+1}\sum_{k=a}^{b}k^2$. Both $\bar{s}_m^2$ and $\bar{s}_b^2$ are quadratic functions of $\rho$.*

Considering that $\bar{\mu}_b = \bar{\mu}_m$ if and only if $\rho = \frac{1}{2}$, and $\bar{s}_b^2 = \bar{s}_m^2$ if and only if $\rho$ solves a particular quadratic equation, the probability of $(\bar{\mu}_b, \bar{s}_b^2) = (\bar{\mu}_m, \bar{s}_m^2)$ is zero as $p \to \infty$. This phenomenon suggests that we can also detect the malicious nodes based on the moments $(e_i, v_i)$ to defend against the sign flipping attack.

Noticeably, the limiting behavior of $e_i$ and $v_i$ does not depend on the specification of $r$, which defines the sign flipping attack. Although this fact looks a bit abnormal at first glance, it is understandable once we realize that, with the variance of $M_{i,j}$ shrinking to zero as $N_i$ goes to infinity for each benign node $i$, any difference between $\mu_j$ and $\mu_j(r)$ results in the same ranking vector $R_{:,j}$ in the ranking space.

## 3.3 Zero Gradient Attack

**Definition 3.6** (Zero gradient attack). *A zero gradient attack aims to make the aggregated message zero, i.e., $\sum_{i=1}^{n} M_{i,:} = 0$, at each epoch, by specifying $M_{i,:} = -\frac{n_1}{n_0}m_{b,:}$ for all $i \in \mathcal{I}_m$.*

Apparently, the zero gradient attack defined above is a special case of the sign flipping attack obtained by specifying $r = \frac{n_1}{n_0}$ (see the one-line sketch below). The conclusions of Theorem 3.5 remain unchanged for different specifications of $r$; therefore the nodes follow the same limiting behaviors as described in Theorem 3.5.

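In code, reusing the hypothetical `sign_flipping_attack` helper sketched in Section 3.2, the zero gradient attack is a one-liner (here `M` and `malicious` follow the conventions of the earlier sketches):

```python
# Zero gradient attack: sign flipping with r = n1/n0, so the sum of all
# n updates (n1 benign plus n0 malicious) is exactly the zero vector.
n0 = len(malicious)
n1 = M.shape[0] - n0
M_poisoned = sign_flipping_attack(M, malicious, r=n1 / n0)
```
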
## 3.4 Mean Shift Attack

**Definition 3.7** (Mean shift attack). *A mean shift attack (Baruch et al., 2019) manipulates the updates of the malicious nodes in the following fashion: $m_{i,j} = \mu_j - z\,\sigma_j$ for $i \in \mathcal{I}_m$ and $1 \le j \le p$, where $\mu_j = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b} M_{i,j}$, $\sigma_j = \sqrt{\frac{1}{n_1}\sum_{i\in\mathcal{I}_b}(M_{i,j}-\mu_j)^2}$ and $z = \arg\max_t \big(\phi(t) < \frac{n-2}{2(n-n_0)}\big)$.*

Mean shift attacks aim to generate malicious gradients which are not well separated from, but are differently distributed than, those of the benign nodes. Theorem 3.8 details the behavior of malicious nodes and benign nodes under mean shift attacks. The proof can be found in Appendix E.

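A simulation sketch of Definition 3.7 follows, reading $\phi$ as the standard normal CDF (an assumption on our part, consistent with $\Phi(z)$ appearing in Theorem 3.8), so that the arg max reduces to the normal quantile function; the helper name is also ours:

```python
import numpy as np
from scipy.stats import norm

def mean_shift_attack(M: np.ndarray, malicious: np.ndarray) -> np.ndarray:
    """Set each malicious coordinate to mu_j - z * sigma_j, with mu_j and
    sigma_j the benign mean and std, and z the largest deviate satisfying
    Phi(z) < (n - 2) / (2 * (n - n0))  (assumed CDF reading of Def. 3.7)."""
    n, n0 = M.shape[0], len(malicious)
    benign = np.setdiff1d(np.arange(n), malicious)
    mu = M[benign].mean(axis=0)
    sigma = M[benign].std(axis=0)
    z = norm.ppf((n - 2) / (2 * (n - n0)))  # quantile: sup{t : Phi(t) < c}
    M = M.copy()
    M[malicious] = mu - z * sigma
    return M
```
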
**Theorem 3.8.** *With the same assumption as posed in Theorem 3.3, and additionally that $n$ is relatively large, under the mean shift attack the behaviors of benign and malicious nodes are as follows:*

$$\begin{array}{c}{{\bar{\mu}_{b}=\frac{n+1}{2}+\frac{n_{0}}{n_{1}}(n_{1}-\alpha),\quad\bar{\mu}_{m}=\alpha+\frac{n_{0}+1}{2},}}\\ {{\bar{s}_{b}^{2}=\frac{1}{n_{1}}\left(\tau(n)+\tau(\alpha)-\tau(\alpha+1+n_{0})\right)-\bar{\mu}_{b}^{2},\ \bar{s}_{m}^{2}=0,}}\end{array}$$

*where $\lfloor\cdot\rfloor$ denotes the floor function, $\alpha = \lfloor n_1\Phi(z)\rfloor$, $\Phi(z)$ is the cumulative distribution function of the standard normal distribution, and $\tau(\cdot)$ is the 'sum of squares' function, i.e., $\tau(n) = \sum_{k=1}^{n}k^2$.*

## 4 Experiments

In these experiments we extend the data poisoning experimental framework of Tolpegin et al. (2020) and Wu et al. (2020a), integrating the Byzantine attack implementations released by Wu et al. (2020b) and the mean shift attack of Baruch et al. (2019). The mean shift attack was designed to poison gradients by adding 'a little' amount of noise, and was shown to be effective in defeating the Krum (Blanchard et al., 2017) and Bulyan (Guerraoui et al., 2018) defenses; it is defined in Definition 3.7. In our experiments, we set $\Sigma = 30I$ for the Gaussian attack and $r = 3$ for the sign flipping attack, where $I$ is the identity matrix.

For all experiments we fix $n = 100$ participating nodes, of which a variable number are poisoned, $n_0 \in \{5, 10, 15, 20, 25, 30\}$. The training process is run until 25 epochs have elapsed. The structure of the networks is described in Appendix F.

## 4.1 Defense by MANDERA for IID Settings

We evaluate the efficacy of MANDERA in detecting malicious nodes within the federated learning framework using three IID datasets. The first is FASHION-MNIST (Xiao et al., 2017), a dataset of 60,000 training and 10,000 testing samples divided into 10 classes of apparel. The second is CIFAR-10 (Krizhevsky et al., 2009), a dataset of 60,000 small object images also containing 10 object classes. The third is MNIST (Deng, 2012), a dataset of 60,000 training and 10,000 testing samples divided into 10 classes of handwritten digits from multiple authors.

We test the performance of MANDERA on the update gradients of a model under attack. In this section, MANDERA acts as an observer, without intervening in the learning process, identifying malicious nodes from a set of gradients from a single epoch. Each configuration of 25 training epochs with a given number of malicious nodes was repeated 20 times. Figure 5 demonstrates the classification performance (metrics defined in Appendix G) of MANDERA with different numbers of participating malicious nodes and the four poisoning attacks, i.e., GA, ZG, SF and MS.

While we have formally demonstrated the efficacy of MANDERA in accurately detecting potentially malicious nodes participating in the federated learning process, in practice, to leverage an unsupervised K-means clustering algorithm, we must also identify the correct group of nodes as the malicious group. Our strategy is to identify the group with the most exact gradients, or otherwise the smaller group (we regard a system with over 50% of its nodes compromised as having larger issues than just poisoning attacks);¹ a sketch of this heuristic is given below. We also tested other clustering algorithms, such as hierarchical clustering and Gaussian mixture models (Fraley & Raftery, 2002); it turns out that the performance of MANDERA is quite robust to the choice of clustering method, with detailed results in Appendix I. From Figure 5, it is immediately evident that the recall of the malicious nodes for the Byzantine attacks is exceptional, although occasionally benign nodes are misclassified as malicious under SF attacks. For all attacks, in the presence of more malicious nodes, the recall of malicious nodes trends down.

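A sketch of this selection heuristic (our own illustration of the strategy described above; the spread comparison encodes "the group with the most exact gradients", since attacks that send identical updates produce near-zero spread in the ranking features):

```python
import numpy as np

def pick_benign_cluster(labels: np.ndarray, features: np.ndarray) -> int:
    """Return the K-means label treated as benign: the complement of the
    tighter (near-identical) cluster, falling back to the larger cluster."""
    spread = np.array([features[labels == c].std() for c in (0, 1)])
    sizes = np.array([np.sum(labels == c) for c in (0, 1)])
    if not np.isclose(spread[0], spread[1]):
        malicious = int(np.argmin(spread))  # most 'exact' gradients
    else:
        malicious = int(np.argmin(sizes))   # otherwise the smaller group
    return 1 - malicious
```
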
We encapsulate MANDERA into a module prior to the aggregation step, with the sole objective of identifying malicious nodes and excluding their updates from the global aggregation step. Each configuration of 25 training epochs, a given poisoning attack, a defense method, and a given number of malicious nodes was repeated 10 times. We compare MANDERA against 5 other robust aggregation defense methods: Krum (Blanchard et al., 2017), Bulyan (Guerraoui et al., 2018), Trimmed Mean (Yin et al., 2018), Median (Yin et al., 2018) and FLTrust (Cao et al., 2020). The first two require an assumed number of malicious nodes, and the latter three only aggregate robustly.

Table 1 demonstrates the accuracy of the global model at the 25th epoch under the four Byzantine attacks and six defense strategies, using the MNIST-Digits dataset. It shows that MANDERA universally outperforms all the other competing defence strategies on this dataset. Note that MANDERA approaches (and sometimes even exceeds) the performance of a model which is not attacked. Interestingly, FLTrust as a standalone defense is weak in protecting against the most extreme Byzantine attacks. However, we highlight that FLTrust is a robust aggregation method against specific attacks that may thwart defences like Krum and Trimmed Mean. We see FLTrust as a complementary defence that relies on a base method of defence against Byzantine attacks, but expands the protection coverage of the FL system against adaptive attacks.

¹More informed approaches to selecting the malicious cluster can be tested in future work. For instance, Figure 3 displays less variation of ranking variance in the malicious cluster compared to the benign nodes; this could enable robust selection of the malicious group, including selection of malicious groups larger than 50%.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 5: Classification performance (accuracy, recall, precision and F1) of our proposed approach MANDERA under four types of attack for three IID settings.

The performance across all epochs for MNIST-Digits can be found in Figure 6, which consistently shows MANDERA outperforming the other competing strategies at each epoch. For the performance on the other two datasets, see Appendix H, where MANDERA also performs better than the other defence strategies. The corresponding model losses can be found in Appendix J.

## 4.2 Defense by MANDERA for Non-IID Settings

In this section, we evaluate the applicability of MANDERA in a non-IID federated learning setting. The batch size used throughout the evaluations of Section 4.1 is 10; this low setting practically yields gradient values at each local worker node as if they were derived from non-IID samples, a strong indicator that MANDERA could be effective in non-IID settings. We reinforce MANDERA's applicability in the non-IID setting by repeating the experiment on QMNIST (Yadav & Bottou, 2019), a dataset that is per-sample equivalent to MNIST (Deng, 2012) but additionally provides writer identification information. This identity is leveraged to ensure that each local node only trains on digits written by a set of unique users not seen by other workers; such a setting is widely recognized as non-IID in the community (Kairouz et al., 2021). For 100 nodes, this works out to approximately 5 writers per node. All other experimental configurations remain the same as in Section 4.1. Figure 7 demonstrates the effectiveness of MANDERA in malicious node detection for the non-IID setting.

Table 1: MNIST-Digits model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under attack. "NO-attack" is the baseline, where no attack is conducted, and $n_0$ denotes the number of malicious nodes among 100 nodes.

| Attack | Defence   | n0 = 5    | n0 = 10   | n0 = 15   | n0 = 20   | n0 = 25   | n0 = 30   |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GA     | Krum      | 96.77     | 96.63     | 96.78     | 96.89     | 96.90     | 96.90     |
|        | NO-attack | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     |
|        | Bulyan    | 98.46     | 98.43     | 98.40     | 98.36     | 98.35     | 98.29     |
|        | Median    | 98.33     | 98.31     | 98.32     | 98.31     | 98.31     | 98.34     |
|        | Trim-mean | 98.45     | 98.43     | 98.41     | 98.38     | 98.38     | 98.35     |
|        | MANDERA   | **98.48** | **98.46** | **98.44** | **98.43** | **98.44** | **98.42** |
|        | FLTrust   | 95.33     | 65.22     | 61.02     | 37.45     | 11.37     | 12.17     |
| ZG     | Krum      | 96.95     | 96.35     | 96.93     | 96.96     | 97.07     | 96.50     |
|        | NO-attack | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     |
|        | Bulyan    | 97.97     | 98.19     | 98.25     | 98.24     | 98.17     | 98.13     |
|        | Median    | 98.17     | 98.00     | 97.74     | 97.36     | 96.77     | 96.10     |
|        | Trim-mean | 98.12     | 97.89     | 97.54     | 97.06     | 96.55     | 95.69     |
|        | MANDERA   | **98.47** | **98.35** | **98.44** | **98.46** | **98.44** | **98.41** |
|        | FLTrust   | 97.78     | 95.42     | 94.09     | 89.74     | 87.33     | 93.08     |
| SF     | Krum      | 96.82     | 96.73     | 96.79     | 96.77     | 96.78     | 96.69     |
|        | NO-attack | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     |
|        | Bulyan    | 98.38     | 98.35     | 98.30     | 98.25     | 98.19     | 98.13     |
|        | Median    | 98.16     | 98.00     | 97.75     | 97.33     | 96.78     | 96.14     |
|        | Trim-mean | 98.24     | 98.03     | 97.69     | 97.17     | 96.58     | 95.56     |
|        | MANDERA   | **98.51** | **98.47** | **98.44** | **98.43** | **98.41** | **98.40** |
|        | FLTrust   | 98.28     | 98.02     | 97.55     | 97.02     | 90.58     | 84.53     |
| MS     | Krum      | 98.45     | 98.40     | 98.34     | 98.33     | 98.29     | 98.24     |
|        | NO-attack | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     | 98.45     |
|        | Bulyan    | 98.42     | 98.38     | 98.38     | 98.33     | 98.27     | 98.23     |
|        | Median    | 98.41     | 98.39     | 98.33     | 98.28     | 98.25     | 98.23     |
|        | Trim-mean | 98.46     | 98.41     | 98.38     | 98.34     | 98.29     | 98.26     |
|        | MANDERA   | **98.48** | **98.45** | **98.46** | **98.43** | **98.44** | **98.44** |
|        | FLTrust   | 98.46     | 98.44     | 98.45     | 98.42     | 98.42     | 98.38     |

These results are very similar to the results in the IID settings. Except for sign flipping attacks, MANDERA can perfectly distinguish malicious nodes from benign nodes. When the number of malicious nodes is less than 25, MANDERA mis-classifies some benign nodes as malicious under sign flipping attacks.

It is noticeable that, even though MANDERA does not perform perfectly for SF attacks, the recall is always equal to 1. This indicates that all the malicious nodes are correctly identified, but a few benign nodes are misclassified as malicious. This is important for understanding why MANDERA outperforms the competing defence strategies, as shown in Table 2.

Table 2 shows the global model training accuracy with different defense strategies in a non-IID setting. It indicates that MANDERA almost universally outperforms the other defense strategies and achieves the best performance. Considering the performance of malicious node detection under GA, ZG and MS, shown in Figure 7, it is natural to expect a good performance of MANDERA in terms of the accuracy of the global model. At first glance, it is puzzling to observe MANDERA outperforming the others under SF attacks, considering the 'bad' performance of malicious node detection under SF attacks. To explain this phenomenon, pay special attention to the recall in Figure 7: a recall of 1 indicates that all the malicious nodes are identified, while low values of accuracy and precision mean that some 'extreme' benign nodes are identified as malicious. Therefore, the aggregated gradient values using MANDERA are close to the true gradient values, resulting in high accuracy. The results for all the epochs can be found in Figure 8. The corresponding model losses can be found in Appendix K.

![10_image_0.png](10_image_0.png)

Figure 6: Model accuracy at each epoch of training; each curve represents a different defense against the Byzantine attacks. Shown above is the result for MNIST-Digits; figures for CIFAR-10 and FASHION-MNIST can be found in the appendix.

## 4.3 Computational Speed

MANDERA enjoys very fast computation. We have already observed that MANDERA performs on par with the current highest-performing poisoning attack defenses; a further benefit is the simplification of the mitigation strategy that comes from placing ranking at the core of the algorithm.

Sorting and ranking algorithms are fast. Additionally, we only apply clustering on two dimensions (the mean and standard deviation of the ranking), in contrast to other works that cluster on the entire node update (Chen et al., 2021). The times in Table 3 for MANDERA, Krum and Bulyan do not include the parameter/gradient aggregation step. These times were computed on 1 core of a dual Xeon 14-core E5-2690, with 8 GB of system RAM and a single Nvidia Tesla P100. Table 3 demonstrates that MANDERA achieves a faster speed than single Krum² (by more than half) and Bulyan (by an order of magnitude). The computational times of the state-of-the-art methods are listed in Table 3.

²The use of multi-Krum would have yielded better protection (c.f. Section 4) at the expense of speed.

![11_image_0.png](11_image_0.png)

Figure 7: Malicious node detection performance (accuracy, recall, precision and F1) by MANDERA for the non-IID dataset QMNIST under four different Byzantine attacks.

## 5 Discussion And Conclusion

Theorem 2.1 indicates that Byzantine attacks can only evade MANDERA when the attackers know the distribution of the benign nodes, which at the same time requires huge computational resources. This makes MANDERA a strategy that is challenging for attackers to evade.

We acknowledge that an FL framework may learn the global model using only a subset of nodes at each round. In these settings MANDERA would still function, as we would rank and cluster on the parameters of the participating nodes, without assuming any number of poisoned nodes. In Algorithm 1, performance could be improved by incorporating higher-order moments. MANDERA is unable to function in its current form when gradients are securely aggregated. However, malicious nodes can be identified and excluded from the secure aggregation step, while still protecting the privacy of participating nodes, by performing MANDERA through secure ranking (Zhang et al., 2013; Lin & Tzeng, 2005); recall that MANDERA only requires the ranking matrix to detect poisoned nodes. In conclusion, we proposed a novel way to tackle the challenges of malicious node detection using gradient values. Our method transfers the gradient values to a ranking space. We have provided theoretical guarantees and experimentally shown the efficacy of MANDERA for the detection of malicious nodes performing poisoning attacks against federated learning. MANDERA achieves excellent detection accuracy and maintains a higher model accuracy than other seminal defense methods.

Table 2: QMNIST model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under attack. "NO-attack" is the baseline, where no attack is conducted, and $n_0$ denotes the number of malicious nodes among 100 nodes.

| Attack | Defence   | n0 = 5    | n0 = 10   | n0 = 15   | n0 = 20   | n0 = 25   | n0 = 30   |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GA     | Krum      | 94.16     | 93.87     | 93.95     | 94.10     | 94.27     | 93.89     |
|        | NO-attack | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     |
|        | Bulyan    | 98.09     | 98.07     | 98.06     | 98.02     | 97.99     | 97.88     |
|        | Median    | 97.76     | 97.76     | 97.77     | 97.78     | 97.75     | 97.77     |
|        | Trim-mean | 98.08     | 98.04     | 98.00     | 97.96     | 97.91     | 97.85     |
|        | MANDERA   | **98.11** | **98.11** | **98.12** | **98.10** | **98.10** | **98.08** |
|        | FLTrust   | 83.48     | 57.32     | 25.75     | 18.80     | 15.43     | 9.75      |
| ZG     | Krum      | 94.21     | 93.90     | 93.92     | 94.11     | 93.84     | 93.95     |
|        | NO-attack | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     |
|        | Bulyan    | 97.58     | **97.83** | **97.90** | 97.87     | 97.79     | 97.71     |
|        | Median    | 97.59     | 97.27     | 96.84     | 96.33     | 95.54     | 94.45     |
|        | Trim-mean | 97.66     | 97.20     | 96.67     | 96.02     | 95.04     | 93.97     |
|        | MANDERA   | **97.85** | 97.78     | 97.64     | **98.21** | **98.13** | **98.09** |
|        | FLTrust   | 91.60     | 95.65     | 92.15     | 85.53     | 88.85     | 89.58     |
| SF     | Krum      | 94.22     | 93.92     | 94.01     | 94.20     | 93.89     | 93.84     |
|        | NO-attack | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     |
|        | Bulyan    | 98.01     | 97.96     | 97.98     | 97.93     | 97.81     | 97.66     |
|        | Median    | 97.61     | 97.29     | 96.84     | 96.33     | 95.58     | 94.55     |
|        | Trim-mean | 97.82     | 97.52     | 96.97     | 96.21     | 94.98     | 93.75     |
|        | MANDERA   | **98.20** | **98.23** | **98.22** | **98.19** | **98.15** | **98.14** |
|        | FLTrust   | 97.75     | 97.21     | 96.65     | 88.25     | 89.99     | 88.29     |
| MS     | Krum      | 95.97     | 94.09     | 94.17     | 94.28     | 95.23     | 95.80     |
|        | NO-attack | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     | 98.12     |
|        | Bulyan    | 98.07     | 98.01     | 97.97     | 97.92     | 97.84     | 97.82     |
|        | Median    | 97.88     | 97.96     | 97.96     | 97.90     | 97.79     | 97.70     |
|        | Trim-mean | 98.05     | 97.98     | 97.94     | 97.92     | 97.88     | 97.81     |
|        | MANDERA   | 98.11     | **98.12** | 98.10     | 98.08     | 98.08     | **98.06** |
|        | FLTrust   | **98.13** | 98.11     | **98.12** | **98.10** | **98.09** | **98.06** |

| Defense (Detection) | Mean ± SD (ms) | Defense (Aggregation) | Mean ± SD (ms) |
|---------------------|----------------|-----------------------|----------------|
| MANDERA             | 643 ± 8.646    | Trimmed Mean          | 3.96 ± 0.41    |
| Krum (Single)       | 1352 ± 10.09   | Median                | 9.81 ± 3.88    |
| Bulyan              | 27209 ± 233.4  | FLTrust               | 361 ± 4.07     |

Table 3: Mean and standard deviation of computational times for each defense function, given the same set of gradients from 100 nodes, of which 30 were malicious. Each function was repeated 100 times.

![13_image_0.png](13_image_0.png)

Figure 8: Model accuracy under different defence strategies for the non-IID dataset QMNIST.

## Appendix A: Proof of Theorem 2.1

Proof. Let $F_j(x)$ and $G_j(x)$ be the cumulative distribution functions of $F_j(\cdot)$ and $G_j(\cdot)$, let $f_j(x)$ and $g_j(x)$ be the corresponding density functions, and let $r_j(x) = n_1 - n_1 G_j(x) + n_0 - n_0 F_j(x) + 1$ be the expected ranking of value $x$ among all entries in the $j$-th column of the gradient value matrix.

Further define

$$\begin{array}{l}{{E_{b j}=\int_{-\infty}^{\infty}r_{j}(x)g_{j}(x)d x,\ V_{b j}=\int_{-\infty}^{\infty}\left(r_{j}(x)-E_{b j}\right)^{2}g_{j}(x)d x,}}\\ {{E_{m j}=\int_{-\infty}^{\infty}r_{j}(x)f_{j}(x)d x,\ V_{m j}=\int_{-\infty}^{\infty}(r_{j}(x)-E_{m j})^{2}f_{j}(x)d x.}}\end{array}$$

It can be shown for any $1 \le j \le p$ that

$$\begin{array}{r c l}{{E_{i j}}}&{{=}}&{{\mathbb{E}(\mathbf{R}_{i,j})=E_{b j}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+E_{m j}\cdot\mathbb{I}(i\in\mathcal{I}_{m}),}}\\ {{V_{i j}}}&{{=}}&{{\mathbb{V}(\mathbf{R}_{i,j})=V_{b j}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+V_{m j}\cdot\mathbb{I}(i\in\mathcal{I}_{m}).}}\end{array}$$

Thus, according to Kolmogorov's strong law of large numbers (KSLLN), we have

$$\begin{array}{r c l}{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}e_{i}}}&{{=}}&{{\bar{\mu}_{b}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{\mu}_{m}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\\ {{}}&{{}}&{{}}\\ {{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}v_{i}}}&{{=}}&{{\bar{s}_{b}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{s}_{m}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\end{array}$$

where the moments $(\bar{\mu}_b, \bar{s}_b^2)$ and $(\bar{\mu}_m, \bar{s}_m^2)$ are deterministic functions of $(E_{bj}, V_{bj})$ and $(E_{mj}, V_{mj})$ of the following form:

$$\bar{\mu}_{b}=\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{b j},\qquad\bar{\mu}_{m}=\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{m j},$$ $$\bar{s}_{b}^{2}=\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{b j},\qquad\bar{s}_{m}^{2}=\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{m j}.$$

This completes the proof.

## Appendix B: Proof of Theorem 2.2

Proof. According to Theorem 2.1, when both $N^*$ and $p$ are large enough, with probability 1 there exist $(e_b, v_b)$, $(e_m, v_m)$ and $\delta > 0$ such that $||(e_b, v_b) - (e_m, v_m)||_2 > \delta$, and

$$||(e_{i},v_{i})-(e_{b},v_{b})||_{2}\leq\frac{\delta}{2}\ \mathrm{for}\ \forall\ i\in\mathcal{I}_{b}\quad\mathrm{and}\quad||(e_{i},v_{i})-(e_{m},v_{m})||_{2}\leq\frac{\delta}{2}\ \mathrm{for}\ \forall\ i\in\mathcal{I}_{m}.$$

Therefore, with a reasonable clustering algorithm such as K-means with $K = 2$, we would expect $\hat{\mathcal{I}}_b = \mathcal{I}_b$ with probability 1.

Because we can always find a $\Delta > 0$ such that $||M_{i,:} - M_{j,:}||_2 \leq \Delta$ for any node pair $(i, j)$ in a fixed dataset with a finite number of nodes, and $\hat{m}_{b,:} = m_{b,:}$ when $\hat{\mathcal{I}}_b = \mathcal{I}_b$, we have

$$\mathbb{E}||{\hat{\mathbf{m}}}_{b,:}-\mathbf{m}_{b,:}||_{2}\leq\Delta\cdot\mathbb{P}({\hat{\mathcal{I}}}_{b}\neq{\mathcal{I}}_{b}),$$

and thus

$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}\mathbb{E}||\hat{\mathbf{m}}_{b,:}-\mathbf{m}_{b,:}||_{2}=0.$$

This completes the proof.

## Appendix C: Proof of Theorem 3.3

Proof. According to Theorem 2.1, we only need to compute $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}_b^2$ and $\bar{s}_m^2$ under the Gaussian attack.

Because $M_{i,j} \to_d N(\mu_j, \Sigma_{j,j})$ for $\forall\, i \in \mathcal{I}_m$ and $M_{i,j} \to_d N(\mu_j, \sigma_j^2/N_i)$ for $\forall\, i \in \mathcal{I}_b$ when $N^* \to \infty$, it is straightforward to see, due to the symmetry of the Gaussian distribution, that

$$\lim_{N^{*}\rightarrow\infty}E_{bj}=\lim_{N^{*}\rightarrow\infty}E_{mj}=\lim_{N^{*}\rightarrow\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+1}{2},\ 1\leq i\leq n,\ 1\leq j\leq p.\tag{1}$$

Therefore, we have

$$\bar{\mu}_{b}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}{\frac{1}{p}}\sum_{j=1}^{p}E_{b j}={\frac{n+1}{2}},\qquad\bar{\mu}_{m}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}{\frac{1}{p}}\sum_{j=1}^{p}E_{m j}={\frac{n+1}{2}}.$$

Moreover, assuming that the sample sizes of different benign nodes approach each other as $N^*$ goes to infinity, i.e.,

$$\operatorname*{lim}_{N^{*}\to\infty}{\frac{1}{N^{*}}}\operatorname*{max}_{i,k\in{\mathcal{I}}_{b}}|N_{i}-N_{k}|=0,\tag{2}$$

for each parameter dimension $j$, $\{M_{i,j}\}_{i\in\mathcal{I}_b}$ converges to the same Gaussian distribution $N(\mu_j, \sigma_j^2/N^*)$ as $N^*$ increases. Thus, due to the exchangeability of $\{M_{i,j}\}_{i\in\mathcal{I}_b}$ and $\{M_{i,j}\}_{i\in\mathcal{I}_m}$, it is easy to see that

$$\lim_{N^*\to\infty}V_{bj}=s^2_{b,j},\quad\lim_{N^*\to\infty}V_{mj}=s^2_{m,j},$$

where $s_{b,j}^2$ and $s_{m,j}^2$ are both complex functions of $n_0$, $n_1$, $\sigma_j^2$, $\Sigma_{j,j}$ and $N^*$, and $s_{b,j}^2 = s_{m,j}^2$ if and only if $\sigma_j^2/N^* = \Sigma_{j,j}$. According to Theorem 2.1, $\bar{s}_b^2 = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{bj} = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}s_{b,j}^2$ and $\bar{s}_m^2 = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{mj} = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}s_{m,j}^2$. The proof is complete.

Proof. According to Theorem 2.1, we only need to compute µ¯b, µ¯m, s¯
|
457 |
+
2 b and s¯
|
458 |
+
2 m under the sign flipping attacks.
|
459 |
+
|
460 |
+
Lemma D.1. Under the sign flipping attack, for each malicious node i ∈ Im *and any parameter dimension*
|
461 |
+
j, we have Mi,j = −
|
462 |
+
r
|
463 |
+
n1 when N∗goes to infinity is Mi,j →d Nµj (r), σ2 j (r), 1 ≤ j ≤ p, (4) where µj (r) = −rµj , σ 2 j (r) = r 2·σ 2 j n1·N¯b , and N¯b = P n1 k∈Ib1 Nk is the harmonic mean of {Nk}k∈Ib .
|
464 |
+
Pk∈Ib Mk,j is a deterministic function of {Mk,j}k∈Ib
|
465 |
+
, whose limiting distribution Lemma 3.1 and Lemma D.1 tell us that for each parameter dimension j, the distribution of {Mi,j}
|
466 |
+
n i=1 is a mixture of Gaussian components {N µj , σ2 j
|
467 |
+
/Ni
|
468 |
+
}i∈Ib centered at µj plus a point mass located at µj (r) = −rµj . If Ni's are reasonably large, variances σ 2 j
|
469 |
+
/Ni's would be very close to zero, and the probability mass of the mixture distribution would concentrate to two local centers µj and µj (r) = −rµj , one for the benign nodes and the other one for the malicious nodes.
Under the sign flipping attack, because $M_{i,j}\to_d \mathcal{N}(\mu_j(r),\sigma_j^2(r))$ for all $i\in\mathcal{I}_m$ and $M_{i,j}\to_d\mathcal{N}(\mu_j,\sigma_j^2/N_i)$ for all $i\in\mathcal{I}_b$ when $N^*\to\infty$, and

$$\operatorname*{lim}_{N^{*}\to\infty}(\sigma_{j}^{2}/N_{i})=\operatorname*{lim}_{N^{*}\to\infty}\sigma_{j}^{2}(r)=0,$$

it is straightforward to see that
$$\operatorname*{lim}_{N^{*}\to\infty}P(M_{i,j}>M_{k,j})=\mathbb{I}(\mu_{j}>0),\ \forall\ i\in\mathcal{I}_{b},\forall\ k\in\mathcal{I}_{m},$$
which further indicates that

$$\begin{aligned}&\lim_{N^{*}\to\infty}E_{bj}=\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n_{1}+1}{2},&&\text{if }\mu_{j}>0,\\ &\lim_{N^{*}\to\infty}E_{mj}=\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+n_{1}+1}{2},&&\text{if }\mu_{j}>0,\\ &\lim_{N^{*}\to\infty}E_{bj}=\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+n_{0}+1}{2},&&\text{if }\mu_{j}<0,\\ &\lim_{N^{*}\to\infty}E_{mj}=\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n_{0}+1}{2},&&\text{if }\mu_{j}<0,\end{aligned}\tag{5}$$

$$\begin{aligned}&\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j}^{2})=S_{[1,n_{1}]}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+S_{[n_{1}+1,n]}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{m})&&\text{if }\mu_{j}>0,\\ &\lim_{N^{*}\to\infty}\mathbb{E}(\mathbf{R}_{i,j}^{2})=S_{[1,n_{0}]}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{m})+S_{[n_{0}+1,n]}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{b})&&\text{if }\mu_{j}<0,\end{aligned}\tag{6}$$

where $S_{[a,b]}^{2}=\frac{1}{b-a+1}\sum_{k=a}^{b}k^{2}$.
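For concreteness, $S^2_{[a,b]}$ can be evaluated directly; the helper below is a sketch with hypothetical node counts:

```python
# S2(a, b) = (1 / (b - a + 1)) * sum_{k=a}^{b} k^2, as defined above.
def S2(a: int, b: int) -> float:
    return sum(k * k for k in range(a, b + 1)) / (b - a + 1)

# With hypothetical n0 = 30, n1 = 70, n = 100, these feed tau_bar_b and
# tau_bar_m in the case rho = 1 (all mu_j > 0):
print(S2(1, 70), S2(71, 100))   # 1668.5 and about 7350.2
```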
Therefore, we have

$$\bar{\mu}_{m}=\lim_{N^{*}\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{mj}=\rho\cdot\frac{n+n_{1}+1}{2}+(1-\rho)\cdot\frac{n_{0}+1}{2},$$

$$\bar{\mu}_{b}=\lim_{N^{*}\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{bj}=\rho\cdot\frac{n_{1}+1}{2}+(1-\rho)\cdot\frac{n+n_{0}+1}{2},$$

where $\rho=\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}\mathbb{I}(\mu_{j}>0)$.
Define $\bar{\mu}_i = \bar{\mu}_m\cdot\mathbb{I}(i\in\mathcal{I}_m) + \bar{\mu}_b\cdot\mathbb{I}(i\in\mathcal{I}_b)$. Considering that

$$\begin{aligned}\lim_{N^{*}\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{ij}&=\lim_{N^{*}\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}\mathbb{E}(\mathbf{R}_{i,j}-\bar{\mu}_{i})^{2}\\&=\lim_{p\to\infty}\lim_{N^{*}\to\infty}\frac{1}{p}\sum_{j=1}^{p}\left[\mathbb{E}(\mathbf{R}_{i,j}^{2})-2\bar{\mu}_{i}\mathbb{E}(\mathbf{R}_{i,j})+(\bar{\mu}_{i})^{2}\right]\\&=\left[\bar{\tau}_{m}-(\bar{\mu}_{m})^{2}\right]\cdot\mathbb{I}(i\in\mathcal{I}_{m})+\left[\bar{\tau}_{b}-(\bar{\mu}_{b})^{2}\right]\cdot\mathbb{I}(i\in\mathcal{I}_{b}),\end{aligned}$$
where

$$\begin{array}{c}{{\bar{\tau}_{b}=\rho\cdot S_{[1,n_{1}]}^{2}+(1-\rho)\cdot S_{[n_{0}+1,n]}^{2},}}\\ {{\bar{\tau}_{m}=\rho\cdot S_{[n_{1}+1,n]}^{2}+(1-\rho)\cdot S_{[1,n_{0}]}^{2}.}}\end{array}$$

According to Theorem 2.1,
$$\begin{array}{r c l}{{\bar{s}_{b}^{2}}}&{{=}}&{{\operatorname*{lim}_{p\to\infty}\operatorname*{lim}_{N^{*}\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{b j}=\bar{\tau}_{b}-(\bar{\mu}_{b})^{2},}}\\ {{}}&{{}}&{{}}\\ {{\bar{s}_{m}^{2}}}&{{=}}&{{\operatorname*{lim}_{p\to\infty}\operatorname*{lim}_{N^{*}\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{m j}=\bar{\tau}_{m}-(\bar{\mu}_{m})^{2}.}}\end{array}$$
This completes the proof.
Proof. According to Theorem 2.1, we only need to compute $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}_b^2$ and $\bar{s}_m^2$ under the mean shift attacks.
Under the mean shift attack, all the malicious gradients are inserted at a position that depends on $z$. More specifically, for a relatively large $n$, the samples from the benign nodes are approximately normally distributed. Therefore, on average, a proportion $\Phi(z)$ of the benign nodes have higher gradient values than the malicious nodes.
First, we derive the property in terms of the first moment. Denote $\alpha = \lfloor n_1\Phi(z)\rfloor$. For a benign node, we have
$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}E_{bj}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{1}{n_{1}}\left(\sum_{k=1}^{\alpha}k+\sum_{s=n_{0}+1+\alpha}^{n}s\right)=\frac{n+1}{2}+\frac{n_{0}}{n_{1}}(n_{1}-\alpha).$$
For a malicious node, we have
$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}E_{mj}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j})={\frac{\alpha+1+\alpha+n_{0}}{2}}=\alpha+{\frac{1+n_{0}}{2}}.$$
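As a numeric sanity check of these first-moment formulas (the values of $n_0$, $n_1$ and the shift parameter $z$ are hypothetical, with $\Phi$ the standard normal CDF):

```python
from math import floor
from scipy.stats import norm

n0, n1 = 30, 70                      # malicious / benign node counts (assumed)
n = n0 + n1
z = -0.5                             # mean shift parameter (assumed)
alpha = floor(n1 * norm.cdf(z))      # alpha = floor(n1 * Phi(z)) = 21 here

E_b = (n + 1) / 2 + (n0 / n1) * (n1 - alpha)   # limiting E_bj = 71.5
E_m = alpha + (1 + n0) / 2                     # limiting E_mj = 36.5
print(alpha, E_b, E_m)
```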
Therefore, according to Theorem 2.1,
$$\begin{array}{r c l}{{\bar{\mu}_{b}}}&{{=}}&{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{b j}=\frac{n+1}{2}+\frac{n_{0}}{n_{1}}(n_{1}-\alpha),}}\\ {{}}&{{}}&{{}}\\ {{\bar{\mu}_{m}}}&{{=}}&{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{m j}=\alpha+\frac{1+n_{0}}{2}.}}\end{array}$$
Now, we derive the property in terms of the second moment. For a benign node, we have
$$\lim_{N^{*}\to\infty}\lim_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j}^{2})=\frac{1}{n_{1}}\left(\sum_{k=1}^{\alpha}k^{2}+\sum_{s=n_{0}+1+\alpha}^{n}s^{2}\right)=\frac{1}{n_{1}}\left(\tau(n)+\tau(\alpha)-\tau(\alpha+1+n_{0})\right),$$
where $\tau(\cdot)$ is the sum-of-squares function, i.e., $\tau(n)=\sum_{k=1}^{n}k^{2}$.
For a malicious node, we have
$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}({\boldsymbol{R}}_{i,j}^{2})=\left(\alpha+{\frac{1+n_{0}}{2}}\right)^{2}.$$
Therefore, according to Theorem 2.1,
$$\begin{array}{r c l}{{\bar{s}_{b}^{2}}}&{{=}}&{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{bj}=\frac{1}{n_{1}}\left(\tau(n)+\tau(\alpha)-\tau(\alpha+1+n_{0})\right)-\bar{\mu}_{b}^{2},}}\\ {{}}&{{}}&{{}}\\ {{\bar{s}_{m}^{2}}}&{{=}}&{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{mj}=0.}}\end{array}$$
This completes the proof.
## F Neural Network Configurations
We train these models for 25 epochs with a batch size of 10, using an SGD optimizer with a learning rate of 0.01 and a momentum of 0.5. The accuracy of each model is evaluated on a holdout set of 1000 samples. A code sketch of this setup follows the layer listing in F.1.
## F.1 FASHION-MNIST, MNIST and QMNIST
- Layer 1: 1 ∗ 16 ∗ 5, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Layer 2: 16 ∗ 32 ∗ 5, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Output: 10 Classes, Linear.
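The following PyTorch sketch is one plausible reading of this configuration (padding and the flattened feature size are assumptions, not taken from the authors' code), wired to the training hyperparameters above; the CIFAR-10 variant in F.2 differs only in its channel counts and kernel size.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2),  # Layer 1: 1*16*5
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),           # Layer 2: 16*32*5
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)       # Output: 10 classes, linear

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.flatten(self.features(x), 1))

model = SmallCNN()  # 28x28 single-channel input (FASHION-MNIST, MNIST, QMNIST)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
```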
## F.2 CIFAR-10
- Layer 1: 3 ∗ 32 ∗ 3, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Layer 2: 32 ∗ 32 ∗ 3, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Output: 10 Classes, Linear.
## G Metrics
The metrics observed in Section 4 to evaluate the performance of the defense mechanisms are defined as follows:
$$\begin{aligned}\text{Precision}&=\frac{\text{TP}}{\text{TP}+\text{FP}},\\ \text{Accuracy}&=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{FP}+\text{FN}+\text{TN}},\\ \text{Recall}&=\frac{\text{TP}}{\text{TP}+\text{FN}},\\ \text{F1}&=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}.\end{aligned}$$
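These quantities follow directly from the confusion-matrix counts. A small sketch, treating malicious nodes as the positive class (the example counts are hypothetical):

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# e.g. 28 of 30 malicious nodes flagged, with 3 benign nodes falsely flagged:
print(detection_metrics(tp=28, fp=3, fn=2, tn=67))
```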
## H Accuracy Of The Global Model Under Different Attacks
Tables 4 and 5 present the numeric accuracy of each experimental configuration at the 25th epoch.
## I MANDERA Performance With Different Clustering Algorithms
In this section, Figure 10 demonstrates that the discriminating performance of MANDERA remains robust when hierarchical clustering or Gaussian mixture models are used in place of K-means on the FASHION-MNIST data set. A sketch of swapping the clustering back-end is given below.
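A minimal sketch of such a swap, assuming each node is summarized by two rank-based features (mean and standard deviation of its ranks) before being clustered into two groups, in the spirit of Algorithm 1 (this is not the authors' exact implementation):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

def cluster_nodes(features: np.ndarray, method: str = "kmeans") -> np.ndarray:
    """Assign each node to one of two clusters using the chosen back-end."""
    if method == "kmeans":
        return KMeans(n_clusters=2, n_init=10).fit_predict(features)
    if method == "hierarchical":
        return AgglomerativeClustering(n_clusters=2).fit_predict(features)
    if method == "gmm":
        return GaussianMixture(n_components=2).fit(features).predict(features)
    raise ValueError(f"unknown method: {method}")

# Toy features: benign-like nodes have small rank spread, malicious-like large.
rng = np.random.default_rng(2)
features = np.vstack([rng.normal([50.0, 5.0], 1.0, (70, 2)),
                      rng.normal([50.0, 30.0], 1.0, (30, 2))])
labels = cluster_nodes(features, method="gmm")
```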
## J Model Losses On CIFAR-10, FASHION-MNIST And MNIST Data
Figures 11-13 present the model loss to accompany the model prediction performance for CIFAR-10, FASHION-MNIST and MNIST-Digits respectively, as previously seen in Section 4.
## K Model Losses On QMNIST Data
Figure 14 presents the model loss to accompany the model prediction performance of QMNIST previously seen in Section 4.
Table 4: FASHION-MNIST model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under each attack. Note that "NO-attack" is the baseline where no attack is conducted, and n0 denotes the number of malicious nodes among 100 nodes.
| Attack | Defence | n0 = 5 | n0 = 10 | n0 = 15 | n0 = 20 | n0 = 25 | n0 = 30 |
|--------|-----------|--------|---------|---------|---------|---------|---------|
| GA | Krum | 83.66 | 84.13 | 84.09 | 83.30 | 84.22 | 82.32 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.80 | 87.80 | 87.79 | 87.73 | 87.67 | 87.69 |
| | Median | 87.73 | 87.76 | 87.73 | 87.70 | 87.72 | 87.70 |
| | Trim-mean | 87.85 | 87.78 | 87.75 | 87.74 | 87.72 | 87.73 |
| | MANDERA | 87.81 | 87.83 | 87.82 | 87.77 | 87.80 | 87.76 |
| | FLTrust | 66.13 | 36.35 | 50.20 | 17.85 | 16.00 | 9.66 |
| ZG | Krum | 83.56 | 83.57 | 84.11 | 84.33 | 84.10 | 84.30 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 86.88 | 87.38 | 87.49 | 87.45 | 87.48 | 87.38 |
| | Median | 87.36 | 86.91 | 86.20 | 85.33 | 84.07 | 82.45 |
| | Trim-mean | 87.13 | 86.57 | 85.67 | 84.61 | 83.06 | 81.48 |
| | MANDERA | 87.79 | 87.81 | 87.84 | 87.72 | 87.76 | 87.78 |
| | FLTrust | 81.59 | 83.58 | 79.41 | 80.62 | 79.00 | 74.01 |
| SF | Krum | 84.49 | 84.71 | 84.43 | 83.58 | 83.61 | 83.72 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.60 | 87.64 | 87.62 | 87.50 | 87.47 | 87.35 |
| | Median | 87.40 | 86.91 | 86.21 | 85.36 | 84.11 | 82.31 |
| | Trim-mean | 87.48 | 86.97 | 86.20 | 84.92 | 83.08 | 81.20 |
| | MANDERA | 87.85 | 87.79 | 87.82 | 87.79 | 87.77 | 87.74 |
| | FLTrust | 86.96 | 85.97 | 84.55 | 76.92 | 75.72 | 76.90 |
| MS | Krum | 87.82 | 87.77 | 87.66 | 87.50 | 87.36 | 86.89 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.81 | 87.78 | 87.75 | 87.75 | 87.60 | 87.21 |
| | Median | 87.75 | 87.78 | 87.69 | 87.52 | 87.26 | 86.99 |
| | Trim-mean | 87.81 | 87.79 | 87.76 | 87.73 | 87.61 | 87.33 |
| | MANDERA | 87.81 | 87.78 | 87.78 | 87.79 | 87.71 | 87.79 |
| | FLTrust | 87.77 | 87.75 | 87.78 | 87.77 | 87.73 | 87.73 |
Table 5: CIFAR-10 model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under each attack. Note that "NO-attack" is the baseline where no attack is conducted, and n0 denotes the number of malicious nodes among 100 nodes.
| Attack | Defence | n0 = 5 | n0 = 10 | n0 = 15 | n0 = 20 | n0 = 25 | n0 = 30 |
|--------|-----------|--------|---------|---------|---------|---------|---------|
| GA | Krum | 47.66 | 47.16 | 47.18 | 47.26 | 47.25 | 46.77 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.69 | 55.85 | 55.67 | 55.63 | 55.46 | 55.22 |
| | Median | 55.47 | 55.53 | 55.47 | 55.40 | 55.29 | 55.22 |
| | Trim-mean | 55.77 | 55.72 | 55.56 | 55.50 | 55.43 | 55.31 |
| | MANDERA | 55.74 | 55.69 | 55.63 | 55.65 | 55.76 | 55.69 |
| | FLTrust | 19.66 | 27.54 | 11.99 | 9.21 | 9.73 | 9.96 |
| ZG | Krum | 46.85 | 46.84 | 47.96 | 47.13 | 47.12 | 47.53 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 52.30 | 53.87 | 54.28 | 54.36 | 54.35 | 54.10 |
| | Median | 54.06 | 52.18 | 50.18 | 48.01 | 44.89 | 38.08 |
| | Trim-mean | 53.34 | 51.22 | 49.14 | 46.45 | 42.02 | 34.36 |
| | MANDERA | 55.77 | 55.69 | 55.78 | 55.65 | 55.72 | 55.56 |
| | FLTrust | 48.05 | 39.21 | 39.44 | 44.25 | 40.27 | 39.49 |
| SF | Krum | 48.11 | 47.79 | 46.93 | 47.89 | 47.59 | 47.13 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.30 | 54.99 | 54.86 | 54.68 | 54.43 | 54.05 |
| | Median | 53.96 | 52.29 | 50.49 | 47.89 | 44.93 | 37.22 |
| | Trim-mean | 54.37 | 52.40 | 49.97 | 47.30 | 42.32 | 33.76 |
| | MANDERA | 55.78 | 55.69 | 55.62 | 55.55 | 55.67 | 55.56 |
| | FLTrust | 54.18 | 50.21 | 46.39 | 44.45 | 36.19 | 34.39 |
| MS | Krum | 55.60 | 55.23 | 54.51 | 53.79 | 52.31 | 50.54 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.68 | 55.62 | 55.37 | 54.98 | 54.26 | 52.10 |
| | Median | 55.47 | 55.20 | 54.55 | 53.72 | 52.17 | 50.55 |
| | Trim-mean | 55.64 | 55.59 | 55.38 | 55.09 | 54.29 | 52.32 |
| | MANDERA | 55.65 | 55.77 | 55.72 | 55.62 | 55.66 | 55.63 |
| | FLTrust | 55.81 | 55.64 | 55.62 | 55.42 | 55.09 | 54.65 |
![21_image_0.png](21_image_0.png)
![21_image_1.png](21_image_1.png)
(b) FASHION-MNIST accuracy

Figure 9: Model accuracy at each epoch of training; each curve represents a different defense against the Byzantine attacks.
![22_image_0.png](22_image_0.png)
(a) Gaussian mixture model.
![22_image_1.png](22_image_1.png)
(b) Hierarchical clustering.
Figure 10: Classification performance of our proposed approach MANDERA (Algorithm 1) with other clustering algorithms under four types of attack for FASHION-MNIST data. GA: Gaussian attack; ZG: zero-gradient attack; SF: sign-flipping attack; MS: mean shift attack. The boxplot bounds the 25th (Q1) and 75th (Q3) percentiles, with the central line representing the 50th percentile (median). The end points of the whiskers represent Q1 - 1.5(Q3 - Q1) and Q3 + 1.5(Q3 - Q1) respectively.
![23_image_0.png](23_image_0.png)
Figure 11: Model loss for CIFAR-10 data at each epoch of training; each curve represents a different defense against the Byzantine attacks.
![23_image_2.png](23_image_2.png)
![23_image_1.png](23_image_1.png)
Figure 12: Model loss for FASHION-MNIST data at each epoch of training; each curve represents a different defense against the Byzantine attacks.
![24_image_0.png](24_image_0.png)
Figure 13: Model loss for MNIST-Digits data at each epoch of training; each curve represents a different defense against the Byzantine attacks.
![24_image_1.png](24_image_1.png)
Figure 14: QMNIST model loss.
Figure 15: Model loss at each epoch of training; each curve represents a different defense against the Byzantine attacks.