# MANDERA: Malicious Node Detection in Federated Learning via Ranking

Anonymous authors
Paper under double-blind review

## Abstract

Byzantine attacks hinder the deployment of federated learning algorithms by sending malicious gradients that degrade the global model. Although the benign gradients and Byzantine gradients are distributed differently, identifying the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its own distribution, and (2) the benign gradients and the malicious gradients are mixed, so two-sample test methods cannot be applied directly. To address these issues, we propose MANDERA, which is theoretically guaranteed to efficiently detect all malicious gradients under Byzantine attacks with no prior knowledge or history about the number of attacked nodes. More specifically, we propose to transform the original gradient-update space into a ranking matrix. Under this transformation, the scales of different gradient dimensions become identical in the ranking space, so the high-dimensional benign gradients and the malicious gradients can be easily separated. The effectiveness of MANDERA is further confirmed by experiments on *four* Byzantine attack implementations (Gaussian, Zero Gradient, Sign Flipping, Mean Shift), compared with state-of-the-art defences. The experiments cover both IID and Non-IID datasets.

## 1 Introduction

Federated Learning (FL) is a decentralized learning framework that allows multiple participating nodes to learn on a local collection of training data. The updating gradient values of each node are sent to a global coordinator for aggregation. The global model collectively learns from the individual nodes by aggregating the gradient updates before relaying the updated global model back to the participating nodes. The aggregation over multiple nodes allows the model to learn from a larger dataset, which results in a model with greater performance than one learning only on its local subset of data. FL presents two key advantages: (1) increased privacy for the contributing node, as local data are not communicated to the global coordinator, and (2) a reduction in computation at the global node, as the computation is offloaded to the contributing nodes. However, FL is vulnerable to various attacks, including data poisoning attacks (Tolpegin et al., 2020) and Byzantine attacks (Lamport et al., 2019). Malicious actors in the collaborative process may seek to poison the performance of the global model, reducing its output performance (Chen et al., 2017; Baruch et al., 2019; Fang et al., 2020; Tolpegin et al., 2020), or to embed hidden back-doors within the model (Bagdasaryan et al., 2020). A Byzantine attack aims to devastate the performance of the global model by manipulating gradient values. The manipulated gradients are sent from malicious nodes whose identities are unknown to the global node. Byzantine attacks can result in a global model that produces an undesirable outcome (Lamport et al., 2019). Researchers seek to defend FL from the negative impacts of these attacks, either by identifying the malicious nodes or by making the global model more robust to these types of attacks. In this paper, we focus on identifying the malicious nodes and excluding them from the aggregation step, so as to mitigate their impact.
Most of the existing methods rely on the gradient values to determine whether a node is malicious or not, for example, Blanchard et al. (2017); Yin et al. (2018); Guerraoui et al. (2018); Li et al. (2020); Fang et al. (2020); Cao et al. (2020); Wu et al. (2020b); Xie et al. (2019; 2020); Cao et al. (2021) and So et al. (2021). All of the above methods are effective in certain scenarios.

![1_image_0.png](1_image_0.png)

Figure 1: Patterns of nodes in the gradient space and the ranking space, respectively, under mean shift attacks. The columns of the figure correspond to the number of malicious nodes among 100 nodes: 10, 20 and 30.

However, the literature lacks a theoretical guarantee of detecting all the malicious nodes. Although extreme malicious gradients can be excluded by the above approaches, some malicious nodes may be misclassified as benign nodes and vice versa. The challenges faced by the community are caused by two phenomena: [F1] the gradient values of benign nodes and malicious nodes are often indistinguishable; [F2] the gradient matrix is always high-dimensional (it has a large number of columns) and each dimension follows its own distribution. Phenomenon [F1] indicates that it is not reliable to detect malicious nodes using only a single column of the gradient matrix, while phenomenon [F2] hinders us from using all the columns of the gradient matrix, because doing so requires a principled way to accommodate a large number of columns with considerably different distributions. In this paper, we resolve these critical challenges from a novel perspective. Instead of working on the node updates directly, we extract information about malicious nodes indirectly by transforming the node updates from numeric gradient values into the ranking space. Compared to the original numeric gradient values, whose distribution is difficult to model, the rankings are much easier to handle both theoretically and practically. Moreover, as rankings are scale-free, we no longer need to worry about scale differences across dimensions. We prove, under mild conditions, that the first two moments of the transformed ranking vectors carry the key information to detect malicious nodes under Byzantine attacks. Based on these theoretical results, a highly efficient method called MANDERA is proposed to separate the malicious nodes from the benign ones by clustering all local nodes into two groups based on their ranking vectors. Figure 1 provides an illustrative motivation for our method: it shows the behaviors of malicious and benign nodes under mean shift attacks. The malicious and benign nodes are not distinguishable in the gradient space, due to the challenges mentioned above, while they are well separated in the ranking space.
The contributions of this work are as follows: (1) we propose the first algorithm leveraging the ranking space of model updates to detect malicious nodes (Figure 2); (2) we provide a theoretical guarantee for the detection of malicious nodes based on the ranking space under Byzantine attacks; (3) our method does not assume knowledge of the number of malicious nodes, which is required in the learning process of most prior methods; (4) we experimentally demonstrate the effectiveness and robustness of our defense against Byzantine attacks, including the Gaussian attack (GA), Sign Flipping attack (SF), Zero Gradient attack (ZG) and Mean Shift attack (MS); (5) an experimental comparison between MANDERA and a collection of robust aggregation techniques is provided.

Related works. In the literature, there has been a collection of efforts on defending against Byzantine attacks. Blanchard et al. (2017) propose a defense referred to as Krum, which treats local nodes whose update vectors are too far away from the aggregated barycenter as malicious nodes and precludes them from the downstream aggregation. Guerraoui et al. (2018) propose Bulyan, a process that performs aggregation on subsets of node updates (by iteratively leaving each node out) to find a set of nodes with the most aligned updates given an aggregation rule. Cao et al. (2020) maintain a trusted model and dataset against which submitted node updates may be bootstrapped, weighting each node's update in the aggregation step by its cosine similarity to the trusted update. Xie et al. (2019) compute a *Stochastic Descendant Score* (SDS) based on the estimated descendant of the loss function and the magnitude of the update submitted to the global node, and only include a predefined number of nodes with the highest SDS in the aggregation. On the other hand, Chen et al. (2021) propose a zero-knowledge approach to detect and remove malicious nodes by solving a weighted clustering problem: the resulting clusters update the model individually, accuracy against a validation set is checked, and all nodes in a cluster with a significant negative impact on accuracy are rejected and removed from the aggregation step.

![2_image_0.png](2_image_0.png)

Figure 2: An overview of MANDERA.

## 2 Defense Against Byzantine Attacks Via Ranking

In this section, notations are first introduced and an algorithm to detect malicious nodes is proposed.

## 2.1 Notations

Suppose there are $n$ local nodes in the federated learning framework, where $n_1$ nodes are benign nodes whose indices are denoted by $\mathcal{I}_b$ and the other $n_0 = n - n_1$ nodes are malicious nodes whose indices are denoted by $\mathcal{I}_m$. The training model is denoted by $f(\theta, D)$, where $\theta \in \mathbb{R}^{p\times 1}$ is a $p$-dimensional parameter vector and $D$ is a data matrix. Denote the message matrix received by the central server from all local nodes as $\mathbf{M} \in \mathbb{R}^{n\times p}$, where $\mathbf{M}_{i,:}$ denotes the message received from node $i$. For a benign node $i$, let $D_i$ be the data matrix on it with $N_i$ as the sample size; then $\mathbf{M}_{i,:} = \frac{\partial f(\theta, D_i)}{\partial \theta}\big|_{\theta=\theta^*}$, where $\theta^*$ is the parameter value of the global model. In the rest of the paper, we abbreviate $\frac{\partial f(\theta, D_i)}{\partial \theta}\big|_{\theta=\theta^*}$ as $\frac{\partial f(\theta, D_i)}{\partial \theta}$ for simplicity. A malicious node $j \in \mathcal{I}_m$, however, tends to attack the learning system by manipulating $\mathbf{M}_{j,:}$ in some way. Hereinafter, we denote $N^* = \min(\{N_i\}_{i\in\mathcal{I}_b})$ as the minimal sample size of the benign nodes.
Given a vector of real numbers $a \in \mathbb{R}^{n\times 1}$, define its ranking vector as $b = \mathrm{Rank}(a) \in \mathrm{perm}\{1, \cdots, n\}$, where the ranking operator $\mathrm{Rank}$ maps the vector $a$ to an element of the permutation space $\mathrm{perm}\{1, \cdots, n\}$, the set of all permutations of $\{1, \cdots, n\}$. For example, $\mathrm{Rank}(1.1, -2, 3.2) = (2, 3, 1)$: values are ranked from largest to smallest. We adopt average ranking when there are ties. With the $\mathrm{Rank}$ operator, we can transform the message matrix $\mathbf{M}$ into a ranking matrix $\mathbf{R}$ by replacing each column $\mathbf{M}_{:,j}$ with the corresponding ranking vector $\mathbf{R}_{:,j} = \mathrm{Rank}(\mathbf{M}_{:,j})$. Further, define

$$e_{i}\triangleq{\frac{1}{p}}\sum_{j=1}^{p}\mathbf{R}_{i,j}\qquad{\mathrm{and}}\qquad v_{i}\triangleq{\frac{1}{p}}\sum_{j=1}^{p}(\mathbf{R}_{i,j}-e_{i})^{2}$$

to be the mean and variance of $\mathbf{R}_{i,:}$, respectively. As shown in the later subsections, we can judge whether node $i$ is a malicious node based on $(e_i, v_i)$ under various attack types. In the following, we first highlight the behavior of the benign nodes, and then discuss the behavior of the malicious nodes and their difference from the benign nodes under Byzantine attacks.

## 2.2 Behaviors Of Nodes Under Byzantine Attacks

Byzantine attacks aim to devastate the global model by manipulating the gradient values of some local nodes. For a general Byzantine attack, we assume that the gradient vectors of benign nodes and malicious nodes follow two different distributions $G$ and $F$. We would therefore expect systematic differences in their behavior patterns in the ranking matrix $\mathbf{R}$, based on which malicious node detection can be achieved. Theorem 2.1 characterizes the concrete behaviors of benign nodes and malicious nodes under general Byzantine attacks.

![3_image_0.png](3_image_0.png)

Figure 3: Scatter plots of $(e_i, s_i)$ (ranking mean and standard deviation) for the 100 nodes under four types of attack, taken from the 1st epoch of training on the FASHION-MNIST dataset. The four attacks are the Gaussian Attack (GA), Zero Gradient attack (ZG), Sign Flipping attack (SF) and Mean Shift attack (MS).

Theorem 2.1 (Behavior under Byzantine attacks). *For a general Byzantine attack, assume that the gradient values from benign nodes and malicious nodes follow two distributions $G(\cdot)$ and $F(\cdot)$ respectively (both $G$ and $F$ are $p$-dimensional). We have*

$$\begin{array}{r c l}{{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}e_{i}}}&{{=}}&{{\bar{\mu}_{b}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{\mu}_{m}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\\ {{\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}v_{i}}}&{{=}}&{{\bar{s}_{b}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{b})+\bar{s}_{m}^{2}\cdot\mathbb{I}(i\in\mathcal{I}_{m})\ a.s.,}}\end{array}$$

*where $(\bar{\mu}_b, \bar{s}^2_b)$ and $(\bar{\mu}_m, \bar{s}^2_m)$ are highly non-linear functions of $G(\cdot)$ and $F(\cdot)$ whose concrete forms are detailed in Appendix A, and "a.s." abbreviates "almost surely".*

The proof can be found in Appendix A. If the attackers can access the exact distribution $G$, which is very rare, an obvious strategy to evade defense is to let $F = G$; in this case, however, the attack has no impact on the global model. More often, the attackers have little information about the distribution $G$. In this case, it is a rare event for the attackers to design a distribution $F$ satisfying $(\bar{\mu}_b, \bar{s}^2_b) = (\bar{\mu}_m, \bar{s}^2_m)$ for the malicious nodes to follow. In fact, most popular Byzantine attacks never attempt to do so.
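To make the transformation concrete, the following minimal sketch (our own illustration; the helper name and the use of `scipy.stats.rankdata` are assumptions, not part of the paper) computes the ranking matrix $\mathbf{R}$ and the per-node features $(e_i, s_i)$ from a message matrix $\mathbf{M}$:

```python
import numpy as np
from scipy.stats import rankdata

def ranking_features(M):
    """Map an (n x p) message matrix to the per-node features (e_i, s_i).

    Each column is ranked from largest to smallest; ties receive the average rank.
    """
    R = rankdata(-M, axis=0, method="average")  # negate so the largest value gets rank 1
    e = R.mean(axis=1)                          # e_i: mean rank of node i across the p dimensions
    s = R.std(axis=1)                           # s_i = sqrt(v_i): std of node i's ranks
    return e, s
```

Because each column of $\mathbf{R}$ is (up to ties) a permutation of $\{1,\cdots,n\}$, these features are free of the per-dimension scales of $\mathbf{M}$.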
The malicious nodes and the benign nodes are therefore distinguishable with respect to their feature vectors $\{(e_i, v_i)\}_{1\leq i\leq n}$, because $(e_i, v_i)$ converges to different limits for benign and malicious nodes. Considering that the standard deviation $s_i = \sqrt{v_i}$ is typically on a similar scale to $e_i$, hereinafter we employ $(e_i, s_i)$, instead of $(e_i, v_i)$, as the feature vector of node $i$ for malicious node detection. Figure 3 illustrates typical scatter plots of $(e_i, s_i)$ for benign and malicious nodes under the four typical Byzantine attacks, i.e., GA, ZG, SF and MS. The malicious nodes and benign nodes are well separated in all of these scatter plots, indicating that a proper clustering algorithm can distinguish the two groups. We note that both $s_i$ and $e_i$ are informative for malicious node detection, since in some cases (e.g., under Gaussian attacks) it is difficult to distinguish malicious nodes from benign ones based on $e_i$ only.

## 2.3 Algorithm For Malicious Node Detection Under Byzantine Attacks

Theorem 2.1 implies that, under general Byzantine attacks, the feature vector $(e_i, s_i)$ of node $i$ converges to two different limits for benign and malicious nodes, respectively. Thus, for a real dataset where the $N_i$'s and $p$ are all finite but reasonably large, the scatter plot of $\{(e_i, s_i)\}_{1\leq i\leq n}$ demonstrates a clustering structure: one cluster for the benign nodes and the other for the malicious nodes.

## Algorithm 1 MANDERA

Input: The message matrix $\mathbf{M}$.
1: Convert the message matrix $\mathbf{M}$ to the ranking matrix $\mathbf{R}$ by applying the $\mathrm{Rank}$ operator.
2: Compute the mean and standard deviation of the rows of $\mathbf{R}$, i.e., $\{(e_i, s_i)\}_{1\leq i\leq n}$.
3: Run the K-means clustering algorithm on $\{(e_i, s_i)\}_{1\leq i\leq n}$ with $K = 2$, and predict the set of benign nodes as the larger cluster, denoted by $\hat{\mathcal{I}}_b$.
Output: The predicted benign node set $\hat{\mathcal{I}}_b$.

Based on this intuition, we propose *MAlicious Node DEtection via RAnking* (MANDERA) to detect the malicious nodes; its workflow is detailed in Algorithm 1. MANDERA can be applied to either a single epoch or multiple epochs. In single-epoch mode, the input $\mathbf{M}$ is the message matrix received from a single epoch. In multiple-epoch mode, $\mathbf{M}$ is the column-wise concatenation of the message matrices from multiple epochs. By default, the experiments below all use a single epoch to detect the malicious nodes. The predicted benign node set $\hat{\mathcal{I}}_b$ obtained by MANDERA naturally leads to an aggregated message $\hat{\mathbf{m}}_{b,:} = \frac{1}{\#(\hat{\mathcal{I}}_b)}\sum_{i\in\hat{\mathcal{I}}_b}\mathbf{M}_{i,:}$. Theorem 2.2 shows that $\hat{\mathcal{I}}_b$ and $\hat{\mathbf{m}}_{b,:}$ are consistent estimators of $\mathcal{I}_b$ and $\mathbf{m}_{b,:} = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b}\mathbf{M}_{i,:}$ respectively, indicating that MANDERA enjoys a *robustness guarantee* (Steinhardt, 2018) for Byzantine attacks.

Theorem 2.2 (Robustness guarantee). *Under Byzantine attacks, we have:*

$$\operatorname*{lim}_{N^{\star},p\to\infty}\mathbb{P}(\hat{\mathcal{I}}_{b}=\mathcal{I}_{b})=1,\ \operatorname*{lim}_{N^{\star},p\to\infty}\mathbb{E}||\hat{\mathbf{m}}_{b,:}-\mathbf{m}_{b,:}||_{2}=0.$$

The proof of Theorem 2.2 can be found in Appendix B. As $\mathbb{E}(\hat{\mathbf{m}}_{b,:}) = \mathbf{m}_{b,:}$, MANDERA satisfies the $(\alpha, f)$-Byzantine Resilience condition used by Blanchard et al. (2017) and Guerraoui et al. (2018) to measure the robustness of their estimators.

## 3 Theoretical Analysis For Specific Byzantine Attacks

Theorem 2.1 provides general guidance about the behavior of nodes under Byzantine attacks. In this section, we examine the behavior under specific attacks, including Gaussian attacks, zero gradient attacks, sign flipping attacks and mean shift attacks.
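Before turning to the specific attacks, the following end-to-end sketch illustrates Algorithm 1 and the aggregation step of Theorem 2.2; it assumes scikit-learn's `KMeans` and is an illustration under those assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.cluster import KMeans

def mandera_detect(M):
    """Sketch of Algorithm 1: predict the benign node indices from the message matrix M."""
    R = rankdata(-M, axis=0, method="average")                    # step 1: ranking matrix
    feats = np.column_stack([R.mean(axis=1), R.std(axis=1)])      # step 2: (e_i, s_i) per node
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)   # step 3: K-means with K = 2
    benign_label = np.argmax(np.bincount(labels))                 # the larger cluster is predicted benign
    return np.where(labels == benign_label)[0]

def aggregate(M):
    """Average only the messages of the predicted benign nodes (the estimator in Theorem 2.2)."""
    return M[mandera_detect(M)].mean(axis=0)
```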
As the behavior of benign nodes does not depend on the type of Byzantine attack, we can study the statistical properties of $(e_i, v_i)$ for a benign node $i \in \mathcal{I}_b$ before specifying a concrete attack type. For any benign node $i$, the message generated for the $j$-th parameter is $\mathbf{M}_{i,j} = \frac{1}{N_i}\sum_{l=1}^{N_i}\frac{\partial f(\theta, D_{i,l})}{\partial\theta_j}$, where $D_{i,l}$ denotes the $l$-th sample on node $i$. Throughout this paper, we assume that the $D_{i,l}$'s are independent and identically distributed (IID) samples drawn from a data distribution $\mathcal{D}$.

Lemma 3.1. *Under the IID data assumption, further denote $\mu_j = \mathbb{E}\left[\frac{\partial f(\theta, D_{i,l})}{\partial\theta_j}\right]$ and $\sigma^2_j = \mathrm{Var}\left[\frac{\partial f(\theta, D_{i,l})}{\partial\theta_j}\right] < \infty$. With $N_i$ going to infinity, for all $j \in \{1,\cdots,p\}$ we have $\mathbf{M}_{i,j} \to \mu_j$ almost surely (a.s.) and $\mathbf{M}_{i,j} \to_d \mathcal{N}(\mu_j, \sigma^2_j/N_i)$.*

Lemma 3.1 can be proved by using Kolmogorov's Strong Law of Large Numbers (KSLLN) and the Central Limit Theorem. In the rest of this section, we derive the detailed forms of $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}^2_b$ and $\bar{s}^2_m$, as defined in Theorem 2.1, under four specific Byzantine attacks.

## 3.1 Gaussian Attack

Definition 3.2 (Gaussian attack). In a Gaussian attack, the attacker generates malicious gradient values as follows: $\{\mathbf{M}_{i,:}\}_{i\in\mathcal{I}_m} \sim \mathcal{MVN}(\mathbf{m}_{b,:}, \Sigma)$, where $\mathbf{m}_{b,:} = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b}\mathbf{M}_{i,:}$ is the mean vector of the Gaussian distribution and $\Sigma$ is a covariance matrix determined by the attacker.

![5_image_0.png](5_image_0.png)

Figure 4: Independence tests for 100,000 column pairs randomly chosen from the message matrix $\mathbf{M}$ generated from the FASHION-MNIST data.

Considering that $\mathbf{M}_{i,j} \to \mu_j$ a.s. as $N_i$ goes to infinity for all $i \in \mathcal{I}_b$, it is straightforward to see from Definition 3.2 that $\lim_{N^*\to\infty}\mathbf{m}_{b,j} = \mu_j$ a.s., and that the distribution of $\mathbf{M}_{i,j}$ for each $i \in \mathcal{I}_m$ converges to a Gaussian distribution centered at $\mu_j$. Based on this fact, the limiting behavior of the feature vector $(e_i, v_i)$ can be established for both benign and malicious nodes. Theorem 3.3 summarizes the results, with the proof detailed in Appendix C.

Theorem 3.3 (Behavior under Gaussian attacks). *Assuming $\{\mathbf{R}_{:,j}\}_{1\leq j\leq p}$ are independent of each other, under the Gaussian attack the behaviors of benign and malicious nodes are as follows:*

$$\bar{\mu}_{b}=\bar{\mu}_{m}=\frac{n+1}{2},\quad\bar{s}_{b}^{2}=\frac{1}{p}\sum_{j=1}^{p}s_{b,j}^{2},\quad\bar{s}_{m}^{2}=\frac{1}{p}\sum_{j=1}^{p}s_{m,j}^{2},$$

*where $s^2_{b,j}$ and $s^2_{m,j}$ are both complex functions of $n_0$, $n_1$, $\sigma^2_j$, $\Sigma_{j,j}$ and $N^*$ whose concrete forms are detailed in Appendix C.*

Considering that $\bar{s}^2_b = \bar{s}^2_m$ if and only if the $\Sigma_{j,j}$'s fall into a lower-dimensional manifold whose Lebesgue measure is zero, we have $\mathbb{P}(\bar{s}^2_b = \bar{s}^2_m) = 0$ if the attacker specifies the Gaussian variances $\Sigma_{j,j}$ arbitrarily in the Gaussian attack. Thus, Theorem 3.3 in fact suggests that the benign nodes and the malicious nodes differ in the value of $v_i$, and therefore provides a guideline to detect the malicious nodes. Although we do need $N^*$ and $p$ to go to infinity to obtain the theoretical results in Theorem 3.3, in practice the malicious node detection algorithm based on the theorem typically works very well when $N^*$ and $p$ are reasonably large and the $N_i$'s are not dramatically far away from each other. The independent ranking assumption in Theorem 3.3, which assumes that $\{\mathbf{R}_{:,j}\}_{1\leq j\leq p}$ are independent of each other, may look restrictive. However, it is in fact a mild condition that is easily satisfied in practice, for the following reasons.
First, for a benign node $i \in \mathcal{I}_b$, $\mathbf{M}_{i,j}$ and $\mathbf{M}_{i,k}$ are often nearly independent, as the correlation between two model parameters $\theta_j$ and $\theta_k$ is typically very weak in a large deep neural network with a huge number of parameters. To verify this statement, we ran independence tests for 100,000 column pairs randomly chosen from the message matrix $\mathbf{M}$ generated from the FASHION-MNIST data. The distribution of the p-values of these tests, shown as a histogram in Figure 4, is very close to a uniform distribution, indicating that $\mathbf{M}_{i,j}$ and $\mathbf{M}_{i,k}$ are indeed nearly independent in practice. Second, even if some $\mathbf{M}_{:,j}$ and $\mathbf{M}_{:,k}$ show a strong correlation, the magnitude of the correlation would be greatly reduced during the transformation from $\mathbf{M}$ to $\mathbf{R}$, as the final ranking $\mathbf{R}_{i,j}$ also depends on many other factors. In fact, the independent ranking assumption could be relaxed to an uncorrelated ranking assumption, which assumes only that the rankings are uncorrelated with each other. Adopting this weaker assumption changes the convergence type in our theorems from almost sure convergence to convergence in probability.

## 3.2 Sign Flipping Attack

Definition 3.4 (Sign flipping attack). The sign flipping attack generates the gradient values of malicious nodes by flipping the sign of the average of all the benign nodes' gradients at each epoch, i.e., specifying $\mathbf{M}_{i,:} = -r\mathbf{m}_{b,:}$ for any $i \in \mathcal{I}_m$, where $r > 0$ and $\mathbf{m}_{b,:} = \frac{1}{n_1}\sum_{k\in\mathcal{I}_b}\mathbf{M}_{k,:}$.

Based on the above definition, the update message of a malicious node $i$ under the sign flipping attack is $\mathbf{M}_{i,:} = -r\mathbf{m}_{b,:} = -\frac{r}{n_1}\sum_{k\in\mathcal{I}_b}\mathbf{M}_{k,:}$. Theorem 3.5 summarizes the behaviors of malicious nodes and benign nodes, with the detailed proof provided in Appendix D.

Theorem 3.5 (Behavior under sign flipping attacks). *With the same assumption as posed in Theorem 3.3, under the sign flipping attack the behaviors of benign and malicious nodes are as follows:*

$$\begin{array}{l l}{{\bar{\mu}_{b}=\frac{n+n_{0}+1}{2}-n_{0}\rho,}}&{{\bar{\mu}_{m}=n_{1}\rho+\frac{n_{0}+1}{2},}}\\ {{\bar{s}_{b}^{2}=\rho S_{[1,n_{1}]}^{2}+(1-\rho)S_{[n_{0}+1,n]}^{2}-(\bar{\mu}_{b})^{2},}}\\ {{\bar{s}_{m}^{2}=\rho S_{[n_{1}+1,n]}^{2}+(1-\rho)S_{[1,n_{0}]}^{2}-(\bar{\mu}_{m})^{2},}}\end{array}$$

*where $\rho = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}\mathbb{I}(\mu_j > 0)$ and $S^2_{[a,b]} = \frac{1}{b-a+1}\sum_{k=a}^{b}k^2$; the limits above also depend on $n_0$ and $n_1$, and both $\bar{s}^2_m$ and $\bar{s}^2_b$ are quadratic functions of $\rho$.*

Considering that $\bar{\mu}_b = \bar{\mu}_m$ if and only if $\rho = \frac{1}{2}$, and $\bar{s}^2_b = \bar{s}^2_m$ if and only if $\rho$ is the solution of a quadratic equation, the probability of $(\bar{\mu}_b, \bar{s}^2_b) = (\bar{\mu}_m, \bar{s}^2_m)$ is zero as $p\to\infty$. This phenomenon suggests that we can also detect the malicious nodes based on the moments $(e_i, v_i)$ to defend against the sign flipping attack. Notably, the limiting behavior of $e_i$ and $v_i$ does not depend on the specification of $r$, which defines the sign flipping attack. Although this may look abnormal at first glance, it is understandable once we realize that, as the variance of $\mathbf{M}_{i,j}$ shrinks to zero when $N_i$ goes to infinity for each benign node $i$, any difference between $\mu_j$ and $\mu_j(r)$ results in the same ranking vector $\mathbf{R}_{:,j}$ in the ranking space.

## 3.3 Zero Gradient Attack

Definition 3.6 (Zero gradient attack). The zero gradient attack aims to make the aggregated message zero, i.e., $\sum_{i=1}^{n}\mathbf{M}_{i,:} = \mathbf{0}$, at each epoch, by specifying $\mathbf{M}_{i,:} = -\frac{n_1}{n_0}\mathbf{m}_{b,:}$ for all $i \in \mathcal{I}_m$. Apparently, the zero gradient attack defined above is a special case of the sign flipping attack obtained by setting $r = \frac{n_1}{n_0}$.
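For reference, a simulation sketch of Definitions 3.2, 3.4 and 3.6 (our own illustrative code; the Gaussian attack is shown with a diagonal covariance for simplicity, and all function names are ours):

```python
import numpy as np

def benign_mean(M, malicious):
    """Mean of the benign rows m_b (the attacker is assumed to observe all updates)."""
    benign = np.setdiff1d(np.arange(M.shape[0]), malicious)
    return M[benign].mean(axis=0)

def gaussian_attack(M, malicious, var_diag):
    """Definition 3.2 with a diagonal covariance Sigma = diag(var_diag)."""
    M = M.copy()
    m_b = benign_mean(M, malicious)
    M[malicious] = np.random.normal(m_b, np.sqrt(var_diag), size=(len(malicious), M.shape[1]))
    return M

def sign_flipping_attack(M, malicious, r=1.0):
    """Definition 3.4; r = n1/n0 recovers the zero gradient attack of Definition 3.6."""
    M = M.copy()
    M[malicious] = -r * benign_mean(M, malicious)
    return M
```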
For any $r > 0$, the conclusions of Theorem 3.5 remain unchanged; therefore, under the zero gradient attack the nodes follow the same limiting behaviors as described in Theorem 3.5.

## 3.4 Mean Shift Attack

Definition 3.7 (Mean shift attack). The mean shift attack (Baruch et al., 2019) manipulates the updates of the malicious nodes in the following fashion: $\mathbf{M}_{i,j} = \mu_j - z\cdot\sigma_j$ for $i \in \mathcal{I}_m$ and $1\leq j\leq p$, where $\mu_j = \frac{1}{n_1}\sum_{i\in\mathcal{I}_b}\mathbf{M}_{i,j}$, $\sigma_j = \sqrt{\frac{1}{n_1}\sum_{i\in\mathcal{I}_b}(\mathbf{M}_{i,j}-\mu_j)^2}$ and $z = \arg\max_t \phi(t)$.

## B Proof Of Theorem 2.2

Proof. According to Theorem 2.1, the feature vectors of benign and malicious nodes converge to two distinct limits, denoted $(e_b, v_b)$ and $(e_m, v_m)$, so there exists $\delta > 0$ such that $||(e_b, v_b) - (e_m, v_m)||_2 > \delta$ and, for sufficiently large $N^*$ and $p$,

$$||(e_{i},v_{i})-(e_{b},v_{b})||_{2}\leq\frac{\delta}{2}\ \ \mathrm{for}\ \forall\ i\in\mathcal{I}_{b}\quad\mathrm{and}\quad||(e_{i},v_{i})-(e_{m},v_{m})||_{2}\leq\frac{\delta}{2}\ \ \mathrm{for}\ \forall\ i\in\mathcal{I}_{m}.$$

Therefore, with a reasonable clustering algorithm such as K-means with $K = 2$, we would expect $\hat{\mathcal{I}}_b = \mathcal{I}_b$ with probability 1. Because we can always find a $\Delta > 0$ such that $||\mathbf{M}_{i,:} - \mathbf{M}_{j,:}||_2 \leq \Delta$ for any node pair $(i, j)$ in a fixed dataset with a finite number of nodes, and $\hat{\mathbf{m}}_{b,:} = \mathbf{m}_{b,:}$ when $\hat{\mathcal{I}}_b = \mathcal{I}_b$, we have

$$\mathbb{E}||{\hat{\mathbf{m}}}_{b,:}-\mathbf{m}_{b,:}||_{2}\leq\Delta\cdot\mathbb{P}({\hat{\mathcal{I}}}_{b}\neq{\mathcal{I}}_{b}),$$

and thus

$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}\mathbb{E}||\hat{\mathbf{m}}_{b,:}-\mathbf{m}_{b,:}||_{2}=0.$$

This completes the proof.

## C Proof Of Theorem 3.3

Proof. According to Theorem 2.1, we only need to compute $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}^2_b$ and $\bar{s}^2_m$ under the Gaussian attack. Because $\mathbf{M}_{i,j} \to_d \mathcal{N}(\mu_j, \Sigma_{j,j})$ for $\forall\ i\in\mathcal{I}_m$ and $\mathbf{M}_{i,j} \to_d \mathcal{N}(\mu_j, \sigma^2_j/N_i)$ for $\forall\ i\in\mathcal{I}_b$ when $N^*\to\infty$, it is straightforward to see, due to the symmetry of the Gaussian distribution, that

$$\lim_{N^*\to\infty}E_{bj}=\lim_{N^*\to\infty}E_{mj}=\lim_{N^*\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+1}{2},\ 1\leq i\leq n,\ 1\leq j\leq p.$$

Therefore, we have

$$\bar{\mu}_{b}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}{\frac{1}{p}}\sum_{j=1}^{p}E_{b j}={\frac{n+1}{2}},\qquad\bar{\mu}_{m}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{p\to\infty}{\frac{1}{p}}\sum_{j=1}^{p}E_{m j}={\frac{n+1}{2}}.$$

Moreover, assuming that the sample sizes of the different benign nodes approach each other as $N^*$ goes to infinity, i.e.,

$$\operatorname*{lim}_{N^{*}\to\infty}{\frac{1}{N^{*}}}\operatorname*{max}_{i,k\in{\mathcal{I}}_{b}}|N_{i}-N_{k}|=0,$$

for each parameter dimension $j$, $\{\mathbf{M}_{i,j}\}_{i\in\mathcal{I}_b}$ converges to the same Gaussian distribution $\mathcal{N}(\mu_j, \sigma^2_j/N^*)$ with the increase of $N^*$. Thus, due to the exchangeability of $\{\mathbf{M}_{i,j}\}_{i\in\mathcal{I}_b}$ and of $\{\mathbf{M}_{i,j}\}_{i\in\mathcal{I}_m}$, it is easy to see that

$$\lim_{N^*\to\infty}V_{bj}=s^2_{b,j},\quad\lim_{N^*\to\infty}V_{mj}=s^2_{m,j},$$

where $s^2_{b,j}$ and $s^2_{m,j}$ are both complex functions of $n_0$, $n_1$, $\sigma^2_j$, $\Sigma_{j,j}$ and $N^*$, and $s^2_{b,j} = s^2_{m,j}$ if and only if $\sigma^2_j/N^* = \Sigma_{j,j}$. According to Theorem 2.1, $\bar{s}^2_b = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{bj} = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}s^2_{b,j}$ and $\bar{s}^2_m = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{mj} = \lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}s^2_{m,j}$. The proof is complete.

## D Proof Of Theorem 3.5

Proof. According to Theorem 2.1, we only need to compute $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}^2_b$ and $\bar{s}^2_m$ under the sign flipping attack.

Lemma D.1. *Under the sign flipping attack, for each malicious node $i\in\mathcal{I}_m$ and any parameter dimension $j$, we have $\mathbf{M}_{i,j} = -\frac{r}{n_1}\sum_{k\in\mathcal{I}_b}\mathbf{M}_{k,j}$, which is a deterministic function of $\{\mathbf{M}_{k,j}\}_{k\in\mathcal{I}_b}$, whose limiting distribution when $N^*$ goes to infinity is*

$$\mathbf{M}_{i,j}\to_{d}\mathcal{N}\big(\mu_{j}(r),\ \sigma_{j}^{2}(r)\big),\ 1\leq j\leq p,$$

*where $\mu_j(r) = -r\mu_j$, $\sigma^2_j(r) = \frac{r^2\sigma^2_j}{n_1\bar{N}_b}$, and $\bar{N}_b = \frac{n_1}{\sum_{k\in\mathcal{I}_b}1/N_k}$ is the harmonic mean of $\{N_k\}_{k\in\mathcal{I}_b}$.*
Lemma 3.1 and Lemma D.1 tell us that for each parameter dimension $j$, the distribution of $\{\mathbf{M}_{i,j}\}_{i=1}^{n}$ is a mixture of Gaussian components $\{\mathcal{N}(\mu_j, \sigma^2_j/N_i)\}_{i\in\mathcal{I}_b}$ centered at $\mu_j$ plus a point mass located at $\mu_j(r) = -r\mu_j$. If the $N_i$'s are reasonably large, the variances $\sigma^2_j/N_i$ are very close to zero, and the probability mass of the mixture distribution concentrates at two local centers, $\mu_j$ and $\mu_j(r) = -r\mu_j$, one for the benign nodes and the other for the malicious nodes. Under the sign flipping attack, because $\mathbf{M}_{i,j}\to_d\mathcal{N}(\mu_j(r), \sigma^2_j(r))$ for $\forall\ i\in\mathcal{I}_m$ and $\mathbf{M}_{i,j}\to_d\mathcal{N}(\mu_j, \sigma^2_j/N_i)$ for $\forall\ i\in\mathcal{I}_b$ when $N^*\to\infty$, and

$$\operatorname*{lim}_{N^{*}\to\infty}(\sigma_{j}^{2}/N_{i})=\operatorname*{lim}_{N^{*}\to\infty}\sigma_{j}^{2}(r)=0,$$

it is straightforward to see that

$$\operatorname*{lim}_{N^{*}\to\infty}P(\mathbf{M}_{i,j}>\mathbf{M}_{k,j})=\mathbb{I}(\mu_{j}>0),\ \forall\ i\in\mathcal{I}_{b},\forall\ k\in\mathcal{I}_{m},$$

which further indicates that

$$\begin{array}{lll}\lim\limits_{N^*\to\infty}E_{bj}=\lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n_1+1}{2},&\lim\limits_{N^*\to\infty}E_{mj}=\lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+n_1+1}{2},&\text{if }\mu_j>0,\\ \lim\limits_{N^*\to\infty}E_{bj}=\lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n+n_0+1}{2},&\lim\limits_{N^*\to\infty}E_{mj}=\lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{n_0+1}{2},&\text{if }\mu_j<0,\end{array}$$

and

$$\begin{array}{ll}\lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}^2_{i,j})=S^2_{[1,n_1]}\cdot\mathbb{I}(i\in\mathcal{I}_b)+S^2_{[n_1+1,n]}\cdot\mathbb{I}(i\in\mathcal{I}_m)&\text{if }\mu_j>0,\\ \lim\limits_{N^*\to\infty}\mathbb{E}(\mathbf{R}^2_{i,j})=S^2_{[1,n_0]}\cdot\mathbb{I}(i\in\mathcal{I}_m)+S^2_{[n_0+1,n]}\cdot\mathbb{I}(i\in\mathcal{I}_b)&\text{if }\mu_j<0,\end{array}$$

where $S^2_{[a,b]}=\frac{1}{b-a+1}\sum_{k=a}^{b}k^2$. Therefore, we have

$$\bar{\mu}_{b}=\lim_{N^*\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{bj}=\rho\cdot\frac{n_1+1}{2}+(1-\rho)\cdot\frac{n+n_0+1}{2},\qquad\bar{\mu}_{m}=\lim_{N^*\to\infty}\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{mj}=\rho\cdot\frac{n+n_1+1}{2}+(1-\rho)\cdot\frac{n_0+1}{2},$$

where $\rho=\lim_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}\mathbb{I}(\mu_j>0)$. Define $\bar{\mu}_i=\bar{\mu}_m\cdot\mathbb{I}(i\in\mathcal{I}_m)+\bar{\mu}_b\cdot\mathbb{I}(i\in\mathcal{I}_b)$. Then

$$\begin{array}{rcl}\lim\limits_{N^*\to\infty}\lim\limits_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{ij}&=&\lim\limits_{N^*\to\infty}\lim\limits_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}\mathbb{E}(\mathbf{R}_{i,j}-\bar{\mu}_i)^2\\ &=&\lim\limits_{p\to\infty}\lim\limits_{N^*\to\infty}\frac{1}{p}\sum_{j=1}^{p}\left[\mathbb{E}(\mathbf{R}^2_{i,j})-2\bar{\mu}_i\mathbb{E}(\mathbf{R}_{i,j})+(\bar{\mu}_i)^2\right]\\ &=&\left[\bar{\tau}_m-(\bar{\mu}_m)^2\right]\cdot\mathbb{I}(i\in\mathcal{I}_m)+\left[\bar{\tau}_b-(\bar{\mu}_b)^2\right]\cdot\mathbb{I}(i\in\mathcal{I}_b),\end{array}$$

where

$$\begin{array}{c}{{\bar{\tau}_{b}=\rho\cdot S_{[1,n_{1}]}^{2}+(1-\rho)\cdot S_{[n_{0}+1,n]}^{2},}}\\ {{\bar{\tau}_{m}=\rho\cdot S_{[n_{1}+1,n]}^{2}+(1-\rho)\cdot S_{[1,n_{0}]}^{2}.}}\end{array}$$

According to Theorem 2.1,

$$\begin{array}{r c l}{{\bar{s}_{b}^{2}}}&{{=}}&{{\operatorname*{lim}_{p\to\infty}\operatorname*{lim}_{N^{*}\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{b j}=\bar{\tau}_{b}-(\bar{\mu}_{b})^{2},}}\\ {{\bar{s}_{m}^{2}}}&{{=}}&{{\operatorname*{lim}_{p\to\infty}\operatorname*{lim}_{N^{*}\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{m j}=\bar{\tau}_{m}-(\bar{\mu}_{m})^{2}.}}\end{array}$$

This completes the proof.

## E Proof For The Mean Shift Attack

Proof. According to Theorem 2.1, we only need to compute $\bar{\mu}_b$, $\bar{\mu}_m$, $\bar{s}^2_b$ and $\bar{s}^2_m$ under the mean shift attack. Under the mean shift attack, all the malicious gradients are inserted at a position that depends on $z$. More specifically, for a relatively large $n$, the gradient values from the benign nodes are approximately normally distributed; therefore, on average, a proportion $\Phi(z)$ of the benign nodes have higher gradient values than the malicious nodes. First, we derive the property in terms of the first moment. Denote $\alpha = \lfloor n_1\Phi(z)\rfloor$.
For a benign node, we have

$$\operatorname*{lim}_{N^*\to\infty}\operatorname*{lim}_{n\to\infty}E_{b j}=\operatorname*{lim}_{N^*\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j})=\frac{1}{n_{1}}\left(\sum_{k=1}^{\alpha}k+\sum_{s=n_{0}+1+\alpha}^{n}s\right)=\frac{n+1}{2}+\frac{n_{0}}{n_{1}}\left(\frac{n_{1}}{2}-\alpha\right).$$

For a malicious node, we have

$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}E_{m j}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j})={\frac{(\alpha+1)+(\alpha+n_{0})}{2}}=\alpha+{\frac{1+n_{0}}{2}}.$$

Therefore, according to Theorem 2.1,

$$\bar{\mu}_{b}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{b j}=\frac{n+1}{2}+\frac{n_{0}}{n_{1}}\left(\frac{n_{1}}{2}-\alpha\right),\qquad\bar{\mu}_{m}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}E_{m j}=\alpha+\frac{1+n_{0}}{2}.$$

Now we derive the property in terms of the second moment. For a benign node, we have

$$\lim_{N^*\to\infty}\lim_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j}^{2})=\frac{1}{n_{1}}\left(\sum_{k=1}^{\alpha}k^{2}+\sum_{s=n_{0}+1+\alpha}^{n}s^{2}\right)=\frac{1}{n_{1}}\left(\tau(n)+\tau(\alpha)-\tau(\alpha+n_{0})\right),$$

where $\tau(\cdot)$ is the 'sum of squares' function, i.e., $\tau(n)=\sum_{k=1}^{n}k^{2}$. For a malicious node, we have

$$\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\mathbb{E}(\mathbf{R}_{i,j}^{2})=\left(\alpha+{\frac{1+n_{0}}{2}}\right)^{2}.$$

Therefore, according to Theorem 2.1,

$$\bar{s}_{b}^{2}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{b j}=\frac{1}{n_{1}}\left(\tau(n)+\tau(\alpha)-\tau(\alpha+n_{0})\right)-\bar{\mu}_{b}^{2},\qquad\bar{s}_{m}^{2}=\operatorname*{lim}_{N^{*}\to\infty}\operatorname*{lim}_{n\to\infty}\operatorname*{lim}_{p\to\infty}\frac{1}{p}\sum_{j=1}^{p}V_{m j}=0.$$

This completes the proof.

## F Neural Network Configurations

We train these models for 25 epochs with a batch size of 10 and an SGD optimizer with a learning rate of 0.01 and momentum of 0.5. The accuracy of the model is evaluated on a holdout set of 1000 samples.

## F.1 Fashion-Mnist, Mnist And Qmnist

- Layer 1: 1 ∗ 16 ∗ 5, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Layer 2: 16 ∗ 32 ∗ 5, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Output: 10 Classes, Linear.

## F.2 Cifar-10

- Layer 1: 1 ∗ 32 ∗ 3, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Layer 2: 32 ∗ 32 ∗ 3, 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.
- Output: 10 Classes, Linear.
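For illustration, a PyTorch sketch of the F.1 architecture; the padding, pooling size and resulting flattened dimension are our assumptions, as they are not stated above:

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two blocks of Conv2d -> BatchNorm -> ReLU -> MaxPool, then a linear output (Appendix F.1)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # Layer 1: 1 -> 16 channels, 5x5 kernel
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # Layer 2: 16 -> 32 channels, 5x5 kernel
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # 28x28 inputs -> 7x7 feature maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

Training would then use `torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)` with a batch size of 10 for 25 epochs, matching the settings above.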
## G Metrics

The metrics used in Section 4 to evaluate the performance of the defense mechanisms are defined as follows:

$$\mathrm{Precision}={\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}},\qquad\mathrm{Accuracy}={\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{FN}+\mathrm{TN}}},\qquad\mathrm{Recall}={\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}},\qquad\mathrm{F1}={\frac{2\times\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}}.$$

## H Accuracy Of The Global Model Under Different Attacks

Tables 4 and 5 present the numeric accuracy of each experimental configuration at the 25th epoch.

## I Mandera Performance With Different Clustering Algorithms

In this section, Figure 10 demonstrates that the discriminating performance of MANDERA remains robust when hierarchical clustering or Gaussian mixture models are used in place of K-means on the FASHION-MNIST dataset.

## J Model Losses On Cifar-10, Fashion-Mnist And Mnist Data

Figures 11–13 present the model losses accompanying the model prediction performance for CIFAR-10, FASHION-MNIST and MNIST-Digits respectively, previously seen in Section 4.

## K Model Losses On Qmnist Data

Figure 14 presents the model loss accompanying the model prediction performance of QMNIST previously seen in Section 4.

Table 4: FASHION-MNIST model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under attack. Note "NO-attack" is the baseline, where no attack is conducted, and n0 denotes the number of malicious nodes among 100 nodes.

| Attack | Defence | n0 = 5 | n0 = 10 | n0 = 15 | n0 = 20 | n0 = 25 | n0 = 30 |
|--------|-----------|--------|---------|---------|---------|---------|---------|
| GA | Krum | 83.66 | 84.13 | 84.09 | 83.30 | 84.22 | 82.32 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.80 | 87.80 | 87.79 | 87.73 | 87.67 | 87.69 |
| | Median | 87.73 | 87.76 | 87.73 | 87.70 | 87.72 | 87.70 |
| | Trim-mean | 87.85 | 87.78 | 87.75 | 87.74 | 87.72 | 87.73 |
| | MANDERA | 87.81 | 87.83 | 87.82 | 87.77 | 87.80 | 87.76 |
| | FLTrust | 66.13 | 36.35 | 50.20 | 17.85 | 16.00 | 9.66 |
| ZG | Krum | 83.56 | 83.57 | 84.11 | 84.33 | 84.10 | 84.30 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 86.88 | 87.38 | 87.49 | 87.45 | 87.48 | 87.38 |
| | Median | 87.36 | 86.91 | 86.20 | 85.33 | 84.07 | 82.45 |
| | Trim-mean | 87.13 | 86.57 | 85.67 | 84.61 | 83.06 | 81.48 |
| | MANDERA | 87.79 | 87.81 | 87.84 | 87.72 | 87.76 | 87.78 |
| | FLTrust | 81.59 | 83.58 | 79.41 | 80.62 | 79.00 | 74.01 |
| SF | Krum | 84.49 | 84.71 | 84.43 | 83.58 | 83.61 | 83.72 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.60 | 87.64 | 87.62 | 87.50 | 87.47 | 87.35 |
| | Median | 87.40 | 86.91 | 86.21 | 85.36 | 84.11 | 82.31 |
| | Trim-mean | 87.48 | 86.97 | 86.20 | 84.92 | 83.08 | 81.20 |
| | MANDERA | 87.85 | 87.79 | 87.82 | 87.79 | 87.77 | 87.74 |
| | FLTrust | 86.96 | 85.97 | 84.55 | 76.92 | 75.72 | 76.90 |
| MS | Krum | 87.82 | 87.77 | 87.66 | 87.50 | 87.36 | 86.89 |
| | NO-attack | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 | 87.83 |
| | Bulyan | 87.81 | 87.78 | 87.75 | 87.75 | 87.60 | 87.21 |
| | Median | 87.75 | 87.78 | 87.69 | 87.52 | 87.26 | 86.99 |
| | Trim-mean | 87.81 | 87.79 | 87.76 | 87.73 | 87.61 | 87.33 |
| | MANDERA | 87.81 | 87.78 | 87.78 | 87.79 | 87.71 | 87.79 |
| | FLTrust | 87.77 | 87.75 | 87.78 | 87.77 | 87.73 | 87.73 |
Table 5: CIFAR-10 model accuracy at the 25th epoch. The **bold** highlights the best defense strategy under attack. Note "NO-attack" is the baseline, where no attack is conducted, and n0 denotes the number of malicious nodes among 100 nodes.

| Attack | Defence | n0 = 5 | n0 = 10 | n0 = 15 | n0 = 20 | n0 = 25 | n0 = 30 |
|--------|-----------|--------|---------|---------|---------|---------|---------|
| GA | Krum | 47.66 | 47.16 | 47.18 | 47.26 | 47.25 | 46.77 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.69 | 55.85 | 55.67 | 55.63 | 55.46 | 55.22 |
| | Median | 55.47 | 55.53 | 55.47 | 55.40 | 55.29 | 55.22 |
| | Trim-mean | 55.77 | 55.72 | 55.56 | 55.50 | 55.43 | 55.31 |
| | MANDERA | 55.74 | 55.69 | 55.63 | 55.65 | 55.76 | 55.69 |
| | FLTrust | 19.66 | 27.54 | 11.99 | 9.21 | 9.73 | 9.96 |
| ZG | Krum | 46.85 | 46.84 | 47.96 | 47.13 | 47.12 | 47.53 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 52.30 | 53.87 | 54.28 | 54.36 | 54.35 | 54.10 |
| | Median | 54.06 | 52.18 | 50.18 | 48.01 | 44.89 | 38.08 |
| | Trim-mean | 53.34 | 51.22 | 49.14 | 46.45 | 42.02 | 34.36 |
| | MANDERA | 55.77 | 55.69 | 55.78 | 55.65 | 55.72 | 55.56 |
| | FLTrust | 48.05 | 39.21 | 39.44 | 44.25 | 40.27 | 39.49 |
| SF | Krum | 48.11 | 47.79 | 46.93 | 47.89 | 47.59 | 47.13 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.30 | 54.99 | 54.86 | 54.68 | 54.43 | 54.05 |
| | Median | 53.96 | 52.29 | 50.49 | 47.89 | 44.93 | 37.22 |
| | Trim-mean | 54.37 | 52.40 | 49.97 | 47.30 | 42.32 | 33.76 |
| | MANDERA | 55.78 | 55.69 | 55.62 | 55.55 | 55.67 | 55.56 |
| | FLTrust | 54.18 | 50.21 | 46.39 | 44.45 | 36.19 | 34.39 |
| MS | Krum | 55.60 | 55.23 | 54.51 | 53.79 | 52.31 | 50.54 |
| | NO-attack | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 | 55.78 |
| | Bulyan | 55.68 | 55.62 | 55.37 | 54.98 | 54.26 | 52.10 |
| | Median | 55.47 | 55.20 | 54.55 | 53.72 | 52.17 | 50.55 |
| | Trim-mean | 55.64 | 55.59 | 55.38 | 55.09 | 54.29 | 52.32 |
| | MANDERA | 55.65 | 55.77 | 55.72 | 55.62 | 55.66 | 55.63 |
| | FLTrust | 55.81 | 55.64 | 55.62 | 55.42 | 55.09 | 54.65 |

![21_image_0.png](21_image_0.png)
![21_image_1.png](21_image_1.png)

(b) FASHION-MNIST accuracy

Figure 9: Model accuracy at each epoch of training; each curve represents a different defense against the Byzantine attacks.

![22_image_0.png](22_image_0.png)

(a) Gaussian mixture model.

![22_image_1.png](22_image_1.png)

(b) Hierarchical clustering.

Figure 10: Classification performance of our proposed approach MANDERA (Algorithm 1) with other clustering algorithms under four types of attack for FASHION-MNIST data. GA: Gaussian attack; ZG: Zero-gradient attack; SF: Sign-flipping; and MS: mean shift attack. The boxplots bound the 25th (Q1) and 75th (Q3) percentiles, with the central line representing the 50th percentile (median). The end points of the whiskers represent Q1−1.5(Q3−Q1) and Q3+1.5(Q3−Q1), respectively.

![23_image_0.png](23_image_0.png)

Figure 11: Model loss for CIFAR-10 data at each epoch of training; each curve represents a different defense against the Byzantine attacks.

![23_image_2.png](23_image_2.png)
![23_image_1.png](23_image_1.png)

Figure 12: Model loss for FASHION-MNIST data at each epoch of training; each curve represents a different defense against the Byzantine attacks.

![24_image_0.png](24_image_0.png)

Figure 13: Model loss for MNIST-Digits data at each epoch of training; each curve represents a different defense against the Byzantine attacks.
![24_image_1.png](24_image_1.png)

Figure 14: QMNIST model loss.

Figure 15: Model loss at each epoch of training; each curve represents a different defense against the Byzantine attacks.

## References

Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 2938–2948. PMLR, 2020.

Gilad Baruch, Moran Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning. *Advances in Neural Information Processing Systems*, 32, 2019.

Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/f4b9ec30ad9f68f89b29639786cb62ef-Paper.pdf.

Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. FLTrust: Byzantine-robust federated learning via trust bootstrapping. *arXiv preprint arXiv:2012.13995*, 2020.

Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Provably secure federated learning against malicious clients. In *Proceedings of the AAAI Conference on Artificial Intelligence*, 2021.

Yudong Chen, Lili Su, and Jiaming Xu. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. *Proc. ACM Meas. Anal. Comput. Syst.*, 1(2), December 2017. doi: 10.1145/3154503. URL https://doi.org/10.1145/3154503.

Zheyi Chen, Pu Tian, Weixian Liao, and Wei Yu. Zero knowledge clustering based adversarial mitigation in heterogeneous federated learning. *IEEE Transactions on Network Science and Engineering*, 8(2):1070–1083, 2021. doi: 10.1109/TNSE.2020.3002796.

Li Deng. The MNIST database of handwritten digit images for machine learning research. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.

Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. Local model poisoning attacks to Byzantine-robust federated learning. In *29th USENIX Security Symposium (USENIX Security 20)*, pp. 1605–1622, 2020.

Chris Fraley and Adrian E Raftery. Model-based clustering, discriminant analysis, and density estimation. *Journal of the American Statistical Association*, 97(458):611–631, 2002.

Rachid Guerraoui, Sébastien Rouault, et al. The hidden vulnerability of distributed learning in Byzantium. In *International Conference on Machine Learning*, pp. 3521–3530. PMLR, 2018.

Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *Foundations and Trends in Machine Learning*, 14(1–2):1–210, 2021.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine generals problem. In *Concurrency: The Works of Leslie Lamport*, pp. 203–226. ACM, 2019.

Suyi Li, Yong Cheng, Wei Wang, Yang Liu, and Tianjian Chen. Learning to detect malicious clients for robust federated learning. *arXiv preprint arXiv:2002.00211*, 2020.

Hsiao-Ying Lin and Wen-Guey Tzeng. An efficient solution to the millionaires' problem based on homomorphic encryption. In *International Conference on Applied Cryptography and Network Security*, pp. 456–466. Springer, 2005.
Jinhyun So, Başak Güler, and A. Salman Avestimehr. Byzantine-resilient secure federated learning. *IEEE Journal on Selected Areas in Communications*, 39(7):2168–2181, 2021. doi: 10.1109/JSAC.2020.3041404.

Jacob Steinhardt. *Robust Learning: Information Theory and Algorithms*. PhD thesis, Stanford University, 2018.

Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu. Data poisoning attacks against federated learning systems. In *European Symposium on Research in Computer Security*, pp. 480–501. Springer, 2020.

Zhaoxian Wu, Qing Ling, Tianyi Chen, and Georgios B Giannakis. Byrd-SAGA. https://github.com/MrFive5555/Byrd-SAGA, 2020a.

Zhaoxian Wu, Qing Ling, Tianyi Chen, and Georgios B Giannakis. Federated variance-reduced stochastic gradient descent with robustness to Byzantine attacks. *IEEE Transactions on Signal Processing*, 68:4583–4596, 2020b.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.

Cong Xie, Sanmi Koyejo, and Indranil Gupta. Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance. In *International Conference on Machine Learning*, pp. 6893–6901. PMLR, 2019.

Cong Xie, Sanmi Koyejo, and Indranil Gupta. Zeno++: Robust fully asynchronous SGD. In *International Conference on Machine Learning*, pp. 10495–10503. PMLR, 2020.

Chhavi Yadav and Léon Bottou. Cold case: The lost MNIST digits. In *Advances in Neural Information Processing Systems 32*. Curran Associates, Inc., 2019.

Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In *International Conference on Machine Learning*, pp. 5650–5659. PMLR, 2018.

Lan Zhang, Xiang-Yang Li, Yunhao Liu, and Taeho Jung. Verifiable private multi-party computation: ranging and ranking. In *2013 Proceedings IEEE INFOCOM*, pp. 605–609. IEEE, 2013.