
Federated Learning With Heterogeneous Differential Privacy

Anonymous authors

Paper under double-blind review

Abstract

Federated learning (FL) takes a first step towards privacy-preserving machine learning by training models while keeping client data local. Models trained using FL may still indirectly leak private client information through model updates during training. Differential privacy (DP) may be employed on model updates to provide privacy guarantees within FL, typically at the cost of degraded performance of the final trained model. Both non-private FL and DP-FL can be solved using variants of the federated averaging (FedAvg) algorithm. In this work, we consider a heterogeneous DP setup where clients require varying degrees of privacy guarantees. First, we analyze the optimal solution to the federated linear regression problem with heterogeneous DP in a Bayesian setup. We find that, unlike the non-private setup, where the optimal solution for homogeneous data amounts to a single global solution for all clients learned through FedAvg, the optimal solution for each client in this setup would be a personalized one even for homogeneous data. We also analyze the privacy-utility trade-off for this problem, where we characterize the gain obtained from heterogeneous privacy when some clients opt for less stringent privacy guarantees. We propose a new algorithm for FL with heterogeneous DP, referred to as FedHDP, which employs personalization and weighted averaging at the server using the privacy choices of clients, to achieve better performance on clients' local models. Through numerical experiments, we show that FedHDP provides up to a 9.27% performance gain compared to the baseline DP-FL for the considered datasets where 5% of clients opt out of DP. Additionally, we show a gap of up to 3.49% in the average performance of local models between non-private and private clients, empirically illustrating that the baseline DP-FL might incur a large utility cost when not all clients require the stricter privacy guarantees.

1 Introduction

The abundance of data and advances in computation infrastructure have enabled the training of large machine learning (ML) models that efficiently solve a host of applications. It is expected that the majority of the generated data will be stored on devices with limited computational capabilities. To improve the privacy of user data and to reduce the amount of data transmission over networks in training ML models, McMahan et al. (2017) proposed the federated learning (FL) paradigm to train a model using decentralized data at clients. Several prior works on FL algorithms have been proposed in the literature to overcome various issues that arise in realistic FL setups (Kairouz et al., 2019; Li et al., 2020; Wang et al., 2021), especially with respect to data heterogeneity (Konečný et al., 2016; Zhao et al., 2018; Corinzia et al., 2019; Hsu et al., 2019; Karimireddy et al., 2020; Reddi et al., 2020), and device dropout and communication cost (Li et al., 2018; Zhu & Jin, 2019; Wang et al., 2020; Al-Shedivat et al., 2020).

Despite the fact that client data is kept on device in FL, the trained model at the central server is still vulnerable to various privacy attacks, such as membership inference attacks (Shokri et al., 2017) and model inversion attacks (Fredrikson et al., 2015). Privacy-preserving variations of FL algorithms have been proposed in the literature, especially building on the concept of differential privacy (DP) to bound the amount of information leakage (Dwork et al., 2014), while noting that using DP typically causes unavoidable degradation in performance. DP may be employed to provide privacy guarantees at different granularities: sample level (local DP), client level (global DP), or a set of clients (considered in this work), depending on the application at hand. Several prior works utilized DP to provide privacy guarantees for FL algorithms. For example, (Truex et al., 2020; Sun et al., 2020; Kim et al., 2021; Song et al., 2015) apply DP mechanisms at clients to ensure local DP guarantees, where clients have complete control over the amount of privacy they desire. On the other hand, (Geyer et al., 2017; McMahan et al., 2018; Andrew et al., 2019; Wei et al., 2020; Bietti et al., 2022) apply DP mechanisms at the server to ensure global DP guarantees for all clients. Applying DP typically causes some degradation in utility, i.e., the model's performance degrades as the privacy budget gets smaller (Alvim et al., 2011; Sankar et al., 2013; Makhdoumi et al., 2014; Calmon et al., 2015).

Other works include utilizing DP in cross-silo federated learning setups, such as (Lowy & Razaviyayn, 2021; Liu et al., 2022). These works commonly assume that the DP requirements of clients are homogeneous, which as we shall see may be overly strict and cause unnecessary degradation in performance when DP requirements are heterogeneous.

The concept of heterogeneous DP has been studied in the literature. Avent et al. (2017); Beimel et al. (2019) consider a hybrid model that combines local DP with global DP and gives clients the option to opt into either of these notions. A blender process is considered by Avent et al. (2017) for computing heavy hitters, where some clients opt into local DP while the remaining opt into global DP. Drawbacks of these works include the assumption that clients' data are IID, as well as the use of local DP, which requires a large number of samples at each client. These assumptions make their approach inapplicable in FL setups: clients' data in FL are non-IID, and the relatively small number of samples per client requires either increasing the variance of the added noise or relaxing the privacy leakage budget, leading to either a large degradation in performance or a higher privacy leakage budget. Another related line of work (Zhou et al., 2020; Ferrando et al., 2021; Liu et al., 2021; Amid et al., 2022) studies combining public and private datasets to improve the utility of the model. However, it is not clear how to extend these approaches to FL setups, where clients participate in training arbitrarily and only have access to their local, non-IID datasets. Finally, some theoretical results on heterogeneous DP have been derived in (Alaggan et al., 2016; Jorgensen et al., 2015), where mechanisms were designed for this case.

Heterogeneity is a central feature of FL. Data heterogeneity makes it hard to train a single global model that can effectively serve all clients (Li et al., 2020), which can be remedied via model personalization (Smith et al., 2017; Wang et al., 2019; Arivazhagan et al., 2019; Khodak et al., 2019; Mansour et al., 2020; Fallah et al., 2020; Deng et al., 2020; Dinh et al., 2020; Li et al., 2021). Another type of heterogeneity is system heterogeneity, where devices differ in characteristics such as connectivity, computational power, and battery capabilities (Li et al., 2018). Solutions to system heterogeneity include designing algorithms that can tolerate device dropout, reduce communication cost, or reduce computation cost (Caldas et al., 2018; Gu et al., 2021; Horvath et al., 2021; Li et al., 2018). In this work, we study heterogeneity along the privacy axis, which is relatively unexplored in FL. We theoretically study heterogeneous privacy requirements and show that model personalization can be used to learn good (even optimal) models in FL with heterogeneous DP.

Organization & Our Contributions

In this paper, we consider a new setup for privacy-preserving FL where privacy requirements are heterogeneous across clients. We show that clients with less strict privacy requirements, even if they represent a small percentage of the overall population, can be leveraged to improve the performance of the global model learned via FL for all clients. Our contributions and the organization of the paper are as follows:

  • In Section 2, we propose a heterogeneous setup for privacy in FL. Instead of granting the same level of privacy for all clients, each client is given the option to choose their level of privacy. Moreover, we formally pose an optimization objective for solving the problem from every client's point of view.

  • In Section 3, we theoretically study heterogeneous DP in the simplified Bayesian setup of federated linear regression, introduced by Li et al. (2021), where clients are either private or non-private (i.e., two levels of privacy). Unlike the case of non-private FL with homogeneous data¹, where the Bayes optimal solution is a single global model that can be learned via vanilla federated averaging, the Bayes optimal solution in differentially-private FL requires personalization, even in the case of homogeneous DP. Further, we characterize the privacy-utility trade-off observed at clients.

  • In Section 4, we formally propose the FL with heterogeneous DP algorithm, referred to as FedHDP, for the heterogeneous privacy setup. The FedHDP algorithm extends the Bayes optimal solution for federated linear regression to be applicable to more realistic scenarios.

¹Homogeneous data refers to the case where the data for all clients is independent and identically distributed (IID).

  • In Section 5, we provide experimental results of the FedHDP algorithm using various synthetic and realistic federated datasets from TensorFlow Federated (TFF) (Google, 2019) using reasonable privacy parameters, where the privacy choices presented to clients are either to be private or non-private.

Although the design guarantees of FedHDP do not apply in these complex settings, we experimentally show that it provides significant gains compared to the DP-FedAvg algorithm (Andrew et al., 2019) and other stronger variants of it.

2 Privacy Guarantees Within Federated Learning

We briefly describe the federated learning (FL) setup together with existing privacy-preserving algorithms and their privacy guarantees.

Federated learning: FL consists of a central server that wishes to learn a global model for a set of clients denoted by C. The clients cooperate with the server to learn a model over multiple training rounds while keeping their data on device. Each client c_j ∈ C has a local loss f_j(·) and a local dataset D_j = {d_{j,1}, d_{j,2}, ..., d_{j,n_j}}, where d_{j,i} is the i-th sample at the j-th client. During communication round t, the server sends the current model state, i.e., θ^t, to the set of available clients in that round, denoted by C^t, who take multiple gradient steps on the model using their own local datasets to minimize their local loss functions f_j(·).

The clients then return the updated models to the server, who aggregates them, e.g., by taking the average, to produce the next model state θ^{t+1}. This general procedure describes a large class of algorithms for learning global models with FL, such as federated averaging (FedAvg) (McMahan et al., 2017).

Privacy-preserving FL: To design privacy-preserving FL algorithms using DP, certain modifications to the baseline FedAvg algorithm are required. In particular, the following modifications are introduced: clipping and noising. Considering client-level privacy, the averaging operation at the server is the target of such modifications. Suppose that clients are selected at each round from the population of all clients of size N with a certain probability denoted by q. First, each client update is clipped to have a norm at most S; then the average is computed, followed by adding Gaussian noise with mean zero and covariance σ²I = z²(S/(qN))²I. The variable z is referred to as the noise multiplier, which dictates the achievable values of (ε, δ)-DP. Training the model through multiple rounds increases the amount of leaked information. Fortunately, the moment accountant method of Abadi et al. (2016) can be used to provide a tighter estimate of the resulting DP parameters (ε, δ). This method achieves client-level DP. It is worth noting that the noise can instead be added on the client side, as long as the desired noise variance is achieved at the output of the aggregator at the server, which still yields the desired client-level DP. Selecting the clipping threshold as well as the noise multiplier is essential to obtaining useful models with meaningful privacy guarantees.
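To make the clip-and-noise aggregation above concrete, the following is a minimal NumPy sketch. The function name, the guard against zero-norm updates, and the fixed-denominator estimator are our illustrative choices, not a prescribed implementation; production implementations live in libraries such as TensorFlow Privacy and TensorFlow Federated.

```python
import numpy as np

def dp_fedavg_aggregate(updates, S, z, q, N, rng=np.random.default_rng(0)):
    """Minimal sketch of the server-side clip-and-noise step described above.

    Each update is clipped to norm at most S, the clipped sum is divided by
    the expected cohort size qN, and Gaussian noise with standard deviation
    z * S / (qN) is added, matching sigma^2 I = z^2 (S / (qN))^2 I.
    """
    clipped = [u * min(1.0, S / max(np.linalg.norm(u), 1e-12)) for u in updates]
    avg = np.sum(clipped, axis=0) / (q * N)
    return avg + rng.normal(0.0, z * S / (q * N), size=avg.shape)
```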

One issue that comes up during the training of DP models is that the norm of client updates can either increase or decrease over time; if it deviates significantly from the clipping norm, the algorithm may slow down or diverge. Andrew et al. (2019) presented a solution that privately and adaptively updates the clipping norm during each round of communication in FL, based on feedback from clients on whether or not their update norm exceeded the clipping norm. We consider this as the baseline privacy-preserving FL algorithm and refer to it in the rest of the paper as DP-FedAvg (Andrew et al., 2019). The case where no noise is added is the baseline non-private FL algorithm, which is referred to simply as Non-Private.
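A sketch of that adaptive update, assuming the geometric rule of Andrew et al. (2019) as it also appears in Algorithm 2 later (the function and variable names are ours):

```python
import numpy as np

def update_clip_norm(S, indicators, kappa, eta_b, z_b,
                     rng=np.random.default_rng(0)):
    """Privately estimate the fraction of updates whose norm stayed within
    the clipping norm S (indicators b_i in {0, 1}) and move S geometrically
    towards the target quantile kappa, as in Andrew et al. (2019)."""
    n = len(indicators)
    b_noisy = np.mean(indicators) + rng.normal(0.0, z_b / n)
    return S * np.exp(-eta_b * (b_noisy - kappa))
```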

One fundamental aspect of DP-FedAvg is that it provides an equal level of privacy to all clients. This is suitable when all clients have similar attitudes towards their own privacy in the FL setup. In other words, DP-FedAvg implicitly assumes that the same privacy level is required by all clients. This is in contrast to the heterogeneity inherent in FL setups, where different clients have different data, capabilities, and objectives, which extends to privacy choices as well. Next, we describe our proposed setup for FL with heterogeneous DP.

Proposed Setup: Heterogeneous Privacy Within Federated Learning

Let us describe the proposed setup for FL with heterogeneous DP. We first discuss the privacy guarantees in the setup, and then the objectives in such a setup from the server's and clients' points of view.

Privacy guarantees: Prior to training, each client c_i ∈ C chooses their differential privacy parameters (ε_i, δ_i). We investigate what the server and clients agree upon at the beginning of training an FL model in terms of privacy to formally define the considered privacy measures. Each client c_j, whose dataset D_j is disjoint from those of all other clients, requires the server to apply some randomized algorithm A_j(·), whose image is denoted as O_j, such that the following holds:

$$\operatorname{Pr}(A_{j}(\mathbf{D}_{j})\in O_{j})\leq e^{\epsilon_{j}}\operatorname{Pr}(A_{j}(\mathbf{D}_{e})\in O_{j})+\delta_{j},\tag{1}$$

where D_e is the empty dataset, and the relationship holds for all subsets O_j of the image. This achieves client-level privacy with parameters (ε_j, δ_j) from client c_j's point of view. Now, assume we have N clients, each with their own privacy requirements (ε_j, δ_j) for j ∈ [N], which should hold regardless of the choices made by any other client. Let A(·) denote the randomized algorithm given by the composition of all the A_j(·)'s; then the parallel composition property of DP states that A(·) is (ε_c, δ_c)-DP, satisfying

$$\operatorname{Pr}(A(\mathbf{D})\in O)\leq e^{\epsilon_{c}}\operatorname{Pr}(A(\mathbf{D}^{\prime})\in O)+\delta_{c},\tag{2}$$

where D contains the datasets of all clients, D′ contains the datasets of all clients but one, and O is the image of A(·); the relationship holds for all neighboring datasets D and D′ that differ by only one client and for all subsets O of the image. The parallel composition property of DP gives ε_c = max_i ε_i and δ_c = max_i δ_i. Next, considering our setup, let there be l sets of private clients C_i. Each client in the i-th set requires (ε_{p_i}, δ_{p_i})-DP, and, without loss of generality, assume that ε_{p_i} ≥ ε_{p_l} and δ_{p_i} ≥ δ_{p_l} for all i < l. This is the case we consider in this paper, where we apply a randomized algorithm A_{p_i}(·), whose image is denoted as O_{p_i}, to the dataset that includes all clients in the set C_i, and the following holds:

$$\operatorname{Pr}(A_{p_{i}}(\mathbf{D}_{p_{i}})\in O_{p_{i}})\leq e^{\epsilon_{p_{i}}}\operatorname{Pr}(A_{p_{i}}(\mathbf{D}_{p_{i}}^{\prime})\in O_{p_{i}})+\delta_{p_{i}},\tag{3}$$

where D_{p_i} contains the datasets of all clients in C_i and D′_{p_i} contains the datasets of all clients in that subset except one; the relationship holds for all neighboring datasets D_{p_i} and D′_{p_i} that differ by only one client and for all subsets O_{p_i} of the image.

Now, let us assume in the proposed heterogeneous DP setup that each client in C_i requires (ε_{p_i}, δ_{p_i})-DP in the sense of (1). As a result, the only way for DP-FedAvg to guarantee meeting the privacy requirement of the clients in C_l with (ε_{p_l}, δ_{p_l}) is to enforce (ε_{p_l}, δ_{p_l})-DP for all clients. In other words, DP-FedAvg needs to be (ε_{p_l}, δ_{p_l})-DP, i.e., it needs to apply the strictest privacy parameters to all clients in the sense of (2). In our setup, on the other hand, we can meet the privacy requirements of each set of clients by ensuring (ε_{p_i}, δ_{p_i})-DP for the clients in C_i, respectively, in the sense of (3). In other words, we only need to apply the appropriate DP algorithm, with its appropriate parameters, to each subset of clients to ensure the privacy requirements are met. This in turn means our setup satisfies the privacy requirement of each set of clients, which is the main target that both algorithms must achieve from the clients' point of view in terms of their desired privacy levels.

Objectives in FL setup: In terms of objectives in FL setups, the server's goal is to utilize clients' updates, which may be subject to specific DP conditions, by averaging them to produce the next model state. Clients, on the other hand, have a different objective when it comes to their performance measures. Each client's goal is to minimize its loss function given all clients' datasets, including its own. However, since clients do not have access to other clients' raw data, a client instead uses the information in the differentially-private updates computed from the other clients' datasets, as well as its own local update, to reach a solution. Assume that client c_j observes the DP-statistics {ψ_i : i ∈ [N]\j} of all other clients' datasets, which are the outputs of a randomized function satisfying the privacy condition in (1), as well as its own non-DP dataset D_j. Then the client's Bayes optimal solution is

$$\theta_{j}^{*}=\arg\min_{\hat{\theta}_{j}}\left\{\mathbb{E}_{\mathcal{D}_{j}}\left[\ell_{j}(\hat{\theta}_{j})\,\big|\,\{\psi_{i}:i\in[N]\backslash j\},\mathbf{D}_{j}\right]\right\}\tag{Local Bayes obj.}$$

where ℓ_j(·) is the loss function used to train a model for client c_j, and 𝒟_j is the true distribution of the dataset at client c_j. Notice that client c_j has access to its own dataset D_j and to the DP-statistics {ψ_i : i ∈ [N]\j} of the other clients' datasets. From client c_j's point of view, this objective is the Bayes optimal solution when observing all DP-statistics from other clients that are subject to their privacy constraints. It is important to note that (Local Bayes obj.) is not suited for a federated optimization framework, because the individual updates of other clients are not available at the client; only their aggregation, through a global model at the server, is. In practice, each client utilizes the global model θ̂ to optimize the following:

$$\widehat{\theta}_{j}^{*}=\arg\min_{\widehat{\theta}_{j}}\left\{\mathbb{E}_{\mathcal{D}_{j}}\left[\ell_{j}(\widehat{\theta}_{j})\,\big|\,\widehat{\theta},\mathbf{D}_{j}\right]\right\}.\tag{Local personalized federated obj.}$$

We notice that this solution is a form of personalization in FL, where clients no longer deploy the global model locally by default, but rather utilize it to derive local, preferably better, models that perform well on their own local datasets. In the remainder of this paper, we will demonstrate this approach's ability to learn good (even optimal, as we shall see in the next section) personalized local models compared to baseline private FL. Next, we will consider the proposed setup for a simplified federated problem known as federated linear regression.

3 Analyzing Heterogeneous Differential Privacy In Simplified Settings

In this section, we study the heterogeneous DP problem in a simplified linear regression setup inspired by the one proposed by Li et al. (2021). We consider a federated linear regression setup, where clients are interested in learning Bayes optimal models in the sense of (Local Bayes obj.). We briefly show that in this simplified setup the solution can be cast as a bi-level optimization problem, which can be solved as a personalized FL problem in the sense of (Local personalized federated obj.); this motivates our subsequent algorithm design.

We first consider the global estimate at the server and show that the proposed solution is Bayes optimal. Then, we consider the local estimates for all clients and show that the proposed solution is Bayes optimal for all clients when using appropriate values of the respective hyperparameters. We further characterize the privacy-utility trade-off gains of the Bayes optimal solution compared to variants of DP federated averaging (Andrew et al., 2019). The discussions in this section are brief; refer to the appendix for further and more comprehensive discussions.

3.1 Analyzing Heterogeneous Differential Privacy In Federated Linear Regression

We consider a learning setup consisting of N clients, each with a fixed number of data points, where the goal of each client is to estimate a d-dimensional linear regression coefficient vector. In linear regression, clients have observations of feature vectors, each of length d, and a response variable. The features combine linearly to produce the response variable, and the goal is to find the coefficient vector of length d. To capture the data heterogeneity found in federated learning setups, assume that the coefficient vectors are correlated across clients but not identical. Specifically, assume a fixed coefficient vector φ, drawn from some distribution, is to be estimated at the server, and assume the clients' coefficient vectors φ_j, which will be estimated at the clients, are correlated observations of this vector. This can be modeled by assuming the clients' vectors are noisy versions of the coefficient vector at the server, where the noise is drawn from a Gaussian distribution whose covariance controls the level of data heterogeneity across clients. Additionally, to ensure client-level privacy, assume the algorithm is required to provide a specified client-level privacy for each subset of clients by adding Gaussian noise with an appropriate covariance to the clients' updates. Without loss of generality, assume the privacy noise is added on the client side so that it achieves the desired level at the server after the aggregation of updates. Denote the estimate of φ_j at client c_j as φ̂_j, and its DP update sent to the server as ψ_j; then the server's and clients' goals are to minimize the Bayes risk (i.e., test error),

defined as follows:

$$\theta^{*}:=\arg\min_{\hat{\theta}}\left\{\mathbb{E}\left[\frac{1}{2}\|\hat{\theta}-\phi\|_{2}^{2}\,\Big|\,\psi_{1},...,\psi_{N}\right]\right\},\tag{4}$$

$$\theta_{j}^{*}:=\arg\min_{\hat{\theta}}\left\{\mathbb{E}\left[\frac{1}{2}\|\hat{\theta}-\phi_{j}\|_{2}^{2}\,\Big|\,\{\psi_{i}:i\in[N]\setminus j\},\hat{\phi}_{j}\right]\right\},\tag{Local Bayes obj.}$$

where (4) corresponds to the server's Bayes objective, given all clients' updates, while (Local Bayes obj.) is the client's Bayes objective, given its non-private estimate as well as the other clients' private updates. From now on, we refer to our solution and algorithm as FedHDP in the remainder of the paper. The algorithm's pseudocode for federated linear regression is described in Algorithm 1. For the federated linear regression setup, the server's goal is to find the following:

$$\widehat{\theta}^{*}:=\arg\min_{\widehat{\theta}}\left\{\frac{1}{2}\left\|\sum_{i\in[N]}w_{i}\psi_{i}-\widehat{\theta}\right\|_{2}^{2}\right\}.\tag{5}$$

However, the clients in the considered federated setup do not have access to the individual updates of other clients, but rather to the global estimate θ̂*. So, we have the local FedHDP objective

$$\widehat{\theta}_{j}^{*}:=\arg\min_{\widehat{\theta}}\left\{\frac{1}{2}\|\widehat{\theta}-\widehat{\phi}_{j}\|_{2}^{2}+\frac{\lambda_{j}}{2}\|\widehat{\theta}-\widehat{\theta}^{*}\|_{2}^{2}\right\},\tag{Local FedHDP obj.}$$

where λ_j is a regularization parameter that controls the closeness of the personalized model to the global model. Higher values of λ_j steer the personalized model towards the global model, while smaller values of λ_j steer it towards the local model at the client. Notice that (Local FedHDP obj.) is a special case of (Local personalized federated obj.) where personalization is performed through bi-level regularization. Refer to Appendix A.1 for the full description of all variables and parameters considered in the federated linear regression setup. Next, we provide a brief discussion of the optimality of FedHDP and state its convergence in this setup to the Bayes optimal solution for the server as well as the clients.
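Since (Local FedHDP obj.) is quadratic in θ̂, setting its gradient to zero gives the minimizer in closed form (a one-line check we include here; the full optimality analysis is in Appendix A.3):

$$(\widehat{\theta}-\widehat{\phi}_{j})+\lambda_{j}(\widehat{\theta}-\widehat{\theta}^{*})=0\;\Longrightarrow\;\widehat{\theta}_{j}^{*}=\frac{\widehat{\phi}_{j}+\lambda_{j}\widehat{\theta}^{*}}{1+\lambda_{j}},$$

so λ_j → ∞ recovers the global estimate θ̂* while λ_j → 0 recovers the purely local estimate φ̂_j, matching the role of λ_j described above.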

Algorithm 1 FedHDP: Federated Learning with Heterogeneous Differential Privacy (Linear Regression)

Inputs: θ^0, σ_c², γ², η = 1, {λ_j}_{j∈[N]}, r, ρ_np, N.
Outputs: θ*, {θ_j*}_{j∈[N]}

At server:
  for client c_j in C^t in parallel do
    ψ_j ← ClientUpdate(θ^t, c_j)
  end for
  θ* ← (Σ_{c_i∈C_np} ψ_i + r Σ_{c_i∈C_p} ψ_i) / (ρ_np N + r(1 − ρ_np)N)

At client c_j: ClientUpdate(θ^0, c_j):
  θ ← θ^0
  θ_j ← θ^0
  θ ← θ − η(θ − (1/n_s) Σ_i x_{j,i})
  θ_j* ← θ_j − η_j((θ_j − (1/n_s) Σ_i x_{j,i}) + λ_j(θ_j − θ))
  ψ ← θ + 1{c_j∈C_p} · N(0, (1 − ρ_np)N γ²)
  return ψ to server
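To make the moving parts concrete, the following self-contained NumPy sketch simulates one round of Algorithm 1 end to end. The generative model follows the description above (server vector φ, correlated client vectors φ_j, noisy observations x_{j,i}); the standard-normal prior, the noise scales, and the hyperparameter values are illustrative assumptions rather than the values analyzed in Appendix A, and the personalization step uses the closed-form minimizer of (Local FedHDP obj.) derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, n_s = 5, 100, 20            # dimension, clients, samples per client
sigma_c2, gamma2 = 0.1, 0.05      # heterogeneity and privacy-noise variances
rho_np, r, lam = 0.05, 0.5, 1.0   # non-private fraction, ratio, regularizer

# Generative model: clients' vectors are noisy versions of the server's phi.
phi = rng.normal(size=d)
phi_j = phi + rng.normal(0.0, np.sqrt(sigma_c2), (N, d))
x = phi_j[:, None, :] + rng.normal(0.0, 1.0, (N, n_s, d))

# ClientUpdate with eta = 1: the local estimate is the sample mean; private
# clients add Gaussian noise of variance (1 - rho_np) * N * gamma2.
is_private = rng.random(N) >= rho_np
phi_hat = x.mean(axis=1)
noise = rng.normal(0.0, np.sqrt((1 - rho_np) * N * gamma2), (N, d))
psi = phi_hat + is_private[:, None] * noise

# Server: weighted average that downweights private clients by the ratio r.
denom = rho_np * N + r * (1 - rho_np) * N
theta = (psi[~is_private].sum(axis=0) + r * psi[is_private].sum(axis=0)) / denom

# Personalization: closed-form minimizer of (Local FedHDP obj.) per client.
theta_j = (phi_hat + lam * theta) / (1 + lam)
print(np.linalg.norm(theta - phi), np.linalg.norm(theta_j - phi_j, axis=1).mean())
```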

3.1.1 Federated Linear Regression With Private And Non-Private Clients

In our discussion so far, we assumed that clients have multiple privacy levels to choose from. In realistic setups, clients are expected to be individuals who may not fully understand what each parameter means for their privacy. In fact, interpreting the meaning of ε in DP is hard in general (Dwork et al., 2019), and thus it is unrealistic to assume the average client can make their choice precisely. Therefore, the server needs to decide how these parameters are presented to clients. A special case we consider extensively in this paper is the case with two classes of private and non-private clients, with a fraction ρ_np of the clients being non-private. Clients who choose to be private, denoted by the subset C_p, are guaranteed a fixed (ε, δ)-DP, while clients who choose otherwise, denoted by the subset C_np, are not private. In fact, a practical approach to privacy from the server's point of view is to enable privacy by default for all clients and give each client the option to opt out of privacy if they desire to do so. The non-private choice can be suitable for different types of clients such as enthusiasts, beta testers, volunteers, and company employees, among others. Before we present our proposed solution, we discuss two major baselines that we consider in the remainder of the paper:

DP-FedAvg uses differentially private federated averaging (Andrew et al., 2019) as the global model, where all clients are guaranteed the same level of privacy regardless of their preference.

HDP-FedAvg ensures privacy for the set of private clients similar to DP-FedAvg; however, no noise is added to the non-private clients' updates. By design, HDP-FedAvg is a stronger variant of DP-FedAvg, included for a fairer comparison in a heterogeneous privacy setting.

FedHDP is the algorithm we propose in this paper. There are three important parameters introduced in the algorithm: r (the ratio of the weight assigned to private clients to that of the non-private ones on the server side), γ² (the variance of the noise added for privacy), and λ_j described earlier. Furthermore, utilizing the assumption of a diagonal covariance matrix, which was also made by Li et al. (2021), denote the noisy observations of the vector φ_j at client c_j by x_{j,i} for i = 1, ..., n_s. In the remainder of this section, we briefly analyze Algorithm 1 and discuss its Bayes optimality for the federated linear regression problem for a specific privacy parameter γ².

3.1.2 Optimal Global Estimate On The Server

The server's goal is to combine the updates received from clients such that the resulting noise variance is minimized, while ensuring the privacy of the set of private clients. To this end, we build on and extend the approach presented by Li et al. (2021) to find the optimal aggregator at the server. The server first computes two intermediate average values, for non-private and private clients, such that each achieves the desired level of privacy for its group. The server then combines these values to compute its estimate θ of the value of φ with the goal of minimizing the resulting estimation noise covariance. Lemma 1 in Appendix A.2 states that the optimal aggregator in this case is a weighted average of such updates. Expanding the weighted average, it can be expressed as Σ_{i∈[N]} w_i ψ_i. Considering the weights used in the weighted average, let us denote the ratio of the weights w_i dedicated to private clients to those of non-private clients by r = w_p/w_np. The aim here is to put more weight on the non-private clients than on the private clients, so that the information they provide is utilized more heavily in the global estimate. Lemma 2 in Appendix A.2 states the Bayes optimality of the proposed solution and derives the optimal value r* for the considered setup when substituting the corresponding values of these parameters.

3.1.3 Optimal Local Estimates On Clients

Now that we have stated the optimal server-side aggregation, we consider optimal learning strategies on the client side. In particular, we find that FedHDP achieves the Bayes optimal solution (Local Bayes obj.) for the local estimates at both the private and the non-private clients when using the optimal value of the ratio hyperparameter r* as well as the regularization parameters λ_j for each client. Lemma 3 in Appendix A.3 states the Bayes optimality of this solution and gives the optimal values of the regularization parameters λ_j*. It is worth mentioning that this result shows that FedHDP recovers (Local Bayes obj.), which is the best one could hope for even if the client had access to DP versions of all other clients' updates without the constraints that arise in FL, such as only having access to the global model. We also notice that the values of λ* are different for private and non-private clients. We recall that, in computing the expressions for the personalization parameters for all clients, we accounted for both data heterogeneity and privacy heterogeneity. In Table 1 we list a few important special cases of the FedHDP algorithm for the considered federated linear regression problem.

Table 1: Special cases of FedHDP in the federated linear regression.

| | No privacy | Homogeneous privacy |
| --- | --- | --- |
| Homogeneous data | FedAvg | DP-FedAvg+Ditto |
| Heterogeneous data | FedAvg+Ditto | DP-FedAvg+Ditto |

Finally, we state the optimality of the FedHDP solution for both estimates, i.e., local and global, using the appropriate learning rate, in Theorem 5 in Appendix A.4.

Remark: Although the heterogeneous DP problem is fundamentally different from robustness to data-poisoning attacks (Li et al., 2021), its solution bears resemblance to a recently-proposed personalization scheme known as Ditto (Li et al., 2021). FedHDP, in its general form, differs from Ditto in a number of major ways, and recovers it as a special case. First, the server-side aggregation in Ditto is vanilla FedAvg; in the proposed solution the server-side aggregation is no longer FedAvg, but rather a new aggregation rule that utilizes the privacy choices made by clients. Second, Ditto, whose setup includes two sets of clients, i.e., benign and malicious, is designed for robustness against malicious clients; hence, the performance on malicious clients is not considered. In our proposed setup, on the other hand, the sets of clients are sets with different levels of privacy; hence, measuring the performance across all sets of clients, i.e., clients with different privacy levels, is needed, and improving it is desired across all sets. Third, the server in Ditto is unaware of the status of the clients, i.e., whether or not they are malicious, while in the proposed setup the server is aware of the privacy choices made by clients and can hence give different weights to updates from private and non-private clients.

3.1.4 Privacy-Utility Trade-Off

Thus far, we have shown that for the problem of federated linear regression, the global estimate can benefit from the introduced setup of heterogeneous DP. A better global estimate would enable better performance on clients' devices in the federated linear regression setup, even when no personalization is utilized. However, a question may arise on whether clients have a utility cost if they choose to remain private compared to the case where they opt out and become non-private.

To answer this question, we argue that opting out helps the server produce a better global estimate, in addition to helping clients produce better personalized local estimates. In other words, clients that opt out can produce better personalized local estimates compared to the clients that remain private.

To illustrate the motivation for opting out, we conduct the federated linear regression experiment under two scenarios. In the first, client c_k remains private; in the second, c_k opts out of privacy and becomes non-private. For comparison, we provide the results of FedHDP with the optimal value r*, as well as of HDP-FedAvg+Ditto. The results of these experiments are shown in Figure 1. We can see that if the client is non-private, it exhibits improvements in its estimates when using the optimal value λ* for both algorithms, but the proposed FedHDP with the optimal value r* greatly outperforms HDP-FedAvg+Ditto. Additionally, in this problem, we can see that the optimal value λ*_np for non-private clients is always greater than or equal to the value λ*_p for private clients, which is due to the value of r being less than or equal to 1. In other words, non-private clients have more influence on the global estimate, and hence encouraging the local estimate to move closer to the global estimate is more meaningful than for private clients. Furthermore, this experiment illustrates an important trade-off between privacy and utility for each client: opting out of privacy improves performance, while maintaining privacy incurs degraded performance.

Figure 1: The effect of opting out on the personalized local model estimate for a linear regression problem as a function of λ, when employing (left) HDP-FedAvg+Ditto and (right) FedHDP. Note that σ_c² includes the non-IID parameter, and γ² is the variance of the privacy noise.

3.2 Extension To Federated Linear Regression With Multiple Privacy Levels

We presented brief results of the analysis for federated linear regression with two privacy levels, i.e., private and non-private, as a first step towards demonstrating the effectiveness of FedHDP. We present the complete analysis of the federated linear regression and extend the analysis to the following setups:

  • Federated linear regression with two privacy levels: In this extended setup, we have two subsets of clients C_1 and C_2, each having its own privacy requirement, γ_1² and γ_2², respectively. We perform an analysis of the new setup and derive the expressions for the optimal hyperparameters under a diagonal covariance matrix assumption; see Appendix A for details. Additionally, in Appendix B, we provide a summarized analysis of federated point estimation, where we highlight some insightful observations.

  • Federated linear regression beyond two privacy levels: In Appendix A.6, we consider the case with more than two privacy levels in federated linear regression, provide an analysis of this setup, and derive the optimal aggregator as well as the optimal regularization parameters for each set of clients. The solution is still achieved by the FedHDP algorithm, however now with different values of λ_j for each set of clients, and with a server-side weighted averaging that depends on the individual noise variances γ_i² of each set of clients (see the sketch below for the general shape of such a rule).
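As a purely illustrative sketch of how a variance-dependent rule can look, the following assumes a plain inverse-variance weighting; this is not the exact optimal rule derived in Appendix A.6, and the function name and example numbers are ours.

```python
import numpy as np

def group_weights(avg_variances):
    """Illustrative inverse-variance weights for combining l per-group
    averaged updates; avg_variances[i] is the total (estimation plus
    privacy) noise variance of group i's average. Groups with noisier,
    i.e., more strictly private, averages receive smaller weights."""
    w = 1.0 / np.asarray(avg_variances, dtype=float)
    return w / w.sum()

# Example: three privacy levels, the last non-private (least noisy average).
print(group_weights([0.4, 0.1, 0.02]))  # strictest group gets the least weight
```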

4 FedHDP: Federated Learning With Heterogeneous Differential Privacy

Now that we have found a Bayes optimal solution in the simplified federated linear regression setup (and some extensions of it), we build upon its ingredients to develop a general solution for FL with heterogeneous DP. We formally present the FedHDP algorithm and elaborate on its hyperparameters. The FedHDP algorithm, designed to take advantage of the aforementioned heterogeneous privacy setup, is described in Algorithm 2. As in the simplified setting, FedHDP utilizes DP with adaptive clipping, upweighting of less private clients on the server side, and a simple form of personalization.

Algorithm 2 FedHDP: Federated learning with heterogeneous DP (General Algorithm)

Inputs: model parameters θ^0, sensitivity S^0, learning rate η, personalized learning rate η_p, noise multipliers z, z_b, quantile κ, and factor η_b.
Outputs: θ^T, {θ_j}_{j∈[N]}

At server:
  for t = 0, 1, 2, ..., T − 1 do
    C^t ← sample N^t clients from C
    for client c_j in C^t in parallel do
      Δθ_j^t, b_j^t ← ClientUpdate(θ^t, c_j, S^t)
    end for
    for i ∈ [l] in parallel do
      N_i^t ← |C_i^t|, z_i^t ← z_i S^t / N_i^t
      Δθ̃_i^t ← (1/N_i^t) Σ_{c_j∈C_i^t} Δθ_j^t + N(0, (z_i^t)² I)
    end for
    Δθ^t ← Σ_{i∈[l]} w_i^t Δθ̃_i^t
    θ^{t+1} ← θ^t + Δθ^t
    S^{t+1} ← S^t · exp(−η_b((1/N^t) Σ_{i∈C^t} b_i^t + N(0, z_b²/(N^t)²) − κ))
  end for

At client c_j: ClientUpdate(θ^0, c_j, S):
  θ ← θ^0
  θ_j ← θ^0 (if not initialized)
  B ← batch the client's data D_j
  for epoch e = 1, 2, ..., E do
    for B in B do
      θ ← θ − η∇f_j(θ, B)
      θ_j ← θ_j − η_p(∇f_j(θ_j, B) + λ_j(θ_j − θ^0))
    end for
  end for
  Δθ ← θ − θ^0
  b ← 1{‖Δθ‖₂ ≤ S}
  return Clip(Δθ, S), b to server

Clip(θ, S): return θ · S / max(‖θ‖₂, S)

First, we introduce the notation used in the algorithm. The set of N clients C is split into subsets containing clients grouped according to their desired privacy levels, denoted by C_1, C_2, ..., C_l. The number of clients in subset C_i is denoted by N_i = |C_i|. The remaining hyperparameters in the algorithm are as follows: the noise multipliers z, z_b, the clipping sensitivity S, the learning rate at clients η, the personalized learning rate at clients η_p, the quantile κ, and the factor η_b. Also, the superscript (·)^t is used to denote a parameter during the t-th training round. During round t of training, no additional steps are required from the clients during the global model training. Clients train the received model using their local data and send back their clipped updates Δθ_j^t along with their clipping indicators b_j^t to the server. The server collects the updates from clients and performs a two-step aggregation process. In the first step, the updates from the clients in each subset C_i are passed through an (ε_i, δ_i)-DP averaging function to produce Δθ̃_i^t. In the second step of the aggregation, the outputs of the previous averaging functions are combined to produce the next iteration of the model. In this step, the server performs a weighted average of the outputs. The weights for this step are chosen based on the number of clients from each subset in that round, the privacy levels, as well as other parameters. This part resembles the weighted averaging considered in the aforementioned federated linear regression problem in Section 3.1.2. In general, the goal is to give more weight to updates from clients with less strict privacy requirements than to those with stricter privacy requirements. The output of this step is Δθ^t, which is then added to the previous model state to produce the next model state.

To further elaborate on the averaging weights, let us reconsider the simple setup with only two subsets of clients, C_1 and C_2, with DP parameters (ε_1, δ_1) and (ε_2, δ_2), respectively. Also, suppose that the second subset has stricter privacy requirements, i.e., ε_1 ≥ ε_2 and δ_1 ≥ δ_2. The weights w_1^t and w_2^t during round t can be expressed as w_1^t = N_1^t/(N_1^t + rN_2^t) and w_2^t = rN_2^t/(N_1^t + rN_2^t). In general, we desire the value of the ratio r to be bounded as 0 ≤ r ≤ 1 in FedHDP so that the less-private clients' updates are used more meaningfully; a sketch of this two-step aggregation follows this paragraph. The first factor to consider when choosing r is the desired privacy budget: lower privacy budgets require more noise to be added, leading to a lower value of r. This intuition was verified in the simplified setting in the previous section. Another factor, which is more difficult to quantify, is the heterogeneity between the less-private set of clients and the private set of clients. To illustrate this intuition, consider the following example. Suppose that the model is being trained on the MNIST dataset where each client has samples of only one digit. Consider two different scenarios: in the first, each of the less-private clients has a digit drawn uniformly from all digits; in the second, all of the less-private clients have the same digit. It can be argued that the ratio r, with every other hyperparameter fixed, should be higher in the second scenario than in the first, since contributions from the more-private clients are more significant to the overall model in the second scenario. This will be verified in the experiments section presented later.
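A minimal NumPy sketch of this two-step aggregation for the two-subset case follows; the function and variable names are ours, and the noise standard deviations sigma_1, sigma_2 are assumed to be set from the subsets' noise multipliers and the clipping norm as in Algorithm 2 (e.g., sigma_k = z_k * S / N_k, with sigma for a non-private subset set to 0).

```python
import numpy as np

def two_step_aggregate(deltas_1, deltas_2, r, sigma_1, sigma_2,
                       rng=np.random.default_rng(0)):
    """Sketch of the two-step FedHDP aggregation for two subsets.

    deltas_k: list of clipped updates from subset C_k this round;
    sigma_k: DP noise std for subset k's average (0 if C_k is non-private).
    Weights follow w1 = N1/(N1 + r*N2) and w2 = r*N2/(N1 + r*N2), so the
    stricter-privacy subset C_2 is downweighted by the ratio 0 <= r <= 1.
    """
    n1, n2 = len(deltas_1), len(deltas_2)
    # Step 1: per-subset DP averages.
    avg1 = np.mean(deltas_1, axis=0) + rng.normal(0.0, sigma_1, deltas_1[0].shape)
    avg2 = np.mean(deltas_2, axis=0) + rng.normal(0.0, sigma_2, deltas_2[0].shape)
    # Step 2: weighted combination of the two subset averages.
    w1 = n1 / (n1 + r * n2)
    w2 = r * n2 / (n1 + r * n2)
    return w1 * avg1 + w2 * avg2
```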

Then, clients need to train personalized models to be used locally. In the personalization process, each client that participates in a training round simultaneously learns a local model using its local dataset, the most recent version of the global model received during training, and the appropriate value of λ. It is worth noting that the personalization step is similar in spirit to the personalized solution of the federated linear regression problem in Section 3.1.3. Furthermore, the server keeps track of the privacy loss due to the clients' participation in each round by applying the moment accountant method (Abadi et al., 2016) to each set of clients, providing them with tighter bounds on their privacy loss.

5 Experimental Evaluation

Thus far, we showed that FedHDP achieves Bayes optimal performance on a class of linear problems. In this section, we present the results of a number of more realistic experiments to show the utility gain of the proposed FedHDP algorithm with fine-tuned hyperparameters compared to the baseline DP-FedAvg algorithm, where we additionally apply personalization on this baseline at clients using Ditto (Li et al., 2021) and refer to it as DP-FedAvg+Ditto. Additionally, we compare the performance against another, stronger baseline, HDP-FedAvg+Ditto, which is a personalized variant of HDP-FedAvg. Note that HDP-FedAvg+Ditto can be viewed as a special case of FedHDP with uniform averaging, i.e., r = 1, instead of weighted averaging at the server. Note that the hyperparameters of the models, such as the learning rate, are first tuned using the baseline Non-Private algorithm, and the same values are then used in all the private algorithms. The experiments consider the case where two privacy levels are presented to each client to choose from: private or non-private. The experiments show that FedHDP outperforms the baseline algorithms, with the right choice of the hyperparameters r and λ, in terms of global model accuracy as well as average personalized local model accuracy.

5.1 Setup

The experiments are conducted on multiple federated datasets, synthetic and realistic. The synthetic datasets are manually created to simulate the extreme cases of data heterogeneity often exhibited in FL scenarios. The realistic federated datasets are from TFF (Google, 2019), where samples are assigned to clients according to some criteria. The synthetic dataset is referred to as the non-IID MNIST dataset, and the number of samples per client is fixed across all clients. Each client is randomly assigned samples from one of the single-digit subsets (digits 0-9). A skewed version of the synthetic dataset is one where non-private clients are sampled from the clients who have the digit 7 in their data. In the non-IID MNIST dataset, we have 2,000 clients and we randomly sample 5% of them for training each round. The realistic federated datasets are FMNIST and FEMNIST from TFF. The FMNIST and FEMNIST datasets contain 3,383 and 3,400 clients, respectively, and we sample ∼3% of them for training each round. TensorFlow Privacy (TFP) (Google, 2018) is used to compute the privacy loss, i.e., the values of (ε, δ), incurred during the training phase. Refer to the appendix for an extended description of the used models and their parameters, as well as an extended version of the results.

Remark: It is worth noting that computing the optimal values of r, λ_np, and λ_p for non-convex models such as neural networks is not straightforward. To resolve this issue in practice, we treat them as hyperparameters to be tuned via grid search on the validation set. Note that tuning the regularization parameters λ_np and λ_p is done locally at clients and, hence, does not lead to any privacy loss. On the other hand, the ratio hyperparameter r needs to be tuned by the server. The recent work by Papernot & Steinke (2022) shows that tuning hyperparameters from non-private training runs incurs significant privacy loss, while hyperparameter tuning based on private runs may lead to manageable privacy loss. We must state, however, that we assumed, as many works in the literature do, that tuning this hyperparameter does not incur any additional privacy loss on clients. We acknowledge that we have not investigated this privacy loss and leave a comprehensive investigation of this issue to future work.

Table 2: Results on the FMNIST and FEMNIST datasets; the standard deviation across clients is shown between parentheses.

FMNIST dataset, (0.6, 10⁻⁴)-DP

| Algorithm | Hyperparameters | Acc_g% | Acc_{g,p}% | Acc_{g,np}% | Δ_g% | Acc_{l,p}% | Acc_{l,np}% | Δ_l% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private+Ditto | λ_np = 0.05 | 89.65 | - | 89.35 (1.68) | - | - | 94.53 (0.59) | - |
| DP-FedAvg+Ditto | λ_p = 0.05 | 77.61 | 77.62 (2.55) | - | - | 90.04 (1.04) | - | - |
| HDP-FedAvg+Ditto | λ_p = λ_np = 0.005 | 75.87 | 75.77 (2.84) | 74.41 (2.8) | −1.36 | 90.45 (1.02) | 92.32 (0.8) | 1.87 |
| FedHDP | r = 0.01, λ_p = 0.05, λ_np = 0.005 | 86.88 | 85.36 (1.89) | 90.02 (1.28) | 4.66 | 93.76 (0.68) | 95.94 (0.41) | 2.18 |

FEMNIST dataset, (4.1, 10⁻⁴)-DP

| Algorithm | Hyperparameters | Acc_g% | Acc_{g,p}% | Acc_{g,np}% | Δ_g% | Acc_{l,p}% | Acc_{l,np}% | Δ_l% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private+Ditto | λ_np = 0.25 | 81.66 | - | 81.79 (1.38) | - | - | 84.46 (0.89) | - |
| DP-FedAvg+Ditto | λ_p = 0.05 | 75.42 | 75.86 (1.82) | - | - | 74.69 (1.29) | - | - |
| HDP-FedAvg+Ditto | λ_p = λ_np = 0.05 | 75.12 | 75.87 (1.65) | 78.59 (1.58) | 2.72 | 74.67 (1.34) | 75.95 (1.12) | 1.28 |
| FedHDP | r = 0.1, λ_p = λ_np = 0.05 | 76.52 | 77.91 (1.67) | 83.9 (1.27) | 5.99 | 77.9 (1.22) | 79.15 (0.99) | 1.25 |
| FedHDP | r = 0.01, λ_p = λ_np = 0.25 | 74.86 | 77.31 (2.18) | 86.73 (0.98) | 9.42 | 81.19 (1.02) | 84.68 (0.78) | 3.49 |
Table 3: Results on the non-IID MNIST datasets; the standard deviation across clients is shown between parentheses.

Non-IID MNIST dataset, (3.6, 10⁻⁴)-DP

| Algorithm | Hyperparameters | Acc_g% | Acc_{g,p}% | Acc_{g,np}% | Δ_g% | Acc_{l,p}% | Acc_{l,np}% | Δ_l% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private+Ditto | λ_np = 0.005 | 93.8 | - | 93.75 (0.13) | - | - | 99.98 (0.001) | - |
| DP-FedAvg+Ditto | λ_p = 0.005 | 88.75 | 88.64 (0.39) | - | - | 99.97 (0.002) | - | - |
| HDP-FedAvg+Ditto | λ_p = λ_np = 0.005 | 87.71 | 87.55 (0.42) | 88.35 (0.34) | 0.8 | 99.97 (0.001) | 99.93 (0.001) | −0.04 |
| FedHDP | r = 0.01, λ_p = λ_np = 0.005 | 92.48 | 92.43 (0.30) | 93.30 (0.21) | 0.88 | 99.94 (0.001) | 99.94 (0.001) | 0.0 |

Skewed non-IID MNIST dataset, (3.6, 10⁻⁴)-DP

| Algorithm | Hyperparameters | Acc_g% | Acc_{g,p}% | Acc_{g,np}% | Δ_g% | Acc_{l,p}% | Acc_{l,np}% | Δ_l% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private+Ditto | λ_np = 0.005 | 93.67 | - | 93.62 (0.15) | - | - | 99.98 (0.001) | - |
| DP-FedAvg+Ditto | λ_p = 0.005 | 88.93 | 88.87 (0.35) | - | - | 99.98 (0.001) | - | - |
| HDP-FedAvg+Ditto | λ_p = λ_np = 0.005 | 88.25 | 88.05 (0.39) | 89.98 (0.05) | 1.93 | 99.97 (0.001) | 99.85 (0.001) | −0.11 |
| FedHDP | r = 0.1, λ_p = λ_np = 0.005 | 90.36 | 89.96 (0.37) | 97.45 (0.01) | 7.49 | 99.97 (0.001) | 99.76 (0.003) | −0.21 |
| FedHDP | r = 0.9, λ_p = λ_np = 0.005 | 87.96 | 87.69 (0.56) | 92.97 (0.04) | 5.28 | 99.98 (0.001) | 99.96 (0.001) | −0.02 |


5.2 Results

In this part, we provide the outcomes of the experiments on the datasets mentioned above. In these experiments, we provide results when 5% of the total client population is non-private. Clients that opt out are picked randomly from the set of all clients but are fixed across all experiments for a fair comparison. The exception is the skewed non-IID MNIST dataset, where the clients that opt out are sampled from the clients who have the digit 7 in their data. All other hyperparameters are fixed. To evaluate the performance of each algorithm, we measure the following quantities for each dataset:

  1. Acc_g: the average test accuracy on the server test dataset using the global model.

  2. Acc_{g,p}, Acc_{g,np}: the average test accuracy of all private and non-private clients, respectively, using the global model on their local test datasets.

  3. Acc_{l,p}, Acc_{l,np}: the average test accuracy of all private and non-private clients, respectively, using their personalized local models on their local test datasets.

  4. Δ_g, Δ_l: the gain in the average performance of non-private clients over the private ones using the global model and the personalized local models on their local test datasets, respectively; computed as Δ_g = Acc_{g,np} − Acc_{g,p} and Δ_l = Acc_{l,np} − Acc_{l,p}.

A summary of the results, shown in Table 2 and Table 3, provides the best performance for each experiment along with their corresponding hyperparameters. More detailed results are shown in the appendix. If different values of the hyperparameters in FedHDP yield two competing results, such as one with a better global model performance at the server and one with better personalized models at the clients, we show both.

We can see from Tables 2 and 3 that FedHDP allows the server to learn a better global model while allowing clients to learn better personalized local models compared to the other baselines, i.e., DP-FedAvg+Ditto as well as FedHDP with r = 1. For example, the gain due to FedHDP compared to DP-FedAvg+Ditto in terms of global model performance is up to 9.27%. For personalized local models, the gain for clients due to FedHDP compared to DP-FedAvg+Ditto is up to 9.99%. Additionally, we can see the cost, in terms of average personalized local model performance, between clients who choose to opt out of privacy and clients who choose to remain private. This demonstrates the advantage of opting out, which gives clients an incentive to opt out of DP to improve their personalized local models; for example, non-private clients gain up to 3.49% on average in personalized local model performance compared to private clients. It is worth mentioning that opting out can also improve the global model's performance on clients' local data: we observe up to a 12.4% gain in the average global-model accuracy of non-private clients on their local data compared to the baseline DP-FedAvg+Ditto. Similar trends can be observed for the other baseline.

6 Conclusion

In this paper, we considered a new aspect of heterogeneity in FL setups, namely, heterogeneous privacy.

We proposed a new setup for privacy heterogeneity between clients, where privacy levels are no longer fixed across all clients. In this setup, clients choose their desired privacy levels according to their preferences and inform the server of their choices. We provided a formal treatment of the federated linear regression problem and showed the optimality of the proposed solution at the central server as well as for the personalized local models in such a setup. Moreover, we observed that personalization becomes necessary whenever data heterogeneity is present, privacy is required, or both. We proposed a new algorithm, called FedHDP, for the considered setup. In FedHDP, the aim is to employ DP to ensure the privacy level desired by each client is met, and we proposed a two-step aggregation scheme at the server to improve the utility of the model. We also utilize personalization to improve performance at clients. Finally, we provided a set of experiments on synthetic and realistic federated datasets considering a heterogeneous DP setup. We showed that FedHDP outperforms the baseline private FL algorithm in terms of the global model as well as the personalized local models' performance, and we showed the cost of requiring stricter privacy parameters in such scenarios in terms of the gap in the average performance at clients.

Acknowledgement

We are grateful to the anonymous TMLR reviewers and AE Kangwook Lee whose thoughtful comments on a previous version of this paper helped significantly with clarity of the presentation.

References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016.

Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing, and Afshin Rostamizadeh. Federated learning via posterior averaging: A new perspective and practical algorithms. arXiv preprint arXiv:2010.05273, 2020.

Mohammad Alaggan, Sébastien Gambs, and Anne-Marie Kermarrec. Heterogeneous differential privacy. Journal of Privacy and Confidentiality, 7(2):127–158, 2016.

Mário S Alvim, Miguel E Andrés, Konstantinos Chatzikokolakis, Pierpaolo Degano, and Catuscia Palamidessi. Differential privacy: On the trade-off between utility and information leakage. In International Workshop on Formal Aspects in Security and Trust, pp. 39–54. Springer, 2011.

Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M Suriyakumar, Om Thakkar, and Abhradeep Thakurta. Public data-assisted mirror descent for private model training. In International Conference on Machine Learning, pp. 517–535. PMLR, 2022.

Galen Andrew, Om Thakkar, H Brendan McMahan, and Swaroop Ramaswamy. Differentially private learning with adaptive clipping. arXiv preprint arXiv:1905.03871, 2019.

Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818, 2019.

Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden, and Benjamin Livshits. BLENDER: Enabling local search with a hybrid differential privacy model. In 26th USENIX Security Symposium (USENIX Security 17), pp. 747–764, 2017.

Amos Beimel, Aleksandra Korolova, Kobbi Nissim, Or Sheffet, and Uri Stemmer. The power of synergy in differential privacy: Combining a small curator with local randomizers. arXiv preprint arXiv:1912.08951, 2019.

Alberto Bietti, Chen-Yu Wei, Miroslav Dudik, John Langford, and Steven Wu. Personalization improves privacy-accuracy tradeoffs in federated learning. In International Conference on Machine Learning, pp. 1945–1962. PMLR, 2022.

Sebastian Caldas, Jakub Konečný, H Brendan McMahan, and Ameet Talwalkar. Expanding the reach of federated learning by reducing client resource requirements. arXiv preprint arXiv:1812.07210, 2018.

Flavio P Calmon, Ali Makhdoumi, and Muriel Médard. Fundamental limits of perfect privacy. In 2015 IEEE International Symposium on Information Theory (ISIT), pp. 1796–1800. IEEE, 2015.

Luca Corinzia, Ami Beuret, and Joachim M Buhmann. Variational federated multi-task learning. arXiv preprint arXiv:1906.06268, 2019.

Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020.

Canh T Dinh, Nguyen H Tran, and Tuan Dung Nguyen. Personalized federated learning with Moreau envelopes. arXiv preprint arXiv:2006.08848, 2020.

Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211–407, 2014.

Cynthia Dwork, Nitin Kohli, and Deirdre Mulligan. Differential privacy in practice: Expose your epsilons! Journal of Privacy and Confidentiality, 9(2), 2019.

Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning: A meta-learning approach. arXiv preprint arXiv:2002.07948, 2020.

Cecilia Ferrando, Jennifer Gillenwater, and Alex Kulesza. Combining public and private data. arXiv preprint arXiv:2111.00115, 2021.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333, 2015.

Robin C Geyer, Tassilo Klein, and Moin Nabi. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557, 2017.

Google. TensorFlow Privacy, 2018. URL http://tensorflow.org/privacy.

Google. TensorFlow Federated, 2019. URL http://tensorflow.org/federated.

Xinran Gu, Kaixuan Huang, Jingzhao Zhang, and Longbo Huang. Fast federated learning in the presence of arbitrary device unavailability. Advances in Neural Information Processing Systems, 34:12052–12064, 2021.

Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris, and Nicholas Lane. FjORD: Fair and accurate federated learning under heterogeneous targets with ordered dropout. Advances in Neural Information Processing Systems, 34:12876–12889, 2021.

Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.

Zach Jorgensen, Ting Yu, and Graham Cormode. Conservative or liberal? Personalized differential privacy. In 2015 IEEE 31st International Conference on Data Engineering, pp. 1023–1034. IEEE, 2015.

Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977, 2019.

Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132–5143. PMLR, 2020.

Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Adaptive gradient-based meta-learning methods. arXiv preprint arXiv:1906.02717, 2019.

Muah Kim, Onur Günlü, and Rafael F Schaefer. Federated learning with local differential privacy: Trade-offs between privacy, utility, and communication. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2650–2654. IEEE, 2021.

Jakub Konečný, H Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527, 2016.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127, 2018.

Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3):50–60, 2020.

Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. International Conference on Machine Learning, 2021.

Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, and Xiaofeng Meng. Projected federated averaging with heterogeneous differential privacy. Proceedings of the VLDB Endowment, 15(4):828–840, 2021.

Ziyu Liu, Shengyuan Hu, Zhiwei Steven Wu, and Virginia Smith. On privacy and personalization in cross-silo federated learning. arXiv preprint arXiv:2206.07902, 2022.

Andrew Lowy and Meisam Razaviyayn. Private federated learning without a trusted server: Optimal algorithms for convex losses. arXiv preprint arXiv:2106.09779, 2021.

Ali Makhdoumi, Salman Salamatian, Nadia Fawaz, and Muriel Médard. From the information bottleneck to the privacy funnel. In 2014 IEEE Information Theory Workshop (ITW 2014), pp. 501–505. IEEE, 2014.

Yishay Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619, 2020.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. PMLR, 2017.

Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics**, pp. 1273–1282. PMLR, 2017.** H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. In International Conference on Learning Representations, 2018. Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with Rényi differential privacy. International Conference on Learning Representations (ICLR), 2022. Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečn`y, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020.

Lalitha Sankar, S Raj Rajagopalan, and H Vincent Poor. Utility-privacy tradeoffs in databases: An information-theoretic approach. IEEE Transactions on Information Forensics and Security, 8(6):838–852, 2013.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18. IEEE, 2017. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet Talwalkar. Federated multi-task learning. arXiv preprint arXiv:1705.10467**, 2017.** Shuang Song, Kamalika Chaudhuri, and Anand Sarwate. Learning from data with heterogeneous noise using SGD. In Artificial Intelligence and Statistics, pp. 894–902. PMLR, 2015. Lichao Sun, Jianwei Qian, Xun Chen, and Philip S Yu. LDP-FL: Practical private aggregation in federated learning with local differential privacy. arXiv preprint arXiv:2007.15789, 2020. Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, and Wenqi Wei. LDP-Fed: Federated learning with local differential privacy. In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking**, pp. 61–66, 2020.** Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. Federated learning with matched averaging. arXiv preprint arXiv:2002.06440, 2020. Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization.

arXiv preprint arXiv:2107.06917**, 2021.** Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, and Daniel Ramage.

Federated evaluation of on-device personalization. arXiv preprint arXiv:1910.10252, 2019. Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H Yang, Farhad Farokhi, Shi Jin, Tony QS Quek, and H Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security**, 15:3454–3469, 2020.** Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. arXiv preprint arXiv:1806.00582, 2018. Yingxue Zhou, Zhiwei Steven Wu, and Arindam Banerjee. Bypassing the ambient dimension: Private sgd with gradient subspace identification. arXiv preprint arXiv:2007.03813, 2020. Hangyu Zhu and Yaochu Jin. Multi-objective evolutionary federated learning. IEEE transactions on neural networks and learning systems**, 31(4):1310–1322, 2019.**

Appendix: Organization & Contents

We include additional discussions and results complementing the main paper in the appendix.

  • Appendix A includes a discussion on federated linear regression and the optimality of FedHDP along with some related simulation results.

  • Appendix B includes additional results for the federated point estimation problem. - Finally, we include additional information about the experimental setup along with extended versions of the results of each experiment in Appendix C.

  • Finally, broader impact and limitations are presented in Appendix D.

A Theoretical Derivations For Federated Linear Regression

In this appendix, we consider applying the proposed algorithm FedHDP on the simplified setup of federated linear regression problem, first introduced by Li et al. (2021). We formally present the setup, solve the optimization problem, and show the Bayes optimality of FedHDP on the global model on the server, as well as the optimality of local models on clients.

A.1 Federated Linear Regression Setup

ϕj=ϕ+pj\phi_{j}=\phi+p_{j} fj(ϕ)=1msFjϕxj22(8)f_{j}(\phi)=\frac{1}{m_{s}}\|F_{j}\phi-\mathbf{x}_{j}\|_{2}^{2}\tag{8}

Local estimate of φj at client cj that minimizes the loss function given Fj and xj is denoted by φbj and is computed as ϕ^j=(FjTFj)1FjTxj,\widehat{\phi}_{j}=\left(F_{j}^{T}F_{j}\right)^{-1}F_{j}^{T}x_{j}, j xj , (9) which is distributed as φbj ∼ N φj , β2(F T j Fj ) −1**. As assumed earlier, let** F T j Fj = nsId**, then the loss function** can be translated to $(x_{j},x_{j})$. As assumed earlier, let $x_{j},x_{j}=n_{s},a$, then the loss function $$f_{j}(\phi)=\frac{1}{2}\big{|}\phi-\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}x_{j,i}\big{|}{2}^{2},\tag{10}$$ of the vector $\phi{i}$ at client $c_{i}$, which holds due to the assumption of $\left(7\right)$. (g)({\mathfrak{g}})

$\left(11\right)$. where xj,i's are the noisy observations of the vector φj at client cj , which holds due to the assumption of the diagonal covariate covariance matrix. The updates sent to the server by client cj are as follows ψj = φbj + lj (11) where lj ∼ N (0, N1γ 2 1Id) for private clients cj ∈ C1 or lj ∼ N (0, N2γ 2 2Id) for private clients cj ∈ C2**. Note** that as we mentioned in the paper, we still consider client-level privacy; however, we move the noise addition process from the server side to the client side. This is done such that when the server aggregates the private Assume the number of samples per client is fixed and the effect of clipping is negligible. Let us denote the total number of clients as N, of whom N1 = ρ1N are private with privacy level p1 and N2 = (1 − ρ1)N are private with privacy level p2, where p2 is stricter than p1**, i.e., more private. The subset of clients with** privacy level p1 is denoted by C1, while the subset of clients with privacy level p2 is denoted by C2**. Denote** the number of samples held by each client as ms, the samples at client cj as {Fj , xj}, where Fj is the data features matrix, and xj is the response vector. Let us assume that the features matrix Fj has the property of diagonal covariate covariance, i.e., F T j Fj = nsId. Let us denote the relationship between xj and Fj as xj = Fjφj + vj (6) where elements of the observations noise vector vj are drawn independently from N (0, β2), and φj is the vector of length d to be estimated. The vector φj is described as φj = φ + pj (7) where pj ∼ N (0, τ 2Id), and φ is the vector to be estimated at the server. It is worth noting that τ 2is a measure of relatedness, as specified by Li et al. (2021), where larger values of τ 2reflect increasing data heterogeneity and vice versa. The local loss function at client cj is as follows clients' updates in each subset the resulting privacy noise variance is equivalent to the desired value by the server, i.e., γ 2 1 and γ 2 2**. This is done for simplicity and clarity of the discussion and proofs.** In this setup, the problem becomes a vector estimation problem and the goal at the server is to estimate the vector φ given the updates from all clients, denoted by {ψi: i ∈ [N]} as

\boldsymbol{\theta}^{*}:=\arg\min_{\widehat{\boldsymbol{\theta}}}\left\{\mathbb{E}\left[\frac{1}{2}\|\widehat{\boldsymbol{\theta}}-\boldsymbol{\phi}\|_{2}^{2}\right]\psi_{1},...,\psi_{N}\right]\right\}. (12)(12)

On the other hand, client cj 's goal is to estimate the vector φj given their local estimate φbj as well as the updates from all other clients {ψi: i ∈ [N] \ j} as

ends ${\phi_{1}:\mathcal{N}\in[N]\setminus J}$ as $$\boldsymbol{\theta}{j}^{*}:=\arg\min{\boldsymbol{\theta}}\left{\mathbb{E}\left[,\frac{1}{2}|\boldsymbol{\hat{\theta}}-\phi_{j}|{2}^{2}\right]{\psi{i}:i\in[N]\setminus j},\hat{\phi}_{j}\right]\right}.\qquad\text{(Local Bayes obj.)}$$

Now, considering the value of φ, the covariance matrix of client cj 's update is denoted by Σj . The value of the covariance matrix can be expressed as follows Σj = β 2(F T j Fj ) −1 + (τ 2 + N1γ 2 1)Id, if cj ∈ C1 β 2(F T j Fj ) −1 + (τ 2 + N2γ 2 2)Id, if cj ∈ C2(13) We have F T j Fj = nsId, and let β 2 ns = α 2, σ 2 c = α 2 + τ 2, σ 2 p1 = σ 2 c + N1γ 2 1, and σ 2 p2 = σ 2 c + N2γ 2 2, then we have Σj = (α 2 + τ 2 + N1γ 2 1)Id, if cj ∈ C1 (α 2 + τ 2 + N2γ 2 2)Id, if cj ∈ C2(14) = σ 2 p1 Id, if cj ∈ C1 σ 2 p2 Id, if cj ∈ C2(15) Next, we discuss the optimality of FedHDP for the specified federated linear regression problem for the server's global model as well as the clients' personalized local models.

A.2 Global Estimate On The Server

In the considered federated setup, the server aims to find θb∗ described as follows

θ^:=argminθ^{12i[N]wiψiθ^22}.(16)\widehat{\theta}^{*}:=\arg\min_{\widehat{\theta}}\left\{\frac{1}{2}\left\|\sum_{i\in[N]}w_{i}\psi_{i}-\widehat{\theta}\right\|_{2}^{2}\right\}.\tag{16} $\widehat{\theta}^{}:=\arg\min_{\widehat{\theta}}\left{\frac{1}{2}\left|\sum_{i\in[N]}w_{i}\psi_{i}-\widehat{\theta}\right|_{2}^{2}\right}.$ $\widehat{\theta}^{}:=\arg\min_{\widehat{\theta}}\left{\frac{1}{2}\left|\sum_{i\in[N]}w_{i}\psi_{i}-\widehat{\theta}\right|_{2}^{2}\right}.$ (13)(13) have $\binom{14}{4}$ (15) . (17)(17)

The server's goal, in general, is to combine the client updates such that the estimation error of φ in (12) is minimized, i.e., the server aims to find the Bayes optimal solution described in (12). For the considered setup, the server aims to utilize the updates sent by clients, i.e., {ψi: i ∈ [N]}, to estimate the vector φ.

The estimate at the server is denoted by θ**. Our goal in this part is to show that the solution to (16), which** is the solution in FedHDP algorithm converges to the Bayes optimal solution in (12). First, we state an important lemma that will be used throughout this section.

Lemma 1 (Lemma 2 in Li et al. (2021)). Let φ be drawn from the non-informative uniform prior on R d.

Also, let {ψi: i ∈ [N]} denote noisy observations of φ with independent additive zero-mean independent Gaussian noise and corresponding covariance matrices {Σi: i ∈ [N]}. Let

\Sigma_{\phi}=\Big{(}\sum_{i\in[N]}\Sigma_{i}^{-1}\Big{)}^{-1}. (18)(18)

Then, conditioned on {ψi: i ∈ [N]}, we have

Then, $\phi$ is bounded on ${\psi_i:\psi\in[P^{-1}]}$, we have $$\phi=\Sigma_{\phi}\sum_{i\in[N]}\Sigma_i^{-1}\psi_i+\pmb{p}{\phi},$$ where $\pmb{p}{\phi}\sim\mathcal{N}(\pmb{0},\Sigma_{\phi})$, which is independent of ${\psi_i:i\in[N]}$. Next, we state the Bayes optimality of the solution at the server.

Lemma 2 (Global estimate optimality). The proposed solution, from the server's point of view, with weights wj 's chosen below, is Bayes optimal in the considered federated linear regression problem.

It the considered spectral linear regression problem. $\begin{array}{ll}w_j=\left{\begin{matrix}\frac{1}{N_1+N_2r^*},&\mbox{if}c_j\in\mathcal{C}1\ \frac{r^*}{N_1+N_2r^*},&\mbox{if}c_j\in\mathcal{C}2\end{matrix}\right.\ \ r^*=\frac{\sigma_c^2+N_1\gamma_1^2}{\sigma_c^2+N_2\gamma_2^2}.\end{array}$ imation error is where (19)(19) r=σc2+N1γ12σc2+N2γ22.r^{*}=\frac{\sigma_{c}^{2}+N_{1}\gamma_{1}^{2}}{\sigma_{c}^{2}+N_{2}\gamma_{2}^{2}}. on error is: (20)(20) (21)(21) (22)(22) σc+N1γ12\sigma_{c}+N_{1}\gamma_{12} of the estimation error is: $$\Sigma{s,\mathrm{opt}}=\frac{1}{N}\left[\frac{(\sigma{c}^{2}+N_{1}\gamma_{1}^{2})(\sigma_{c}^{2}+N_{2}\gamma_{2}^{2})}{\sigma_{c}^{2}+(1-\rho_{1})N_{1}\gamma_{1}^{2}+\rho_{1}N_{2}\gamma_{2}^{2}}\right]I_{d}.\tag{1}$$ Furthermore, the covariance of the estimation error is: Proof. First, for the considered setup, Lemma 1 states that the optimal aggregator at the server is the weighted average of the client updates. The server observes the updates {ψi: i ∈ [N]}, which are noisy observations of φ with zero-mean Gaussian noise with corresponding covariance matrices {Σi: i ∈ [N]}.

Then, the server computes its estimate θ of φ as

$\theta$ of $\varphi$ as $$\theta=\Sigma_{\theta}\sum_{i\in[N]}\Sigma_{i}^{-1}\psi_{i}+p_{\theta},\tag{1}$$

In FedHDP with only two subsets of clients, we only have a single hyperparameter to manipulate server-side,
which is the ratio r that is the ratio of the weight dedicated for clients with higher privacy level to the one
for clients with the lower privacy level. To achieve the same noise variance as in (24) we need to choose
the ratio r carefully. To this end, setting r =
                                            σ
                                             2
                                             c+N1γ
                                                  2
                                                  1
                                            σ2
                                             c+N2γ
                                                  2
                                                  2
                                                    in FedHDP results in additive noise variance in the
estimate with zero mean and covariance matrix as follows

covariance matrix as follows $\Sigma_{s,opt}=\frac{1}{N}\left[\frac{(\sigma_{c}^{2}+N_{1}\gamma_{1}^{2})(\sigma_{c}^{2}+N_{2}\gamma_{2}^{2})}{\sigma_{c}^{2}+(1-\rho_{1})N_{1}\gamma_{1}^{2}+\rho_{1}N_{2}\gamma_{2}^{2}}\right]I_{d}$.
Therefore, the weighted average of the updates using the above weights, results in the solution being Bayes optimal, i.e., produces θ ∗.

A.3 Personalized Local Estimates On Clients

As mentioned in Section 3.1.3, FedHDP differs from Ditto in many ways. First, the global model aggregation is different, i.e., FedAvg was employed in Ditto compared to the 2-step aggregator in FedHDP.

Second, in Ditto measuring the performance only considers benign clients, while in FedHDP it is important to measure the performance of all subsets of clients, and enhancing it across all clients is desired. In this part, we focus on the personalization part for both sets of clients. The goal at clients is to find the Bayes optimal solution to the (Local Bayes obj.). However, in the considered federated setup, clients don't have access to individual updates from other clients, but rather have the global estimate θb∗**. So, we have** the local FedHDP objective as

θ^j:=argminθ{12θ^ϕ^j22+λ2θ^θ^22}.(Local FeedHDP obj.){\widehat{\boldsymbol{\theta}}}_{j}^{*}:=\arg\operatorname*{min}_{\boldsymbol{\theta}}\left\{{\frac{1}{2}}\|{\widehat{\boldsymbol{\theta}}}-{\widehat{\boldsymbol{\phi}}}_{j}\|_{2}^{2}+{\frac{\lambda}{2}}\|{\widehat{\boldsymbol{\theta}}}-{\widehat{\boldsymbol{\theta}}}^{*}\|_{2}^{2}\right\}.\qquad\qquad{\mathrm{(Local~FeedHDP~obj.)}}

First, we compute the Bayes optimal local estimate θ ∗ j of φj for the local objective at client cj . We consider client cj , which can be either in C1 or C2**, and compute their minimizer of (Local Bayes obj.). In this case, the** client is given all other clients' estimates {ψi: i ∈ [N] \ j} and has their own local estimate φˆj . To this end, we utilize Lemma 1 to find the optimal estimate θ ∗ j . Given the updates by all other clients {ψi: i ∈ [N] \ j},

where pθN(0,Σθ) and {\mathrm{where~}}p_{\theta}\sim{\mathcal{N}}({\bf0},\Sigma_{\theta}){\mathrm{~and~}} where pθ ∼ N (0, Σθ) and i∈[N] Σ −1 i −1= N1(σ 2 c + N1γ 2 1) −1Id + N2(σ 2 c + N2γ 2 2) −1Id −1(23) Σθ = X = 1 N (σ 2 c + N1γ 2 1)(σ 2 c + N2γ 2 2) σ 2 c + (1 − ρ1)N1γ 2 1 + ρ1N2γ 2 2 Id. (24) (23)(23) (24)(24) (25)(25)

the client can compute the estimate φ \j of the value of φ as

$\phi^{\psi}$ of the value of $\phi$ as $$\phi^{\psi j}=\Sigma_{\phi^{\psi j}}\Big{(}\sum_{i=[N]\setminus j}\Sigma_{i}^{-1}\psi_{i}\Big{)}+p_{\phi^{\psi j}},\tag{1}$$ (26)(26) (27)(27) where pφ\j ∼ N (0, Σφ\j ) and

i=[N]\j Σ −1 i −1, (27) Σφ\j = X = m 1 σ 2 p1 Id + n 1 σ 2 p2 Id −1, (28) = σ 2 p1 σ 2 p2 nσ2 p1 + mσ2 p2 Id, (29) where n = N2 − 1, m = N1 if cj ∈ C2, or n = N2, m = N1 − 1 if cj ∈ C1. Then, the client uses Σφ\j and φˆj to estimate θ ∗ as r(r)\mathbf{r}(\mathbf{r}) (29)(29) (30) $\binom{31}{2}$ . j θ ∗ j = Σθ ∗ j (Σφ\j + τ 2Id) −1φ \j + (σ 2 c − τ 2) −1φˆj + pθ ∗ j , (30) = Σθ ∗ j nσ2 p1 + mσ2 p2 σ 2 p1 σ 2 p2 + τ 2(nσ2 p1 + mσ2 p2 ) φ \j +1 σ 2 c − τ 2 φˆ∗ j + pθj , (31)

where pθ
       ∗
       j
         ∼ N (0, Σθ
                  ∗
                  j
                   ) and

$\sigma_{\theta_{j}^{}}$) and $$\Sigma_{\theta_{j}^{}}=\left(\left(\frac{\sigma_{p_{1}}^{2}\sigma_{p_{2}}^{2}+\tau^{2}(n\sigma_{p_{1}}^{2}+m\sigma_{p_{2}}^{2})}{n\sigma_{p_{1}}^{2}+m\sigma_{p_{2}}^{2}}I_{d}\right)^{-1}+((\sigma_{c}^{2}-\tau^{2})I_{d})^{-1}\right)^{-1},$$ $$=\frac{(\sigma_{c}^{2}-\tau^{2})(\sigma_{p_{1}}^{2}\sigma_{p_{2}}^{2}+\tau^{2}(n\sigma_{p_{1}}^{2}+m\sigma_{p_{2}}^{2}))}{\sigma_{c}^{2}(n\sigma_{p_{1}}^{2}+m\sigma_{p_{2}}^{2})+\sigma_{p_{1}}^{2}\sigma_{p_{2}}^{2}}I_{d}.$$ c p2 p1 p2 We expand (31) as θ ∗ j = σ 2 p1 σ 2 p2 + τ 2(nσ2 p1 + mσ2 p2 ) σ 2 c(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 φˆj +σ 2 p2 (σ 2 c − τ 2) σ 2 c(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 X ci∈C1 i6=j ψi + σ 2 p1 (σ 2 c − τ 2) σ 2 c(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 X ci∈C2 i6=j ψi + pθ ∗ j . (34) This is the Bayes optimal solution to the local Bayes objective optimization problem for client cj in (32) $\binom{33}{2}$ (33) . (Local Bayes obj.). Now, recall that in FedHDP**, the clients do not have access to individual client updates,** but rather the global model. Therefore, the clients solve the FedHDP local objective in (Local FedHDP obj.). Given a value of λj and the global estimate θˆ∗, the minimizer θˆj (λj ) of (Local FedHDP obj.) is

θˆj (λj ) = 1 1 + λj φˆj + λjθˆ∗(35) =1 1 + λj (N1 + N2r) + λj ij (N1 + N2r)φˆj + λj (N1 + N2r) X ci∈C1 i6=j ψi +λj r (N1 + N2r) X ci∈C2 i6=j ψi , (36) where ij = 1 if cj ∈ C1 or ij = r if cj ∈ C2. Now, we are ready to state the Bayes optimality of the local FedHDP objective for optimal values λ ∗ j for all clients. Lemma 3 (Local estimates optimality). The solution to the local FedHDP objective from the clients' point of view using λ ∗ j Bayes optimal in the considered federated linear regression problem. λ ∗ j = N(1+Υ2)+N1Γ 2 2+N2Γ 2 1 NΥ2(1+Υ2)+Υ2((N2+1)Γ21+N1Γ22 )+Γ21 (1+Γ22 ) , if cj ∈ C1 N(1+Υ2)+N1Γ 2 2+N2Γ 2 1 NΥ2(1+Υ2)+Υ2(N2Γ21+(N1+1)Γ22 )+Γ22 (1+Γ21 ) , if cj ∈ C2 (37) where Υ2 = τ 2 α2 , Γ 2 1 = N1γ 2 1 α2 , and Γ 2 2 = N2γ 2 2 α2 . chosen below, under the assumption of global estimate optimality stated in Lemma 2, is Proof. To prove this lemma, as shown in (Li et al., 2021), we only need to find the optimal values of λ ∗ j that minimize the following

λj=argminλE(θjθ^j(λ)22ϕ\j,ϕ^j)\lambda_{j}^{*}=\arg\operatorname*{min}_{\lambda}\mathbb{E}\big(\|\theta_{j}^{*}-{\hat{\theta}}_{j}(\lambda)\|_{2}^{2}|\phi^{\backslash j},{\hat{\phi}}_{j}\big) (38)(38)

(38) for private and non-private clients. To compute the values of λ ∗ j , we plug in the values of θ ∗ j from (34) and θj (λ) in (36), which gives us the following

λ1 = (N1 + N2r) σ 2 c(nσ2 p1 + mσ2 p2 ) − τ 2(nσ2 p1 + mσ2 p2 ) (N1 + N2r) τ 2(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 − ij σ 2 c(nσ2 c + mσ2 p) + σ 2 p1 σ 2 p2 λ2 = (N1 + N2r)(σ 2 c − τ 2)σ 2 p2 σ 2 c(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 − (N1 + N2r)(σ 2 c − τ 2)σ 2 p2 , (40) λ3 = (N1 + N2r)(σ 2 c − τ 2)σ 2 p2 σ 2 c(nσ2 p1 + mσ2 p2 ) + σ 2 p1 σ 2 p2 − (N1 + N2r)(σ 2 c − τ 2)σ 2 p2 , (41) and λ ∗ j = 1 3 (λ1 + λ2 + λ3) (42) . For client cj ∈ C1, we have ij = 1, n = N2 and m = N1 − 1. Setting Υ2 = τ 2 2 , (39) (39) $\binom{40}{40}$ . (41)(41) (42)(42) where r = σ 2 p1 σ2p2 α2 , Γ 1 = N1γ 2 1 α2 , and Γ 2 2 = N2γ 2 2 α2 and substituting in (42) gives the desired result in (37). For client cj ∈ C2**, we have** ij = r, n = N2 − 1 and m = N1**. Setting** Υ2 = τ 2 α2 , Γ 2 1 = N1γ 2 1 α2 , and Γ 2 2 = N2γ 2 2 α2 and substituting in (42) gives the desired results in (37). As a result, the resulting θˆj (λ ∗ j ) is Bayes optimal. Next, we provide a few examples of corner cases for both λ ∗ p and λ ∗ np for the considered linear regression setup:

  • r → 1**, i.e., noise added for privacy is similar for both sets of clients** Γ 2 1 → Γ 2 2, λ ∗ 1 → N**(1+Υ2)+NΓ 2 2 NΥ2(1+Υ2)+(N+1)Υ2Γ22+(1+Γ22 )Γ22 and λ ∗ 2 →N(1+Υ2)+NΓ 2 2 NΥ2(1+Υ2)+(N+1)Υ2Γ22+(1+Γ22 )Γ22 . If Γ 2 1 = Γ22 = 0, then** we have λ ∗ 1 = λ ∗ 2 → 1 Υ2 as in Ditto with FedAvg and no malicious clients.

  • N2 → N**, i.e., all clients have the same privacy level, as in** DP-FedAvg, λ ∗ 2 → N Υ2N+Γ22 .

  • α 2 → 0, λ ∗ 2 → 0 and λ ∗ 1 → 0**. The optimal estimator for all clients approaches the local estimator,** i.e., θˆj (λ ∗ j ) → φˆj .

  • τ 2 → 0**, i.e., all clients have IID samples,** λ mples, $\lambda_1^\to\frac{N+N_2\Gamma_1^2+N_1\Gamma_2^2}{\Gamma_1^2(1+\Gamma_2^2)}$ and $\lambda_2^\to\frac{N+N_2\Gamma_1^2+N_1\Gamma_2^2}{\Gamma_2^2(1+\Gamma_1^2)}$.

A.4 Optimality Of Fedhdp

Next, we show the convergence of the FedHDP algorithm to the FedHDP global and local objectives for the linear regression problem described above as follows Lemma 4 (FedHDP convergence). FedHDP, with learning rate η = 1 and ηp =1 1+λj converges to the global FedHDP objective and the local FedHDP objective.

Proof. In the considered setup, we denote φˆj =1 ns Pns i=1 xj,i at client cj . The client updates the global estimation θ by minimizing the loss function in (10). The global estimation update at the client follows θ ← θ − η(θ − φˆj ). (43) Updating the estimation once with η = 1 results in the global estimation update being φˆj , adding the noise results in the same ψj , and hence the global estimate in the next iteration is unchanged. As for the local FedHDP estimation, when the client receives the global estimate θ after the first round, the client updates its estimate θj as

$\theta_{j}\leftarrow\theta_{j}-\eta_{p}\big{(}(\theta_{j}-\hat{\phi}{j})+\lambda{j}(\theta_{j}-\theta)\big{)}$. $\eta_{p}=\frac{1}{1+\lambda_{j}}$ gives $\theta_{j}=\frac{1}{1+\lambda_{j}}(\hat{\theta}{j}+\lambda{j}\theta)$, which is the solution to Updating the estimate once with ηp =1 1+λj 1+λj (φˆj + λjθ), which is the solution to the local FedHDP objective in (35). Hence, FedHDP converges to the global and local FedHDP objectives.

(44)(44)

20_image_0.png

Figure 2: The effect of opting out on the personalized local model estimate for a linear regression problem as a function of λ when employing (left) HDP-FedAvg and (right) FedHDP. Next, we state the optimality theorem of FedHDP algorithm for the considered setup described above. Theorem 5 (FedHDP optimality). FedHDP from the server's point of view with ratio r ∗chosen below is Bayes optimal (i.e., θ converges to θ ∗*) in the considered federated linear regression problem.*

r=σc2+N1γ12σc2+N2γ22.r^{*}=\frac{\sigma_{c}^{2}+N_{1}\gamma_{1}^{2}}{\sigma_{c}^{2}+N_{2}\gamma_{2}^{2}}. (45)(45)

. (45) Furthermore, FedHDP from the client's point of view, with λ ∗ j chosen below, is Bayes optimal (i.e., θj converges to θ ∗ j for each client j ∈ [N]) in the considered federated linear regression problem.

\lambda_{j}^{*}=\left\{\begin{array}{ll}\frac{N(1+\Upsilon^{2})+N_{1}\Gamma_{2}^{2}+N_{2}\Gamma_{1}^{2}}{N\Upsilon^{2}(1+\Upsilon^{2})+\Upsilon^{2}(N_{2}+1)\Gamma_{2}^{2}+N_{1}\Gamma_{1}^{2}(1+\Upsilon_{2}^{2})},&\mbox{if$c_{j}\in\mathcal{C}_{1}$}\\ \frac{N(1+\Upsilon^{2})+N_{1}\Gamma_{2}^{2}+N_{2}\Gamma_{1}^{2}}{N\Upsilon^{2}(1+\Upsilon^{2})+\Upsilon^{2}(N_{2}\Gamma_{1}^{2}+(N_{1}+1)\Gamma_{2}^{2})+\Upsilon_{1}^{2}(1+\Upsilon_{2}^{2})},&\mbox{if$c_{j}\in\mathcal{C}_{2}$}\end{array}\right.\tag{46}

Proof. Follows by observing Lemma 4, which states that the algorithm converges to the global and local FedHDP objectives, then by Lemma 2 and Lemma 3, which state that the solution to the FedHDP objective is the Bayes optimal solution for both global and local objectives.

A.5 Privacy-Utility Tradeoff

Let us consider the special case of opt-out of privacy in this simplified linear regression setup, i.e., γ 2 1 = 0 and γ 2 2 = γ 2**. We would like to observe the effect of opting out of privacy on the client's personalized local** model, compared to the one where the client remains private. We show an experiment comparing FedHDP using r ∗ against HDP-FedAvg for two scenarios. The first is when the client chooses to opt out of privacy, and the second is when the client chooses to remain private. See Figure 2 for the results of such experiment.

We can see that FedHDP outperforms the one with HDP-FedAvg**, and the cost of remaining private is** evident in terms of higher loss at the client.

A.6 Extension Beyond Two Privacy Levels

The setup for federated learning with two privacy levels was presented to show the steps and the explicit expressions for the values of the ratio and the regularization hyperparameters. Next, we show briefly that the same derivation can be extended to find the explicit expressions for the hyperparameters in a general case of federated linear regression with clients choosing one of l privacy levels. To start, assume that we have l privacy levels where clients can be split into l subsets denoted by Ci for i = 1, 2, ..., l, each has Ni > 1 clients, respectively, while other notations are still the same. Notice that this setup contains the most general case where l = |C| and each client has their own privacy level. Similar to the setup with two privacy levels, each client sends their update to the server after adding the appropriate amount of Gaussian noise, i.e., N (0, Niγ 2 i Id) for client cj ∈ Ci**, for privacy. Let us denote the** updates sent to the server by {ψ : i ∈ [N]} which are estimates of φ with zero-mean Gaussian noise with corresponding covariance matrices {Σi: i ∈ [N]}, where Σj for client cj ∈ Ciis expressed as

Σj=(α2+τ2+Niγi2)Id=σj2Id=σpi2Id.(1)\Sigma_{j}=(\alpha^{2}+\tau^{2}+N_{i}\gamma_{i}^{2})I_{d}=\sigma_{j}^{2}I_{d}=\sigma_{p_{i}}^{2}I_{d}.\tag{1} The $\alpha$-function is a function of $\alpha$. The $\alpha$-function is a function of $\alpha$.
Then, the server computes its estimate θ of φ as

$\theta=\Sigma_{\theta}\sum_{i}\Sigma_{i}^{-1}\psi_{i}+p_{\theta}$, $i\in[N]$ (47)(47) (48)(48) i ψi + pθ, (48) where pθ ∼ N (0, Σθ) and

Σθ=(i[N]Σi1)1=(i[l]σpi2i[l]Nik[l]iσpk2)Id.\Sigma_{\boldsymbol{\theta}}=\Big(\sum_{i\in[N]}\Sigma_{i}^{-1}\Big)^{-1}=\Big(\frac{\prod_{i\in[l]}\sigma_{p_{i}}^{2}}{\sum_{i\in[l]}N_{i}\prod_{k\in[l]\setminus i}\sigma_{p_{k}}^{2}}\Big)I_{d}.

In FedHDP, the server applies a weighted averaging to the clients' updates ψi**'s of this form**

$\theta=\sum_{i\in[N]}w_{i}\psi_{i}$.
(49)(49) (50)(50) (51)(51) To achieve the optimal covariance of the estimation at the server, the resulting weights used for client cj in Ci at the server as follows

$w_{j}=\dfrac{r_{i}}{\sum_{k=[l]}N_{k}r_{k}}$. ers $r_{i}$'s to tune, Similar to the approach followed for the 1 In this case, we have l ratio hyperparameters ri**'s to tune. Similar to the approach followed for the 2-level** privacy heterogeneity, we can find the optimal values of the ratio hyperparameters that achieve the optimal covariance of the estimation at the server. The optimal values of r ∗ i are

$r_{i}^{}=\frac{\sigma_{p_{1}}^{2}}{\sigma_{p_{i}}^{2}}$. ate $\theta_{j}^{}$ of $\phi_{j}$ for the local objective at client $c_{j}$. We can (52)(52) Next, we compute the Bayes optimal local estimate θ j client cj , which can be in any private set of clients Ci**, and compute their minimizer of (Local Bayes obj.). In** this case, the client has access to all other clients' private estimates {ψi: i ∈ [N] \ j} and has their own local non-private estimate φˆj . Similar to the approach before, we utilize Lemma 1 to find the optimal estimate θ ∗ j . Given the updates by all other clients {ψi: i ∈ [N] \ j}, the client can compute the estimate φ \j of the value of φ as

ϕ\j=Σϕ\j(i=[N]\jΣi1ψi)+pϕ\j,(1)\phi^{\backslash j}=\Sigma_{\phi^{\backslash j}}\Big(\sum_{i=[N]\backslash j}\Sigma_{i}^{-1}\psi_{i}\Big)+p_{\phi^{\backslash j}},\tag{1} (53)(53) (54)(54) (55) $$\begin{array}{l}\left(56\right)\end{array}$$ . where pφ\j ∼ N (0, Σφ\j ) and

\Sigma_{\Phi^{\setminus j}}=\Big{(}\sum_{i=[N]\setminus j}\Sigma_{i}^{-1}\Big{)}^{-1}, $$=\Big{(}M_{1}\frac{1}{\sigma_{p_{1}}^{2}}I_{d}+M_{2}\frac{1}{\sigma_{p_{2}}^{2}}I_{d}+...+M_{l}\frac{1}{\sigma_{p_{l}}^{2}}I_{d}\Big{)}^{-1},$$ $$=\frac{\prod_{i\in[l]}\sigma_{p_{i}}^{2}}{\sum_{i\in[l]}M_{i}\prod_{k\in[l]\setminus i}\sigma_{p_{k}}^{2}}I_{d},$$

where

Mi={NiifcjCiNi1,ifcjCi.(1)M_{i}=\begin{cases}N_{i}&\text{if}c_{j}\notin C_{i}\\ N_{i}-1,&\text{if}c_{j}\in C_{i}\end{cases}.\tag{1}

Therefore, we have the following

(57)\left(57\right) ϕ\j=1i{l}Mik{l}\iσpk2(i={l}k1{l}\iσpk12ck2Cik2jψi)+pϕ\j.\phi^{\backslash j}=\frac{1}{\sum_{i\in\{l\}}M_{i}\prod_{k\in\{l\}\backslash i}\sigma_{p_{k}}^{2}}\left(\sum_{i=\{l\}}\prod_{k_{1}\in\{l\}\backslash i}\sigma_{p_{k_{1}}}^{2}\sum_{\begin{array}{l}{c_{k_{2}\in C_{i}}}\\ {k_{2}\neq j}\end{array}}\psi_{i}\right)+p_{\phi^{\backslash j}}. k26=j Then, the client uses Σφ\j and φˆj to estimate θ ∗ j as θ ∗ j = Σθ ∗ j ( Qi∈[l] σ 2 P pi i∈[l] Mi Qk∈[l]\i σ 2 pk + τ 2)Id −1φ \j + (σ 2 c − τ 2) −1φˆj + pθ ∗ j , (59) where pθ ∗ j ∼ N (0, Σθ ∗ j ) and Σθ ∗ j = Qi∈[l] σ 2 pi + τ 2 Pi∈[l] Mi Qk∈[l]\i σ 2 P pk i∈[l] Mi Qk∈[l]\i σ 2 pk −1+ (σ 2 c − τ 2) −1 −1Id. (60) We expand (59) as θ ∗ j = Σθ ∗ j (σ 2 c − τ 2) −1φˆj + Σθ ∗ j 1 Qi∈[l] σ 2 pi + τ 2 Pi∈[l] Mi Qk∈[l]\i σ 2 pk X i=[l] k1∈[l]\i σ 2 pk1 X ck2∈Ci k26=j ψi + pθ ∗ j . (61) Y This is the Bayes optimal solution to the local Bayes objective optimization problem for client cj in (Local Bayes obj.). Next, we know in FedHDP the clients do not have access to individual client updates, but (58)(58)

rather the global model. As a result, the clients solve the FedHDP local objective in (Local FedHDP obj.). Given a value of λj and the global estimate θˆ∗, the minimizer θˆj (λj ) of (Local FedHDP obj.) is

θˆj (λj ) = 1 1 + λj φˆj + λjθˆ∗(62) =1 1 + λj Pi=[l] Niri + λj ij Pi=[l] Niriφˆj + 1 Pi=[l] Niri X i∈[l] λj ri X ck∈Ci k6=j ψk , (63) where ij = ri for client cj ∈ Ci. Note that we have l + 1 terms in both (63) and (61), which we can use to compute the value of λ ∗ j as done in a previous part, and results similar to the ones in the prior parts of this appendix then follow from such findings. Note that computing the expressions of the optimal λ ∗ j in closed form for each one of the l sets of private clients in the considered setup is involved; hence, our brief presentation of the sketch of the solution.

B Federated Point Estimation

In this appendix, we provide a brief discussion of a special case of the considered federated linear regression and make use of the results stated in Appendix A. In the federated point estimation problem, Fj = [1, 1, ..., 1]T of length ns is available at client cj . For reference, the federated point estimation algorithm is described in Algorithm 3. The results in the previous appendix can be used for federated point estimation by setting d = 1**. In the remainder of this appendix, we assume the opt-out of privacy scenario where clients choose to** be either private or non-private in the setup. First, we state the global estimate optimality in Lemma 6 and show its proof. Lemma 6 (Global estimate optimality). FedHDP from the server's point of view, with ratio r ∗chosen below, is Bayes optimal (i.e., θ converges to θ ∗) in the considered federated point estimation problem given by r ∗ =σ 2 c by $\tau=\frac{1}{\sigma{\rm r}^{2}+N_{\rm p}\gamma^{2}}$. Furthermore, the resulting atomic is:_ $$\sigma_{\rm s,opt}^{2}=\frac{1}{N}\left[\frac{\sigma_{\rm r}^{2}(\sigma_{\rm r}^{2}+N_{\rm p}\gamma^{2})}{\sigma_{\rm r}^{2}+N_{\rm p}N_{\rm p}\gamma^{2}}\right].$$ Proof.: Follows directly by setting $d=1$, $\gamma_{1}^{2}=0$, $\gamma_{2}^{2}=\gamma^{2}$, $N_{1}=N_{\rm pp}$, and $N_{2}=N_{\rm p}$ in Lemma 2.
c+Npγ2 . Furthermore, the resulting variance is: Next, we show some simulation results for the server noise σ 2 s against the ratio r for different values of σ 2 c and γ 2in the federated point estimation setup with N clients and ρnp fraction of non-private clients. The results are shown in Figure 3, and we can see that the optimal ratio r ∗in Lemma 6 minimizes the server variance as expected. Additionally, we show the resulting server noise σ 2 s versus the fraction of non-private clients ρnp

(64)(64) $\square$ Algorithm 3 FedHDP: Federated Learning with Heterogeneous Differential Privacy (Point Estimation)

Inputs: $\theta^0$, $\alpha^2$, $\tau^2$, $\gamma^2$, $\eta=1$, ${\lambda_j}{j\in[N]}$, $r$, $\rho{\texttt{np}}$, $N$.* Outputs: $\theta^*$, ${\theta^*j}{j\in[N]}$ At server: for client $c_j$ in $\mathcal{C}^t$ in parallel do $\psi_j$ $\leftarrow$ ClientUpdate($\theta^t,c_j$) end for ψjCuentivpuate(θ,cj) end for θ1ρnpN+r(1ρnp)NciCnpψi+rρnpN+r(1ρnp)NciCpψi\begin{array}{c}{{\psi_{j}\leftarrow\,C u e n t i v p u a t e(\theta^{*},c_{j})}}\\ {{\mathrm{{\bf~end~for~}}}}\\ {{\theta^{*}\leftarrow\,\frac{1}{\rho_{\mathrm{{np}}}N+r(1-\rho_{\mathrm{{np}}})N}\sum_{c_{i}\in C_{\mathrm{{np}}}}\psi_{i}}}\\ {{+\frac{r}{\rho_{\mathrm{{np}}}N+r(1-\rho_{\mathrm{{np}}})N}\sum_{c_{i}\in C_{\mathrm{{p}}}}\psi_{i}}}\end{array}

23_image_0.png

At client cj : ClientUpdate(θ 0, cj ): θ ← θ 0 $\theta_{j}\leftarrow\theta^{0}$ $\epsilon\gets\theta-\eta(\theta-\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}x_{j,i})$ $\epsilon_{j}^{}\leftarrow\theta_{j}-\eta_{j}\big{(}(\theta_{j}-\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}x_{j,i})+\lambda_{j}(\theta_{j}-\theta)\big{)}$ $\epsilon\gets\theta+1_{c_{j}\in\mathcal{C}{\mathrm{p}}}\mathcal{N}(0,(1-\rho{\mathrm{np}})N\gamma^{2})$ return $\psi$ to server θ return ψ to server Figure 3: Server noise variance σ 2 s vs the ratio hyperparameter r. (left) Trade-off for different* γ 2**, (right) trade-off** for different σ 2 c = α 2 + τ 2. plotted for two scenarios. The first is the baseline FedAvg**, and the second is the optimal** FedHDP**. We** can see in Figure 4 that FedHDP provides better noise variance at the server compared to FedAvg**, and** the gain can be significant for some values of ρnp**, even if a small percentage of clients opt out.** Furthermore, we consider the performance gap between the optimal solution and the other baselines in this setup. The following lemma states such results.

Lemma 7 (Global model performance gap). The server-side estimation mean-square error gap between FedHDP (optimal) and the baselines, HDP-FedAvg and DP-FedAvg, is as follows:

thimally and the usefulness, HDP-FedAVG and DP-FedAVG, is as follows: $$\sigma_{s,hdp-fedavg}^2-\sigma_{s,opt}^2=\frac{\rho_{np}(1-\rho_{np})^{3}\gamma^4N}{\alpha^2+\tau^2+\rho_{np}(1-\rho_{np})\gamma^2N}\geq0,$$ $$\sigma_{s,dp-fedavg}^2-\sigma_{s,opt}^2=\frac{\rho_{np}(1-\rho_{np})(\alpha^2+\tau^2)\gamma^2+\rho_{np}(1-\rho_{np})^2\gamma^4N}{\alpha^2+\tau^2+\rho_{np}(1-\rho_{np})\gamma^2N}\geq0,$$ $$\sigma_{s,dp-fedavg}^2-\sigma_{s,hdp-fedavg}^2=\frac{\rho_{np}(1-\rho_{np})(\alpha^2+\tau^2)\gamma^2+\rho_{np}^2(1-\rho_{np})^2\gamma^4N}{\alpha^2+\tau^2+\rho_{np}(1-\rho_{np})\gamma^2N}\geq0.$$ ≥ 0, (65) 2N≥ 0, (66) 2N≥ 0. (67) Notice that if ρnp → 0 (homogeneous private clients) or ρnp → 1 (no private clients), the gap vanishes as expected. In other words, the benefit of FedHDP on the server side is only applicable in the case of heterogeneous differential privacy. It can be observed that if the number of clients is large (N → ∞**), the gap** approaches (1−ρnp) 2γ 2 and (1−ρnp)γ 2in (65) and (66), respectively. Notice that having this constant gap is in contrast to σ 2 s,opt vanishing as N → ∞**. This is expected since the noise in the observation itself decreases** as the number of clients increases and, hence, having the non-private clients alone would be sufficient to

(65)(65) (66)(66) (67)(67)

24_image_1.png

Figure 4: Server noise variance σ 2 s vs non-private client fraction ρnp for the baseline FedAvg aggregator and optimal FedHDP aggregator. Figure 5: The effect of opting out on the personalized local model estimate as a function of λ when employing (left) Ditto with vanilla FedAvg and (right)

24_image_0.png

FedHDP. (perfectly) learn the optimal global model. Finally, if the noise γ 2 added for the privacy is large (γ 2 → ∞), which corresponds to a small ǫ, then the gap with optimality grows unbounded. In contrast, in this case, σ 2 s,opt remains bounded, again because the optimal aggregation strategy would be to discard the private clients and to only aggregate the non-private updates.

Follows the local estimate optimality lemma and its proof.

Theorem 8 (Local estimate optimality). Assuming using FedHDP with ratio r ∗in Lemma 6, and using the values λ ∗np for non-private clients and λ ∗pfor private clients stated below, FedHDP is Bayes optimal (i.e., θj converges to θ ∗ j for each client j ∈ [N])

where Υ2 = $re;\Upsilon^2=\frac{\tau^2}{\alpha^2}$.

α2 and Γ 2 = Npγ α2 . λnp=1Γ2,(68)\lambda_{np}^{*}=\frac{1}{\Gamma^{2}},\tag{68} $$\lambda_{p}^{*}=\frac{N+N\Upsilon^{2}+(N-N_{p})\Gamma^{2}}{N\Upsilon^{2}(\Upsilon^{2}+1)+(N-N_{p}+1)\Upsilon^{2}\Gamma^{2}+\Gamma^{2}}.\tag{69}$$ $\square$ $\mathbf{a}$ 3. $\hat{\underline{\phantom{\hat{a}}}}$ . Proof. Follows directly by setting d = 1, Γ 2 1 = 0, Γ 2 2 = Γ2, N1 = Nnp, and N2 = Np in Lemma 3. We show an additional simulation result for the federated point estimation problem. We compare the effect of opting out by a client in the federated point estimation where we observe some similar effects as in the federated linear regression, see Figure 5.

It is worth mentioning that we notice that the values of λ ∗ are different for private and non-private clients. We recall that the derived expression for the personalization parameters for all clients considers the presence of data heterogeneity as well as privacy heterogeneity. In Table 4 we provide a few important special cases for both λ ∗ p and λ ∗ np for the considered federated point estimation problem.

No privacy (ρnp = 1) Homogeneous privacy (ρnp = 0)
Homogeneous data (τ 2 = 0) FedAvg λ ∗ np = ∞ DP-FedAvg+Ditto λ ∗ p = 1 γ2
2 > 0) FedAvg+Ditto λ 2 ∗ p = Nα2
Heterogeneous data (τ ∗ np = α τ2 DP-FedAvg+Ditto λ Nτ2+(1−ρnp)Nγ2

Table 4: Special cases of FedHDP in the federated point estimation. Homogeneous data & No privacy: In this case, the optimal FedHDP algorithm recovers FedAvg (McMahan et al., 2017) with no personalization (λ ∗ np = ∞). This is not surprising as this is exactly the setup for which the vanilla federated averaging was originally introduced.

Heterogeneous data & No privacy: In this case, the optimal FedHDP algorithm recovers Ditto (Li et al., 2021). Again, this is not surprising as this is exactly the setup for which Ditto has been shown to be Bayes optimal.

Homogeneous data & Homogeneous privacy: In this case, the optimal FedHDP algorithm recovers DP-FedAvg (Andrew et al., 2019), however, with additional personalization using Ditto (Li et al., 2021). At first, this might be surprising as there is no data heterogeneity in this case, which is where Ditto would be needed. However, a closer look at this case reveals that the noise added due to differential privacy creates artificial data heterogeneity that needs to be dealt with using Ditto. In fact, as ǫ → 0**, or equivalently, as** γ 2 → ∞, for the added noise for privacy, we observe that λ ∗ p → 0 implying that the local learning becomes optimal. This is expected since, in this case, the data from other (private) clients is, roughly speaking, so noisy that it is best to rely solely on local data.

Heterogeneous data & Homogeneous privacy: In this case, the optimal FedHDP algorithm again recovers DP-FedAvg+Ditto**. Similar to the homogeneous data case, with** γ 2 → ∞, we observe that λ ∗ p → 0, i.e., the local learning becomes optimal.

C Experiments: Extended Experimental Results

In this section, we provide an extended version of the results of experiments conducted on the considered datasets. We describe the datasets along with the associated tasks in Tables 5, the models used in Table 6, and the hyperparameters used in Table 7.

Table 5: Experiments setup: Number of clients is N**, approximate fraction of clients per round is** q.

Dataset N q Task Model
non-IID MNIST 2,000 5% 10-label classification FC NN
FMNIST 3,383 3% 10-label classification FC NN
FEMNIST 3,400 3% 62-label classification CNN
non-IID MNIST, and FMNIST Datasets Layer Size Activation Input image 28 × 28 - Flatten 784 -
Fully connected 50 ReLU
Fully connected 10 Softmax
FEMNIST Dataset
Input image 28 × 28 -
Convolutional (2D) 28 × 28 × 16 ReLU
Max pooling (2D) 14 × 14 × 16 -
Convolutional (2D) 14 × 14 × 32 ReLU
Max pooling (2D) 7 × 7 × 32 -
Dropout (25%) - -
Flatten 1568 -
Fully connected 128 ReLU
Dropout (50%) - -
Fully connected 62 Softmax

Table 6: Models used for experiments. For each experiment, we presented the results of each dataset for the two baselines, i.e., Non-Private, DPFedAvg**, and** HDP-FedAvg**, as well as the proposed** FedHDP algorithm along with the best parameters that produce the best results in the main body of the paper. In this appendix, we show the extended version of the experiments. For all experiments, training is stopped after 500 communication rounds for each experiment. The server's test dataset is the test MNIST dataset in the non-IID MNIST experiments or the collection of the test datasets of all clients in the FMNIST and FEMNIST datasets. Note that the experiment of FedHDP with r = 0 denotes the case where the server only communicates with nonprivate clients during training and ignores all private clients. For the baseline algorithms, we note that the

Hyperparameter non-IID MNIST FMNIST FEMNIST
Batch size 20
Epochs 25 50 25
η, ηp 0.5 0.01 0.02
η, ηp decaying factor 0.9 every 50 rounds N/A
S 0 0.5 0.5 2.0
ηb 0.2
κ 0.5
Effective noise multiplier 1.5 4.0 1.0

Table 7: Hyperparameters used for each experiment. personalization scheme that is used on clients is Ditto. We vary the ratio hyperparameter r as well as the regularization hyperparameters λp and λnp at the clients and observe the results. In the following tables, we list the entirety of the results of all experiments conducted on each dataset for various values of the hyperparameters. For readability, we highlight the rows that contain the best values of performance metrics in the proposed algorithm FedHDP.

λp = λnp = 0.005
Setup Global model Personalized local models
Algorithm hyperparam. Accg% Accg,p% Accg,np% △g% Accl,p% Accl,np% △l%
Non-Private+Ditto - 93.8 - 93.75(0.13) - - 99.98(0.001) -
DP-FedAvg+Ditto - 88.75 88.64(0.39) - - 99.97(0.002) - -
HDP-FedAvg+Ditto - 87.71 87.55(0.42) 88.35(0.34) 0.8 99.97(0.001) 99.93(0.001) −0.04
FedHDP r= 0 90.7 90.64(0.68) 91.72(0.5) 1.08 90.64(0.68) 99.94(0.001) 9.2964
FedHDP r= 0.001 91.74 91.65(0.39) 92.61(0.27) 0.94 99.94(0.001) 99.95(0.001) 0.01
FedHDP r= 0.01 92.48 92.43(0.30) 93.30(0.21) 0.88 99.94(0.001) 99.94(0.001) 0
FedHDP r= 0.025 92.36 92.28(0.27) 92.96(0.19) 0.68 99.95(0.001) 99.91(0.001) −0.04
FedHDP r= 0.1 90.7 90.59(0.34) 91.31(0.26) 0.73 99.97(0.001) 99.95(0.001) −0.02
λp = λnp = 0.05
Non-Private+Ditto - 93.81 - 93.76(0.13) - - 99.93(0.001) -
DP-FedAvg+Ditto - 87.98 87.97(0.39) - - 99.84(0.002) - -
HDP-FedAvg+Ditto - 89.64 89.50(0.32) 90.55(0.24) 1.05 99.83(0.002) 99.84(0.002) 0.01
FedHDP r= 0 91 91.12(0.48) 92.08(0.41) 0.96 91.12(0.48) 99.76(0.002) 8.65
FedHDP r= 0.001 92.15 92.10(0.33) 92.88(0.25) 0.78 99.81(0.002) 99.78(0.002) −0.03
FedHDP r= 0.01 92.45 92.39(0.33) 93.26(0.25) 0.87 99.81(0.002) 99.78(0.003) −0.03
FedHDP r= 0.025 92.14 92.09(0.35) 93.01(0.26) 0.92 99.85(0.002) 99.8(0.002) −0.05
FedHDP r= 0.1 90.7 90.82(0.29) 91.55(0.21) 0.73 99.87(0.002) 99.80(0.003) −0.06
λp = λnp = 0.25
Non-Private+Ditto - 93.79 - 93.75(0.13) - - 99.10(0.007) -
DP-FedAvg+Ditto - 88.26 88.23(0.41) - - 98.23(0.017) - -
HDP-FedAvg+Ditto - 88.27 88.09(0.49) 89.08(0.4) 0.99 98.14(0.017) 98.13(0.017) −0.01
FedHDP r= 0 90.42 90.41(0.69) 91.41(0.58) 1.0 90.41(0.69) 98.06(0.023) 7.65
FedHDP r= 0.001 92.18 92.12(0.34) 92.85(0.26) 0.73 98.47(0.015) 98.08(0.024) −0.39
FedHDP r= 0.01 92.41 92.35(0.29) 93.19(0.21) 0.83 98.62(0.015) 98.33(0.019) −0.28
FedHDP r= 0.025 92.5 92.42(0.28) 93.19(0.19) 0.77 98.71(0.011) 98.41(0.017) −0.3
FedHDP r= 0.1 91.17 91.10(0.32) 91.94(0.24) 0.84 98.71(0.012) 98.60(0.013) −0.11

Table 8: Experiment results on non-IID MNIST, (ǫ, δ) = (3.6, 10−4**). The variance of the performance metric across** clients is between parenthesis.

Table 9: Experiment results on Skewed non-IID MNIST, (ǫ, δ) = (3.6, 10−4**). The variance of the performance metric** across clients is between parenthesis.

λp = λnp = 0.005
Setup Global model Personalized local models
Algorithm hyperparam. Accg% Accg,p% Accg,np% △g% Accl,p% Accl,np% △l%
Non-Private+Ditto - 93.67 - 93.62(0.15) - - 99.98(0.001) -
DP-FedAvg+Ditto - 88.93 88.87(0.35) - - 99.98(0.001) - -
HDP-FedAvg+Ditto - 88.25 88.05(0.39) 89.98(0.05) 1.93 99.97(0.001) 99.85(0.001) −0.11
FedHDP r= 0 10.27 7.1(6.5) 100(0) 92.9 7.1(6.5) 100(0) 92.9
FedHDP r= 0.025 87.11 86.61(1.10) 98.16(0.01) 11.55 99.99(0.001) 99.91(0.001) −0.08
FedHDP r= 0.1 90.36 89.96(0.37) 97.45(0.01) 7.49 99.97(0.001) 99.76(0.003) −0.21
FedHDP r= 0.5 88.44 88.14(0.36) 93.36(0.03) 5.2 99.98(0.001) 99.93(0.001) −0.05
FedHDP r= 0.75 89.14 88.92(0.37) 92.43(0.06) 3.5 99.97(0.001) 99.93(0.001) −0.04
FedHDP r= 0.9 87.96 87.69(0.56) 92.97(0.04) 5.28 99.98(0.001) 99.96(0.001) −0.02
λp = λnp = 0.05
Non-Private+Ditto - 93.67 - 93.62(0.15) - - 99.93(0.001) -
DP-FedAvg+Ditto - 88.78 88.70(0.53) - - 99.83(0.002) - -
HDP-FedAvg+Ditto - 88.33 88.11(0.46) 91.67(0.04) 3.56 99.87(0.001) 99.61(0.001) −0.26
FedHDP r= 0 10.28 7.1(6.5) 100(0) 92.9 7.1(6.5) 100(0) 92.9
FedHDP r= 0.025 87.92 87.45(0.99) 98.1(0.01) 10.65 99.95(0.001) 99.75(0.003) −0.2
FedHDP r= 0.1 88.98 88.64(0.52) 96.18(0.02) 7.54 99.9(0.001) 99.47(0.005) −0.43
FedHDP r= 0.5 88.22 87.9(0.38) 93.43(0.03) 5.33 99.85(0.002) 99.42(0.008) −0.42
FedHDP r= 0.75 88.56 88.37(0.35) 91.33(0.04) 2.94 99.84(0.002) 99.52(0.004) −0.33
FedHDP r= 0.9 89.19 88.97(0.4) 92.24(0.03) 3.27 99.88(0.001) 99.58(0.005) −0.3
λp = λnp = 0.25
Non-Private+Ditto - 93.67 - 93.62(0.15) - - 99.09(0.007) -
DP-FedAvg+Ditto - 87.78 87.71(0.53) - - 98.15(0.02) - -
HDP-FedAvg+Ditto - 89.4 89.22(0.26) 92.01(0.03) 2.79 98.27(0.02) 97.62(0.03) −0.64
FedHDP r= 0 10.27 7.1(6.5) 100(0) 92.9 7.1(6.5) 100(0) 92.9
FedHDP r= 0.025 87.51 87.01(0.9) 98.49(0.01) 11.48 98.69(0.01) 99.09(0.006) −0.4
FedHDP r= 0.1 89.05 88.66(0.54) 96.8(0.02) 8.14 98.69(0.012) 98.55(0.008) −0.13
FedHDP r= 0.5 88.18 88.11(0.55) 93.43(0.03) 5.32 98.32(0.014) 97.80(0.01) −0.52
FedHDP r= 0.75 87.96 87.8(0.33) 92.58(0.03) 4.78 98.25(0.017) 97.5(0.02) −0.75
FedHDP r= 0.9 88.26 87.93(0.41) 91.67(0.03) 3.74 98.25(0.02) 97.68(0.02) −0.57

Table 10: Experiment results on FMNIST, (ǫ, δ) = (0.6, 10−4**). The variance of the performance metric across clients** is between parenthesis.

λp = λnp = 0.005
Setup Global model Personalized local models
Algorithm hyperparam. Accg% Accg,p% Accg,np% △g% Accl,p% Accl,np% △l%
Non-Private+Ditto - 89.65 - 89.35(1.68) - - 93.95(0.67) -
DP-FedAvg+Ditto - 71.76 71.42(2.79) - - 91.01(0.94) - -
HDP-FedAvg+Ditto - 75.87 75.77(2.84) 74.41(2.8) −1.36 90.45(1.02) 92.32(0.8) 1.87
FedHDP r= 0 81.78 80.73(2.45) 89.35(1.5) 8.62 80.73(2.4) 95.80(0.39) 15.06
FedHDP r= 0.01 85.38 84.61(2.05) 89.3(1.26) 4.69 93.26(0.74) 95.94(0.41) 2.67
FedHDP r= 0.025 85.7 84.93(1.97) 89.58(1.29) 4.65 93.04(0.76) 95.22(0.54) 2.18
FedHDP r= 0.05 85.21 84.68(1.99) 86.22(1.76) 1.54 92.87(0.74) 95.40(0.51) 2.53
FedHDP r= 0.1 81.76 81.45(2.45) 81.96(1.84) 0.51 92.47(0.78) 94.83(0.52) 2.36
FedHDP r= 0.5 78.19 78.02(2.59) 76.48(3.02) −1.53 91.08(0.94) 92.59(0.83) 1.51
λp = λnp = 0.05
Non-Private+Ditto - 89.65 - 89.35(1.68) - - 94.53(0.59) -
DP-FedAvg+Ditto - 77.61 77.62(2.55) - - 90.04(1.04) - -
HDP-FedAvg+Ditto - 72.42 77.14(2.72) 76.28(2.76) −0.86 89.12(1.15) 90.92(0.91) 1.8
FedHDP r= 0 82.61 80.72(2.45) 89.45(1.51) 8.73 80.72(2.45) 95.57(0.38) 14.84
FedHDP r= 0.01 86.88 85.36(1.89) 90.02(1.28) 4.66 93.76(0.68) 95.78(0.36) 2.02
FedHDP r= 0.025 86.03 84.22(1.98) 88.40(1.68) 4.18 93.53(0.68) 95.11(0.54) 0.52
FedHDP r= 0.05 84.65 82.68(2.16) 86.68(1.67) 4.00 92.92(0.76) 95.02(0.55) 2.1
FedHDP r= 0.1 82.89 81.72(2.28) 83.68(2.18) 1.96 92.38(0.83) 94.25(0.61) 1.87
FedHDP r= 0.5 76.59 78.05(2.60) 78.04(2.66) −0.0041 89.63(1.10) 91.67(0.84) 2.04
λp = λnp = 0.25
Non-Private+Ditto - 89.66 - 89.36(1.69) - - 94.32(0.64) -
DP-FedAvg+Ditto - 70.1 70.40(2.91) - - 88.38(1.25) - -
HDP-FedAvg+Ditto - 72.67 72.05(2.83) 74.31(2.42) 2.26 87.54(1.34) 87.39(1.23) −0.15
FedHDP r= 0 81.93 80.85(2.39) 89.71(1.39) 8.86 80.85(2.39) 94.56(0.50) 13.71
FedHDP r= 0.01 85.31 84.55(1.98) 89.27(1.54) 4.72 92.76(0.78) 94.77(0.5) 2.01
FedHDP r= 0.025 86.17 85.52(1.92) 89.25(1.31) 3.73 92.46(0.85) 94.35(0.57) 1.89
FedHDP r= 0.05 83.97 83.5(2.19) 85.4(1.88) 1.9 91.69(0.91) 93.9(0.53) 2.21
FedHDP r= 0.1 83.78 83.22(2.11) 84.94(2.12) 1.72 90.9(1.02) 92.62(0.73) 1.72
FedHDP r= 0.5 74.64 74.63(3.23) 72.54(2.93) −2.09 88.12(1.34) 88.69(1.28) 0.57

Table 11: Experiment results on FEMNIST, (ǫ, δ) = (4.1, 10−4**). The variance of the performance metric across** clients is between parenthesis.

λp = λnp = 0.005
Setup Global model Personalized local models
Algorithm hyperparam. Accg% Accg,p% Accg,np% △g% Accl,p% Accl,np% △l%
Non-Private+Ditto - 81.56 - 81.72(1.37) - - 73.86(1.5) -
DP-FedAvg+Ditto - 75.39 76.1(1.73) - - 71.3(1.47) - -
HDP-FedAvg+Ditto - 74.96 75.6(1.69) 77.84(1.47) 2.24 71.44(1.5) 70.03(1.18) −1.4
FedHDP r= 0 72.77 75.34(2.6) 85.7(1.14) 10.36 75.34(2.6) 72.2(1.22) −3.14
FedHDP r= 0.001 73.66 76.22(2.44) 86.04(1.2) 9.82 72.78(1.51) 71.74(1.34) −1.04
FedHDP r= 0.01 74.75 77.16(2.26) 86.4(1.07) 9.24 73.24(1.52) 71.49(1.29) −1.76
FedHDP r= 0.025 75.37 77.66(2.06) 86.56(1) 8.9 73.11(1.55) 71.62(1.18) −1.49
FedHDP r= 0.1 76.47 77.99(1.68) 84.36(1.31) 6.37 72.3(1.5) 69.93(1.15) −2.37
FedHDP r= 0.5 76.11 76.69(1.62) 80.82(1.3) 4.13 71.32(1.56) 70.11(1.26) −1.2
λp = λnp = 0.05
Non-Private+Ditto - 81.95 - 82.09(1.38) - - 82.89(1.13) -
DP-FedAvg+Ditto - 75.42 75.86(1.82) - - 74.69(1.29) - -
HDP-FedAvg+Ditto - 75.12 75.87(1.65) 78.59(1.58) 2.72 74.67(1.34) 75.95(1.12) 1.28
FedHDP r= 0 72.65 75.9(2.5) 86.19(1.27) 10.29 80.59(1.13) 81.97(0.88) 1.38
FedHDP r= 0.001 73.31 75.9(2.5) 86.19(1.27) 10.29 80.59(1.13) 81.97(0.88) 1.38
FedHDP r= 0.01 74.68 77.16(2.27) 86.25(1.05) 9.09 80.74(1.06) 82.13(0.98) 1.38
FedHDP r= 0.025 75.22 77.43(2.09) 85.95(1.12) 8.52 80(1.16) 80.99(0.92) 1.01
FedHDP r= 0.1 76.52 77.91(1.67) 83.9(1.27) 5.99 77.9(1.22) 79.15(0.99) 1.25
FedHDP r= 0.5 76.15 76.55(1.68) 80.04(1.62) 3.49 75.43(1.25) 77.13(1.17) 1.7
λp = λnp = 0.25
Non-Private+Ditto - 81.66 - 81.79(1.38) - - 84.46(0.89) -
DP-FedAvg+Ditto - 75.99 76.56(1.6) - - 73.06(1.46) - -
HDP-FedAvg+Ditto - 75.31 75.67(1.71) 78.88(1.59) 3.21 72.58(1.45) 74.98(1.43) 2.4
FedHDP r= 0 72.89 75.5(2.56) 86.09(1.28) 10.6 75.5(2.56) 84.77(0.8) 9.28
FedHDP r= 0.001 73.41 76.01(2.51) 85.99(1.13) 9.97 80.98(1.06) 84.71(0.83) 3.73
FedHDP r= 0.01 74.86 77.31(2.18) 86.73(0.98) 9.42 81.19(1.02) 84.68(0.78) 3.49
FedHDP r= 0.025 75.41 77.68(2.1) 86.23(1.03) 8.55 80.01(1.1) 83.2(0.8) 3.19
FedHDP r= 0.1 76.62 77.82(1.68) 83.35(1.27) 5.52 76.99(1.24) 78.96(1.04) 1.97
FedHDP r= 0.5 75.89 76.71(1.65) 80.01(1.4) 3.3 73.48(1.37) 75.49(1.57) 2.01

D Broader Impact & Limitations

In this paper, we investigated a heterogeneous privacy setup where different clients may have different levels of privacy protection guarantees, and in particular explored an extreme setup where some clients may opt out of privacy guarantees, to gain improvements in performance. However, the choice to loosen privacy requirements is heavily dependent on the client, the setting, and their valuation of their data. Moreover, since the algorithm orients the model towards the less private clients, it may introduce unfairness for the more private clients. Additionally, the server may have its own requirements during training, for example, a lower limit to the fraction of less private clients or vice versa, where the privacy choices may be overridden by the server. Overall, we believe that the interplay between all of these different societal aspects need to be carefully studied before the proposed mechanisms in this paper can be practically used.

We acknowledge that some of the assumptions in the theoretical study of the federated linear regression and federated point estimation setups are unrealistic, but similar assumptions have been made in prior work of (Li et al., 2021). For example, we neglected the effect of clipping, assumed that all clients have the same number of samples, and assumed the covariance matrix is diagonal. On the other hand, for more complex models such as the ones used in the experiments, finding the best values of weights to be used in the aggregator at the server as well as the personalization parameters for each client is not straightforward, as some of the theoretical constructs in this paper are not estimable from data. Nevertheless, we believe that the theoretical studies in this paper can be used to build intuition about heterogeneous privacy setups and can be used as guiding principles for designing new algorithms.

Finally, the FedHDP algorithm comes with two additional hyperparameters compared with DP-FedAvg: r (the weight ratio of private and non-private clients at the server), and λj (the degree of personalization at client cj ). In this paper, we chose r based on grid search, however that will naturally incur a loss of privacy that we did not carefully study. In fact, we assumed there is no privacy loss due to the tuning of such hyperparameter in our paper. Having said that, recent work by Papernot & Steinke (2022) suggests that the privacy loss due to such hyperparameter tuning based on private training runs might be manageable but the exact interplay remains to be studied in future work.