
Striking A Balance: An Optimal Mechanism Design For Heterogeneous Differentially Private Data Acquisition For Logistic Regression

Anonymous authors Paper under double-blind review

Abstract

We investigate the problem of solving ML tasks from data collected from privacy-sensitive sellers. Since the data is private, sellers must be incentivized through payments to provide their data. Thus, the goal is to design a mechanism that optimizes a weighted combination of test loss, seller privacy, and payment, i.e., strikes a balance between getting a good privacy-preserving ML model and limiting payments to the sellers. To do this, we first solve logistic regression with known heterogeneous differential privacy guarantees. We then consider the main problem where the differential privacy requirements are decided by the buyer to balance the tradeoff between test loss and payments. To solve this problem, we use our earlier result on logistic regression with known privacy guarantees along with standard mechanism design theory to formulate an optimization problem which is nonconvex. We establish conditions under which the problem can be convexified using a change of variables technique. This insight is then harnessed to develop an algorithm that provides optimal solution. Additionally, we demonstrate the resilience of our mechanism to scenarios in which data points and privacy sensitivities are correlated. Finally, we demonstrate the utility of our algorithm by applying it to the Wisconsin breast cancer dataset.

1 Introduction

Machine learning (ML) applications have experienced significant growth in recent years. Furthermore, substantial efforts have been dedicated to ensuring the privacy of training data, with the prevalent adoption of differential privacy. While existing literature presents various algorithms to guarantee differential privacy, a lingering question persists: determining the optimal degree of differential privacy. For instance, opting for a higher level of differential privacy may compromise the performance of the machine learning model, yet it significantly enhances privacy protection for the data provider. Therefore, along with considering the model performance through metrics such as misclassification loss, we need to also consider the privacy loss of the data providers (also referred to as sellers). In this paper, we delve into addressing this nuanced tradeoff by formulating a mechanism which balances competing objectives: achieving a high-quality ML model while minimizing the privacy loss experienced by data providers.

To further motivate our problem, in practice, while some of the data for training ML models are publicly available, sensitive information such as health or financial data may not be shared due to privacy concerns. For instance, a hospital may want to use health vitals to predict heart disease, but patients may be reluctant to share such data due to privacy concerns. Moreover, each patient would have a different cost for the same loss of privacy (which we term privacy sensitivity). Addressing this, there is a growing interest in encouraging data sharing through two strategies: (i) introducing noise to ML model's weights to enhance dataset anonymity, and (ii) providing compensation to data sellers to offset potential privacy risks Posner & Weyl (2019), Kushmaro (2021). The amount of compensation that is provided to patients would, of course, depend on their privacy sensitivity and their privacy loss, which can be measured using differential privacy. To operationalize this concept and thus accurately represent the tradeoff between model performance and privacy loss, we propose a robust mathematical framework for designing a data market. This market facilitates data acquisition from privacy-conscious sellers, utilizing (a) mechanism design to incentivize sellers to truthfully disclose their privacy sensitivity (we consider that sellers can lie about their privacy sensitivities) and (b) statistical learning theory to strike a balance between payments and model accuracy. The market involves a buyer seeking data from privacy-sensitive sellers, aiming to construct a high-quality ML model while minimizing overall payments to sellers. Conversely, individual sellers seek fair compensation for potential privacy compromises. Therefore, through the data market, we capture the tradeoff between designing a good ML model while ensuring that privacy loss to data providers is small.

We are motivated by the work in Fallah et al. (2023), which considers mean estimation of a scalar random variable using data from privacy-sensitive sellers. Our objective here is to design a mechanism for the more challenging and practically useful problem of logistic regression with vector-valued data. Now, to consider the tradeoff between payments and model accuracy, it is imperative to mathematically represent the buyer's objective, i.e., model accuracy. In contrast to Fallah et al. (2023), where the buyer's objective simplifies to the variance of a mean estimator, which they assume to be known, our scenario considers the buyer's objective to be the expected misclassification error of a logistic regression model in which the statistics of the dataset are unknown. To address this issue, we propose using Rademacher complexity to model the buyer's objective. Furthermore, most prior work on differentially private logistic regression, such as Chaudhuri et al. (2011) and Ding et al. (2017), considers homogeneous differential privacy, in which every individual has the same privacy guarantees. However, our approach acknowledges the practical reality that sellers might have different degrees of willingness (privacy sensitivity) to share their data. Therefore, we consider that each data point has to be privacy protected differently (which is done by considering heterogeneous differential privacy), leading to a different utility of each data point in contributing to the ML model.

To summarize, our goal is to design a mechanism for the buyer to optimize an objective that trades off between classification loss and payments to sellers while also taking into account the differential privacy requirements of the sellers. Our contributions are as follows:

  • In Section 2, we provide an approach to accurately model the misclassification loss for logistic regression. We further highlight that, unlike the case of the same differential privacy for all users as in Chaudhuri et al. (2011), our objective for logistic regression should include an additional regularization term for achieving optimal test loss performance.

  • Next, we build upon the above result to solve our mechanism design problem (Section 3). For this problem, we provide a payment identity that determines payments as a function of the differential privacy guarantees. This is used to design an objective for the mechanism design problem. Further, we show that the objective can be made convex through a change of variables trick for a large class of model parameters. Subsequently, we propose an algorithm to optimally solve the mechanism design problem. We note that, in practice, if we consider health data, certain segments of society may have poorer health outcomes than other segments, and it is possible that those segments of the society may be less sensitive to privacy considerations. In other words, it is possible that the data and privacy sensitivities are correlated. Our model allows for such correlations.

  • We also perform an asymptotic analysis by considering a large number of sellers to understand how much it will cost a buyer to obtain sufficient data to ensure a certain misclassification loss in the ML model that results from the mechanism. The interesting insight here is that, because the buyer can selectively choose the sellers to acquire data from, the budget required for a given bound on the misclassification loss is bounded.

  • Finally, we demonstrate the application of our proposed mechanism on the Wisconsin breast cancer data set UCI (1995). We observe fast convergence, indicating the usefulness of the change of variables.

1.1 Related Work

Differentially Private ML Algorithms: While literature on creating differentially private data markets is relatively sparse, there is a vast literature on incorporating differential privacy in statistical modeling and learning tasks. For example, Cummings et al. (2015) builds a linear estimator using data points so that there is a discrete set of privacy levels for each data point. Nissim et al. (2012), Ghosh et al. (2014), Nissim et al. (2014), and Ligett et al. (2017) use differential privacy to quantify loss that sellers incur when sharing their data. McMahan et al. (2017) demonstrates training of large recurrent language models with seller-level differential privacy guarantees. Works such as (Alaggan et al. (2017), Nissim et al. (2014), Wang et al. (2015), Liao et al. (2020), Ding et al. (2017)) also consider problems concerned with ensuring differential privacy.

Some works consider a different definition of privacy. Roth & Schoenebeck (2012), Chen et al. (2018), and Chen & Zheng (2019) use a menu of probability-price pairs to tune privacy loss and payments to sellers.

Perote-Peña & Perote (2003), Dekel et al. (2010), Meir et al. (2012), Ghosh et al. (2014), Cai et al. (2014) consider that sellers can submit false data. In the context of differentially private ML algorithms, a portion of our work can be viewed as contributing to differentially private logistic regression with heterogeneous sellers.

Mechanism Design: Mechanism design has a long history, originally in economics and more recently in algorithmic game theory. Recent work such as Abernethy et al. (2019) considers auctions in which buyers bid multiple times. Chen et al. (2018) provides a mechanism that considers minimizing the worst-case error of an unbiased estimator while ensuring that the cost of buying data from sellers is small. However, in that work, the cost is chosen from a discrete set of values. Other papers, such as Ghosh & Roth (2015), Liu & Chen (2017), and Immorlica et al. (2021), also consider mechanism design for different objectives and problems of interest. However, none of these works incorporates ML algorithms or differential privacy in their analysis.

As mentioned earlier, our work is more closely related to Fallah et al. (2023), where the authors develop a mechanism to estimate the mean of a scalar random variable by collecting data from privacy-sensitive sellers.

However, unlike Fallah et al. (2023) where they assume some statistical knowledge of the quantity to be estimated, our challenge is to design a mechanism without such knowledge. This leads to interesting problems in both deriving a bound for the misclassification loss and solving a non-convex optimization problem to implement the mechanism.

2 Logistic Regression While Ensuring Heterogeneous Differential Privacy

For designing the optimal mechanism, a major challenge is to represent the misclassification error of the logistic regression problem. We first formally define differential privacy and then introduce the problem of representing the misclassification error.

2.1 Differential Privacy

To build the necessary foundation, we define the notion of privacy loss that we adopt in this paper. We assume that sellers trust the platform to add necessary noise to the model weights to keep their data private.

This is called central differential privacy. The first definition of differential privacy was introduced by Dwork et al. (2006), which considered homogeneous differential privacy (the same privacy guarantee for all data providers).

In our paper, we consider a slight extension wherein all the users are provided different privacy guarantees, i.e., heterogeneous differential privacy. More formally, it is defined as follows.

Definition 1 [Alaggan et al. (2017)]: Let $\boldsymbol{\epsilon}=(\epsilon_{i})_{i=1}^{m}\in\mathbb{R}_{+}^{m}$. Also, let $S,S^{\prime}$ be two datasets that differ in the $i$th component, with $|S|$ being the cardinality of the set $S$. Let $\mathbb{A}$ be an algorithm that takes a dataset as input and provides a vector in $\mathbb{R}^{n}$ as output. We say that the algorithm provides $\boldsymbol{\epsilon}$-central differential privacy if, for any set $V\subset\mathbb{R}^{n}$,

$$e^{-\epsilon_{i}}\leq\frac{\mathbb{P}[\mathbb{A}(S)\in V]}{\mathbb{P}[\mathbb{A}(S^{\prime})\in V]}\leq e^{\epsilon_{i}}\quad\forall i\in\{1,2,\ldots,|S|\}.\tag{1}$$

This definition states that if the value of $\epsilon_{i}$ is small, then it is difficult to distinguish between the outputs of the algorithm when the data of seller $i$ is changed. Note that a smaller value of $\epsilon_{i}$ means a higher privacy guarantee for the seller.
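As a concrete illustration of the definition (a numerical example of our own), suppose seller $i$ is guaranteed $\epsilon_{i}=0.1$. Then, for any set $V$ of model outputs,

$$\mathbb{P}[\mathbb{A}(S)\in V]\leq e^{0.1}\,\mathbb{P}[\mathbb{A}(S^{\prime})\in V]\approx1.105\,\mathbb{P}[\mathbb{A}(S^{\prime})\in V],$$

so changing seller $i$'s data can change the probability of any outcome by at most roughly 10.5%, whereas a looser guarantee of $\epsilon_{i}=1$ would allow a factor of $e\approx2.72$.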

2.2 Representing The Misclassification Error

In this subsection, we formulate and solve the problem of representing the misclassification error in logistic regression with heterogeneous user privacy requirements. To do this, we initially focus on a related yet simpler

scenario: logistic regression with heterogeneous differential privacy requirements. Later, when we consider the mechanism, the results in this subsection will be used. In this section, we consider the following problem:

  • We have a set of $m$ users, with user $i$ having data point $z^{i}=(x^{i},y^{i})$, where we assume $\|x^{i}\|\leq1\ \forall i$. We let $D=\{(x^{1},y^{1}),\ldots,(x^{m},y^{m})\}$ denote the data set.

  • Each user $i$ demands that $\epsilon_{i}$ differential privacy must be ensured for their data.

  • The platform aims to design the best estimator $\mathbf{w}$ by minimizing the misclassification loss $\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$ such that $\|\mathbf{w}\|\leq\beta$ while ensuring differential privacy $\boldsymbol{\epsilon}$.

Chaudhuri et al. (2011) present an algorithm to solve logistic regression while ensuring homogeneous differential privacy, i.e., each user demands the same differential privacy. A natural extension of their algorithm for ensuring heterogeneous differential privacy can be stated as follows (the proof that this algorithm ensures heterogeneous differential privacy is presented in Proposition 1 of the appendix):

Algorithm 0:

  1. Choose $(\mathbf{a},\eta)\in\mathbb{F}$, where the constraint set is $\mathbb{F}=\{(\mathbf{a},\eta):\eta>0,\ \sum_{i}a_{i}=1,\ a_{i}>0,\ a_{i}\leq k/m,\ a_{i}\eta\leq\epsilon_{i}\ \forall i\}$ and $k$ is a fixed constant.

  2. Pick $\mathbf{b}^{\prime}$ from the density function $h(\mathbf{b}^{\prime})\propto e^{-\frac{\eta}{2}\|\mathbf{b}^{\prime}\|}$. To ensure this, we pick $\|\mathbf{b}^{\prime}\|\sim\Gamma(n,\frac{2}{\eta})$ and the direction of $\mathbf{b}^{\prime}$ uniformly at random. Equivalently, let $\mathbf{b}^{\prime}=\frac{2\mathbf{b}}{\eta}$, where $\|\mathbf{b}\|\sim\Gamma(n,1)$ and the direction of $\mathbf{b}$ is chosen uniformly at random.

  3. Given a dataset $D$ and differential privacies $\boldsymbol{\epsilon}$, compute $\hat{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{w}}\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)$, where $\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)=\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\mathbf{b}^{\prime T}\mathbf{w}+\frac{\lambda}{2}\|\mathbf{w}\|^{2}$ for some $\lambda>0$. Output $\hat{\mathbf{w}}$.
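As a concrete reference, the following short numpy sketch implements steps 2 and 3 above for a given feasible $(\mathbf{a},\eta)$. It is our own illustration (the solver, step size, and iteration count are arbitrary choices), not code from the paper.

```python
import numpy as np
from scipy.special import expit

def sample_noise(n, eta, rng):
    # Step 2: ||b'|| ~ Gamma(shape=n, scale=2/eta), direction uniform on the unit sphere.
    direction = rng.normal(size=n)
    direction /= np.linalg.norm(direction)
    return rng.gamma(shape=n, scale=2.0 / eta) * direction

def fit_private_logreg(X, y, a, eta, lam=1.0, lr=0.1, iters=5000, seed=0):
    """Step 3: minimize sum_i a_i log(1 + exp(-y_i w^T x_i)) + b'^T w + (lam/2)||w||^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    b_prime = sample_noise(n, eta, rng)
    w = np.zeros(n)
    for _ in range(iters):
        margins = y * (X @ w)
        # gradient of the weighted logistic loss is -sum_i a_i y_i x_i * sigmoid(-margin_i)
        grad = -(X.T @ (a * y * expit(-margins))) + b_prime + lam * w
        w -= lr * grad
    return w

# example choice mentioned in the text: a_i = 1/m and eta = m * min_i eps_i
# w_hat = fit_private_logreg(X, y, a=np.full(m, 1.0 / m), eta=m * eps.min())
```

Any off-the-shelf convex solver could replace the plain gradient loop here, since the objective is strictly convex in $\mathbf{w}$ for $\lambda>0$.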

Proposition 1 shows that any choice of $(\mathbf{a},\eta)$ satisfying the aforementioned constraints satisfies the $(\epsilon_{i}+\Delta(m,\lambda))$-differential privacy requirements, where $\Delta(m,\lambda)=2\log\left(1+\frac{k}{m\lambda}\right)$.

Remark: Note that in most machine learning models, $m$ is large enough such that $m\lambda\gg1$, and thus the term $\Delta$ is much smaller than the differential privacy guarantees used in practice. Therefore, for brevity, we consider $(\epsilon_{i}+\Delta(m,\lambda))\approx\epsilon_{i}$ for further analysis.

However, it is unclear how to choose $\mathbf{a}$ and $\eta$ to get good test error performance. For example, one can choose $a_{i}=1/m$ and $\eta=m\min_{i}\epsilon_{i}$. However, such a choice is clearly unable to exploit the fact that we need to protect some data points more than others. Besides, our numerical experiments in Section 5 demonstrate that solving $\min_{\mathbf{w}}\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)$ over the feasible set of $(\mathbf{a},\eta)$ does not provide good results. Therefore, to understand how to choose $(\mathbf{a},\eta)$, we appeal to statistical learning theory to obtain an upper bound on the true loss $\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$ in terms of $\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)$. This leads to the following result.

Theorem 2.1. Given a classification task, let $D$ be the dataset from $m$ users with $\|x^{i}\|\leq1$. Further, consider that users have differential privacy requirements $\boldsymbol{\epsilon}=(\epsilon_{i})_{i=1}^{m}\in\mathbb{R}_{+}^{m}$, respectively. Also, let $\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})=\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$ be the misclassification loss and $\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)$ be as defined above. Then, the following holds for appropriate $\mu,\sigma$ with probability at least $(1-\delta)(1-\delta^{\prime})$ for every choice of $\boldsymbol{\epsilon}\in\mathbb{R}_{+}^{m}$ and $(\mathbf{a},\eta)\in\mathbb{F}$, and for every $\mathbf{w}$ chosen such that $\|\mathbf{w}\|\leq\beta$ for some $\beta>0$:$^{1}$

$$\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]\leq\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)+\mu(\delta,\beta)\|\mathbf{a}\|+\sigma(\delta,\delta^{\prime},\beta)\Big(\frac{1}{\eta}\Big).\tag{2}$$

$^{1}$ $\|\mathbf{w}\|$ is the $l_{2}$ norm of $\mathbf{w}$, and $\mathbb{I}_{\{\cdot\}}$ is the indicator function.

Using the above bound on the generalization error, we add the additional regularization terms $\mu\|\mathbf{a}\|$ and $\sigma/\eta$ to incorporate the generalization loss in the objective function:

$$\min_{\mathbf{a},\eta,\mathbf{w}}\left[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}\right],\quad\mathrm{s.t.}\ (\mathbf{a},\eta)\in\mathbb{F},\tag{3}$$

where $\mu,\sigma,\lambda$ are hyperparameters.


Further, we can choose an appropriate value of $\lambda$ to satisfy the constraint $\|\mathbf{w}\|\leq\beta$. Therefore, the objective in Eq. (3) with the mentioned constraints can be used to solve logistic regression with heterogeneous differential privacy requirements. Additionally, Eq. (3) serves as a proxy for representing the misclassification loss. In Section 4, we will discuss algorithmic considerations in solving the above optimization problem and also provide an algorithm to optimally choose the parameters $(\mu,\sigma,\lambda)$ using validation data.

Figure 1: Interaction between sellers and the platform

3 Mechanism Design

We will now use our logistic regression result to consider the mechanism design problem. We consider a platform (buyer) interested in collecting data from privacy-sensitive users (sellers) to build a logistic regression model. Further, sellers may have different costs associated with the privacy lost by sharing their data, i.e., they may have different privacy sensitivities. Therefore, the platform buys data from sellers in exchange for a payment and provides them with differential privacy guarantees. The differential privacy guarantees are determined by optimizing an objective consisting of the misclassification error and the payments. More specifically, our problem has the following components:

  • We have a set of $m$ sellers, with seller $i$ having data $z^{i}=(x^{i},y^{i})$, with $y^{i}\in\{+1,-1\}$. Therefore, let $D=\{(x^{1},y^{1}),\ldots,(x^{m},y^{m})\}$ denote the dataset.

  • We model the cost that the sellers incur due to loss of privacy using the privacy sensitivity $c_{i}\geq0$. In other words, if the seller is provided a differential privacy guarantee of $\epsilon_{i}$, then the seller incurs a total cost of $c_{i}\cdot u(\epsilon_{i})$, where $u(\cdot)$ is considered to be a convex and strictly increasing function with $u(0)=0$. This is consistent with the practical observation that the privacy cost increase of seller $i$ for a slight increase in $\epsilon_{i}$ will be higher for larger values of $\epsilon_{i}$. Additionally, the function $u(\cdot)$ is public information.

  • Sellers can potentially lie about their privacy sensitivity to get an advantage. Therefore, we denote the reported privacy sensitivity of seller i by c ′ i . The mechanism will be designed so that c ′ i = ci.

  • As is standard in the mechanism design literature, we assume that seller i's cost ciis drawn from a probability density function fi(·), which is common knowledge. Moreover, we assume that sellers cannot lie about their data. This assumption is valid in scenarios such as healthcare data, where patient information is already within the possession of the hospital. In this context, sellers merely need to grant permission to the hospital (a trusted authority) to utilize their data, specifying their privacy sensitivities in the process.

  • The buyer announces a mechanism, i.e., in return for {(x i, yi), ci}, each seller is guaranteed a differential privacy level ϵi and payment ti both of which depend on dataset D and reported privacy sensitivities c ′.


  • Based on the privacy loss and the payment received, the cost function of seller $i$ with privacy sensitivity $c_{i}$, reported privacy sensitivity $c_{i}^{\prime}$, and data point $z^{i}=(x^{i},y^{i})$ is given by

$$\mathrm{COST}(c_{i},\mathbf{c}_{-i},c_{i}^{\prime},\mathbf{c}_{-i};\epsilon_{i},t_{i})=c_{i}\cdot u(\epsilon_{i})-t_{i}.\tag{4}$$

To design the mechanism, we next state the objectives of the buyer.

  • The buyer learns an ML model θ(D, c ′) from dataset D, and computes a payment ti(D, c ′) to seller i while guaranteeing a privacy level ϵi(D, c ′) to each seller i. To do this, the buyer optimizes a combination of the test loss incurred by the ML model L(D, c ′; ϵ, θ) and the payments ti(D, c ′). The overall objective of the buyer is to minimize

$$\mathbb{E}_{\mathbf{c}}\biggl[\mathbb{L}(D,\mathbf{c}^{\prime};\boldsymbol{\epsilon},\boldsymbol{\theta})+\gamma\sum_{i}t_{i}(D,\mathbf{c}^{\prime})\biggr],\tag{5}$$

where γ is a hyperparameter that adjusts the platform's priority to get a better predictor or reduce payments.

  • The buyer is also interested in ensuring each seller is incentivized to report their privacy sensitivities truthfully. To that end, the IC property imposes that no seller can benefit by misrepresenting their privacy sensitivity if others report truthfully, i.e.,

$$\mathrm{COST}(c_{i},\mathbf{c}_{-i},c_{i},\mathbf{c}_{-i};\epsilon_{i},t_{i})\leq\mathrm{COST}(c_{i},\mathbf{c}_{-i},c_{i}^{\prime},\mathbf{c}_{-i};\epsilon_{i},t_{i})\quad\forall i,c_{i}^{\prime},\mathbf{c}.\tag{6}$$

  • Moreover, the buyer wants to ensure that sellers are incentivized to participate. Thus, the IR property imposes the constraint that the platform does not make sellers worse off by participating in the mechanism:

$$\mathrm{COST}(c_{i},\mathbf{c}_{-i},c_{i}^{\prime},\mathbf{c}_{-i};\epsilon_{i},t_{i})\leq0\quad\forall i,c_{i}^{\prime},\mathbf{c}.\tag{7}$$

Using ideas from Myerson (1981), we show that if $\epsilon_{i}(D,\mathbf{c}^{\prime})$ is the privacy guarantee provided to seller $i$, then using the IC and IR constraints we can replace the payments $t_{i}(D,\mathbf{c}^{\prime})$ in the objective function by $\Psi_{i}(c_{i})u\bigl(\epsilon_{i}(D,\mathbf{c}^{\prime})\bigr)$, where $\Psi_{i}(c)=c+F_{i}(c)/f_{i}(c)$.$^{3}$ Further, the IC constraint incentivizes sellers to be truthful, and henceforth we can replace $\mathbf{c}^{\prime}$ with $\mathbf{c}$. Since this replacement is a generalization of the payment identity in Fallah et al. (2023), we state and prove this result, along with the required regularity assumptions on $\Psi_{i}$, in a later section. Substituting it in Eq. (5), we get

$$\min_{\mathbf{w},\boldsymbol{\epsilon}(\cdot)}\mathbb{E}_{\mathbf{c}}\biggl[\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\boldsymbol{\theta})+\gamma\cdot\sum_{i=1}^{m}\Psi_{i}(c_{i})u(\epsilon_{i}(D,\mathbf{c}))\biggr].\tag{8}$$

Therefore, the buyer's problem reduces to solving Eq. (8) while ensuring $\epsilon_{i}(D,\mathbf{c})$ differential privacy. We refer to Appendix C for a simple numerical example. The order of operations of our mechanism can be summarized as follows.

  • The sellers provide the platform with their data (xi, yi) and their privacy sensitivity ci.

  • The platform announces that in exchange for the data, it will pay according to the payment identity.

  • The platform uses this data to obtain an ML model θ(D, c ′) and sets privacy levels ϵ(D, c′), i.e., even if the model θ(D, c ′) is released publicly, each seller i will be guaranteed differential privacy of ϵi.

Note that the payment mechanism does not depend on the choice of loss function. Since we consider our ML problem to be logistic regression the loss function L(D, c; ϵ, θ) in our case becomes the misclassification loss E[I{sign(wT x)̸=y}], where w(D, c) is the weight vector corresponding to the regression model.

$^{2}$ $\mathbf{c}=[c_{1},c_{2},\ldots,c_{m}]$. The same notation is used in writing $\boldsymbol{\epsilon},\mathbf{c}^{\prime},\mathbf{t}$.

$^{3}$ $F_{i}(c_{i})$ and $f_{i}(c_{i})$ denote the values of the CDF and PDF of the privacy sensitivities at $c_{i}$, respectively.

3.1 Calculating The Payments

Before we proceed to solve Eq. (8), we first state the result which calculates the payments based on the IC and IR constraints.

Assumption 3.1. The virtual cost $\Psi_{i}(c)=c+\frac{F_{i}(c)}{f_{i}(c)}$ is an increasing function of $c$.

Theorem 3.2. Assume that $c_{i}$ is drawn from a known PDF $f_{i}(\cdot)$. Given a mechanism design problem with privacy sensitivities $\mathbf{c}$ and privacy guarantees $\boldsymbol{\epsilon}$, let the sellers' costs be given by Eq. (4). Then, using the IC and IR constraints, the payments $t_{i}(\mathbf{c})$ can be substituted by $\Psi_{i}(c_{i})u\bigl(\epsilon_{i}(\mathbf{c})\bigr)$, where $\Psi_{i}(c_{i})$ is the virtual cost function given by

$$\Psi_{i}(c_{i})=c_{i}+\frac{F_{i}(c_{i})}{f_{i}(c_{i})}\ \ \forall i\in\mathbb{N},\ c_{i}\in\mathbb{R}.\tag{9}$$

The theorem is a generalization of the payment identity in Fallah et al. (2023). The result is similar to Myerson's reduction of mechanism design to virtual welfare maximization.
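To make the payment rule concrete, the short sketch below (our own illustration, not code from the paper) computes the virtual cost when the privacy sensitivities follow a uniform prior. For the prior $U[e^{-4},5e^{-4}]$ used in Section 5, this reduces to $\Psi(c)=2c-e^{-4}$, matching the expression reported there; the payment-equivalent term for seller $i$ is then $\Psi_{i}(c_{i})u(\epsilon_{i})$.

```python
def virtual_cost_uniform(c, low, high):
    """Virtual cost Psi(c) = c + F(c)/f(c) when c ~ Uniform[low, high].

    For the uniform prior, F(c) = (c - low) / (high - low) and
    f(c) = 1 / (high - low), so Psi(c) = c + (c - low) = 2c - low,
    which is increasing in c (so Assumption 3.1 holds).
    """
    assert low <= c <= high
    return 2.0 * c - low

# payment-equivalent term for seller i (with u linear):
# psi_i = virtual_cost_uniform(c_i, low, high); term_i = psi_i * eps_i
```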

3.2 Solving The Mechanism Design Problem

Now, we use the results in the previous subsections to solve the mechanism design problem, i.e., Eq. (8). From Theorem 3.2, we obtain the payments, and Eq. (3) provides us with a proxy for the logistic loss, i.e., $\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$, while also satisfying the differential privacy constraints. Thus, our mechanism design objective can be written as

$$\min_{\mathbf{a},\eta,\mathbf{w},\boldsymbol{\epsilon}}\bigg[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\mu\|\mathbf{a}\|+\sigma\frac{1}{\eta}+\gamma\sum_{i=1}^{m}u(\epsilon_{i})\Psi_{i}(c_{i})\bigg],$$

s.t. $\sum_{i}a_{i}=1,\ \mathbf{a}\geq0,\ \mathbf{a}\leq k/m,\ \eta\geq0,\ a_{i}\eta\leq\epsilon_{i}$.

Note that the above objective is minimized with respect to $\boldsymbol{\epsilon}$ when $a_{i}\eta=\epsilon_{i}\ \forall i$. Using this equality and $\sum_{i}a_{i}=1$, we get $a_{i}=\epsilon_{i}/\eta$, where $\eta=\sum_{i}\epsilon_{i}$. Thus, the final objective function for mechanism design can be written as

$$\min_{\mathbf{a},\eta,\mathbf{w}}\bigg[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\mu\|\mathbf{a}\|+\sigma\frac{1}{\eta}+\gamma\sum_{i=1}^{m}u(a_{i}\eta)\Psi_{i}(c_{i})\bigg],\tag{10}$$

subject to $\eta>0$, $\mathbf{a}\leq k/m$, $\mathbf{a}\geq0$, and $\sum_{i}a_{i}=1$. Here, $\{\beta,\mu,\sigma\}$ are hyperparameters, while $\gamma$ is used to trade off between test loss and payments. After solving the optimization problem (10), $\boldsymbol{\epsilon}$ can be obtained using $\epsilon_{i}=a_{i}\eta$.

Remark: The constraint $a_{i}\leq k/m$ for some $k>0$ indirectly imposes the condition that $\epsilon_{i}$ is upper-bounded by a finite quantity. Since $\eta=\sum_{i}\epsilon_{i}$, we can write $\eta=m\epsilon_{\mathrm{avg}}$, which together with $\epsilon_{i}=a_{i}\eta$ implies $\epsilon_{i}\leq k\epsilon_{\mathrm{avg}}$.

Later, we will show that it is sufficient to constrain $\epsilon_{\mathrm{avg}}$ to a bounded set while optimizing the objective.

Therefore, the upper bound constraint on $a_{i}$ means that $\epsilon_{i}$ is upper-bounded. In other words, we can incorporate constraints such as sellers being unwilling to tolerate more than a certain amount of privacy loss even if they are paid generously for it.

3.3 Interpretation Of The Terms In The Objective

We can divide the objective function Eq. (10) into three parts.

  1. The first part is given by $\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+2\mathbf{b}^{T}\mathbf{w}/\eta$. This focuses on obtaining $\mathbf{w}$ to solve the differentially private logistic regression problem.

  2. The second part, $\mu\|\mathbf{a}\|+\sigma/\eta$, denotes the difference between the true loss $\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})$ and $\hat{\mathbb{L}}(D,\mathbf{a},\mathbf{w},\eta)$. This tries to reduce the gap between $\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})$ and $\hat{\mathbb{L}}(D,\mathbf{a},\mathbf{w},\eta)$. We see that the gap reduces as the $a_{i}$ approach $1/m$ and $\eta\to\infty$. Therefore, a higher weight on these terms would mean that the optimal solution is forced to pick similar values for the $a_{i}$ and smaller noise, which leads to the standard logistic regression problem.

  3. Finally, $\gamma\sum_{i=1}^{m}u(a_{i}\eta)\Psi_{i}(c_{i})$ accounts for the payments made to the sellers. Here, increasing $\gamma$ would mean that the platform would focus more on reducing the payments rather than designing a better logistic regression model.

To gain more insight into the optimal solution of Eq. (10), we first state the first-order necessary conditions.

For this analysis, we consider u(.) to be linear, i.e., u(aiη) = aiη.

Theorem 3.3. Let (a ∗, w∗, η∗) be the optimal solution for the optimization problem (10). Then

$$a_{i}^{*}=\frac{\|\mathbf{a}^{*}\|}{\mu}\Big(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\Big)^{+},$$

where $(f(x))^{+}=\max(0,f(x))$, with $\tau$ such that

$$\sum_{i=1}^{m}\left(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\right)^{+}=\frac{\mu}{\|\mathbf{a}^{*}\|}.\tag{11}$$

Thus,

$$a_{i}^{*}=\frac{\left(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\right)^{+}}{\sum_{j=1}^{m}\left(\tau-\gamma\eta^{*}\psi_{j}(c_{j})-\log(1+e^{-y^{j}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{j}})\right)^{+}},$$

where $\eta^{*}$ is given by

$$\eta^{*}=\left(\frac{\sigma+2\mathbf{b}^{T}\mathbf{w}^{*}}{\gamma\sum_{i=1}^{m}\psi_{i}(c_{i})a_{i}^{*}}\right)^{1/2}.$$

From Theorem 3.3, we can make certain observations:

  1. First, we note that ai depends inversely on the privacy sensitivities ci, i.e., ψi(ci) is an increasing function of ci. Therefore, the platform will be willing to buy relatively more privacy from sellers whose per-unit privacy costs are lower. Additionally, the platform will choose not to use data points with excessively high virtual costs.

  2. Note that from Eq. (11), τ is directly proportional to µ (the weight on ∥a∥). Further, a higher τ will reduce the variance in a ∗ i because τ will dominate over γη∗ψi(ci)−log(1 +e −y i·(w∗) T x i). Therefore, if µ → ∞ then τ → ∞ which would make ai → 1/m. Additionally, by considering a higher value of µ, we can indirectly satisfy the constraint ai ≤ k/m.

  3. Finally, η is inversely proportional to γ. Therefore, a lower weight on payments, i.e., smaller γ and thus more focus on getting a better model would mean that the optimal solution will try to reduce noise by making η ∗ → ∞.
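The conditions in Theorem 3.3 also suggest a simple numerical recipe for recovering the weights $\mathbf{a}^{*}$ once $\mathbf{w}^{*}$ and $\eta^{*}$ are fixed: combining the first display with $\sum_{i}a_{i}^{*}=1$ shows that $\tau$ must satisfy $\big\|\big(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\big)^{+}_{i}\big\|_{2}=\mu$, and the left-hand side is nondecreasing in $\tau$, so $\tau$ can be found by bisection. The sketch below is our own illustration of this water-filling structure (the function name and iteration count are arbitrary), not an implementation from the paper.

```python
import numpy as np

def optimal_weights(logloss, psi, eta, gamma, mu, iters=200):
    """Recover a* from the first-order conditions of Theorem 3.3, given w* and eta*.

    logloss[i] = log(1 + exp(-y_i (w*)^T x_i)), psi[i] = psi_i(c_i).
    tau is the scalar with ||(tau - gamma*eta*psi - logloss)^+||_2 = mu;
    this norm is nondecreasing in tau, so bisection applies.
    """
    cost = gamma * eta * psi + logloss
    lo, hi = cost.min(), cost.max() + mu + 1.0        # bracket for tau
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        r = np.maximum(tau - cost, 0.0)
        if np.linalg.norm(r) < mu:
            lo = tau                                  # norm too small: raise tau
        else:
            hi = tau
    r = np.maximum(0.5 * (lo + hi) - cost, 0.0)
    return r / r.sum()                                # normalized positive parts
```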

3.4 Discussion

We make the following remarks on our solution:

  • The payment mechanism is independent of L(D, c; ϵ, w). Thus, using an upper bound for the misclassification loss does not affect the mechanism. Therefore, if one uses a tighter bound, it will help the platform get a better estimator for the same payments. However, it would not affect the behavior of sellers, i.e., sellers will still be incentivized to be truthful and willing to participate in the mechanism.

  • Since the payment mechanism does not depend on the choice of the function L(D, c; ϵ, w), designing payment mechanism and solving objective function can be treated as two separate problems. Thus, any such mechanism design problem can be decoupled into separate problems.

  • Finally, we can see that our algorithm can be used to solve the logistic regression problem with heterogeneous privacy guarantee requirements. Previous work in the literature, such as Chaudhuri et al. (2011), solves the problem in the case when it is assumed that all users have the same differential privacy requirements. In our paper, Eq. (3) extends it to the case when users are allowed to have different privacy requirements.

3.5 Robustness To Correlations Between The Data Points And Privacy Sensitivities

In our model, we consider that the data point (x i, yi) is independent of privacy sensitivity ci. However, in some applications, this might not hold. For example, if x iis the income of an individual then people with a high income might be reluctant to share their data. Therefore their privacy sensitivities ci would be higher.

This could potentially deter the platform from incorporating data points from high-income individuals, as it would imply higher costs.

Our model, however, remains versatile in handling such practical intricacies, thanks to the presence of the regularization term $\mu\|\mathbf{a}\|$. Since $\|\mathbf{a}\|$ is minimized, subject to $\sum_{i}a_{i}=1$, when $a_{i}=1/m$, a higher value of $\mu$ will force the $a_{i}$ to be closer to $1/m$. This can also be inferred from the observations made from Theorem 3.3. Therefore, we can always tweak $\mu$ to ensure that all data points are sufficiently considered in the objective, thereby helping the platform get a higher classification accuracy while also making sure that the payments are small. The robustness of the model to correlations between $(x^{i},y^{i})$ and $c_{i}$ further highlights the importance of adding the additional regularization terms which we derived in Thm. 2.1.

3.6 Asymptotic Analysis

It is also instructive to analyze the objective function in the regime where the number of sellers is large, i.e., when m → ∞. For this purpose, we make the following assumptions:

  1. The data set is linearly separable, that is, there exists a $\mathbf{w}^{*}$ such that $\mathbf{w}^{*T}\mathbf{x}^{i}y^{i}\geq\delta\ \forall i$ for some $\delta>0$.

  2. $\mathbf{c}$ has bounded support, implying that $\psi_{i}(c_{i})$ is bounded. Therefore, let $\psi_{i}(c_{i})\in[p,q]$.

  3. There is sufficient probability mass around $c=p$ such that the following condition holds: $\exists\,k>0$ such that $\lim_{m\to\infty}m\cdot\mathbb{P}\bigl(\psi_{i}(c_{i})\leq p+1/m^{k}\bigr)\to\infty$.

Theorem 3.4. Assume that the dataset and privacy sensitivities satisfy assumptions 1-3 above. Furthermore, let $\|\mathbf{b}\|\sim\Gamma(n,1)$. Then, as $m\to\infty$, the objective function can be upper-bounded almost surely as

$$\lim_{m\to\infty}\min_{\mathbf{w},\boldsymbol{\epsilon}:\|\mathbf{w}\|\leq\beta}\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i=1}^{m}\epsilon_{i}\Psi_{i}(c_{i})\leq\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}},\tag{12}$$

wherein the payment is at most $\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}/\gamma$. In particular, the inequality is non-trivial if $p\gamma$ satisfies

$$\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}<1.$$

Furthermore, as $p\to0$, the above limit becomes zero. The first term $\log(1+e^{-\delta/\sqrt{p\gamma}})$ represents the maximum possible error for the logistic loss. The second term is the extra error due to payment costs and the error associated with ensuring differential privacy. This is unavoidable because there is a finite cost ($\psi_{i}(c_{i})\geq p$) associated with each data point.

We observe a dynamic interplay: as p decreases, the cost per data point diminishes, leading to a reduction in payments. Notably, as p → 0 and m → ∞, the upper bound on the error can be driven to 0. This phenomenon is intuitively explained by the platform's ability to select a lot of samples with nearly zero virtual cost from a large pool, enabling the reduction of misclassification error.

4 Algorithmic Considerations

This section will discuss algorithmic considerations in solving the optimization problem associated with our mechanism design solution.

4.1 Making The Objective Function Convex

The objective functions for logistic regression with heterogeneous differential privacy, i.e., Eq. (3) and for solving the mechanism design problem, i.e., Eq. (10) are nonconvex in (a, w). Therefore, we introduce a change of variables trick to make the function convex. We will first prove the convexity result for Eq. (3) and then argue that it also holds for Eq. (10). We make the substitution ai = e zi. With the proposed modifications, the logistic regression objective in Eq. (3) becomes

$$\min_{\mathbf{z},\eta,\mathbf{w}}f(\mathbf{w},\mathbf{z},\eta),\qquad f(\mathbf{w},\mathbf{z},\eta)=\left[\sum_{i=1}^{m}e^{z_{i}}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|e^{\mathbf{z}}\|+\frac{\sigma}{\eta}\right].\tag{13}$$

The following theorem states that the new objective is strongly convex in (w, z) for sufficiently large λ, and thus gradient descent converges exponentially to global infimum.

Theorem 4.1. Given a classification task, let $D$ be a set of data points from $m$ users with $\|x^{i}\|\leq1$ for each $i$. Then, there exists a value $\lambda_{\mathrm{conv}}$ such that the objective function defined in Eq. (13) is convex in $(\mathbf{w},\mathbf{z})$ for $\lambda>\lambda_{\mathrm{conv}}$ and $\mu,\sigma,\eta\in\mathbb{R}_{+}$. Let $(\mathbf{w}^{t},\mathbf{z}^{t})_{t\in\mathbb{N}}$ be the sequence of iterates obtained by applying projected gradient descent to $f(\cdot)$ for a fixed $\eta$ on a convex set $S$, and let $f_{\eta}^{*}=\inf_{\mathbf{w},\mathbf{z}}f(\mathbf{w},\mathbf{z},\eta)$. Then, for $\lambda>\lambda_{\mathrm{conv}}$, there exists $0<\alpha<1$ such that

$$f(\mathbf{w}^{t},\mathbf{z}^{t},\eta)-f_{\eta}^{*}\leq\alpha^{t}(f(\mathbf{w}^{0},\mathbf{z}^{0},\eta)-f_{\eta}^{*}).$$

We also observe experimentally that projected gradient descent on $(\mathbf{w},\mathbf{z})$ for a fixed $\eta$ converges to the same stationary point for different initializations on the real dataset considered in this paper. This suggests that the condition on $\lambda$ in Theorem 4.1 is not very restrictive. Remark: Note that the same change of variables, together with the regularizer on $\mathbf{w}$, also makes the mechanism design objective Eq. (10) convex in $(\mathbf{w},\mathbf{z})$ for $\lambda>\lambda_{\mathrm{conv}}$. This is because $\gamma\sum_{i=1}^{m}u(\eta e^{z_{i}})\psi_{i}(c_{i})$ is convex for a fixed $\eta$.

4.2 Algorithm

Let us denote the mechanism design objective (10) with the change of variables by

$$g(\mathbf{w},\mathbf{z},\eta)=f(\mathbf{w},\mathbf{z},\eta)+\gamma\sum_{i=1}^{m}u(\eta e^{z_{i}})\Psi_{i}(c_{i}).$$

To optimize this objective, we first optimize $g(\mathbf{w},\mathbf{z},\eta)$ with respect to $(\mathbf{w},\mathbf{z})$ for a fixed $\eta$ over the constraint set using projected gradient descent, and then perform a line search over the scalar parameter $\eta$. Now, to determine the range of $\eta$, note that $\eta=\sum_{i=1}^{m}\epsilon_{i}=m\epsilon_{\mathrm{avg}}$, where $\epsilon_{\mathrm{avg}}=\sum_{i}\epsilon_{i}/m$. Considering that, in practice, the differential privacy guarantees $\boldsymbol{\epsilon}$ cannot be excessively high, we restrict the range of $\boldsymbol{\epsilon}$ by taking $\epsilon_{\mathrm{avg}}\in[0,L]$ for some $L\in\mathbb{R}_{+}$. Thus, we can choose different values of $\eta$ by discretizing $[0,L]$ to any required precision and choosing $\epsilon_{\mathrm{avg}}$ from it. The pseudocode for the algorithm is provided in the table below.

As a result, for each combination of $\{\lambda,\mu,\sigma,\gamma\}$, the algorithm provides the corresponding optimal weight vector $\mathbf{w}$ and privacy guarantees $\boldsymbol{\epsilon}$. The privacy guarantees are then used to determine payments. Moreover, the misclassification error is computed over a validation dataset using $\mathbf{w}$. Therefore, we fix $\gamma$ and optimize our objective $\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i}t_{i}(D,\mathbf{c}^{\prime})$ with respect to $\{\lambda,\mu,\sigma\}$. Subsequently, the platform can pick an appropriate value of $\gamma$ by comparing the different combinations of payment sum ($\sum_{i}t_{i}(D,\mathbf{c}^{\prime})$) and misclassification loss ($\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$) corresponding to each value of $\gamma$.

Remark: Note that the same algorithm can be used to solve the logistic regression problem, i.e., Eq. (3), by considering the projection set $S$ to be $\{\mathbf{z}:\sum_{i}e^{z_{i}}=1,\ \eta e^{z_{i}}\leq\epsilon_{i}\}$.

An Iterative Algorithm To Optimize The Mechanism Design Objective

Set step-size α ∈ (0, 1], g_min = ∞.

Discretize [0, L] to any required precision and choose ϵ_avg from this discrete set.

Sample ∥b∥ from Γ(n, 1) with its direction chosen uniformly at random.

foreach ϵ_avg do
    Initialize w from N(0, 1) and z_i = log(1/m) ∀i.
    while not converged do
        w ← w − α (d/dw) g(w, z, ϵ_avg)
        z ← z − α (d/dz) g(w, z, ϵ_avg)
        z ← Proj_S(z), where S = {z : Σ_{i=1}^{m} e^{z_i} = 1}
    end
    if g_min > g(w, z, ϵ_avg) then
        w_opt ← w, z_opt ← z, ϵ_opt ← ϵ_avg, g_min ← g(w, z, ϵ_avg)
    end
end
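The following is a minimal numpy/scipy sketch of the loop above, assuming $u(\cdot)$ is linear (as in Section 5). The step size, grid size, convergence criterion, and the renormalization used as the projection onto $S$ are illustrative choices of ours rather than the authors' exact settings, and the constraint $a_{i}\leq k/m$ is left to be enforced implicitly through $\mu$ (see the observations after Theorem 3.3).

```python
import numpy as np
from scipy.special import expit, logsumexp

def objective_and_grads(w, z, eta, X, y, b, psi, lam, mu, sigma, gamma):
    """Value and gradients of g(w, z, eta) with a_i = exp(z_i) and u(x) = x."""
    a = np.exp(z)
    margins = y * (X @ w)
    ell = np.logaddexp(0.0, -margins)                  # log(1 + exp(-y_i w^T x_i))
    norm_a = np.linalg.norm(a)
    val = (a @ ell + 0.5 * lam * (w @ w) + 2.0 * (b @ w) / eta
           + mu * norm_a + sigma / eta + gamma * eta * (a @ psi))
    grad_w = -(X.T @ (a * y * expit(-margins))) + lam * w + 2.0 * b / eta
    grad_z = a * ell + mu * a ** 2 / norm_a + gamma * eta * a * psi
    return val, grad_w, grad_z

def run_mechanism(X, y, psi, lam, mu, sigma, gamma, L=2.0, grid=20,
                  lr=0.05, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    direction = rng.normal(size=n)
    b = rng.gamma(n, 1.0) * direction / np.linalg.norm(direction)    # ||b|| ~ Gamma(n, 1)
    best_val, best = np.inf, None
    for eps_avg in np.linspace(L / grid, L, grid):     # line search over eta = m * eps_avg
        eta = m * eps_avg
        w, z = rng.normal(size=n), np.full(m, -np.log(m))             # a_i = 1/m initially
        for _ in range(iters):
            _, gw, gz = objective_and_grads(w, z, eta, X, y, b, psi, lam, mu, sigma, gamma)
            w, z = w - lr * gw, z - lr * gz
            z -= logsumexp(z)                          # rescale so that sum_i exp(z_i) = 1
        val = objective_and_grads(w, z, eta, X, y, b, psi, lam, mu, sigma, gamma)[0]
        if val < best_val:
            best_val, best = val, (w.copy(), eta * np.exp(z))          # eps_i = a_i * eta
    return best                                        # (w_opt, eps_opt)
```

The returned privacy levels can then be priced via the payment identity of Theorem 3.2, and the hyperparameters $\{\lambda,\mu,\sigma\}$ selected on validation data as described above.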

4.3 Assumption On Validation Data

In addition to (w, z), our objective function needs to be minimized on a set of parameters {λ, µ, σ, γ}.


Therefore, we require a validation dataset to compare different values of the hyperparameters. Since hyperparameters are chosen based on the validation set, if the validation dataset is private, it may violate differential privacy guarantees. Chaudhuri et al. (2011) provides a detailed discussion on ensuring differential privacy using validation data. Referring to their work, we assume existence of a small publicly available dataset that can be used for validation. This assumption ensures that differential privacy guarantees are not violated.

Figure 2: a) Misclassification error and payments b) Comparison of overall error

5 Numerical Results

5.1 Application To Medical Data

Dataset and Specifications: To demonstrate the applications of our proposed mechanism, we perform our mechanism design approach on the Wisconsin Breast Cancer dataset UCI (1995). Furthermore, $\mathbf{c}$ is drawn from $U[e^{-4},5e^{-4}]$ and $\psi(c)$ is calculated accordingly to be $2c-e^{-4}$. Also, we consider $u(x)=x$.

Implementation: The optimization of the loss function in Eq. (10) is conducted on the training data, and subsequently, the hyperparameters $\{\lambda,\mu,\sigma\}$ are selected based on the validation data for each value of $\gamma$.

Therefore, the corresponding misclassification error (misclassified samples/total samples) and payments are plotted for each $\gamma$ in Fig. 2(a). The values are plotted by taking the mean over 15 different samples of the noise vector $\mathbf{b}$. Given that our approach is the first to consider the tradeoff between payments and model accuracy for ML models, there is a lack of existing methods in the literature for direct comparison. However, to showcase the increase in efficiency of our approach due to the addition of the extra regularization terms ($\mu\|\mathbf{a}\|$, $\sigma/\eta$), we compare our results with a naive model whose objective does not consider these terms, i.e., Eq. (10) with $\mu=\sigma=0$. Finally, all results are benchmarked against the baseline error, which is the misclassification error of the model in the absence of payments and differential privacy guarantees. Additionally, to evaluate the efficiency of all the methods, the overall error (misclassification error + $\gamma\cdot$ payments) is plotted in Fig. 2(b). It is important to note that additional experiments in the appendix provide further insight about the hyperparameters.

Observations and Practical Usage: As depicted in Fig. 2(a), there is a tradeoff between misclassification loss and payments, with an increase in misclassification loss and a decrease in payments as γ rises. Consequently, the platform can tailor γ based on its requirements. For example, if the platform has a budget constraint, the platform can iteratively adjust γ to obtain the optimal estimator within the given budget. Finally, from Fig.

2(b), we see that the incorporation of regularization terms (µ||a||, σ/η) in the model yields a more efficient mechanism with a lower overall error.

Robustness to Correlations: We repeat the above experiment by adding correlations between the data points and their corresponding privacy sensitivities. This is done by mapping elements in $\mathbf{c}$ to the data points using a predefined rule. Specifically, $\mathbf{c}$ is sampled and its elements are sorted, while the data points are sorted based on one of their indices. Consequently, the $k$th data point is mapped to the $k$th element of $\mathbf{c}$. These observations are plotted in Fig. 2, which shows that the performance of our algorithm is similar, affirming the robustness of our approach to correlations between data points and privacy sensitivities. This underscores the adaptability and efficacy of our method even in scenarios where correlations are introduced, further validating its practical utility.

6 Conclusion

We introduce a novel algorithm to design a mechanism that balances competing objectives: achieving a high-quality logistic regression model consistent with differential privacy guarantees while minimizing payments made to data providers. Notably, our result in Thm. 2.1 can extend to scenarios where individual data points require different weights in loss calculations. Such weighting enables accommodation of noisy measurements or varying costs associated with sample retrieval. Additionally, we note that our model considers heterogeneous privacy guarantees, acknowledging the diverse privacy needs of individuals. Finally, through Thm. 3.2, we see that the payment mechanism does not depend on the choice of loss function. Therefore, designing a payment mechanism and minimizing the objective can be effectively decoupled and treated as separate problems. This observation, along with Thm. 2.1, which highlights the necessity of the additional regularization terms, opens avenues for the design of mechanisms tailored to ML problems of higher complexity.

References

Jacob D Abernethy, Rachel Cummings, Bhuvesh Kumar, Sam Taggart, and Jamie H Morgenstern.

Learning auctions with robust incentive guarantees. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ c14a2a57ead18f3532a5a8949382c536-Paper.pdf.

Mohammad Alaggan, Sébastien Gambs, and Anne-Marie Kermarrec. Heterogeneous differential privacy.

Journal of Privacy and Confidentiality, 7(2), Jan. 2017. doi: 10.29012/jpc.v7i2.652. URL https:// journalprivacyconfidentiality.org/index.php/jpc/article/view/652.

Yang Cai, Constantinos Daskalakis, and Christos H. Papadimitriou. Optimum statistical estimation with strategic data sources. In Annual Conference Computational Learning Theory, 2014. URL https://api.

semanticscholar.org/CorpusID:1647632.

Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069–1109, July 2011. ISSN 1532-4435.

Yiling Chen and Shuran Zheng. Prior-free data acquisition for accurate statistical estimation. In Proceedings of the 2019 ACM Conference on Economics and Computation. ACM, jun 2019. doi: 10.1145/3328526.3329564.

URL https://doi.org/10.1145%2F3328526.3329564.

Yiling Chen, Nicole Immorlica, Brendan Lucier, Vasilis Syrgkanis, and Juba Ziani. Optimal data acquisition for statistical estimation. In Proceedings of the 2018 ACM Conference on Economics and Computation, EC '18, pp. 27–44, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450358293.

doi: 10.1145/3219166.3219195. URL https://doi.org/10.1145/3219166.3219195.

Rachel Cummings, Katrina Ligett, Aaron Roth, Zhiwei Steven Wu, and Juba Ziani. Accuracy for sale: Aggregating data with a variance constraint. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS '15, pp. 317–324, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450333337. doi: 10.1145/2688073.2688106. URL https://doi.org/10.

1145/2688073.2688106.

Ofer Dekel, Felix Fischer, and Ariel D. Procaccia. Incentive compatible regression learning. Journal of Computer and System Sciences, 76(8):759–777, 2010. ISSN 0022-0000. doi: https://doi.org/10.1016/j.jcss.

2010.03.003. URL https://www.sciencedirect.com/science/article/pii/S0022000010000309.

Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 3574–3583, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.

Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Serge Vaudenay (ed.), Advances in Cryptology - EUROCRYPT 2006, pp. 486–503, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg. ISBN 978-3-540-34547-3.

Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian, and Asuman Ozdaglar. Optimal and differentially private data acquisition: Central and local mechanisms. Operations Research, 0(0):null, 2023. doi: 10.1287/opre.2022.0014. URL https://doi.org/10.1287/opre.2022.0014.

Arpita Ghosh and Aaron Roth. Selling privacy at auction. Games and Economic Behavior, 91:334–346, 2015.

ISSN 0899-8256. doi: https://doi.org/10.1016/j.geb.2013.06.013. URL https://www.sciencedirect.com/ science/article/pii/S0899825613000961.

Arpita Ghosh, Katrina Ligett, Aaron Roth, and Grant Schoenebeck. Buying private data without verification.

In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pp. 931–948, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450325653. doi: 10.1145/ 2600057.2602902. URL https://doi.org/10.1145/2600057.2602902.

Nicole Immorlica, Ian A. Kash, and Brendan Lucier. Buying data over time: Approximately optimal strategies for dynamic data-driven decisions. In Information Technology Convergence and Services, 2021. URL https://api.semanticscholar.org/CorpusID:231639067.

Philip Kushmaro. Council post: Why data privacy is a human right (and what businesses should do about it), Jun 2021. URL https://www.forbes.com/sites/forbescommunicationscouncil/2021/06/ 07/why-data-privacy-is-a-human-right-and-what-businesses-should-do-about-it/.

Guocheng Liao, Xu Chen, and Jianwei Huang. Social-aware privacy-preserving mechanism for correlated data. IEEE/ACM Transactions on Networking, 28(4):1671–1683, 2020. doi: 10.1109/TNET.2020.2994213.

Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Zhiwei Steven Wu. Accuracy first: Selecting a differential privacy level for accuracy-constrained erm. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 2563–2573, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.

Yang Liu and Yiling Chen. Sequential peer prediction: learning to elicit effort using posted prices. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pp. 607–613. AAAI Press, 2017.

H. B. McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. In International Conference on Learning Representations, 2017. URL https://api.

semanticscholar.org/CorpusID:3461939.

Reshef Meir, Ariel D. Procaccia, and Jeffrey S. Rosenschein. Algorithms for strategyproof classification.

Artificial Intelligence, 186:123–156, 2012. ISSN 0004-3702. doi: https://doi.org/10.1016/j.artint.2012.03.008.

URL https://www.sciencedirect.com/science/article/pii/S000437021200029X.

Roger B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981. doi: 10.1287/moor.6.1.58. URL https://doi.org/10.1287/moor.6.1.58.

Kobbi Nissim, Claudio Orlandi, and Rann Smorodinsky. Privacy-aware mechanism design. In Proceedings of the 13th ACM Conference on Electronic Commerce, EC '12, pp. 774–789, New York, NY, USA, 2012.

Association for Computing Machinery. ISBN 9781450314152. doi: 10.1145/2229012.2229073. URL https://doi.org/10.1145/2229012.2229073.

Kobbi Nissim, Salil Vadhan, and David Xiao. Redrawing the boundaries on purchasing data from privacysensitive individuals. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, ITCS '14, pp. 411–422, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450326988. doi: 10.1145/2554797.2554835. URL https://doi.org/10.1145/2554797.2554835.

Juan Perote-Peña and Javier Perote. The impossibility of strategy-proof clustering. Economics Bulletin, 4 (23):1–9, 2003. URL https://ideas.repec.org/a/ebl/ecbull/eb-02d70012.html.

Eric A. Posner and E. Glen Weyl. Radical markets: Uprooting capitalism and democracy for a just society.

Princeton University Press, 2019.

Aaron Roth and Grant Schoenebeck. Conducting truthful surveys, cheaply. In Proceedings of the 13th ACM Conference on Electronic Commerce, EC '12, pp. 826–843, New York, NY, USA, 2012. Association for Computing Machinery. ISBN 9781450314152. doi: 10.1145/2229012.2229076. URL https://doi.org/10.

1145/2229012.2229076.

UCI. Breast cancer wisconsin dataset. https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+ Wisconsin+%28Diagnostic%29, 1995.

Yue Wang, Cheng Si, and Xintao Wu. Regression model fitting under differential privacy and model inversion attack. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pp.

1003–1009. AAAI Press, 2015. ISBN 9781577357384.

A Appendix A: Omitted Proofs

In this section of the appendix, we provide the omitted proofs. We start with the following proposition, whose proof is inspired by that in Chaudhuri et al. (2011).

Proposition 1. For any (a, η) ∈ F, the output of Algorithm 1, denoted by wˆ, preserves ϵ differential privacy.

Proof: Let d and d ′ be two vectors over R n with norm at most 1, and y, y ′ being either −1 or 1. Consider two different inputs given by D = {(x 1, y1), . . . ,(x m−1, ym−1),(d, y)} and D′ = {(x 1, y1), . . . ,(x m−1, ym−1),(d ′, y′)}. Since the function Lˆ(D, w, a, η) is strictly convex in w, for every b ′ = 2b η , there is a unique output wˆ for each input. Let us denote the values of b ′for the first and the second input such that the optimal solution is wˆ by b ′1 and b ′2, respectively, with the corresponding densities h(b ′1) and h(b ′2). We know that the derivative is 0 at wˆ . Thus, we have

$$\mathbf{b}^{\prime}_{1}-\frac{a_{m}\cdot\mathbf{d}y}{1+e^{y\hat{\mathbf{w}}^{T}\mathbf{d}}}=\mathbf{b}^{\prime}_{2}-\frac{a_{m}\cdot\mathbf{d}^{\prime}y^{\prime}}{1+e^{y^{\prime}\hat{\mathbf{w}}^{T}\mathbf{d}^{\prime}}}.$$

Since $\frac{1}{1+e^{y\hat{\mathbf{w}}^{T}\mathbf{d}}}<1$ and $\frac{1}{1+e^{y^{\prime}\hat{\mathbf{w}}^{T}\mathbf{d}^{\prime}}}<1$, we have $\|\mathbf{b}^{\prime}_{1}-\mathbf{b}^{\prime}_{2}\|<2a_{m}$, which implies $\|\mathbf{b}^{\prime}_{1}\|-\|\mathbf{b}^{\prime}_{2}\|<2a_{m}$.

Therefore, for any pairs (d,y), (d ′,y ′), and any set V ⊂ R n, we can write

$$\frac{\mathbb{P}[\mathbf{w}(\mathbf{x}^{1},\ldots,\mathbf{x}^{m}=\mathbf{d},y^{1},\ldots,y^{m}=y)\in V]}{\mathbb{P}[\mathbf{w}(\mathbf{x}^{1},\ldots,\mathbf{x}^{m}=\mathbf{d}^{\prime},y^{1},\ldots,y^{m}=y^{\prime})\in V]}=\frac{h(\mathbf{b}^{\prime}_{1})}{h(\mathbf{b}^{\prime}_{2})}\cdot\frac{|\det(J(\hat{\mathbf{w}}\to\mathbf{b}^{\prime}_{1}|D))|^{-1}}{|\det(J(\hat{\mathbf{w}}\to\mathbf{b}^{\prime}_{2}|D^{\prime}))|^{-1}},$$

where J(wˆ → b ′1|D) is the Jacobian matrix of the mapping from the space of w to b.

We first bound the ratio of the determinants. By taking the gradient at wˆ to be 0, we have

$$\mathbf{b}^{\prime}_{1}=\sum_{i}\frac{a_{i}\cdot\mathbf{x}^{i}y^{i}}{1+e^{y^{i}\hat{\mathbf{w}}^{T}\mathbf{x}^{i}}}-\lambda\hat{\mathbf{w}}.\tag{14}$$

Thus, taking the gradient of b ′1 with respect to wˆ , we have

$$\frac{\delta\mathbf{b}^{\prime}_{1}}{\delta\hat{\mathbf{w}}}=\sum_{i}\frac{-a_{i}e^{y^{i}\hat{\mathbf{w}}^{T}\mathbf{x}^{i}}\cdot\mathbf{x}^{i}(\mathbf{x}^{i})^{T}}{(1+e^{y^{i}\hat{\mathbf{w}}^{T}\mathbf{x}^{i}})^{2}}-\lambda I.$$

We define two matrices A and E such that

$$A=\sum_{i}\frac{a_{i}e^{y^{i}\hat{\mathbf{w}}^{T}\mathbf{x}^{i}}\cdot\mathbf{x}^{i}(\mathbf{x}^{i})^{T}}{(1+e^{y^{i}\hat{\mathbf{w}}^{T}\mathbf{x}^{i}})^{2}}+\lambda I,\tag{15}$$

$$E=\frac{-a_{m}e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}^{\prime}}\cdot\mathbf{d}^{\prime}(\mathbf{d}^{\prime})^{T}}{(1+e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}^{\prime}})^{2}}-\frac{-a_{m}e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}}\cdot\mathbf{d}(\mathbf{d})^{T}}{(1+e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}})^{2}}.\tag{16}$$

Now,

$$\frac{|\det(J(\hat{\mathbf{w}}\to\mathbf{b}^{\prime}_{1}|D))|^{-1}}{|\det(J(\hat{\mathbf{w}}\to\mathbf{b}^{\prime}_{2}|D^{\prime}))|^{-1}}=\frac{|\det(A+E)|}{|\det(A)|}.$$

Let $\lambda_{1}(M)$ and $\lambda_{2}(M)$ denote the first and second largest eigenvalues of a matrix $M$. Since $E$ is of rank 2, from Lemma 10 of Chaudhuri et al. (2011), we have

$$\frac{|\det(A+E)|}{|\det(A)|}=|1+\lambda_{1}(A^{-1}E)+\lambda_{2}(A^{-1}E)+\lambda_{1}(A^{-1}E)\lambda_{2}(A^{-1}E)|.$$

Now, since we consider the logistic loss, which is convex, any eigenvalue of $A$ is at least $\lambda$. Thus $|\lambda_{j}(A^{-1}E)|\leq\frac{1}{\lambda}|\lambda_{j}(E)|$. Now, applying the triangle inequality to the trace norm, we have

$$|\lambda_{1}(E)|+|\lambda_{2}(E)|\leq2\|\mathbf{d}\|^{2}a_{m}\frac{e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}}}{(1+e^{y^{m}\hat{\mathbf{w}}^{T}\mathbf{d}})^{2}}\leq2a_{m}.$$

Therefore, by the AM-GM inequality, $\lambda_{1}(E)\lambda_{2}(E)\leq a_{m}^{2}$.

Thus,

$$\frac{|\det(A+E)|}{|\det(A)|}\leq\left(1+\frac{a_{m}}{\lambda}\right)^{2}.$$

Therefore, we have

$$\begin{aligned}\frac{\mathbb{P}[\mathbf{w}(\mathbf{x}^{1},\ldots,\mathbf{x}^{m}=\mathbf{d},y^{1},\ldots,y^{m}=y)\in V]}{\mathbb{P}[\mathbf{w}(\mathbf{x}^{1},\ldots,\mathbf{x}^{m}=\mathbf{d}^{\prime},y^{1},\ldots,y^{m}=y^{\prime})\in V]}&\leq e^{\eta(\|\mathbf{b}^{\prime}_{1}\|-\|\mathbf{b}^{\prime}_{2}\|)/2}\cdot\frac{|\det(A+E)|}{|\det(A)|}\\&\leq\exp\Bigl(a_{m}\eta+2\log\bigl(1+\tfrac{a_{m}}{\lambda}\bigr)\Bigr)\\&\leq\exp\Bigl(a_{m}\eta+2\log\bigl(1+\tfrac{k}{m\lambda}\bigr)\Bigr)\\&\leq\exp\Bigl(\epsilon_{m}+2\log\bigl(1+\tfrac{k}{m\lambda}\bigr)\Bigr)=\exp(\epsilon_{m}+\Delta),\end{aligned}$$

where the last inequality holds because $(\mathbf{a},\eta)\in\mathbb{F}$ (so $a_{m}\eta\leq\epsilon_{m}$), and we also use the constraint $a_{m}\leq\frac{k}{m}$ for some $k>0$. ■

Theorem A.1. Given a classification task, let $D$ be the dataset from $m$ users with $\|x^{i}\|\leq1\ \forall i$. Further, consider that users have differential privacy requirements $\boldsymbol{\epsilon}=(\epsilon_{i})_{i=1}^{m}\in\mathbb{R}_{+}^{m}$, respectively. Also, let $\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})=\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]$ be the misclassification loss and $\hat{\mathbb{L}}(D,\mathbf{w},\mathbf{a},\eta)$ be as defined in Algorithm 1.

Then, the following holds with probability at least $(1-\delta)(1-\delta^{\prime})$ for every $\boldsymbol{\epsilon}\in\mathbb{R}_{+}^{m}$ and $(\mathbf{a},\eta)\in\mathbb{F}$:

$$\sup_{\|\mathbf{w}\|\leq\beta}\left|\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})-\hat{\mathbb{L}}(D,\mathbf{a},\mathbf{w},\eta)\right|\leq\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}.$$

Thus, the misclassification loss can be upper-bounded by

$$\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\mathbf{w})\leq\hat{\mathbb{L}}(D,\mathbf{a},\mathbf{w},\eta)+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}\qquad\forall\,\mathbf{w}\ \text{s.t.}\ \|\mathbf{w}\|\leq\beta,\ (\mathbf{a},\eta)\in\mathcal{F}.\tag{22}$$

Proof: For any sample $S=(\mathbf{x}^{1},y^{1}),\ldots,(\mathbf{x}^{m},y^{m})$ and any $\mathbf{w}\in\mathbb{R}^{n}$, we define the empirical loss function as

$$\hat{\mathbb{L}}_{S}[\mathbf{w}]=\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\mathbf{b}^{\prime T}\mathbf{w},$$

where $\|\mathbf{b}^{\prime}\|\sim\Gamma(n,\frac{2}{\eta})$ and the direction of $\mathbf{b}^{\prime}$ is chosen uniformly at random. The true loss function satisfies

$$\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]\leq\mathbb{E}[\log(1+e^{-y\cdot\mathbf{w}^{T}\mathbf{x}})]=\mathbb{L}[\mathbf{w}].$$

Since $\sum_i a_i=1$ and $\mathbb{E}[\mathbf{b}^{\prime}]=0$, we have

$$\mathbb{E}\big[\hat{\mathbb{L}}_{S}[\mathbf{w}]\big]=\sum_{i}a_{i}\mathbb{E}[\log(1+e^{-y\cdot\mathbf{w}^{T}\mathbf{x}})]+\mathbb{E}[\mathbf{b}^{\prime T}\mathbf{w}]=\mathbb{E}[\log(1+e^{-y\cdot\mathbf{w}^{T}\mathbf{x}})]=\mathbb{L}[\mathbf{w}].$$

Let $\phi(S)=\sup_{\mathbf{w}\in\mathbb{R}^{n}}(\mathbb{L}[\mathbf{w}]-\hat{\mathbb{L}}_{S}[\mathbf{w}])$. To bound $\mathbf{b}^{\prime T}\mathbf{w}$, we consider the event where $\mathbf{b}^{\prime T}\mathbf{w}<r$ for all $\mathbf{w}$ such that $\|\mathbf{w}\|\leq\beta$, which holds whenever $\|\mathbf{b}^{\prime}\|<r/\beta$. Suppose this event happens with probability $1-\delta^{\prime}$. From the CDF of $\Gamma(n,\frac{2}{\eta})$, we get

$$\sum_{i=0}^{n-1}\frac{(\frac{\eta r}{2\beta})^{i}}{i!}e^{-\frac{\eta r}{2\beta}}=\delta^{\prime}.\tag{23}$$

Let $\eta r/\beta=t$ and $v(t)=\sum_{i=0}^{n-1}\frac{(t/2)^{i}}{i!}e^{-t/2}$. It is known that $v(t)$ is a monotonically decreasing function.

Therefore, its inverse exists, and we have

$$r=\frac{\beta v^{-1}(\delta^{\prime})}{\eta}.$$
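For concreteness, $v(t)$ coincides with the survival function of a $\Gamma(n,2)$ (Erlang) random variable evaluated at $t$, so $v^{-1}(\delta^{\prime})$ can be evaluated numerically. The snippet below is a minimal sketch of this computation (our own illustration, not part of Algorithm 1; it assumes SciPy is available, and the helper names are hypothetical):

```python
# Minimal numerical sketch for v(t) and v^{-1}(delta'); v(t) equals the
# survival function of a Gamma(shape=n, scale=2) random variable at t.
import math
from scipy.stats import gamma

def v(t, n):
    """v(t) = sum_{i=0}^{n-1} (t/2)^i / i! * exp(-t/2)."""
    return math.exp(-t / 2) * sum((t / 2) ** i / math.factorial(i) for i in range(n))

def v_inverse(delta_prime, n):
    """Solve v(t) = delta', i.e., the inverse survival function of Gamma(n, scale=2)."""
    return gamma.isf(delta_prime, a=n, scale=2)

# Example: with n = 10 and delta' = 0.05, r = beta * v_inverse(0.05, 10) / eta
# bounds b'^T w by r simultaneously for all ||w|| <= beta with probability 1 - delta'.
t = v_inverse(0.05, 10)
assert abs(v(t, 10) - 0.05) < 1e-8
```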

Now, using McDiarmid's inequality,

$$P(|\phi(S)-\mathbb{E}_{S}[\phi(S)]|>t)\leq\exp\left(\frac{-2t^{2}}{\sum_i a_{i}^{2}\log^{2}(1+e^{\beta})+(\frac{2\beta v^{-1}(\delta^{\prime})}{\eta})^{2}}\right).$$

Thus, with probability at least (1 − δ)(1 − δ ′),

$$\phi(S)\leq\mathbb{E}_{S}[\phi(S)]+\sqrt{\frac{\ln\frac{1}{\delta}\Big(\sum_i a_{i}^{2}\log^{2}(1+e^{\beta})+\big(\frac{2\beta v^{-1}(\delta^{\prime})}{\eta}\big)^{2}\Big)}{2}}.\tag{25}$$

Moreover, we can write

$$\begin{aligned}
\mathbb{E}_{S}[\phi(S)] &= \mathbb{E}_{S}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\big(\mathbb{L}[\mathbf{w}]-\hat{\mathbb{L}}_{S}(\mathbf{w})\big)\Big] = \mathbb{E}_{S}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\mathbb{E}_{S^{\prime}}\big[\hat{\mathbb{L}}_{S^{\prime}}(\mathbf{w})-\hat{\mathbb{L}}_{S}(\mathbf{w})\big]\Big] \\
&\leq \mathbb{E}_{S,S^{\prime}}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\big[\hat{\mathbb{L}}_{S^{\prime}}(\mathbf{w})-\hat{\mathbb{L}}_{S}(\mathbf{w})\big]\Big] \\
&= \mathbb{E}_{S,S^{\prime},\sigma_{i}}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i=1}^{m}a_{i}\sigma_{i}\big(\log(1+e^{-y^{\prime i}\cdot\mathbf{w}^{T}\mathbf{x}^{\prime i}})-\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})\big)\Big] \\
&\leq \mathbb{E}_{S,\sigma_{i}}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i=1}^{m}a_{i}\sigma_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})\Big]+\mathbb{E}_{S^{\prime},\sigma_{i}}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i=1}^{m}-a_{i}\sigma_{i}\log(1+e^{-y^{\prime i}\cdot\mathbf{w}^{T}\mathbf{x}^{\prime i}})\Big] \\
&= 2R_{m}(\mathbf{w}),
\end{aligned}$$

where in the above derivation, $R_m(\mathbf{w})$ is given by

$$R_{m}(\mathbf{w})=\mathbb{E}_{\sigma,S}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i=1}^{m}a_{i}\sigma_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})\Big],$$

and the $\sigma_i$ are i.i.d. Rademacher random variables taking values uniformly in $\{-1,1\}$, introduced through a standard symmetrization step. Now, we can again use McDiarmid's inequality to get

$$\begin{aligned}
R_{m}(\mathbf{w}) &\leq \hat{R}_{S}(\mathbf{w})+\sqrt{\frac{\ln\frac{1}{\delta}\left(\sum_{i}a_{i}^{2}\log^{2}(1+e^{\beta})+\left(\frac{2\beta v^{-1}(\delta^{\prime})}{\eta}\right)^{2}\right)}{2}} \\
&\leq \hat{R}_{S}(\mathbf{w})+\sqrt{\frac{\ln\frac{1}{\delta}}{2}}\Bigg(\sqrt{\sum_i a_{i}^{2}\log^{2}(1+e^{\beta})}+\frac{2\beta v^{-1}(\delta^{\prime})}{\eta}\Bigg).
\end{aligned}$$

Finally, we calculate RˆS(w) as

$$\begin{aligned}
\hat{R}_{S}(\mathbf{w}) &= \mathbb{E}_{\sigma}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i}a_{i}\sigma_{i}\log(1+e^{-y^{i}\mathbf{w}^{T}\mathbf{x}^{i}})+\mathbf{b}^{\prime T}\mathbf{w}\Big] \\
&\leq \frac{1}{\ln2}\,\mathbb{E}_{\sigma}\Big[\sup_{\|\mathbf{w}\|\leq\beta}\sum_{i}a_{i}\sigma_{i}(-y^{i}\mathbf{w}^{T}\mathbf{x}^{i})\Big]+\frac{\beta v^{-1}(\delta^{\prime})}{\eta} \\
&\leq \frac{\|\mathbf{w}\|}{\ln2}\,\mathbb{E}_{\sigma}\Big[\Big\|\sum_{i}a_{i}\sigma_{i}\mathbf{x}^{i}\Big\|\Big]+\frac{\beta v^{-1}(\delta^{\prime})}{\eta} \\
&\leq \frac{\beta}{\ln2}\sqrt{\sum_{i}a_{i}^{2}}+\frac{\beta v^{-1}(\delta^{\prime})}{\eta},
\end{aligned}$$

where the first inequality holds by the Lipschitz property, and the last inequality uses ∥x∥ ≤ 1 and ∥w∥ ≤ β.

Putting it together, we have

$$\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]\leq\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\mathbf{w}^{T}\mathbf{x}^{i}})+\mathbf{b}^{\prime T}\mathbf{w}+\bigg[3\sqrt{\frac{\ln\frac{1}{\delta}}{2}}\log(1+e^{\beta})+\frac{\beta}{\ln2}\bigg]\sqrt{\sum_{i}a_{i}^{2}}+\frac{6\sqrt{\frac{\ln\frac{1}{\delta}}{2}}+12\beta v^{-1}(\delta^{\prime})}{\eta}+\frac{\lambda}{2}\|\mathbf{w}\|^{2}.$$

Thus, it is enough to define $\mu(\delta,\beta)=3\sqrt{\frac{\ln\frac{1}{\delta}}{2}}\log(1+e^{\beta})+\frac{\beta}{\ln2}$ and $\sigma(\delta,\delta^{\prime},\beta)=6\sqrt{\frac{\ln\frac{1}{\delta}}{2}}+12\beta v^{-1}(\delta^{\prime})$. ■

Assumption A.2. The virtual cost $\Psi_i(c)=c+\frac{F_i(c)}{f_i(c)}$ is an increasing function of $c$.

Theorem A.3. Assume that $c_i$ is drawn from a known PDF $f(\cdot)$. Given a mechanism design problem with privacy sensitivities $\mathbf{c}$ and privacy guarantees $\boldsymbol{\epsilon}$, let the sellers' costs be given by Eq. (4). Then, using the IC and IR constraints, the payments $t_i(\mathbf{c})$ can be substituted in the objective by $\Psi_i(c_i)u(\epsilon_i(\mathbf{c}))$, where $\Psi_i(c_i)$ is the virtual cost function given by

$$\Psi_{i}(c_{i})=c_{i}+\frac{F_{i}(c_{i})}{f_{i}(c_{i})}\quad\forall i\in\mathbb{N},\,c_{i}\in\mathbb{R}.\tag{26}$$

Proof: The proof follows similar steps as those in Fallah et al. (2023). Let $h_i(c)=\mathbb{E}_{\mathbf{c}_{-i}}[\mathbb{L}(D,\mathbf{c};\boldsymbol{\epsilon},\theta)]$, where $c$ is the argument corresponding to the privacy sensitivity of agent $i$. Similarly, let $t_i(c)=\mathbb{E}_{\mathbf{c}_{-i}}[t_i(D,c,\mathbf{c}_{-i})]$ and $u(\epsilon_i(c))=\mathbb{E}_{\mathbf{c}_{-i}}[u(\epsilon_i(D,c,\mathbf{c}_{-i}))]$. Using the IC constraint, we have

$$c_{i}\cdot u(\epsilon_{i}(c_{i}))-t_{i}(c_{i})\leq c_{i}\cdot u(\epsilon_{i}(c_{i}^{\prime}))-t_{i}(c_{i}^{\prime}).$$

From the IC constraint, the function $c_i\cdot u(\epsilon_i(c))-t_i(c)$ has a minimum at $c=c_i$. Thus, by equating the derivative to 0 and substituting $c=c_i$, we get

$$c_{i}\cdot\left(\frac{du(\epsilon_{i}(c))}{dc}\right)_{c=c_{i}}=t_{i}^{\prime}(c_{i}).\tag{27}$$

Integrating this equation, we get

$$t_{i}(c_{i})=t_{i}(0)+c_{i}u(\epsilon_{i}(c_{i}))-\int_{0}^{c_{i}}u(\epsilon_{i}(z))dz.\tag{28}$$

If an individual does not participate, i.e., does not provide their data, then their loss is 0. Thus, using the IR constraint, for all $c_i$ we have

$$t_{i}(0)\geq\int_{0}^{c_{i}}u(\epsilon_{i}(z))dz.$$

Because $u(\epsilon_i(z))\geq0$ and the above must hold for every $c_i$, it implies

$$t_{i}(0)\geq\int_{0}^{\infty}u(\epsilon_{i}(z))dz.$$

Plugging this relation into Eq (28), we get

$$t_{i}(c_{i})\geq c_{i}u(\epsilon_{i}(c_{i}))+\int_{c_{i}}^{\infty}u(\epsilon_{i}(z))dz.$$

Thus, for a given $\mathbf{c}$, the payments are chosen as $c_i u(\epsilon_i(c_i))+\int_{c_i}^{\infty}u(\epsilon_i(z))dz$. Also, note that the payment required for $c_i=\infty$ would be 0. Therefore, the payment can also be written as $-\int_{c_i}^{\infty}z\frac{d}{dz}u(\epsilon_i(z))dz$. This is an interesting observation because the payment obtained in our problem is similar to Myerson's payment mechanism. Now, we can compute $\mathbb{E}_{c_i}[t_i(c_i)]$ as

$$\mathbb{E}_{c_{i}}[t_{i}(c_{i})]=\mathbb{E}_{c_{i}}[c_{i}u(\epsilon_{i}(c_{i}))]+\mathbb{E}_{c_{i}}\Big[\int_{c_{i}}^{\infty}u(\epsilon_{i}(z))dz\Big]=\int_{z_{-i}}\int_{z_{i}}\Big(z_{i}u(\epsilon_{i}(z_{i},z_{-i}))+\int_{y_{i}=z_{i}}^{\infty}u(\epsilon_{i}(y_{i},z_{-i}))dy_{i}\Big)f_{i}(z_{i})dz_{i}\,f_{-i}(z_{-i})dz_{-i}.$$

By changing the order of integrals, we have

$$\mathbb{E}_{c_{i}}[t_{i}(c_{i})]=\mathbb{E}_{\mathbf{c}}[\Psi_{i}(c_{i})u(\epsilon_{i}(\mathbf{c}))],$$

where $\Psi_i(c_i)=c_i+\frac{F_i(c_i)}{f_i(c_i)}$. Therefore, to minimize the expected error, for any given $\mathbf{c}^{\prime}$, one can choose

$$t_{i}(D,\mathbf{c}^{\prime})=\Psi_{i}(c_{i})u(\epsilon_{i}(D,\mathbf{c}^{\prime})),$$

which completes the proof. ■

Theorem A.4. Let $(\mathbf{a}^{*},\mathbf{w}^{*},\eta^{*})$ be the optimal solution for the objective function given in Eq. (10). Then$^{4}$

$$a_{i}^{*}=\left(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\right)^{+}\left(\frac{\|\mathbf{a}^{*}\|}{\mu}\right),$$

with τ such that

$$\sum_{i=1}^{m}\left(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\right)^{+}=\frac{\mu}{\|\mathbf{a}^{*}\|},$$

where $\eta^{*}$ is given by

$$\eta^{*}=\left(\frac{\sigma+2\mathbf{b}^{T}\mathbf{w}}{\gamma\sum_{i=1}^{m}\psi_{i}(c_{i})a_{i}^{*}}\right)^{1/2}.$$

Proof: The Lagrangian for the objective function (10) is given by

$$\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum_{i}a_{i}\Psi_{i}(c_{i})+\tau\Big(1-\sum_{i=1}^{m}a_{i}\Big)-\sum_{i=1}^{m}\zeta_{i}a_{i}-\kappa\eta.$$

Now, if $(\mathbf{a}^{*},\mathbf{w}^{*},\eta^{*})$ is the optimal solution, then it must satisfy the first-order necessary conditions. Therefore, by taking the derivatives of the Lagrangian with respect to $\mathbf{a}$ and $\eta$ and setting them to 0, we get

$$a_{i}^{*}=\Big(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\Big)^{+}\Big(\frac{\|\mathbf{a}^{*}\|}{\mu}\Big),\qquad\eta^{*}=\Big(\frac{\sigma+2\mathbf{b}^{T}\mathbf{w}}{\gamma\sum_{i=1}^{m}\psi_{i}(c_{i})a_{i}^{*}}\Big)^{1/2}.$$

Finally, using the constraint $\sum_{i=1}^{m}a_i=1$, we obtain

$$\sum_{i=1}^{m}\left(\tau-\gamma\eta^{*}\psi_{i}(c_{i})-\log(1+e^{-y^{i}\cdot(\mathbf{w}^{*})^{T}\mathbf{x}^{i}})\right)^{+}=\frac{\mu}{\|\mathbf{a}^{*}\|}.$$

■
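To make the structure of this solution concrete, the following is a minimal numerical sketch (our own illustration, not the paper's reference implementation) of evaluating these first-order conditions for a fixed $\mathbf{w}$: $\tau$ is found by bisection so that $\|(\tau-\gamma\eta\psi-\ell)^{+}\|=\mu$, the thresholded values are normalized so that $\sum_i a_i=1$, and $\eta$ is then updated from its closed form. The function names and the alternating loop are assumptions for illustration only.

```python
# Sketch: evaluate the first-order conditions of Theorem A.4 for a fixed w.
import numpy as np

def logistic_losses(w, X, y):
    # per-sample losses log(1 + exp(-y_i * w^T x_i))
    return np.log1p(np.exp(-y * (X @ w)))

def solve_a(eta, ell, psi, gamma, mu):
    def thresholded(tau):
        return np.maximum(tau - gamma * eta * psi - ell, 0.0)
    # ||(tau - gamma*eta*psi - ell)^+|| is nondecreasing in tau: bisection on tau.
    lo, hi = 0.0, np.max(gamma * eta * psi + ell) + mu + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(thresholded(mid)) < mu:
            lo = mid
        else:
            hi = mid
    t = thresholded(0.5 * (lo + hi))
    return t / t.sum()          # normalize so that sum_i a_i = 1

def update_eta(a, psi, w, b, gamma, sigma):
    return np.sqrt((sigma + 2 * b @ w) / (gamma * np.sum(psi * a)))

# Hypothetical usage, alternating the two closed-form updates for a fixed w:
#   a = np.ones(m) / m; eta = 1.0
#   for _ in range(50):
#       a = solve_a(eta, logistic_losses(w, X, y), psi, gamma, mu)
#       eta = update_eta(a, psi, w, b, gamma, sigma)
```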

$^{4}$ $(f(x))^{+}$ is used to denote $\max(0,f(x))$.

Theorem A.5. Assume that the dataset and privacy sensitivities satisfy conditions (a)-(c) given in Section 3.6. Furthermore, let $\|\mathbf{b}\|\sim\Gamma(n,1)$. Then, as $m\to\infty$, there exists a constant $\delta>0$ such that the objective function can be upper-bounded almost surely as

$$\lim_{m\to\infty}\min_{\mathbf{w},\boldsymbol{\epsilon},\|\mathbf{w}\|\leq\beta}\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i=1}^{m}\epsilon_{i}\Psi_{i}(c_{i})\leq\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}},$$

wherein the payment is at most $\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}/\gamma$. In particular, the inequality is non-trivial if $p\gamma$ satisfies

$$\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}<1.$$

Furthermore, as $p\to0$, the above limit becomes zero.

Proof: We first consider the case where $p>0$. We can write

$$\begin{aligned}
&\lim_{m\to\infty}\min_{\mathbf{a},\eta,\mathbf{w},\|\mathbf{w}\|\leq\beta,\beta}\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum_{i=1}^{m}a_{i}\Psi_{i}(c_{i}) \\
&\leq\lim_{m\to\infty}\min_{\beta,\eta}\log(1+e^{-\delta\|\mathbf{x}\|\cdot\|\mathbf{w}\|})+\frac{2\beta\|\mathbf{b}\|}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum a_{i}\Psi_{i}(c_{i}) \\
&\leq\lim_{m\to\infty}\min_{\beta,\eta}\log(1+e^{-\delta\beta})+\frac{2\beta\|\mathbf{b}\|}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum a_{i}\Psi_{i}(c_{i}) \\
&\leq\min_{\beta}\log(1+e^{-\delta\beta})+2\sqrt{(\sigma+2\beta\|\mathbf{b}\|)p\gamma} \\
&\leq\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}.
\end{aligned}\tag{29}$$

Here, we choose $a_i=\frac{1}{N}$ for $\psi_i(c_i)\leq p+\frac{1}{m^k}$ for some $k>0$, where $N$ is a random variable denoting the number of data points for which $\psi_i(c_i)\leq p+\frac{1}{m^k}$. Therefore, $N\to m\,\mathbb{P}\big(\psi_i(c_i)\leq p+\frac{1}{m^k}\big)$ almost surely, and thus $\lim_{m\to\infty}\|\mathbf{a}\|=0$. Further, we take $\eta=\sqrt{\frac{\sigma+2\beta\|\mathbf{b}\|}{p\gamma}}$, and for the last step we set $\beta=\frac{1}{\sqrt{p\gamma}}$. The minimum value of the function is no larger than its value at this particular choice of variables. Note that a trivial solution can be $\eta=0$.

Also, misclassification loss can be at most 1. Therefore,

$$\lim_{p\to0}\lim_{m\to\infty}\min_{\mathbf{w},\boldsymbol{\epsilon}}\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i=1}^{m}\epsilon_{i}\Psi_{i}(c_{i})\leq1.$$

Thus, Eq. (29) is non-trivial when pγ is such that

$$\log(1+e^{-\frac{\delta}{\sqrt{p\gamma}}})+2\sqrt{\sigma p\gamma+2\|\mathbf{b}\|\sqrt{p\gamma}}<1.$$

For p → 0, the above loss converges to 0. Therefore,

$$\begin{aligned}
&\lim_{p\to0}\lim_{m\to\infty}\min_{\mathbf{w},\boldsymbol{\epsilon}}\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i=1}^{m}\epsilon_{i}\Psi_{i}(c_{i}) \\
&\leq\lim_{p\to0}\lim_{m\to\infty}\min_{\mathbf{a},\eta,\mathbf{w},\|\mathbf{w}\|\leq\beta,\beta}\Big[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum a_{i}\Psi_{i}(c_{i})\Big]=0.
\end{aligned}$$

Next, we consider the case of $p=0$. We choose $a_i=\frac{1}{N}$ for $\psi_i(c_i)\leq\frac{1}{m^k}$ for some $k>0$. Therefore, $N\to m\,\mathbb{P}\big(\psi_i(c_i)\leq\frac{1}{m^k}\big)$ almost surely, and thus $\lim_{m\to\infty}\|\mathbf{a}\|=0$. Further, take $\eta=m^{k^{\prime}}$, where $0<k^{\prime}<k$, and $\beta=m^{k^{\prime\prime}}$, where $0<k^{\prime\prime}<k^{\prime}$. By substituting these parameters into the above expression, we get

$$\begin{aligned}
&\lim_{m\to\infty}\min_{\|\mathbf{w}\|\leq\beta,\beta}\Big[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\frac{\sigma}{\eta}+\gamma\eta\sum_{i=1}^{m}a_{i}\Psi_{i}(c_{i})\Big] \\
&\leq\lim_{m\to\infty}\log(1+e^{-\delta m^{k^{\prime\prime}}})+\frac{2\|\mathbf{b}\|m^{k^{\prime\prime}}}{m^{k^{\prime}}}+\frac{\sigma}{m^{k^{\prime}}}+\gamma\frac{m^{k^{\prime}}}{m^{k}}=0.
\end{aligned}$$

This completes the proof. ■

Theorem A.6. Given a classification task, let $D$ be a set of data points from $m$ users with $\|\mathbf{x}^i\|\leq1$ for each $i$. Then, there exists a value $\lambda_{\mathrm{conv}}$ such that the objective function as defined in Eq. (13) is convex in $(\mathbf{w},\mathbf{z})$ for $\lambda>\lambda_{\mathrm{conv}}$ and $\mu,\sigma,\eta\in\mathbb{R}_{+}$. Let $(\mathbf{w}^{t},\mathbf{z}^{t})_{t\in\mathbb{N}}$ be the sequence of iterates obtained by applying projected gradient descent on $f(\cdot)$ for a fixed $\eta$ over a convex set $S$, and let $f^{*}_{\eta}=\inf_{\mathbf{w},\mathbf{z}}f(\mathbf{w},\mathbf{z},\eta)$. Then, for $\lambda>\lambda_{\mathrm{conv}}$, there exists $0<\alpha<1$ such that

$$f(\mathbf{w}^{t},\mathbf{z}^{t},\eta)-f_{\eta}^{*}\leq\alpha^{t}(f(\mathbf{w}^{0},\mathbf{z}^{0},\eta)-f_{\eta}^{*}).$$

Proof: We prove that the function

$$\sum_{i=1}^{m}\left[e^{z_{i}}\log(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}\cdot y^{i}})+\frac{\lambda_{0}^{i}}{2}\|\mathbf{w}\|^{2}+\gamma\cdot m\epsilon_{avg}e^{z_{i}}\Psi_{i}\right],$$

where $\sum_{i=1}^{m}\lambda_{0}^{i}=\lambda$, is jointly convex in $\mathbf{w}$ and $\mathbf{z}$, which implies that the objective function is also jointly convex. The Hessian matrix for the $i$-th term of the loss function takes the form

$$\begin{pmatrix}
a & b(x_{1}^{i}y^{i}) & \cdots & b(x_{n}^{i}y^{i}) \\
b(x_{1}^{i}y^{i}) & 2\lambda_{0}^{i}+c(x_{1}^{i}y^{i})^{2} & \cdots & c(x_{1}^{i}y^{i})(x_{n}^{i}y^{i}) \\
b(x_{2}^{i}y^{i}) & c(x_{1}^{i}y^{i})(x_{2}^{i}y^{i}) & \cdots & c(x_{2}^{i}y^{i})(x_{n}^{i}y^{i}) \\
\vdots & \vdots & \ddots & \vdots \\
b(x_{n}^{i}y^{i}) & c(x_{1}^{i}y^{i})(x_{n}^{i}y^{i}) & \cdots & c(x_{n}^{i}y^{i})^{2}+2\lambda_{0}^{i}
\end{pmatrix},$$

where $a$, $b$, and $c$ in the above matrix are given by

$$\begin{aligned}
a&=e^{z_{i}}\log(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}\cdot y^{i}}),\\
b&=e^{z_{i}}\frac{e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}{1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}\frac{1}{\ln2},\\
c&=e^{z_{i}}\frac{e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}{(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}})^{2}}\frac{1}{\ln2}.
\end{aligned}$$

Using column elimination, we get

$$\begin{pmatrix}
a & b(x_{1}^{i}y^{i}) & 0 & \cdots & 0 \\
b(x_{1}^{i}y^{i}) & 2\lambda_{0}^{i}+c(x_{1}^{i}y^{i})^{2} & -2\lambda_{0}^{i}\frac{x_{2}^{i}y^{i}}{x_{1}^{i}y^{i}} & \cdots & -2\lambda_{0}^{i}\frac{x_{n}^{i}y^{i}}{x_{1}^{i}y^{i}} \\
b(x_{2}^{i}y^{i}) & c(x_{1}^{i}y^{i})(x_{2}^{i}y^{i}) & 2\lambda_{0}^{i} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b(x_{n}^{i}y^{i}) & c(x_{1}^{i}y^{i})(x_{n}^{i}y^{i}) & 0 & \cdots & 2\lambda_{0}^{i}
\end{pmatrix}.$$

Thus, the determinant is equal to

$$a\cdot\det\begin{pmatrix}
2\lambda_{0}^{i}+c(x_{1}^{i}y^{i})^{2} & -2\lambda_{0}^{i}\frac{x_{2}^{i}y^{i}}{x_{1}^{i}y^{i}} & \cdots & -2\lambda_{0}^{i}\frac{x_{n}^{i}y^{i}}{x_{1}^{i}y^{i}} \\
c(x_{1}^{i}y^{i})(x_{2}^{i}y^{i}) & 2\lambda_{0}^{i} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
c(x_{1}^{i}y^{i})(x_{n}^{i}y^{i}) & 0 & \cdots & 2\lambda_{0}^{i}
\end{pmatrix}
-b(x_{1}^{i}y^{i})\cdot\det\begin{pmatrix}
b(x_{1}^{i}y^{i}) & -2\lambda_{0}^{i}\frac{x_{2}^{i}y^{i}}{x_{1}^{i}y^{i}} & \cdots & -2\lambda_{0}^{i}\frac{x_{n}^{i}y^{i}}{x_{1}^{i}y^{i}} \\
b(x_{2}^{i}y^{i}) & 2\lambda_{0}^{i} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
b(x_{n}^{i}y^{i}) & 0 & \cdots & 2\lambda_{0}^{i}
\end{pmatrix}.$$

The first matrix (whose determinant is multiplied by $a$) is of the form $A+2\lambda_{0}^{i}I$, where $A$ is a rank-1 matrix. Thus, $A$ has $(n-1)$ eigenvalues equal to 0, and we calculate the non-zero eigenvalue to be $c\|\mathbf{x}\|^{2}$. Hence, its determinant equals $(2\lambda_{0}^{i})^{n-1}(2\lambda_{0}^{i}+c\|\mathbf{x}\|^{2})$. Expanding the second term, we get $(2\lambda_{0}^{i})^{n-1}b^{2}\|\mathbf{x}\|^{2}$. Thus, to ensure that the Hessian is positive definite, we need

$$2\lambda_{0}^{i}>\frac{b^{2}-ac}{a}\|\mathbf{x}^{i}\|^{2}\;\Rightarrow\;2\lambda_{0}^{i}>\|\mathbf{x}^{i}\|^{2}\Big(\frac{1}{\ln2}\Big)^{2}\cdot e^{z_{i}}\Big(\frac{e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}{1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}\Big)^{2}\Big(\frac{1}{\log(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}})}-\ln2\cdot e^{\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}\Big).$$

Thus, the coefficient of $\|\mathbf{w}\|^{2}$ (denoted by $\lambda$) needs to satisfy

$$\begin{aligned}
2\lambda &> \sum_{i}e^{z_{i}}\|\mathbf{x}^{i}\|^{2}\Big(\frac{1}{\ln2}\Big)^{2}\cdot\Big(\frac{e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}{1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}\Big)^{2}\Big(\frac{1}{\log(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}})}-\ln2\cdot e^{\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}\Big), \\
\Rightarrow\quad 2\lambda &> \Big(\frac{1}{\ln2}\Big)^{2}\cdot\max_{i}\|\mathbf{x}^{i}\|^{2}\Big(\frac{e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}{1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}}\Big)^{2}\Big(\frac{1}{\log(1+e^{-\mathbf{w}^{T}\mathbf{x}^{i}y^{i}})}-\ln2\cdot e^{\mathbf{w}^{T}\mathbf{x}^{i}y^{i}}\Big).
\end{aligned}\tag{30}$$

Next, we proceed to prove the second part. First, observe that for $\lambda>\lambda_{\mathrm{conv}}$, the function is $(\lambda-\lambda_{\mathrm{conv}})$-strongly convex; let it be $\mu$-strongly convex. Denote $(\mathbf{w},\mathbf{z})$ by $\mathbf{v}$, and let $\inf_{\mathbf{v}}f(\mathbf{v})=L$. Then, for every $\delta>0$ there exists $\mathbf{v}^{*}\in\mathbb{R}^{m+n}$ such that

$$f(\mathbf{v}^{*})-L<\delta.$$

Thus, we have

$$\begin{aligned}
f(\mathbf{v})-L &= f(\mathbf{v})-f(\mathbf{v}^{*})+f(\mathbf{v}^{*})-L \\
&\leq \delta+\langle\nabla f(\mathbf{v}),\mathbf{v}-\mathbf{v}^{*}\rangle-\frac{\mu}{2}\|\mathbf{v}^{*}-\mathbf{v}\|^{2} \\
&\leq \delta+\frac{1}{2\mu}\|\nabla f(\mathbf{v})\|^{2}.
\end{aligned}\tag{31}$$

Next, we prove that $f$ is smooth. Let $\nabla_{\mathbf{w}}f(\mathbf{v})$ and $\nabla_{\mathbf{z}}f(\mathbf{v})$ denote the gradient vectors of $f$ with respect to $\mathbf{w}$ and $\mathbf{z}$, respectively. Then

$$\begin{aligned}
\|\nabla f(\mathbf{v}^{1})-\nabla f(\mathbf{v}^{2})\|^{2} &= \|\nabla_{\mathbf{w}}f(\mathbf{v}^{1})-\nabla_{\mathbf{w}}f(\mathbf{v}^{2})\|^{2}+\|\nabla_{\mathbf{z}}f(\mathbf{v}^{1})-\nabla_{\mathbf{z}}f(\mathbf{v}^{2})\|^{2} \\
&= \|\nabla_{\mathbf{w}}f(\mathbf{v}^{1})-\nabla_{\mathbf{w}}f(\mathbf{v}^{2})\|^{2}+\Big\|e^{\mathbf{z}^{1}}\log(1+e^{-\mathbf{w}^{1T}\mathbf{x}y})+\frac{e^{\mathbf{z}^{1}}}{\|e^{\mathbf{z}^{1}}\|}-e^{\mathbf{z}^{2}}\log(1+e^{-\mathbf{w}^{2T}\mathbf{x}y})-\frac{e^{\mathbf{z}^{2}}}{\|e^{\mathbf{z}^{2}}\|}\Big\|^{2}.
\end{aligned}$$

Now, the logistic loss is $L_{1}$-smooth for some $L_{1}>0$. Moreover, since $\|\mathbf{x}\|$ and $\|\mathbf{w}\|$ are bounded, $\|\log(1+e^{-\mathbf{w}^{T}\mathbf{x}y})\|$ is bounded. Further, $1/\|e^{\mathbf{z}}\|<\sqrt{m}$. Thus, there exists a $K$ such that $f$ is $K$-smooth.

Consider that the step size $\gamma$ for gradient descent is chosen such that $\gamma K\leq1$. Therefore, by the descent lemma, we have

$$\begin{aligned}
f(\mathbf{v}^{t+1}) &\leq f(\mathbf{v}^{t})+\langle\nabla f(\mathbf{v}^{t}),\mathbf{v}^{t+1}-\mathbf{v}^{t}\rangle+\frac{K}{2}\|\mathbf{v}^{t+1}-\mathbf{v}^{t}\|^{2} \\
&\leq f(\mathbf{v}^{t})-\gamma\|\nabla f(\mathbf{v}^{t})\|^{2}+\frac{K\gamma^{2}}{2}\|\nabla f(\mathbf{v}^{t})\|^{2} \\
&= f(\mathbf{v}^{t})-\frac{\gamma}{2}(2-K\gamma)\|\nabla f(\mathbf{v}^{t})\|^{2} \\
&\leq f(\mathbf{v}^{t})-\frac{\gamma}{2}\|\nabla f(\mathbf{v}^{t})\|^{2}.
\end{aligned}$$

The second inequality uses the projection step and the fact that the projection is non-expansive, and the update rule is substituted to obtain terms in $\|\nabla f(\mathbf{v}^{t})\|^{2}$. Using Eq. (31) and applying recursion, we get

$$f(\mathbf{v}^{t})-L\leq(1-\gamma\mu)^{t}(f(\mathbf{v}^{0})-L)+\delta.$$

■
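The projected gradient descent analyzed above can be sketched as follows. This is a minimal illustration under the reparameterization $a_i=e^{z_i}$ for a fixed $\eta$, and is not the paper's reference implementation; in particular, the box projection used below is only a placeholder for the convex set $S$, and all sizes and step sizes are arbitrary.

```python
# Sketch: projected gradient descent on f(w, z, eta) for a fixed eta (Theorem A.6).
import numpy as np

def f_and_grads(w, z, X, y, b, psi, lam, mu, sigma, gamma, eta):
    a = np.exp(z)                              # a_i = exp(z_i)
    margins = y * (X @ w)
    ell = np.log1p(np.exp(-margins))           # per-sample logistic losses
    f = (a @ ell + 0.5 * lam * w @ w + 2 * (b @ w) / eta
         + mu * np.linalg.norm(a) + sigma / eta + gamma * eta * a @ psi)
    s = -a * y / (1.0 + np.exp(margins))       # a_i * d(ell_i)/d(w^T x_i)
    grad_w = X.T @ s + lam * w + 2 * b / eta
    grad_z = a * (ell + mu * a / np.linalg.norm(a) + gamma * eta * psi)
    return f, grad_w, grad_z

def projected_gradient_descent(X, y, b, psi, lam, mu, sigma, gamma, eta,
                               steps=500, lr=1e-2, z_bounds=(-10.0, 0.0)):
    m, n = X.shape
    w, z = np.zeros(n), np.full(m, -np.log(m))   # start from a_i = 1/m
    for _ in range(steps):
        _, gw, gz = f_and_grads(w, z, X, y, b, psi, lam, mu, sigma, gamma, eta)
        w, z = w - lr * gw, z - lr * gz
        z = np.clip(z, *z_bounds)                # placeholder projection onto S
    return w, np.exp(z)
```

For $\lambda>\lambda_{\mathrm{conv}}$ and a step size satisfying the condition above, the iterates of such a scheme contract toward $f^{*}_{\eta}$ at the linear rate established in Theorem A.6.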

B Appendix B: Additional Experiments

In this appendix, we perform additional experiments on generated synthetic data to demonstrate more trends in the solution. For synthetic data, we consider the classification boundary to be a linear separator passing through the origin. Input data $\mathbf{x}^i$ are generated by sampling from an i.i.d. zero-mean Gaussian distribution with bounded variance. The corresponding outputs $y^i$ are generated using the linear separator. Furthermore, $\mathbf{c}$ is drawn from $U[p,q]$, where $p,q\in\mathbb{R}$. Moreover, unless stated otherwise, we consider the same hyperparameter values as in the case of real data, and $\gamma$ is taken to be 1.0.
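A minimal sketch of this data-generation step is given below (our own illustration; the sizes, seed, and variance are arbitrary choices, and the virtual-cost formula assumes $c\sim U[p,q]$):

```python
# Sketch: synthetic data used in this appendix.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 5                            # number of sellers, feature dimension
p, q = 0.1, 1.0                          # support of the sensitivity distribution

w_true = rng.normal(size=n)              # linear separator through the origin
X = rng.normal(scale=0.5, size=(m, n))   # i.i.d. zero-mean features, bounded variance
y = np.sign(X @ w_true)                  # labels from the linear separator
c = rng.uniform(p, q, size=m)            # privacy sensitivities c_i ~ U[p, q]
psi = 2 * c - p                          # virtual cost c + F(c)/f(c) for U[p, q]
```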

Intuition behind hyperparameter µ


Figure 3: Algorithm performance with respect to µ.

Experiment Details: We start by validating our earlier point about the importance of considering the generalization loss in the objective function. For this, we solve the logistic regression problem while ensuring heterogeneous differential privacy, i.e., Eq. (3). The misclassification error is compared for different values of µ. A higher value of µ means a higher focus on the generalization error. Averaged results for different values of the differential privacy guarantee $\boldsymbol{\epsilon}$ and noise vector $\mathbf{b}$ are plotted in Figure 3. Train/test misclassification error is the percentage of misclassified samples, while the empirical loss is given by $\sum_{i=1}^{m}a_i\log(1+e^{-y^i\cdot\mathbf{w}^T\mathbf{x}^i})$.

Observations: We see that as µ increases, there is a reduction in both train and test misclassification errors at first, after which they increase slightly. Therefore, choosing the correct value of µ is important for achieving the best classification accuracy. Moreover, we observe that the empirical loss increases as µ increases. This means that for a smaller µ, the empirical loss is small even if samples are misclassified. Therefore, optimizing over the empirical loss alone might not result in a good logistic regression model. Hence, it is also necessary to consider the generalization terms in the optimization.

Performance With Respect To Distribution Of C

In order to observe the effect of the distribution of $\mathbf{c}$ on the solution, we vary $p$ and $q$ where $\mathbf{c}\sim U[p,q]$. The results are illustrated in Figure 4.


As expected, it is observed that the variance in $\mathbf{a}$ (the weight attached to each data point) decreases as the variance of $\mathbf{c}$ decreases. Moreover, as the variance of $\mathbf{c}$ goes to 0, the variance of $\mathbf{a}$ also vanishes. This implies that, as the cost per unit loss of privacy (i.e., the privacy sensitivity of all the users) becomes the same, the optimal choice for the platform is to treat each data point almost equally by providing them the same privacy guarantee while performing logistic regression.

Figure 4: Effect of the variance of c on the variance of a.

Performance With Respect To Parameter η

Here, we show how the addition of noise affects the performance of logistic regression. We fix $a_i=\frac{1}{m}$ and vary $\eta$; specifically, we write $\eta=m\cdot\epsilon_{avg}$ and vary $\epsilon_{avg}$. Therefore, we solve logistic regression by minimizing the following objective while adding noise to ensure differential privacy:

$$\min_{\mathbf{w}}\left[\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{\lambda}{2}\|\mathbf{w}\|^{2}+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}\right].$$

As before, $\|\mathbf{b}\|\sim\Gamma(n,1)$ and its direction is chosen uniformly at random. Note that this setup follows Chaudhuri et al. (2011); we nevertheless repeat the experiments and report our observations for completeness. The experiments are repeated for different values of $\mathbf{b}$, and the observations are averaged and depicted in Figure 5.
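A minimal sketch of this experiment (our own illustration using SciPy's L-BFGS-B; the regularization value is arbitrary) is:

```python
# Sketch: differentially private logistic regression for the eta experiment.
import numpy as np
from scipy.optimize import minimize

def sample_noise(n, rng):
    direction = rng.normal(size=n)
    direction /= np.linalg.norm(direction)
    return rng.gamma(shape=n, scale=1.0) * direction   # ||b|| ~ Gamma(n, 1)

def noisy_logistic_fit(X, y, eta, lam=1e-2, rng=None):
    rng = rng or np.random.default_rng()
    m, n = X.shape
    a = np.ones(m) / m                                  # fixed uniform weights
    b = sample_noise(n, rng)

    def objective(w):
        margins = y * (X @ w)
        return (a @ np.log1p(np.exp(-margins))
                + 0.5 * lam * w @ w
                + 2 * (b @ w) / eta)

    return minimize(objective, np.zeros(n), method="L-BFGS-B").x
```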

It can be seen that when η increases (and, therefore, the amount of noise decreases), the algorithm's performance improves, as expected. This is seen through a decrease in both train and test misclassification errors.


Figure 5: Algorithm performance with respect to η.

C Appendix C: A Simple Example

Consider a simple problem of solving logistic regression using a dataset collected from privacy-sensitive sellers.

Also consider that u(x) = x. Therefore, the buyer's objective is as before

$$\mathbb{E}_{\mathbf{c}}\left[\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]+\gamma\sum_{i}t_{i}\right].\tag{32}$$

Thus, the problem is formulated as follows:

  • Let the input be one-dimensional and the dataset consist of two sellers. Therefore, let $D=\{(x^1,y^1),(x^2,y^2)\}=\{(1,1),(-1,-1)\}$.

  • Moreover, let the privacy sensitivities be $c_1=0.1$ and $c_2=0.6$. Further, assume that the privacy sensitivities are i.i.d. and drawn from $U[0,1]$.

  • Therefore, our goal is to find the optimal model weights w, differential privacy guarantees ϵ1, ϵ2 and the payments t1, t2. Additionally, w needs to be calculated such that it is consistent with ϵ1, ϵ2.

To solve this, we will first calculate ψi. For c ∼ U[0, 1]

$$\psi_{i}=c_{i}+F(c_{i})/f(c_{i})=2c_{i}.\tag{33}$$

Thus, $\psi_1(c_1)=0.2$ and $\psi_2(c_2)=1.2$. Therefore, we calculate $\mathbf{w},\epsilon_1,\epsilon_2$ by optimizing the equation below:

\min_{\mathbf{a},\eta,\mathbf{w}}\bigg{[}\sum_{i=1}^{m}a_{i}\log(1+e^{-y^{i}\cdot\mathbf{w}^{T}\mathbf{x}^{i}})+\frac{2\mathbf{b}^{T}\mathbf{w}}{\eta}+\mu\|\mathbf{a}\|+\sigma\frac{1}{\eta}+\gamma\eta\sum_{i=1}^{m}a_{i}\Psi_{i}(c_{i})\bigg{]},\tag{34}

For different values of $\{\mu,\sigma,\gamma\}$ we get corresponding values of $\mathbf{w},\epsilon_1,\epsilon_2$. Next, $\mathbf{w}$ is evaluated on a validation dataset to get the misclassification error $\mathbb{E}_{\mathbf{c}}\big[\mathbb{E}[\mathbb{I}_{\{\mathrm{sign}(\mathbf{w}^{T}\mathbf{x})\neq y\}}]\big]$. Moreover, we use the payment identity to get the payments corresponding to $\epsilon_i$. Finally, the best combination of payments and model accuracy is selected based on the platform's needs.
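A minimal numerical sketch of this procedure for the two-seller example is given below (our own illustration: the hyperparameter values, the use of SciPy's Nelder-Mead, and the identification $\epsilon_i=a_i\eta$ are assumptions made for illustration; payments would then follow from the payment identity of Theorem A.3 applied to the resulting $\epsilon_i$):

```python
# Sketch: solving Eq. (34) for the two-seller example.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
psi = np.array([0.2, 1.2])                    # virtual costs 2*c_i for c ~ U[0, 1]
mu, sigma, gamma = 0.1, 0.1, 1.0              # illustrative hyperparameter values
rng = np.random.default_rng(0)
b = rng.gamma(shape=1, scale=1.0) * np.sign(rng.normal(size=1))  # ||b|| ~ Gamma(1, 1)

def objective(theta):
    w, logits, log_eta = theta[:1], theta[1:3], theta[3]
    a = np.exp(logits) / np.exp(logits).sum()           # softmax keeps a on the simplex
    eta = np.exp(log_eta)                                # keeps eta positive
    margins = y * (X @ w)
    return (a @ np.log1p(np.exp(-margins)) + 2 * (b @ w) / eta
            + mu * np.linalg.norm(a) + sigma / eta + gamma * eta * a @ psi)

res = minimize(objective, np.zeros(4), method="Nelder-Mead")
w, logits, log_eta = res.x[:1], res.x[1:3], res.x[3]
a = np.exp(logits) / np.exp(logits).sum()
eps = a * np.exp(log_eta)        # per-seller privacy levels, assuming eps_i = a_i * eta
print("w =", w, "eps =", eps)
```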