Abstract
In this work, we conduct the first systematic study of stochastic variational inequality (SVI) and stochastic saddle point (SSP) problems under the constraint of differential privacy (DP). We propose two algorithms: Noisy Stochastic Extragradient (NSEG) and Noisy Inexact Stochastic Proximal Point (NISPP). We show that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size, with respect to the strong gap function; and a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function. We also show lower bounds of the same order on the weak gap function; hence, our algorithms are optimal. Key to our analysis is the investigation of algorithmic stability bounds for these algorithms, which are new even in the nonprivate case. The dependence of the running time of the sampling-with-replacement algorithms on the dataset size n is \(n^2\) for NSEG and \({\widetilde{O}}(n^{3/2})\) for NISPP.
1 Introduction
Stochastic variational inequalities (SVI) and stochastic saddle-point (SSP) problems have become a central part of the modern machine learning toolbox. The main motivation behind this line of research is the design of algorithms for multiagent systems and adversarial training, which are more suitably modeled by the language of games, rather than pure (stochastic) optimization. Applications that rely on these methods may often involve the use of sensitive user data, so it becomes important to develop algorithms for these problems with provable privacy-preserving guarantees. In this context, differential privacy (DP) has become the gold standard of privacy-preserving algorithms, thus a natural question is whether it is possible to design DP algorithms for SVI and SSP that attain high accuracy.
Motivated by these considerations, this work provides the first systematic study of differentially-private SVI and SSP problems. Before proceeding to the specific results, we present more precisely the problems of interest. The stochastic variational inequality (SVI) problem is: given a monotone operator \(F:\mathcal {W}\mapsto \mathbb {R}^d\) in expectation form \(F(w)=\mathbb {E}_{\varvec{\beta }\sim {\mathcal P}}[F_{\varvec{\beta }}(w)]\), find \(w^* \in \mathcal {W}\) such that
\(\langle F(w^{*}),\, w-w^{*}\rangle \geqslant 0 \qquad \text{ for all } w\in \mathcal {W}. \qquad (\text{VI}(F))\)
The closely related stochastic saddle point (SSP) problem is: given a convex-concave real-valued function \(f:{\mathcal W}\mapsto \mathbb {R}\) (here \({\mathcal {W}}={{\mathcal {X}}}\times {{\mathcal {Y}}}\) is a product space), given in expectation form \(f(x,y) = \mathbb {E}_{\varvec{\beta }\sim \mathcal {P}}[f_{\varvec{\beta }}(x, y)]\), the goal is to find \((x^{*},y^{*})\) that solves
\(\min _{x\in {{\mathcal {X}}}}\,\max _{y\in {{\mathcal {Y}}}}\, f(x,y). \qquad (\text{SP}(f))\)
In both of these problems, the input to the algorithm is an i.i.d. sample \(\textbf{S}=(\varvec{\beta }_1,\ldots ,\varvec{\beta }_n)\sim {{\mathcal {P}}}^n\). Uncertainty introduced by a finite random sample renders the computation of exact solutions infeasible, so gap (a.k.a. population risk) functions are used to quantify the quality of solutions. Let \({\mathcal {A}}:{{\mathcal {Z}}}^n\mapsto {\mathcal {W}}\) be an algorithm for SVI problems (VI(F)).
We define the strong VI-gap associated with \({\mathcal {A}}\) as
\(\text{ Gap}_{\textrm{VI}}({\mathcal {A}},F) \triangleq {\mathbb {E}}\big [\sup _{w\in \mathcal {W}}\,\langle F(w),\, {\mathcal {A}}(\textbf{S})-w\rangle \big ]. \qquad (1.1)\)
We also define the weak VI-gap as
\(\text{ Gap}^{\textrm{w}}_{\textrm{VI}}({\mathcal {A}},F) \triangleq \sup _{w\in \mathcal {W}}\,{\mathbb {E}}\big [\langle F(w),\, {\mathcal {A}}(\textbf{S})-w\rangle \big ]. \qquad (1.2)\)
Here, expectation is taken over both the sample data \(\textbf{S}\) and the internal randomization of \({\mathcal {A}}\). For SSP (SP(f)), given an algorithm \({\mathcal {A}}:{{\mathcal {Z}}}^n\mapsto {{\mathcal {X}}}\times {\mathcal Y}\), and letting \({\mathcal {A}}(\textbf{S})=(x(\textbf{S}),y(\textbf{S}))\), a natural gap function is the following saddle-point (a.k.a. primal-dual) gap
\(\text{ Gap}_{\textrm{SP}}({\mathcal {A}},f) \triangleq {\mathbb {E}}\big [\sup _{(x,y)\in {{\mathcal {X}}}\times {{\mathcal {Y}}}}\,[f(x(\textbf{S}),y)-f(x,y(\textbf{S}))]\big ]. \qquad (1.3)\)
Analogously as above, we define the weak SSP gap as
\(\text{ Gap}^{\textrm{w}}_{\textrm{SP}}({\mathcal {A}},f) \triangleq \sup _{(x,y)\in {{\mathcal {X}}}\times {{\mathcal {Y}}}}\,{\mathbb {E}}\big [f(x(\textbf{S}),y)-f(x,y(\textbf{S}))\big ]. \qquad (1.4)\)
It is easy to see that in both cases the gap is always nonnegative, and any exact solution must have zero gap. For examples and applications of SVI and SSP we refer to Sect. 2.1. Despite the fact that the strong VI gap is a more classical and well-studied quantity, the weak VI gap has been observed to be useful in various contexts. We refer the reader to [50] for more discussion on the weak VI gap.
On the other hand, we are interested in designing algorithms that are differentially private. These algorithms build a solution based on a given dataset \(\textbf{S}\) of random i.i.d. examples from the target distribution, and output a (randomized) feasible solution, \({\mathcal {A}}(\textbf{S})\). We say that two datasets \(\textbf{S}=(\varvec{\beta }_i)_i,\textbf{S}^{\prime }=(\varvec{\beta }_i^{\prime })_i\) are neighbors, denoted \(\textbf{S}\simeq \textbf{S}^{\prime }\), if they only differ in a single entry i. We say that an algorithm \({\mathcal {A}}(\textbf{S})\) is \((\varepsilon ,\eta )\)-differentially private if for every event E in the output space and every pair of neighboring datasets \(\textbf{S}\simeq \textbf{S}^{\prime }\),
\(\mathbb {P}[{\mathcal {A}}(\textbf{S})\in E] \leqslant e^{\varepsilon }\,\mathbb {P}[{\mathcal {A}}(\textbf{S}^{\prime })\in E]+\eta . \qquad (1.5)\)
Here \(\varepsilon ,\eta \geqslant 0\) are prescribed parameters that quantify the privacy guarantee. Designing DP algorithms for particular data analysis problems is an active area of research. Optimal risk algorithms for stochastic convex optimization have only very recently been developed, and it is unclear whether these methods are extendable to SVI and SSP settings.
1.1 Summary of contributions
Our work is the first to provide population risk bounds for DP-SVI and DP-SSP problems. Moreover, our algorithms attain provably optimal rates and are computationally efficient. We summarize our contributions as follows:
1.
We provide two different algorithms for DP-SVI and DP-SSP: namely, the noisy stochastic extragradient method (NSEG) and a noisy inexact stochastic proximal-point method (NISPP). The NSEG method is a natural DP variant of the well-known stochastic extragradient method [30], where privacy is obtained by Gaussian noise addition; the NISPP method, on the other hand, is an approximate proximal point algorithm [28, 43] in which every proximal iterate is made noisy to make it differentially private. The more basic variants of both of these methods are based on iterations involving disjoint sets of datapoints (a.k.a. single-pass methods), which are known to typically lead to highly suboptimal rates in DP (see the Related Work section for further discussion).
2.
We derive novel uniform stability bounds for the NSEG and NISPP methods. For NSEG, our stability upper bounds are inspired by the interpretation of the extragradient method as a (second-order) approximation of the proximal point algorithm. In particular, we provide expansion bounds for the extragradient iterates and solve a (stochastic) linear recursion. The stability bounds for the NISPP method are based on stability of the (unique) SVI solution in the strongly monotone case. Finally, we investigate the risk attained by multipass versions of the NSEG and NISPP methods, leveraging known generalization bounds for stable algorithms [35]. Here, we show that the optimal risk for DP-SVI and DP-SSP can be attained by running these algorithms in their sampling-with-replacement variant. In particular, the NSEG method requires \(n^2\) stochastic operator evaluations, whereas the NISPP method requires a much smaller \({\widetilde{O}}(n^{3/2})\) operator evaluations, for both DP-SVI and DP-SSP problems. These upper bounds also show the dependence of the running time of each of these algorithms w.r.t. the dataset size.
3.
Finally, we prove lower bounds on the weak gap function for any DP-SSP and DP-SVI algorithm, showing the risk optimality of the aforementioned multipass algorithms. The main challenge in these lower bounds is showing that existing constructions of lower bounds for DP convex optimization [5, 7, 46] lead to lower bounds on the weak gap of a related SP/VI problem.
The following table provides details of population risk and operator evaluation complexity.
1.2 Related work
We divide our discussion of related work into three main areas. Each of these areas has been extensively investigated, so a thorough description of existing work is not possible; we focus on the work most directly related to our own.
1.
Stochastic Variational Inequalities and Saddle-Point Problems: Variational inequalities and saddle-point problems are classical topics in applied mathematics, operations research and engineering (e.g., [3, 18, 33, 38, 40,41,42,43, 45]). Their stochastic counterparts have only gained traction recently, mainly motivated by their applications in machine learning (e.g., [24, 25, 29, 30, 34] and references therein). For the stochastic version of (SP(f)), [39] proposed a robust stochastic approximation method. The first optimal algorithm for SVI with monotone Lipschitz operators was obtained by Juditsky, Nemirovski and Tauvel [30], and very recently Kotsalis, Lan and Li [34] developed optimal variants for the strongly monotone case (in terms of distance to the optimum criterion, rather than VI gap).
It is important to note that a naive adaptation of these methods to the DP setting requires adding noise to the operator evaluations at every iteration, which substantially degrades the accuracy of the obtained solution. A careful privacy accounting and minibatch schedule can lead to optimal guarantees for single-pass methods [19]; however, this requires accuracy guarantees for the last iterate, which is currently an open problem for SVI and SSP (aside from specific cases, typically involving strong monotonicity conditions, e.g., [24, 34]). We circumvent this problem by providing population risk guarantees for multipass methods.
2.
Stability and Generalization: Deriving generalization (or population risk) bounds for general-purpose algorithms is a challenging task, actively studied in theoretical machine learning. Bousquet and Elisseeff [8] provided a systematic treatment of this question for algorithms which are stable, with respect to changes of a single element in the training dataset, and a sequence of works have refined these generalization guarantees (see [9, 21] and references therein). This idea has been applied to investigate the generalization properties of regularized empirical risk minimization [8, 44], and more recently to iterative methods, such as stochastic gradient descent [4, 23].
Using stability to obtain population risk bounds in SVI and SSP is substantially more challenging, due to the presence of a supremum in the accuracy measure (see Eqs. (1.1) and (1.3)). Recently, Zhang et al. [50] established stability-implies-generalization results for the strong SP gap under strong monotonicity assumptions. Their proof strategy applies analogously to the SVI setting, although this is not carried out in their work. More recently, Lei et al. [35] proved generalization bounds on the weak SP gap without strong monotonicity assumptions. We leverage this result for our algorithms, and further elaborate on its implications for SVI in Sect. 2.2.
3.
Differential Privacy: Differential privacy is the gold standard for private data analysis, and it has been studied for nearly 20 years [15, 16]. Beyond its classical definition, multiple variants have been introduced, including local [14, 31], concentrated [10], Rényi [37], and Gaussian [13]. Relevant to the optimization community are the applications of differential privacy to combinatorial optimization [22].
Differentially private empirical risk minimization and stochastic convex optimization have been extensively studied for over a decade (see, e.g. [5, 7, 11, 12, 19, 26, 27, 32, 47]). Relevant to our work are the first optimal risk algorithms for DP-ERM [7] and DP-SCO [5]. Non-Euclidean extensions have also been obtained recently [2, 6]. To the best of our knowledge, our work is the first to address DP algorithms for SVI and SSP. Our approach for generalization of multipass algorithms is inspired by the noisy SGD analysis in [4]. However, our stability analysis differs crucially from [4]: in the case of NSEG, we need to carefully address the double operator evaluation of the extragradient step, which is done by using the fact that the extragradient operator is approximately nonexpansive. In the case of NISPP, we leverage the contraction properties of strongly monotone VI solutions. By contrast, SGD in the nonsmooth case is far from nonexpansive [4]. Alternative approaches to obtain optimal risk in DP-SCO, including privacy amplification by iteration [19, 20], and phased regularization or phased SGD [19], appear to run into fundamental limitations when applied to DP-SVI and DP-SSP. It is an interesting future research direction to obtain faster running times with optimal population risk in DP-SVI and DP-SSP, which may benefit from these alternative approaches.
The main body of this paper is organized as follows. In Sect. 2, we provide the necessary background on SVI/SSP, uniform stability, and differential privacy. In Sect. 3, we introduce the NSEG method, together with its basic privacy and accuracy guarantees for a single-pass version. Section 4 provides stability bounds for the NSEG method, along with the consequent optimal rates for SVI and SSP. In Sect. 5, we introduce the single-pass differentially private NISPP method, with a bound on its expected SVI-gap. Section 6 presents the stability analysis of NISPP, together with the resulting optimal rates for the SVI/SSP gap. We conclude in Sect. 7 with lower bounds that prove the optimality of the obtained rates.
2 Notation and preliminaries
We work on the Euclidean space \((\mathbb {R}^d, \langle \cdot ,\cdot \rangle )\), where \(\langle \cdot ,\cdot \rangle \) is the standard inner product, and \(\Vert u\Vert =\sqrt{\langle u,u\rangle }\) is the \(\ell _2\)-norm. Throughout, we consider a compact convex set \(\mathcal {W}\subseteq \mathbb {R}^d\) with diameter \(D>0\). We denote the standard Euclidean projection operator on set \(\mathcal {W}\) by \(\varPi _{\mathcal {W}}(\cdot )\). The identity matrix on \(\mathbb {R}^d\) is denoted by \({\mathbb {I}}_d\).
We let \(\mathcal {P}\) denote an unknown distribution supported on an arbitrary set \(\mathcal {Z}\), from which we have access to exactly n i.i.d. datapoints which we denote by sample set \(\textbf{S}\sim {{\mathcal {P}}}^n\). Throughout, we will use boldface characters to denote sources of randomness (coming from the data, or internal algorithmic randomization). We say that two datasets \(\textbf{S}, \textbf{S}'\) are adjacent (or neighbors), denoted by \(\textbf{S}\simeq \textbf{S}'\), if they differ in a single data point. We also denote subsets (a.k.a. batches), or single data points, of \(\textbf{S}\) or \(\mathcal {P}\) by \(\textbf{B}\) and \(\varvec{\beta }\), respectively. Whether \(\varvec{\beta }\) or \(\textbf{B}\) is sampled from \(\mathcal {P}\) or \(\textbf{S}\) is specified explicitly unless it is clear from the context. For a batch \(\textbf{B}\), we denote its size by \(|\textbf{B}|\). Therefore, we have \(|\textbf{S}| = n\). Throughout, we will denote Gaussian random variables by \(\varvec{\xi }\).
We say that \(F:\mathcal {W}\rightarrow \mathbb {R}^d\) is a monotone operator if
\(\langle F(w)-F(v),\, w-v\rangle \geqslant 0, \qquad \text{ for all } w,v\in \mathcal {W}.\)
Given \(L>0\), we say that F is L-Lipschitz continuous, if
\(\Vert F(w)-F(v)\Vert \leqslant L\,\Vert w-v\Vert , \qquad \text{ for all } w,v\in \mathcal {W}.\)
Finally, we say that F is M-bounded if \(\sup _{w\in \mathcal {W}}\Vert F(w)\Vert \leqslant M\). We denote the set of monotone, L-Lipschitz and M-bounded operators by \({\mathcal {M}}_{\mathcal {W}}^1(M,L)\). In this work, we will focus on the case where F is an expectation operator, i.e., \(F(w):= \mathbb {E}_{\varvec{\beta }\sim \mathcal {P}} [F_{\varvec{\beta }}(w)]\), where \(\mathcal {P}\) is an arbitrary distribution supported on \(\mathcal {Z}\),
and for any \(\varvec{\beta }\) in \({{\mathcal {Z}}}\), \(F_{\varvec{\beta }}(\cdot )\in {\mathcal {M}}_{\mathcal {W}}^1(M,L)\), \(\varvec{\beta }\)-a.s.
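As a concrete illustration of these two conditions (a numerical sketch of our own, not part of the paper's development), consider the linear operator \(F(w)=Aw\), where A is the sum of a positive semidefinite matrix and a skew-symmetric matrix; such F is monotone, and its spectral norm is a Lipschitz constant:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# A = S + K with S positive semidefinite and K skew-symmetric: then
# <F(w) - F(v), w - v> = (w-v)^T S (w-v) >= 0 (the skew part contributes
# zero), so F(w) = A w is monotone, and ||A||_2 is a Lipschitz constant.
B = rng.standard_normal((d, d))
S = B @ B.T        # symmetric positive semidefinite part
K = B - B.T        # skew-symmetric part
A = S + K
F = lambda w: A @ w
L = np.linalg.norm(A, 2)   # spectral norm

# empirical check of both conditions on random pairs of points
mono_ok = lip_ok = True
for _ in range(100):
    w, v = rng.standard_normal(d), rng.standard_normal(d)
    g = F(w) - F(v)
    mono_ok = mono_ok and g @ (w - v) >= -1e-9
    lip_ok = lip_ok and np.linalg.norm(g) <= L * np.linalg.norm(w - v) + 1e-9
```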
In the stochastic saddle point problem (SP(f)), we modify the notation slightly. Here, \({{\mathcal {X}}}\subseteq \mathbb {R}^{d_1}\) and \({{\mathcal {Y}}}\subseteq \mathbb {R}^{d_2}\) are compact convex sets, and we will assume that the saddle point functions \(f_{\varvec{\beta }}(\cdot ,\cdot ):{{\mathcal {X}}}\times {{\mathcal {Y}}}\mapsto \mathbb {R}\), satisfy the following conditions \(\varvec{\beta }\)-a.s.
-
\(\nabla _x f_{\varvec{\beta }}(\cdot ,\cdot )\) is \(L_x\)-Lipschitz continuous and \(\nabla _y f_{\varvec{\beta }}(\cdot ,\cdot )\) is \(L_y\)-Lipschitz continuous; and
-
\(f_{\varvec{\beta }}(\cdot ,y)\) is convex, for any given \(y\in \mathcal {Y}\), and \(f_{\varvec{\beta }}(x,\cdot )\) is concave, for any given \(x\in \mathcal {X}\) (we will say in this case the function is convex-concave).
If the assumptions above are met, we will denote \(L\triangleq \sqrt{L_x^2 + L_y^2}\). Under these assumptions, it is well known that SSP [42, 45] and SVI [18] problems have solutions.
In the case of saddle-point problems, given the convex-concave function \(f_{\varvec{\beta }}(\cdot ,\cdot ):{{\mathcal {X}}}\times {{\mathcal {Y}}}\mapsto \mathbb {R}\), it is well-known that the operator \(F:{{\mathcal {X}}}\times {{\mathcal {Y}}}\mapsto \mathbb {R}^{d_1}\times \mathbb {R}^{d_2}\) below is monotone
\(F(x,y) = \big (\nabla _x f_{\varvec{\beta }}(x,y),\, -\nabla _y f_{\varvec{\beta }}(x,y)\big ). \qquad (2.1)\)
We will call this operator the monotone operator associated with \(f_{\varvec{\beta }}(\cdot ,\cdot )\). Furthermore, if \(\nabla _x f_{\varvec{\beta }}(\cdot ,y)\) is \(L_x\)-Lipschitz continuous and \(\nabla _y f_{\varvec{\beta }}(x,\cdot )\) is \(L_y\)-Lipschitz continuous, then F is \(\sqrt{L_x^2+L_y^2}\)-Lipschitz continuous.
It is easy to see that, given an SSP problem with function \(f_{\varvec{\beta }}(\cdot ,\cdot )\) and sets \({{\mathcal {X}}}\), \({{\mathcal {Y}}}\), an (exact) SVI solution (VI(F)) for the monotone operator associated to \(f(x,y)=\mathbb {E}_{\varvec{\beta }}[f_{\varvec{\beta }}(x,y)]\) over the set \(\mathcal {W}={{\mathcal {X}}}\times {{\mathcal {Y}}}\) yields an exact SSP solution for the starting problem. Unfortunately, such a reduction does not directly work for approximate solutions to (1.1) and (1.3), so the analysis must be done separately for both problems.
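To see the associated-operator construction on a toy instance of ours (the bilinear function \(f(x,y)=\langle x, Cy\rangle \) is a hypothetical example, chosen for simplicity): its operator \(F(x,y)=(Cy,\,-C^{\top }x)\) is skew, so monotonicity holds with equality:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 3, 4
C = rng.standard_normal((d1, d2))

# f(x, y) = <x, C y> is convex (linear) in x and concave (linear) in y;
# its associated operator stacks the x-gradient and minus the y-gradient.
def F(x, y):
    return np.concatenate([C @ y, -C.T @ x])

# For bilinear f the operator is skew: <F(w) - F(v), w - v> = 0 exactly,
# which is the boundary case of monotonicity.
all_zero = True
for _ in range(50):
    x, u = rng.standard_normal(d1), rng.standard_normal(d1)
    y, v = rng.standard_normal(d2), rng.standard_normal(d2)
    diff = np.concatenate([x - u, y - v])
    all_zero = all_zero and abs((F(x, y) - F(u, v)) @ diff) < 1e-9
```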
For batch \(\textbf{B}\), we denote the empirical (a.k.a. sample average) operator \(F_{\textbf{B}}(w):= \frac{1}{|\textbf{B}|} \sum _{\varvec{\beta }\in \textbf{B}} F_{\varvec{\beta }}(w).\) On the other hand, for a batch \(\textbf{B}\), the empirical saddle point function is denoted as \(f_{\textbf{B}}(x, y) = \frac{1}{|\textbf{B}|} \sum _{\varvec{\beta }\in \textbf{B}} f_{\varvec{\beta }}(x, y).\) Given a distribution \({{\mathcal {P}}}\), the expectation operator and function are denoted by \(F_{{\mathcal {P}}}(w):={\mathbb {E}}_{\varvec{\beta }\sim {\mathcal {P}}}[F_{\varvec{\beta }}(w)]\), and \(f_{{\mathcal {P}}}(x,y)=\mathbb {E}_{\varvec{\beta }\sim {{\mathcal {P}}}}[f_{\varvec{\beta }}(x,y)]\), respectively. For brevity, whenever it is clear from context we will drop the dependence on \({{\mathcal {P}}}\).
2.1 Examples and applications of SVI and SSP
An interesting problem which can be formulated as an SSP problem is the minimization of a max-type convex function:
\(\min _{x\in \mathcal {X}}\, \max _{j\in [m]}\, \phi _j(x),\)
where \(\phi _j: \mathcal {X}\rightarrow \mathbb {R}\) is a stochastic convex function \(\phi _j(x):= \mathbb {E}_{\varvec{\zeta }_j\sim \mathcal {P}_j}[\phi _{j,\varvec{\zeta }_j}(x)]\) for all \(j \in [m]\). This problem is essentially a structured nonsmooth optimization problem which can be reformulated as a convex-concave saddle point problem:
\(\min _{x\in \mathcal {X}}\, \max _{y\in \varDelta _m}\, \textstyle \sum _{j=1}^m y_j\phi _j(x),\)
where \(\varDelta _m\) denotes the probability simplex in \(\mathbb {R}^m\).
Here, \(\varvec{\beta }= (\varvec{\zeta }_j)_{j=1}^m\) is the random input to the saddle point problem: \(f_{\varvec{\beta }}(x,y) = \sum _{j = 1}^my_j\phi _{j,\varvec{\zeta }_j}(x)\). Note that a substantial generalization of the max-type problem above is the so-called compositional optimization problem:
\(\min _{x\in \mathcal {X}}\, \varPhi \big (\phi _1(x), \ldots , \phi _m(x)\big ),\)
where \(\phi _j(x)\) are convex maps and \(\varPhi (u_1, \dots , u_m)\) is a real-valued convex function whose Fenchel-type representation is assumed to have the form
\(\varPhi (u) = \max _{y\in \mathcal {Y}}\,\{\langle u, y\rangle - \varPhi _{*}(y)\},\)
where \(\varPhi _{*}\) is convex, Lipschitz and smooth. Then, the overall optimization problem can be reformulated as a convex-concave saddle point problem:
\(\min _{x\in \mathcal {X}}\, \max _{y\in \mathcal {Y}}\, \textstyle \sum _{j=1}^m y_j\phi _j(x) - \varPhi _{*}(y),\)
where stochasticity is introduced due to constituent functions \(\phi _j(x) = \mathbb {E}_{\varvec{\zeta }_j} [\phi _{j,\varvec{\zeta }_j}(x)].\)
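A toy numeric instance of the max-type problem (our example; the constituent functions are illustrative and deterministic): with \(\phi _1(x)=x^2\) and \(\phi _2(x)=(x-1)^2\), the minimizer of \(\max _j \phi _j\) is \(x=1/2\) with value 1/4, and the inner maximum over the simplex is attained at a vertex, so the saddle-point reformulation has the same value:

```python
import numpy as np

rng = np.random.default_rng(2)
phis = [lambda x: x**2, lambda x: (x - 1.0)**2]

def max_type(x):
    # the nonsmooth objective max_j phi_j(x)
    return max(phi(x) for phi in phis)

# grid search for the minimizer of the max-type objective
xs = np.linspace(-1.0, 2.0, 3001)
x_star = xs[int(np.argmin([max_type(x) for x in xs]))]

# the inner maximum over the simplex of sum_j y_j phi_j(x) is attained
# at a vertex, so it never exceeds max_j phi_j(x)
dominated = True
for _ in range(100):
    y = rng.dirichlet([1.0, 1.0])
    val = y[0] * phis[0](x_star) + y[1] * phis[1](x_star)
    dominated = dominated and val <= max_type(x_star) + 1e-12
```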
To conclude, we remark that these types of models have been recently proposed in machine learning to address approximate fairness [49] and federated learning on heterogeneous populations [36]. In these examples, the different indices \(j\in [m]\) may denote different subgroups from a population, and we are interested in bounding the (excess) population risk on these subgroups uniformly (with the motivation of preventing discrimination against any subgroup). This clearly cannot be achieved by a stochastic convex program, and a stochastic saddle-point formulation is effective in certifying accuracy across the different subgroups separately.
For further examples and applications of stochastic variational inequalities and saddle-point problems, we refer the reader to [29, 30, 50].
2.2 Algorithmic stability
In general, an algorithm is a randomized function mapping datasets to candidate solutions, \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathbb {R}^d\), which is measurable w.r.t. the dataset. Two datasets, \(\textbf{S}=(\varvec{\beta }_1,\ldots ,\varvec{\beta }_n),\,\textbf{S}^{\prime }=(\varvec{\beta }_1^{\prime },\ldots ,\varvec{\beta }_n^{\prime })\in \mathcal {Z}^n\) are said to be neighbors (denoted \(\textbf{S}\simeq \textbf{S}^{\prime }\)) if they only differ in at most one data point, namely
\(|\{i\in [n]:\, \varvec{\beta }_i\ne \varvec{\beta }_i^{\prime }\}|\leqslant 1.\)
Algorithmic stability is a notion of sensitivity analysis of an algorithm under neighboring datasets. Of particular interest to our work is the notion of uniform argument stability (UAS).
Definition 1
(Uniform Argument Stability) Let \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathbb {R}^d\) be a randomized mapping and \(\delta >0\). We say that \({\mathcal {A}}\) is \(\delta \)-uniformly argument stable (for short, \(\delta \)-UAS) if
\(\sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\, {\mathbb {E}}_{{\mathcal {A}}}\Vert {\mathcal {A}}(\textbf{S})-{\mathcal {A}}(\textbf{S}^{\prime })\Vert \leqslant \delta .\)
Occasionally, we may denote \(\delta _{\mathcal {A}}(\textbf{S},\textbf{S}^{\prime })\triangleq \Vert {\mathcal {A}}(\textbf{S})-{\mathcal {A}}(\textbf{S}^{\prime })\Vert \), for convenience. The importance of algorithmic stability in machine learning comes from the fact that stability implies generalization in stochastic optimization and stochastic saddle point (SSP) problems [8, 9, 50]. Below, we restate existing stability-implies-generalization results for SSP problems. Before doing so, we briefly introduce the (strong) empirical gap function: given a dataset \(\textbf{S}\) and an algorithm \(\mathcal {A}\), we define the empirical gap function for a saddle point and variational inequality problem, respectively, as
\(\text{ Gap}_{\textrm{SP}}({\mathcal {A}},f_{\textbf{S}}) \triangleq {\mathbb {E}}_{{\mathcal {A}}}\big [\sup _{(x,y)\in {{\mathcal {X}}}\times {{\mathcal {Y}}}}\,[f_{\textbf{S}}(x(\textbf{S}),y)-f_{\textbf{S}}(x,y(\textbf{S}))]\big ],\)
\(\text{ Gap}_{\textrm{VI}}({\mathcal {A}},F_{\textbf{S}}) \triangleq {\mathbb {E}}_{{\mathcal {A}}}\big [\sup _{w\in \mathcal {W}}\,\langle F_{\textbf{S}}(w),\, {\mathcal {A}}(\textbf{S})-w\rangle \big ].\)
Notice that in these definitions the dataset \(\textbf{S}\) is fixed.
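For intuition about Definition 1, the following toy sketch (ours, not one of the paper's algorithms) verifies that the empirical-mean "algorithm" on data in a Euclidean ball of radius M is \((2M/n)\)-UAS: replacing one point moves the mean by at most 2M/n.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, M = 100, 3, 1.0

def mean_alg(S):
    # a deterministic toy "algorithm": the empirical mean of the dataset
    return S.mean(axis=0)

# data points constrained to the Euclidean ball of radius M
S = rng.standard_normal((n, d))
S /= np.maximum(np.linalg.norm(S, axis=1, keepdims=True) / M, 1.0)

# neighboring dataset: change a single entry (flipping a point to its
# antipode is the worst case, moving the mean by at most 2M/n)
S_prime = S.copy()
S_prime[0] = -S_prime[0]

delta = np.linalg.norm(mean_alg(S) - mean_alg(S_prime))
```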
Proposition 1
[35, 50] Consider the stochastic saddle point problem (SP(f)) with functions \(f_{\varvec{\beta }}(\cdot , y)\) and \(f_{\varvec{\beta }}(x, \cdot )\) being M-Lipschitz for all \(x \in \mathcal {X}, y \in \mathcal {Y}\), \(\varvec{\beta }\)-a.s. Let \(\mathcal {A}: \mathcal {Z}^n \rightarrow \mathcal {X}\times \mathcal {Y}\) be an algorithm, where \(\mathcal {A}(\textbf{S})=(x(\textbf{S}),y(\textbf{S}))\). If \(x(\cdot )\) is \(\delta _x\)-UAS and \(y(\cdot )\) is \(\delta _y\)-UAS, and both are integrable, then
This result can be extended for SVI problems as well. We provide a formal statement below and prove it in Appendix A.
Proposition 2
Consider a stochastic variational inequality with M-bounded operators \(F_{\varvec{\beta }}(\cdot ):\mathcal {W}\mapsto \mathbb {R}^d\). If \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathcal {W}\) is integrable and \(\delta \)-UAS, then
2.3 Background on differential privacy
Differential privacy is an algorithmic stability type of guarantee for randomized algorithms, that certifies that the output distribution of the algorithm “does not change too much” by changes in a single element from the dataset. The formal definition is provided in Eq. (1.5). Next we provide some basic results in differential privacy, which we will need for our work. For further information on the topic, we refer the reader to the monograph [16].
2.3.1 Basic privacy guarantees
In this work, most of our privacy guarantees will be obtained by the well-known Gaussian mechanism, which performs Gaussian noise addition on a function with bounded sensitivity. Given a function \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathbb {R}^d\), we define its \(\ell _2\)-sensitivity as
\(s({\mathcal {A}}) \triangleq \sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\Vert {\mathcal {A}}(\textbf{S})-{\mathcal {A}}(\textbf{S}^{\prime })\Vert .\)
If \({\mathcal {A}}\) is randomized, then the sensitivity bound must hold with high probability over the randomization of \({\mathcal {A}}\) (this will not be a problem in this work, since our randomized algorithms enjoy sensitivity bounds w.p. 1). The Gaussian mechanism (associated to the function \({\mathcal {A}}\)) is defined as \({\mathcal {A}}_{{\mathcal {G}}}(\textbf{S})\sim {{\mathcal {N}}}({\mathcal {A}}(\textbf{S}),\sigma ^2{\mathbb {I}}_d)\).
Proposition 3
Let \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathbb {R}^d\) be a function with \(\ell _2\)-sensitivity \(s>0\). Then, for \(\sigma ^2=2s^2\ln (1/\eta )/\varepsilon ^2\), the Gaussian mechanism is \((\varepsilon ,\eta )\)-DP.
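In code, Proposition 3 amounts to calibrating the noise scale to the sensitivity. The sketch below is our illustration (the mean query and its sensitivity bound are assumptions of the example, not from the paper): it privatizes the empirical mean of points in \([-1,1]^d\):

```python
import numpy as np

def gaussian_mechanism(value, s, eps, eta, rng):
    # release `value` (an R^d vector with l2-sensitivity s) with
    # (eps, eta)-DP using sigma^2 = 2 s^2 ln(1/eta) / eps^2 (Proposition 3)
    sigma = s * np.sqrt(2.0 * np.log(1.0 / eta)) / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(4)
n, d = 1000, 5
S = rng.uniform(-1.0, 1.0, size=(n, d))

# replacing one point of [-1, 1]^d moves the empirical mean by at most
# 2 * sqrt(d) / n in l2-norm, so that is the sensitivity of this query
s = 2.0 * np.sqrt(d) / n
private_mean = gaussian_mechanism(S.mean(axis=0), s, eps=1.0, eta=1e-6, rng=rng)
err = np.linalg.norm(private_mean - S.mean(axis=0))
```

Note that the added noise shrinks with the dataset size n, since the sensitivity of the mean does.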
Our algorithms will adaptively use a DP mechanism such as the above. Certifying privacy of a composition can be achieved in different ways. The most basic result establishes that if we use disjoint batches of data at each iteration, then the composition will preserve the largest privacy parameter among its building blocks. This result is known as parallel composition, and its proof is a direct application of the post-processing property of DP.
Proposition 4
(Parallel composition of differential privacy) Let \(\textbf{S}=(\textbf{S}_1,\ldots ,\textbf{S}_K)\in \mathcal {Z}^n\) be a dataset partitioned into blocks of sizes \(n_1,\ldots ,n_K\), respectively, let \({\mathcal {A}}_k:\mathcal {Z}^{n_k}\times \mathbb {R}^{d\times (k-1)}\mapsto \mathbb {R}^d\), \(k=1,\ldots ,K\), be a sequence of mechanisms, and let \({\mathcal {A}}:\mathcal {Z}^n\mapsto \mathbb {R}^d\) be given by
Then, if each \({\mathcal {A}}_k\) is \((\varepsilon _k,\eta _k)\)-DP in its first argument (i.e., w.r.t. \(\textbf{S}_k\)) then \({\mathcal {A}}\) is \((\max _k \varepsilon _k, \max _k \eta _k)\)-DP.
Some of the algorithms we develop in this work make repeated use of the data, and certifying privacy for these algorithms requires the use of adaptive composition results in DP (see, e.g. [16, 17]). For our algorithms, it is particularly important to leverage the sampling with replacement procedure to select the data that is used at each iteration, for which sharp bounds on DP can be obtained by the moments accountant method [1]. Below we summarize a specific version of this method that suffices for our purposes.
Theorem 1
[1] Consider a sequence of functions \({\mathcal {A}}_1,\ldots ,{\mathcal {A}}_K\), where \({\mathcal {A}}_k:\mathcal {Z}^{n_k}\times \mathbb {R}^{d\times (k-1)}\mapsto \mathbb {R}^d\) is a function with sensitivity bounded as a function of the last data batch size, as follows
Consider the mechanism obtained by sampling a random subset of size m from the dataset, i.e., letting \(\textbf{S}_k\sim (\text{ Unif }([\textbf{S}]))^m\), and composing it with a Gaussian mechanism with noise \(\sigma ^2\), i.e.
There exists an absolute constant \(c_1>0\), such that if \(\varepsilon <c_1K(m/n)^2\) and the noise parameter \(\sigma \geqslant \sqrt{2K\ln (1/\eta )}sm/[n\varepsilon ]\), then \(\mathcal {A}(\textbf{S}):= \{{\mathcal {B}}_1(\textbf{S}), \ldots , {\mathcal {B}}_K(\textbf{S})\}\) is \((\varepsilon ,\eta )\)-differentially private.
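The noise calibration in Theorem 1 is a one-line formula; the following sketch (our code, with hypothetical parameter values for n, K, m, and the sensitivity) evaluates it:

```python
import math

def accountant_sigma(K, m, n, s, eps, eta):
    # noise level sufficient for (eps, eta)-DP under K rounds of
    # sampling-with-replacement, as stated in Theorem 1:
    #   sigma >= sqrt(2 K ln(1/eta)) * s * m / (n * eps)
    return math.sqrt(2.0 * K * math.log(1.0 / eta)) * s * m / (n * eps)

# hypothetical regime: n = 10^4 data points, K = n iterations, batches of
# size m = 100, sensitivity parameter s = 1
sigma = accountant_sigma(K=10_000, m=100, n=10_000, s=1.0, eps=1.0, eta=1e-6)
```

The \(m/n\) factor reflects the privacy amplification from subsampling: smaller batches relative to the dataset require less noise per round.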
3 The noisy stochastic extragradient method
To solve the DP-SVI problem we propose a noisy stochastic extragradient method (NSEG) in Algorithm 1.
The names noisy and stochastic in Algorithm 1 are justified by the sequence of operators \(F_{1,t},F_{2,t}\) we use:
\(F_{1,t}(\cdot ) = F_{\textbf{B}_t^1}(\cdot ) + \varvec{\xi }_t^1, \qquad F_{2,t}(\cdot ) = F_{\textbf{B}_t^2}(\cdot ) + \varvec{\xi }_t^2,\)
where \(\textbf{B}_t^1,\textbf{B}_t^2\) are batches extracted from dataset \(\textbf{S}\), and \(\varvec{\xi }_t^1,\varvec{\xi }_t^2{\mathop {\sim }\limits ^{i.i.d.}}{{\mathcal {N}}}(0,\sigma _t^2)\). We will denote the common size of batches \(\textbf{B}_t^1\) and \(\textbf{B}_t^2\) by \(B_t:=|\textbf{B}_t^1|=|\textbf{B}_t^2|\). The exact details of the sampling method for \(\textbf{B}_t\) will depend on the variant of the algorithm. Here, we detail some key features of the above algorithm. The stochastic extragradient method was proposed in [30] without any noise addition in \(F_{1,t}, F_{2,t}\) (stochasticity arises only from the dataset randomness), and with disjoint batches used across iterations, as well as within iterations. This choice is motivated by the goal of extracting population risk bounds for their algorithm.
Another important consideration is that this algorithm can also be applied to an SSP problem by using as stochastic oracle the monotone operator associated to the stochastic convex-concave function (2.1), over the set \(\mathcal {W}={{\mathcal {X}}}\times {{\mathcal {Y}}}\). From here onwards, when we say that a certain SVI algorithm is applied to an SSP, we mean using the choices above for the operator and feasible set, respectively.
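To make the NSEG iteration concrete, here is a self-contained sketch of ours (not the paper's implementation): the feasible set, batch sizes, step size, and noise level are illustrative, and the toy operator \(F_{\varvec{\beta }}(w)=w-\varvec{\beta }\) (a monotone gradient field) has the dataset mean as its solution.

```python
import numpy as np

def project_ball(w, D):
    # Euclidean projection onto the centered ball of diameter D, standing
    # in for the projection onto a generic compact convex set W
    r = D / 2.0
    nrm = np.linalg.norm(w)
    return w if nrm <= r else w * (r / nrm)

def nseg_step(w, S, idx1, idx2, gamma, sigma, F_single, D, rng):
    # one noisy stochastic extragradient step: extrapolate with the noisy
    # batch operator F_{1,t}, then update from w with the independent
    # noisy batch operator F_{2,t}
    F1 = np.mean([F_single(w, S[i]) for i in idx1], axis=0)
    F1 = F1 + rng.normal(0.0, sigma, size=w.shape)
    w_half = project_ball(w - gamma * F1, D)
    F2 = np.mean([F_single(w_half, S[i]) for i in idx2], axis=0)
    F2 = F2 + rng.normal(0.0, sigma, size=w.shape)
    return project_ball(w - gamma * F2, D)

rng = np.random.default_rng(5)
n, d, D = 200, 2, 10.0
S = rng.normal(0.5, 0.2, size=(n, d))
# gradient field of (1/2)||w - beta||^2: monotone and 1-Lipschitz
F_single = lambda w, beta: w - beta

w, T, B = np.zeros(d), 500, 20
gamma, sigma = 0.1, 0.01
avg = np.zeros(d)
for t in range(T):
    idx1 = rng.choice(n, B, replace=True)   # sampling with replacement
    idx2 = rng.choice(n, B, replace=True)
    w = nseg_step(w, S, idx1, idx2, gamma, sigma, F_single, D, rng)
    avg += w / T

err = np.linalg.norm(avg - S.mean(axis=0))
```

With these (arbitrary) parameters the averaged iterate lands close to the empirical mean, reflecting the role of iterate averaging in the accuracy guarantees below.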
We start by stating the convergence guarantees for the single-pass NSEG method. This is obtained as a direct corollary of [30, Thm. 1], where we use an explicit bound on the oracle error with the variance of the Gaussian.
Theorem 2
[30] Consider a stochastic variational inequality (VI(F)), with operators \(F_{\varvec{\beta }}\) in \({\mathcal {M}}_{\mathcal {W}}^1(M,L)\). Let \(\mathcal {A}\) be the NSEG method (Algorithm 1), where \(0<\gamma _t\leqslant 1/[\sqrt{3} L]\) and \((\textbf{B}_t^1,\textbf{B}_t^2)_{t}\) are independent random variables from a product distribution \(\textbf{B}_t^1,\textbf{B}_t^2\sim {{\mathcal {P}}}^{B_t}\). Then \(\mathcal {A}\) satisfies
where \(K_0(T)\triangleq \Big ( D^2 +7\sum _{t\in [T]}\gamma _t^2[M^2/2+d\sigma _t^2]\Big )\), \(\varGamma _T = \textstyle {\sum }_{t=1}^T\gamma _t\), \({\mathcal {A}}(\textbf{S})\) is the output of Algorithm 1 on the dataset \(\textbf{S}=\bigcup _t \textbf{B}_t \sim {{\mathcal {P}}}^n\) and expectation in the left hand side is taken over the dataset draws, random sample batch choices, as well as noise \(\varvec{\xi }^t_1, \varvec{\xi }^t_2\).
On the other hand, \(\mathcal {A}\) applied to a stochastic (SP(f)) problem attains saddle point gap
3.1 Differential privacy analysis of NSEG method
We now proceed to establish the privacy guarantees for the single-pass variant of Algorithm 1. This is a direct consequence of Propositions 3 and 4, and the fact that each operator evaluation has sensitivity bounded by \(2M/B_t\).
Proposition 5
Algorithm 1 with batch sizes \((B_t)_{t\in [T]}\) and variance \(\sigma _t^2=\frac{8M^2}{B_{t}^2}\frac{\ln (1/\eta )}{\varepsilon ^2}\) is \((\varepsilon , \eta )\)-differentially private.
We now apply the previous results to obtain population risk bounds for DP-SVI by the NSEG method.
Corollary 1
Algorithm 1 with disjoint batches of size \(B_t=B=\min \{\sqrt{d\ln (1/\eta )}/\varepsilon , n\}\), constant stepsize \(\gamma _t\equiv \gamma =D/\big [M\sqrt{7T\big (1+\frac{8d}{B^2}\frac{\ln (1/\eta )}{\varepsilon ^2}\big )}\big ]\) and variance \(\sigma _t^2=\sigma ^2=\frac{8M^2}{B^2}\frac{\ln (1/\eta )}{\varepsilon ^2}\) is \((\varepsilon , \eta )\)-differentially private and achieves \(\text{ Gap}_{\textrm{VI}}({\mathcal {A}},F)\) (for SVI) or \(\text{ Gap}_{\textrm{SP}}(\mathcal {A},f)\) (for SSP) of
Remark 1
Notice that in the corollary above, the gap is nontrivial iff \(\sqrt{d\ln (1/\eta )}/[n\varepsilon ]<1\), which means that the left hand side attains the max on the range where the gap is nontrivial.
Proof
Consider a SVI or SSP problem. Let us recall that by Theorem 2, Algorithm 1 achieves expected gap
Choosing \(\gamma =D/\big [M\sqrt{7T\big (1+\frac{8d}{B^2}\frac{\ln (1/\eta )}{\varepsilon ^2}\big )}\big ]\), we obtain an expected gap
where we used that for a single-pass algorithm, \(n=2TB\) (this choice of T exhausts the data when disjoint batches are chosen).
Recalling that \(B=\min \{\sqrt{d\ln (1/\eta )}/\varepsilon , n\}\), the expected gap is bounded by
Hence, we conclude the proof. \(\square \)
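The parameter policy of Corollary 1 can be summarized as a small helper. The following is a sketch under the stated choices (\(n=2TB\) for a single pass, and the identity \(8d\ln (1/\eta )/[B^2\varepsilon ^2]=d\sigma ^2/M^2\) used to express \(\gamma \)); all names are ours:

```python
import math

def nseg_single_pass_params(n, d, M, D, eps, eta):
    """Parameter policy of Corollary 1 (sketch): batch size B, iteration
    count T with n = 2*T*B (single pass over disjoint batches), constant
    stepsize gamma, and noise variance sigma^2 = 8 M^2 ln(1/eta)/(B^2 eps^2)."""
    B = max(1, min(int(math.sqrt(d * math.log(1.0 / eta)) / eps), n))
    T = max(1, n // (2 * B))
    sigma2 = 8.0 * M**2 * math.log(1.0 / eta) / (B**2 * eps**2)
    gamma = D / (M * math.sqrt(7.0 * T * (1.0 + d * sigma2 / M**2)))
    return B, T, gamma, sigma2

B, T, gamma, sigma2 = nseg_single_pass_params(n=10_000, d=100, M=1.0, D=1.0,
                                              eps=1.0, eta=1e-6)
assert B >= 1 and T >= 1 and gamma > 0 and sigma2 > 0
```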
We observe that excess risk bounds of the same order for DP-SCO, based on noisy SGD and the uniform stability of differential privacy, have been established [7]. Improving these bounds in DP-SCO required substantial effort and was only achieved recently [4, 5, 19]. Furthermore, to the best of our knowledge, the upper bounds on the risk above are the first of their type for DP-SVI and DP-SSP, respectively. To improve upon them, we will follow the approach of [4], based on a multi-pass empirical error convergence, combined with weak gap generalization bounds based on uniform stability.
4 Stability of NSEG and optimal risk for DP-SVI and DP-SSP
The bounds established for DP-SVI are potentially suboptimal, and many of the past approaches used to attain optimal rates for DP-SCO, such as privacy amplification by iteration, phased regularization, etc., appear to encounter substantial barriers to their application to DP-SVI. In order to resolve this gap, we show that for both DP-SVI and DP-SSP we can indeed obtain optimal rates, which match those of DP-SCO. To achieve this, we develop a multi-pass variant of the NSEG method, whose stability yields the required generalization guarantees.
4.1 Stability of NSEG method
To analyze the stability of NSEG, it is useful to interpret the extragradient method as an approximation of the proximal point algorithm; this connection has been known at least since [38]. Given a monotone and 1-Lipschitz operator \(G:\mathbb {R}^d\mapsto \mathbb {R}^d\), we define the s-extragradient operator inductively as follows. First, \(R_{0}(\cdot ; G):\mathbb {R}^d\mapsto \mathbb {R}^d\) is defined as \(R_{0}(u; G)= \varPi _{\mathcal {W}}(u)\). Then, for \(s\geqslant 0\),
Given such an operator, the (deterministic) extragradient method [33] corresponds to, starting from \(u_0\in \mathcal {W}\), iterating
It is known that if G is contractive, the recursion (4.1) leads to a fixed point \(R_{}(u; G)\), satisfying
It is also easy to see that \(R_{}(\cdot ; G):\mathbb {R}^d\mapsto \mathcal {W}\) is nonexpansive.
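The fixed point \(R_{}(u; \gamma F)\) can be computed by simply iterating the recursion until it stabilizes. Below is a sketch assuming the recursion takes the form \(R_{s+1}(u; G)=\varPi _{\mathcal {W}}(u-G(R_{s}(u; G)))\), with \(\mathcal {W}\) a Euclidean ball and a skew-symmetric linear operator as a toy monotone F; all names are ours:

```python
import numpy as np

def proj_ball(w, radius=1.0):
    """Euclidean projection onto the ball of the given radius (our example W)."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def extragradient_fixed_point(u, G, s_max=100, tol=1e-10):
    """Iterate r <- Pi_W(u - G(r)) starting from r = Pi_W(u). For a
    contraction G (e.g. G = gamma*F with gamma*L < 1) the map is a
    contraction, so this converges linearly to the fixed point R(u; G)."""
    r = proj_ball(u)
    for _ in range(s_max):
        r_next = proj_ball(u - G(r))
        if np.linalg.norm(r_next - r) <= tol:
            return r_next
        r = r_next
    return r

# Toy monotone operator: F(w) = A w with A skew-symmetric, L = ||A||_2.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A - A.T
L = np.linalg.norm(A, 2)
gamma = 0.5 / L                      # gamma*L = 1/2 < 1: contraction
u = rng.standard_normal(5)
r = extragradient_fixed_point(u, lambda w: gamma * (A @ w))
# Fixed-point property: r = Pi_W(u - gamma*F(r)).
assert np.linalg.norm(r - proj_ball(u - gamma * (A @ r))) < 1e-8
```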
Proposition 6
(Near nonexpansiveness of the extragradient operator) Let \(F\in {\mathcal {M}}_{\mathcal {W}}^1(L,M)\) and let \(\mathcal {W}\subseteq \mathbb {R}^d\) be a compact convex set with diameter \(D>0\). Then, for every nonnegative integer s and all \(u,v\in \mathbb {R}^d\),
and
Proof
The first part, Eq. (4.3), is proved by induction on s. The result clearly holds for \(s=0\), and if \(s\geqslant 1\), we use (4.1) and (4.2) to obtain
where in the first inequality we used the nonexpansiveness of the projection operator, next we used the L-Lipschitzness of F, and finally we used the inductive hypothesis to conclude.
The second part, Eq. (4.4), is a direct consequence of (4.3), the triangle inequality, and that \(R_{0}(u; \gamma F)\), \(R_{}(u; \gamma F)\), \(R_{0}(v; \gamma F)\), \(R_{}(v; \gamma F)\in \mathcal {W}\). \(\square \)
The next lemma shows an expansion upper bound for extragradient iterations. This type of bound will be later used to establish the uniform argument stability of the NSEG algorithm.
Lemma 1
(Expansion of the extragradient iteration) Let \(F_1,F_2:\mathbb {R}^d\mapsto \mathbb {R}^d\) be monotone L-Lipschitz operators, and \(0\leqslant \gamma < 1/L\). Let \(u,v\in \mathcal {W}\), and \(w,z,u^{\prime },v^{\prime }\in \mathcal {W}\) such that
Then,
where \(\widetilde{M}_1\triangleq \Vert F_1(R_{}(u; \gamma F_1))-F_2(R_{}(u; \gamma F_2))\Vert \) and \(\widetilde{M}_2\triangleq \Vert F_1(R_{}(v; \gamma F_1))-F_2(R_{}(v; \gamma F_2))\Vert \).
Proof
By definition of w and z, we have,
where we used the nonexpansiveness of the operator \(R_{}(\cdot ; \gamma F_1)\) and Proposition 6. Moreover, since \(u,v,R_{0}(u; \gamma F_1), R_{0}(v; \gamma F_1) \in \mathcal {W}\), we have \(\Vert w-z\Vert \leqslant \Vert u-v\Vert +2LD\gamma \), proving (4.5).
Next, to prove (4.6), we proceed as follows:
Using again Proposition 6, we have that
An analog bound can be obtained for \(\Vert z-R_{}(v; \gamma F_2)\Vert \):
concluding the claimed bound (4.6):
\(\square \)
The Expansion Lemma above allows us to bound how much two trajectories of the NSEG method may deviate, given two pairs of operator sequences \(F_{1,t},F_{2,t}\) and \(F_{1,t}^{\prime },F_{2,t}^{\prime }\). The bounds we obtain from this analysis directly yield UAS bounds for the NSEG method.
Lemma 2
Let \(F_{1,t},F_{2,t}\) and \(F_{1,t}^{\prime },F_{2,t}^{\prime }\) be L-Lipschitz operators, and \(0\leqslant \gamma _t< 1/L\) for all \(t\in [T]\). Let \(\{(u_t,w_t)\}_{t\in [T]}\) and \(\{(v_t,z_t)\}_{t\in [T]}\) be the sequences resulting from Algorithm 1, with operators \(\{(F_{1,t},F_{2,t})\}_{t\in [T]}\) and \(\{(F_{1,t}^{\prime },F_{2,t}^{\prime })\}_{t\in [T]}\), respectively; and starting from \(u^0=v^0\). Let
then, for all \(t=0,\ldots ,T\),
Proof
Clearly, \(\nu _0=0\). Let us now derive a recurrence for both \(\nu _t\) and \(\delta _t\).
where in the last inequality we used inequality (4.5). Let us bound now the rightmost term above,
We conclude that
Now,
where in inequality (i), we used Lemma 1 (more precisely, inequality (4.6)), and in inequality (ii), we used (4.9). Unraveling the above recursion, we get that for all \(t\in [T]\),
Finally, we combine the bound above with (4.10), to conclude that for all \(t\in [T]\):
\(\square \)
The next theorem provides in-expectation and high-probability uniform argument stability bounds for the NSEG method. Although we will not directly apply the high-probability bounds, we believe they may be of independent interest.
Theorem 3
The NSEG method (Algorithm 1) for closed and convex domain \(\mathcal {W}\subseteq \mathbb {R}^d\) with diameter D, operators in \({\mathcal {M}}_{\mathcal {W}}^1(L, M)\) and stepsizes \(0<\gamma _t\leqslant 1/L\), satisfies the following uniform argument stability bounds:
-
1.
Let \({\mathcal {A}}_{\mathrm{batch-EG}}\) denote the batch method where, given dataset \(\textbf{S}\), \(F_{1,t}=F_{\textbf{S}}+\varvec{\xi }_1^t\) and \(F_{2,t}=F_{\textbf{S}}+\varvec{\xi }_2^t\). Then, in expectation,
$$\begin{aligned}&\sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\mathbb {E}_{{\mathcal {A}}_{\mathrm{batch-EG}}}[\delta _{{\mathcal {A}}_{\mathrm{batch-EG}}}(\textbf{S},\textbf{S}^{\prime })]\\&\quad \leqslant \sum _{t=0}^{T-1}\Big ( [4M+2LD+4\sqrt{d}\sigma ]L\gamma _t^2+\frac{2ML}{n}\gamma _t^2+\frac{2M}{n}\gamma _t\Big )\\&\qquad + \frac{1}{T}\sum _{t=1}^{T}\left( \frac{2ML}{n}+2LD\right) \gamma _t, \end{aligned}$$and for constant stepsize \(\gamma _t\equiv \gamma \), there exists a universal constant \(K>0\), such that for any \(0<\theta <1\), with probability \(1-\theta \):
$$\begin{aligned} \sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\delta _{{\mathcal {A}}_{\mathrm{batch-EG}}}(\textbf{S},\textbf{S}^{\prime })&\leqslant 4[T\sqrt{d}\sigma +\sigma \sqrt{Kd\ln (1/\theta )}]L\gamma ^2+ [4M+2LD]LT\gamma ^2\nonumber \\&\quad +\frac{2ML}{n}T\gamma ^2+\frac{2M}{n}T\gamma + \left( \frac{2ML}{n}+2LD\right) \gamma . \end{aligned}$$(4.11)
-
2.
Let \({\mathcal {A}}_{\mathrm{repl-EG}}\) denote the sampled-with-replacement method where, given dataset \(\textbf{S}\), \(F_{1,t}=F_{\varvec{\beta }_{i(1,t)}}+\varvec{\xi }_1^t\) and \(F_{2,t}=F_{\varvec{\beta }_{i(2,t)}}+\varvec{\xi }_2^t\), for \(i(1,t),i(2,t)\sim \text{ Unif }([n])\) drawn independently. Then, in expectation,
$$\begin{aligned}{} & {} \sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\mathbb {E}[\delta _{{\mathcal {A}}_{\mathrm{repl-EG}}}(\textbf{S},\textbf{S}^{\prime })]\nonumber \\{} & {} \quad \leqslant \sum _{t=0}^{T-1}\Big ( [4M+2LD+4\sqrt{d}\sigma ]L\gamma _t^2+\frac{2 ML}{n}\gamma _t^2+\frac{2 M}{n}\gamma _t\Big )\nonumber \\{} & {} \qquad + \frac{1}{T}\sum _{t=1}^{T}\left( \frac{2ML}{n}+2LD\right) \gamma _t. \end{aligned}$$(4.12)And for constant stepsize \(\gamma _t\equiv \gamma \), there exists a universal constant \(K>0\), such that for any \(0<\theta <1/[2n]\), with probability \(1-\theta \):
$$\begin{aligned} \sup _{\textbf{S}\simeq \textbf{S}^{\prime }}\delta _{{\mathcal {A}}_{\mathrm{repl-EG}}}(\textbf{S},\textbf{S}^{\prime })\leqslant & {} 4[T\sqrt{d}\sigma +\sigma \sqrt{Kd\ln (2/\theta )}]L\gamma ^2\nonumber \\{} & {} + [4M+2LD]LT\gamma ^2 +2LD\gamma \nonumber \\{} & {} +\big (1+3\log \big (\frac{2n}{\theta }\big )\big )\frac{2MT}{n}(L\gamma ^2+\gamma /T+\gamma ). \end{aligned}$$(4.13)
Proof
Let \(\textbf{S}\simeq \textbf{S}^{\prime }\). Then
-
1.
Batch method. Notice that for the batch case \(F_{1,t}=F_{\textbf{S}}+\varvec{\xi }_1^t\), and \(F_{1,t}^{\prime }=F_{\textbf{S}^{\prime }}+\varvec{\xi }_1^t\); and \(F_{2,t}=F_{\textbf{S}}+\varvec{\xi }_2^t\), and \(F_{2,t}^{\prime }=F_{\textbf{S}^{\prime }}+\varvec{\xi }_2^t\). Then, it is easy to see that \(\varDelta _{1,t}\leqslant 2M/n\) and \(\varDelta _{2,t}\leqslant 2M/n\). On the other hand, since the operators are M-bounded and since the added noise is Gaussian,
$$\begin{aligned} \mathbb {E}[\widetilde{M}_{1,t}]= & {} \mathbb {E}[\Vert F_{1,t}(R_{}(u_{t-1}; \gamma F_{1,t})) -F_{2,t}(R_{}(u_{t-1}; \gamma F_{2,t}))\Vert ] \nonumber \\\leqslant & {} \mathbb {E}[\Vert F_{\textbf{B}_t^1}(R_{}(u_{t-1}; \gamma F_{1,t}))+\varvec{\xi }_1^t\Vert ] +\mathbb {E}[\Vert F_{\textbf{B}_t^2}(R_{}(u_{t-1}; \gamma F_{2,t}))+\varvec{\xi }_2^t\Vert ] \nonumber \\\leqslant & {} 2M+\mathbb {E}[\Vert \varvec{\xi }_1^t\Vert +\Vert \varvec{\xi }_2^t\Vert ] \leqslant 2[M+\sqrt{d}\sigma ], \end{aligned}$$(4.14)and an analog bound holds for \(\mathbb {E}[\widetilde{M}_{2,t}]\). Hence, by Lemma 2:
$$\begin{aligned} \mathbb {E}_{{\mathcal {A}}_{\mathrm{batch-EG}}}[\delta _{{\mathcal {A}}_{\mathrm{batch-EG}}}(S,S^{\prime })]\leqslant & {} \sum _{t=0}^{T-1}\Big ( [4M+2LD+4\sqrt{d}\sigma ]L\gamma _t^2+\frac{2ML}{n}\gamma _t^2+\frac{2M}{n}\gamma _t\Big )\\{} & {} + \frac{1}{T}\sum _{t=1}^{T}\big (\frac{2ML}{n}+2LD\big )\gamma _t, \end{aligned}$$which proves the claimed bound.
For the high-probability bound, we use that the norm of a Gaussian vector is \(Kd\sigma ^2\)-subgaussian, for a universal constant \(K>0\) (see, e.g., [48, Thm. 3.1.1]), and therefore \(\mathbb {E}[\exp \{\lambda (\Vert \varvec{\xi }_i^t\Vert -\sigma \sqrt{d})\}]\leqslant \exp \{Kd\sigma ^2\lambda ^2\}\); hence, by the Chernoff–Cramér bound, for any \(\alpha >0\),
$$\begin{aligned} \mathbb {P}\left[ \sum _{t\in [T]} \big (\Vert \varvec{\xi }_1^t\Vert +\Vert \varvec{\xi }_2^t\Vert \big ) >(2+\alpha )T\sqrt{d}\sigma )\right]\leqslant & {} \exp \{-\lambda \alpha T\sqrt{d}\sigma \}\Big ( \exp \{2Kd\sigma ^2\lambda ^2\} \Big )^T\\= & {} \exp \{T(2Kd\sigma ^2\lambda ^2-\alpha \sqrt{d}\sigma \lambda )\}. \end{aligned}$$Choosing \(\lambda =\alpha /[4K\sqrt{d}\sigma ]\) and \(\alpha =\frac{2\sqrt{K}}{T}\sqrt{\ln (1/\theta )}\), we get
$$\begin{aligned} \mathbb {P}\left[ \sum _{t\in [T]} \big (\Vert \varvec{\xi }_1^t\Vert +\Vert \varvec{\xi }_2^t\Vert \big ) > 2T\sqrt{d}\sigma +2\sigma \sqrt{Kd\ln (1/\theta )}\right] \leqslant \theta . \end{aligned}$$(4.15)This guarantee, together with the rest of the terms appearing in our previous stability bound (which hold w.p. 1) proves (4.11).
-
2.
Sampled with replacement. Let \(i\in [n]\) be the coordinate where \(\textbf{S}\) and \(\textbf{S}^{\prime }\) may differ. Let \(i(1,t),i(2,t)\sim \text{ Unif }([n])\) i.i.d., for \(t\in [T]\). Now we apply Lemma 2 with \(F_{1,t}=F_{\varvec{\beta }_{i(1,t)}}+\varvec{\xi }_1^t\), and \(F_{1,t}^{\prime }=F_{\varvec{\beta }_{i(1,t)}^{\prime }}+\varvec{\xi }_1^t\); and \(F_{2,t}=F_{\varvec{\beta }_{i(2,t)}}+\varvec{\xi }_2^t\), and \(F_{2,t}^{\prime }=F_{\varvec{\beta }_{i(2,t)}^{\prime }}+\varvec{\xi }_2^t\). Hence we have that \((\varDelta _{1,t})_{t\in [T]}\) and \((\varDelta _{2,t})_{t\in [T]}\) are sequences of independent r.v. with expectation bounded by 2M/n. Therefore, by Lemma 2 (Eq. (4.8)), and following the steps that lead to inequality (4.14), we have:
$$\begin{aligned} \mathbb {E}[\delta _{\mathcal {A}}(\textbf{S},\textbf{S}^{\prime })]\leqslant & {} \sum _{t=0}^{T-1}\left( [4M+2LD+4\sqrt{d}\sigma ]L\gamma _t^2+\frac{2ML}{n}\gamma _t^2+\frac{2M}{n}\gamma _t\right) \\{} & {} + \frac{1}{T}\sum _{t=1}^{T}\left( \frac{2ML}{n}+2LD\right) \gamma _t. \end{aligned}$$Finally for the high-probability bound, note that for any realization of the algorithm randomness, we have
$$\begin{aligned} \delta _{\mathcal {A}}(\textbf{S},\textbf{S}^{\prime })\leqslant & {} \sum _{t=1}^T[4M+2(\Vert \varvec{\xi }_1^t\Vert +\Vert \varvec{\xi }_2^t\Vert )+2LD]L\gamma _t^2\\{} & {} +\frac{2LD}{T}\sum _{t=1}^T\gamma _t +L\sum _{t=1}^T\gamma _t^2\varDelta _{1,t}+\frac{1}{T}\sum _{t=1}^T\varDelta _{1,t}\gamma _t+\sum _{t=1}^T\varDelta _{2,t}\gamma _t. \end{aligned}$$We additionally assume a constant stepsize \(\gamma _t\equiv \gamma >0\). Hence, we can rely on concentration of sums of Bernoulli random variables, which guarantees that
$$\begin{aligned} \mathbb {P}\left[ \sum _{t=1}^T\varDelta _{1,t} > (1+3\log (2/\theta ))\frac{2MT}{n} \right] \leqslant \exp \{-\log (2/\theta )\}=\frac{\theta }{2}. \end{aligned}$$An analog bound can be established for \(\varDelta _{2,t}\), which together with bound (4.15) leads to
$$\begin{aligned}{} & {} \mathbb {P}_{{\mathcal {A}}_{\mathrm{repl-EG}}}\Big [ \delta _{{\mathcal {A}}_{\mathrm{repl-EG}}}(\textbf{S},\textbf{S}^{\prime })> 4[T\sqrt{d}\sigma +\sigma \sqrt{Kd\ln (1/\theta )}]L\gamma ^2\\{} & {} \quad + [4M+2LD]LT\gamma ^2 +2LD\gamma \\{} & {} \quad +\big (1+3\log \big (\frac{2}{\theta }\big )\big )\frac{2MT}{n}(L\gamma ^2+\gamma /T+\gamma ) \Big ]\leqslant 2\theta . \end{aligned}$$Notice this bound only depends on our choice of i, and it is otherwise uniform over all \(\textbf{S}\simeq \textbf{S}^{\prime }\). Finally, by a union bound on \(i\in [n]\) (together with a renormalization of \(\theta \)), we have that
$$\begin{aligned}{} & {} \mathbb {P}_{{\mathcal {A}}_{\mathrm{repl-EG}}}\Big [ \sup _{\textbf{S}\simeq \textbf{S}^{\prime }} \delta _{{\mathcal {A}}_{\mathrm{repl-EG}}}(\textbf{S},\textbf{S}^{\prime })> 4[T\sqrt{d}\sigma +\sigma \sqrt{Kd\ln (2/\theta )}]L\gamma ^2\\{} & {} \quad + [4M+2LD]LT\gamma ^2 +2LD\gamma \\{} & {} \quad +\big (1+3\log \big (\frac{4n}{\theta }\big )\big )\frac{2MT}{n}(L\gamma ^2+\gamma /T+\gamma ) \Big ]\leqslant \theta . \end{aligned}$$
\(\square \)
4.2 Optimal risk for DP-SVI and DP-SSP by NSEG method
Now we use our stability and risk bounds for NSEG to derive optimal risk bounds for DP-SVI and DP-SSP. For this, we use the sampled-with-replacement variant, \({\mathcal {A}}_{\mathrm{repl-EG}}\).
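It may help to see the sampled-with-replacement update in code. The sketch below uses our own names and a toy monotone operator \(F_{\varvec{\beta }}(w)=\varvec{\beta }\odot w\) with \(\varvec{\beta }\geqslant 0\); it draws two independent uniform indices and perturbs each operator evaluation with independent Gaussian noise, as described above:

```python
import numpy as np

def proj_ball(w, radius=1.0):
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def nseg_repl_step(u, data, F_single, gamma, sigma, rng):
    """One sampled-with-replacement NSEG iteration (sketch):
    F_{1,t} = F_{beta_{i(1,t)}} + xi_1^t and F_{2,t} = F_{beta_{i(2,t)}} + xi_2^t,
    with i(1,t), i(2,t) ~ Unif([n]) independent."""
    n, d = len(data), u.shape[0]
    i1, i2 = rng.integers(n), rng.integers(n)
    xi1 = rng.normal(0.0, sigma, size=d)
    xi2 = rng.normal(0.0, sigma, size=d)
    w = proj_ball(u - gamma * (F_single(u, data[i1]) + xi1))     # extrapolation
    return proj_ball(u - gamma * (F_single(w, data[i2]) + xi2))  # update

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(50, 4))   # beta >= 0, so F_beta is monotone
u = proj_ball(rng.standard_normal(4))
for _ in range(20):
    u = nseg_repl_step(u, data, lambda w, b: b * w,
                       gamma=0.05, sigma=0.1, rng=rng)
assert np.linalg.norm(u) <= 1.0 + 1e-12      # iterates stay feasible
```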
Using the moments accountant method (Theorem 1) one can show the following.
Proposition 7
(Privacy of sampled with replacement NSEG) Algorithm 1 with operators given by Eq. (4.16) and \(\sigma _t^2=8M^2\log (1/\eta )/\varepsilon ^2\), is \((\varepsilon ,\eta )\)-differentially private.
Theorem 4
(Excess risk of sampled with replacement NSEG) Consider an instance of the (VI(F)) or (SP(f)) problem. Let \(\mathcal {A}\) be the sampled with replacement variant (4.16) of NSEG method (Algorithm 1), with \(\gamma _t=\gamma =\min \{D/M,1/L\}/[n\max \{\sqrt{n}, \sqrt{d \ln (1/\eta )}/\varepsilon \}]\), \(\sigma _t^2=8\,M^2\log (1/\eta )/\varepsilon ^2\), \(T=n^2\). Then, \(\text{ WeakGap}_{\textrm{VI}}(\mathcal {A},F)\) (for SVI) or \(\text{ WeakGap}_{\textrm{SP}}(\mathcal {A},f)\) (for SSP) are bounded by
Remark 2
Notice that, assuming \(n=\varOmega (\min \{\sqrt{L},\sqrt{M/D}\})\), the bound of the theorem simplifies to
This is quite a mild sample size requirement. In this range, when \(M\geqslant LD\), our upper bound matches the excess risk bounds for DP-SCO [5], and we will show that these rates are indeed optimal for DP-SVI and DP-SSP as well.
Proof
Since our bounds for SVI and SSP are analogous, we treat both problems simultaneously.
First, let us bound the empirical accuracy of the method. By Theorem 2, together with the fact that sampling with replacement is an unbiased stochastic oracle for the empirical operator:
where \(\text{ EmpGap }(\mathcal {A},\textbf{S})\) is \(\text{ EmpGap}_{\textrm{VI}}(\mathcal {A},F_{\textbf{S}})\) or \(\text{ EmpGap}_{\textrm{SP}}(\mathcal {A},f_{\textbf{S}})\) for an SVI or SSP problem, respectively.
Next, by Theorem 3, we have that \({\mathcal {A}}\) (or \(x(\textbf{S})\) and \(y(\textbf{S})\), for the SSP case) are UAS with parameter
Hence, noting that the empirical risk upper bounds the weak empirical gap, and using Proposition 1 or Proposition 2 (depending on whether the problem is an SSP or SVI, respectively), we have that the risk is upper bounded by the empirical risk plus \(M\delta \), where \(\delta \) is the UAS parameter of the algorithm; in particular, it is bounded by
Similar claims can be made for \(\text{ WeakGap}_{\textrm{SP}}(\mathcal {A},f_\textbf{S})\). Hence, we conclude the proof. \(\square \)
5 The noisy inexact stochastic proximal point method
In the previous sections, we presented the NSEG method with its single-pass and multi-pass variants and provided optimal risk guarantees for DP-SVI and DP-SSP problems in \(O(n^2)\) stochastic operator evaluations. In the rest of the paper, our aim is to provide another algorithm that achieves the optimal risk for both of these problems with much less computational effort. Towards that end, consider the following algorithm:
In the above algorithm, we leave a few things unspecified; these will be stated later, during the convergence and privacy analysis. Here, we detail some key features of the algorithm. In line 3, we sample a batch \(\textbf{B}_{k+1}\) of size \(B_{k+1}=|\textbf{B}_{k+1}|\). As with NSEG, we will consider two variants of the NISPP method, single-pass and multi-pass, and will specify the sampling mechanism for each. In line 4 of Algorithm 2, \(u_{k+1}\) is a \(\nu \)-approximate strong VI solution of the stated VI problem for some \(\nu \geqslant 0\), i.e.,
Note that if \(\nu = 0\), then this is an exact solution satisfying (VI(F)) with the operator F replaced by \(F(\cdot ) + \lambda _k(\cdot - w_k)\). For \(\nu >0\), \(u_{k+1}\) is an inexact solution satisfying the solution criterion up to an additive error of \(\nu \). In line 5, we add Gaussian noise to \(u_{k+1}\) in order to preserve privacy. The resulting iterate \(w_{k+1}\) may lie outside the set \(\mathcal {W}\), and hence the ergodic average \({\bar{w}}_K\) in line 7 can also lie outside \(\mathcal {W}\). To preserve feasibility of the solution, in line 8 we project \({\bar{w}}_K\) onto \(\mathcal {W}\) and output the projection. Projecting the average in line 8, as opposed to projecting each individual \(w_{k+1}\) in line 5, is crucial for the convergence guarantee of Algorithm 2.
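The structure just described can be sketched as follows. Here `solve_vi` stands in for the inexact subproblem solver of line 4, and all names (and the toy closed-form solver in the example) are ours; note in particular that the noisy iterates are never projected, only the final average:

```python
import numpy as np

def nispp(w0, batches, solve_vi, lam, gammas, sigma, proj, rng):
    """Sketch of NISPP (Algorithm 2). solve_vi(batch, w_k, lam) returns a
    nu-approximate solution u_{k+1} of the VI with operator
    F_B(.) + lam*(. - w_k); Gaussian noise is added to u_{k+1} (line 5),
    and only the ergodic average is projected (line 8)."""
    w = np.asarray(w0, dtype=float)
    weighted_sum = np.zeros_like(w)
    total = 0.0
    for k, batch in enumerate(batches):
        u = solve_vi(batch, w, lam)                     # line 4: inexact prox
        w = u + rng.normal(0.0, sigma, size=w.shape)    # line 5: may leave W
        weighted_sum += gammas[k] * w
        total += gammas[k]
    return proj(weighted_sum / total)                   # line 8: project once

def proj_ball(w, radius=1.0):
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

# Toy instance: F_B(u) = c_B * u with c_B >= 0, so the subproblem
# c_B*u + lam*(u - w_k) = 0 has the closed form u = lam*w_k / (c_B + lam).
rng = np.random.default_rng(0)
batches = list(rng.uniform(0.0, 1.0, size=10))          # each "batch" is c_B
out = nispp(np.ones(3), batches,
            lambda c, w_k, lam: lam * w_k / (c + lam),
            lam=5.0, gammas=[1.0] * 10, sigma=0.05, proj=proj_ball, rng=rng)
assert np.linalg.norm(out) <= 1.0 + 1e-12               # output is feasible
```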
In the rest of this section, we deal exclusively with the single-pass version of the NISPP method; i.e., we assume that the batches \(\{\textbf{B}_{k+1}\}_{k=0,\ldots ,K}\) are disjoint subsets of the dataset \(\textbf{S}\). We start with the convergence guarantees of the single-pass NISPP method. In order to prove convergence, we first show a useful bound on \({\text {dist}}_\mathcal {W}({\bar{w}}_K):= \min _{w\in \mathcal {W}}\Vert w-{\bar{w}}_K\Vert _{}^{}\).
Proposition 8
Let \({\bar{u}}_K:= \frac{1}{\varGamma _K} \textstyle {\sum }_{k = 0}^K\gamma _ku_{k+1}\). Then,
Moreover, we have
Proof
Note that \({\bar{u}}_K \in \mathcal {W}\). Hence, the first relation in (5.2) follows from the definition of the \({\text {dist}}_\mathcal {W}(\cdot )\) function, and the equality follows from the definitions of \({\bar{u}}_K\) and \({\bar{w}}_K\). To obtain (5.3), note that \(\{\varvec{\xi }_k\}_{k = 1}^{K+1}\) are i.i.d. random variables with mean 0. Expanding \(\Vert \textstyle {\sum }_{k = 0}^K \gamma _k\varvec{\xi }_{k+1}\Vert _{}^{2}\), using linearity of expectation, and noting that \(\mathbb {E}[\varvec{\xi }_i^T\varvec{\xi }_j] = 0\) for all \(i \ne j\), we conclude (5.3). Hence, we conclude the proof. \(\square \)
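For completeness, the computation behind (5.3) can be written out; a sketch, using only the independence and zero mean of the noise:

```latex
\mathbb{E}\Big\|\sum_{k=0}^{K}\gamma_k\boldsymbol{\xi}_{k+1}\Big\|^2
  = \sum_{k=0}^{K}\sum_{j=0}^{K}\gamma_k\gamma_j\,
    \mathbb{E}[\boldsymbol{\xi}_{k+1}^{T}\boldsymbol{\xi}_{j+1}]
  = \sum_{k=0}^{K}\gamma_k^2\,\mathbb{E}\|\boldsymbol{\xi}_{k+1}\|^2
  = d\sum_{k=0}^{K}\gamma_k^2\sigma_{k+1}^2,
```

since the cross terms vanish and each of the d coordinates of \(\varvec{\xi }_{k+1}\) has variance \(\sigma _{k+1}^2\).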
We prove the following convergence rate result for Algorithm 2, in terms of the risk of the SVI/SSP problem. In particular, we assume that the algorithm performs a single pass over the dataset \(\textbf{S}\sim \mathcal {P}^n\) containing n i.i.d. datapoints.
Theorem 5
Consider the stochastic (VI(F)) problem with operators \(F_{\varvec{\beta }} \in \mathcal {M}^1_\mathcal {W}(L, M)\). Let \(\mathcal {A}\) be the single-pass NISPP method (Algorithm 2), where the sequences \(\{\gamma _k\}_{k \geqslant 0}, \{\lambda _k\}_{k \geqslant 0}\) satisfy
for all \(k \geqslant 0\). Moreover, \(\textbf{B}_{k+1}\) are independent samples from a product distribution \(\textbf{B}_{k+1} \sim \mathcal {P}^{B_{k+1}}\) and \(\textbf{B}_{k+1} \subset \textbf{S}\). Then, we have
where, \(Z_0(K):= \frac{3\gamma _0\lambda _0}{2}D^2 + \frac{4\,M^2 + 3\,L^2D^2}{\gamma _0\lambda _0}\textstyle {\sum }_{k = 0}^K\gamma _k^2 + \frac{5\gamma _0\lambda _0d}{2}\textstyle {\sum }_{k =1}^K\sigma _{k+1}^2\) and \(\varGamma _K:= \textstyle {\sum }_{k = 0}^K \gamma _k\).
Similarly, \(\mathcal {A}\) applied to stochastic (SP(f)) problem achieves
Proof
Let \(w\in \mathcal {W}\). Then
We will analyze each term above separately. First, note that
Note that
where the first inequality follows from monotonicity and the second follows from (5.1). Now note that
where the last inequality follows from the L-Lipschitz continuity of F and \(F_{\textbf{B}_{k+1}}\). Noting that \(\Vert u_{k+1}-w\Vert _{}^{} \leqslant D\) for all \(w \in \mathcal {W}\) and using the above bound in (5.9), we have
Letting \(u_0:= w_0\) and consequently \(\varvec{\xi }_0 = \textbf{0}\), we have from (5.10)
Let us define an auxiliary sequence \(\{z_k\}_{k \geqslant 0}\), where \(z_0 = w_0\) and for all \( k \geqslant 1\), we have
Then, due to the mirror-descent bound, we have
Moreover, noting that
Combining the above relation with (5.12), we have
Now multiplying (5.7) by \(\gamma _k\) then summing from \(k = 0\) to K; noting the definition of \({\bar{w}}_K\) and \(\varGamma _K\); and using (5.8), (5.11) and (5.13) along with assumption (5.4) implies
Now note that
Define \(\varvec{\varDelta }_k:= \frac{1}{\lambda _k}(F(w_k) - F_{\textbf{B}_{k+1}}(w_{k}))\). Note that \(\mathbb {E}_{\textbf{B}_{k+1}}[\varvec{\varDelta }_k| w_k] = 0\). Moreover, define an auxiliary sequence \(\{h_k\}_{k \geqslant 0}\) with \(h_0:= w_0\) and
Then due to mirror descent bound, we have
Moreover,
Using the above relation along with (5.16), we have
Using the above relation in (5.15), we have
Finally, note that for all valid k, we have
where, in (5.19), we used the fact that \(F_{\varvec{\beta }}\) is M-bounded for all \(\varvec{\beta } \in \textbf{S}\).
Now, using (5.17) in relation (5.14), noting the bound on \({\text {dist}}_\mathcal {W}({\bar{w}}_K)\) from Proposition 8 (in particular (5.3)), taking supremum with respect to \(w \in \mathcal {W}\), then taking expectation and noting (5.18)-(5.21), we have
where in the first inequality, we used the fact that \(\mathbb {E}[{\text {dist}}_\mathcal {W}({\bar{w}}_K)] \leqslant \sqrt{\mathbb {E}[{\text {dist}}_\mathcal {W}({\bar{w}}_K)^2]}\). Hence, we conclude the proof of (5.5).
Now, we extend this for (SP(f)). Denote \(u^{k+1} = ({\widetilde{x}}^{k+1}, {\widetilde{y}}^{k+1})\). Then, we have
Using the above in (5.11), we obtain,
Now, using Proposition 8 to bound the distance between points \(\tfrac{1}{\varGamma _K}(\textstyle {\sum }_{k =0}^K\gamma _k{\widetilde{x}}_{k+1}, \textstyle {\sum }_{k =0}^K\gamma _k{\widetilde{y}}_{k+1})\) and \((\varPi _\mathcal {X}[{\bar{x}}_K], \varPi _\mathcal {Y}[{\bar{y}}_K])\) and using Jensen’s inequality to conclude that
and retracing the steps of this proof from (5.11), we obtain (5.6). Hence, we conclude the proof. \(\square \)
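To get a feel for the bound, the quantities \(Z_0(K)\) and \(\varGamma _K\) from Theorem 5 can be evaluated numerically; a sketch (our names; the indexing of the noise variances follows the \(\sum _{k=1}^{K}\sigma _{k+1}^2\) term):

```python
def nispp_bound_terms(gammas, sigma2s, lam0, M, L, D, d):
    """Evaluate Z_0(K) and Gamma_K from Theorem 5 (sketch). sigma2s[k] holds
    sigma_{k+1}^2 for k = 0..K; only the last K entries enter the noise term."""
    g0l0 = gammas[0] * lam0
    Z0 = (1.5 * g0l0 * D**2
          + (4.0 * M**2 + 3.0 * L**2 * D**2) / g0l0 * sum(g**2 for g in gammas)
          + 2.5 * g0l0 * d * sum(sigma2s[1:]))
    return Z0, sum(gammas)

Z0, Gamma = nispp_bound_terms([0.1] * 10, [0.01] * 10, lam0=2.0,
                              M=1.0, L=1.0, D=1.0, d=5)
assert Z0 > 0 and abs(Gamma - 1.0) < 1e-12
```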
5.1 Differential privacy of the NISPP method
First, we show a simple bound on the \(\ell _2\)-sensitivity of the updates of the NISPP method.
Proposition 9
Suppose \(\nu \leqslant \frac{2M^2}{\lambda _kB_{k+1}^2}\); then the \(\ell _2\)-sensitivity of the updates of Algorithm 2 is at most \(\frac{4M}{\lambda _kB_{k+1}}\), where \(B_{k+1} = |\textbf{B}_{k+1}|\) is the batch size of the k-th iteration.
Proof
Let \(w_k\) be the iterate at the start of the k-th iteration of Algorithm 2, and let \(\textbf{B}_{k+1}\) and \(\textbf{B}_{k+1}'\) be two batches used in the k-th iteration to obtain \(u_{k+1}\) and \(u_{k+1}'\), respectively; note that \(\textbf{B}_{k+1}\) and \(\textbf{B}_{k+1}'\) differ in only a single datapoint. Then, due to (5.1), we have for all \(w \in \mathcal {W}\)
Using \(w = u_{k+1}'\) in the first relation and \(w = u_{k+1}\) in the second relation above and then summing, we obtain
Now, noting that \(F_{\textbf{B}_{k+1}}\) is a monotone operator and denoting \(\textbf{a}_{k+1}:= \Vert F_{\textbf{B}_{k+1}'}(u_{k+1}')- F_{\textbf{B}_{k+1}}(u_{k+1}')\Vert _{}^{}\), \(p_{k+1}:= \Vert w_{k+1} - w'_{k+1}\Vert _{}^{} = \Vert u_{k+1} - u'_{k+1}\Vert _{}^{}\) we have
Finally noting that if \(\varvec{\beta }\) and \(\varvec{\beta }'\) are the differing datapoints in \(\textbf{B}_{k+1}\) and \(\textbf{B}_{k+1}'\), then
Using the above relation in (5.22) and recalling that the \(\ell _2\)-sensitivity is \(p_{k+1} = \Vert w_{k+1}-w_{k+1}'\Vert _{}^{} = \Vert u_{k+1} -u_{k+1}'\Vert _{}^{}\), we have that \(p_{k+1}\) satisfies
This implies
Setting \(\nu \leqslant \frac{2M^2}{\lambda _kB_{k+1}^2}\), we have \(p_{k+1} \leqslant \frac{4\,M}{\lambda _kB_{k+1}}.\) Hence, we conclude the proof. \(\square \)
Using the \(\ell _2\)-sensitivity result above along with Propositions 3 and 4, we immediately obtain the following:
Proposition 10
Algorithm 2 with batch sizes \((B_{k+1})_{k \in [K]_0}\), parameters \((\lambda _k)_{k \in [K]_0}\), variance \(\sigma _{k+1}^2 = \frac{32\,M^2}{\lambda _k^2B_{k+1}^2}\frac{\ln (1/\eta )}{\varepsilon ^2}\) and \(\nu \) satisfying assumptions of Proposition 9 is \((\varepsilon , \eta )\)-differentially private.
Now, we provide a policy for setting \(\gamma _k, \lambda _k\) and \(B_{k+1}\) to obtain population risk bounds for the DP-SVI and DP-SSP problems via the NISPP method.
Corollary 2
Algorithm 2 with disjoint batches \(\textbf{B}_{k+1}\) of size \(B_{k+1} = B:= n^{1/3}\) for all \(k \geqslant 0\) and the following parameters
is \((\varepsilon , \eta )\)-differentially private and achieves expected SVI-gap (SSP-gap, respectively)
Proof
Note that the values of \(\nu \) and \(\sigma _{k+1}\), and the other required conditions of Propositions 9 and 10, are satisfied. Hence, this algorithm is \((\varepsilon , \eta )\)-differentially private.
Moreover, all requirements of Theorem 5 are satisfied. In order to maintain a single pass over the dataset, we require \(K = \frac{n}{B} = n^{2/3}\) iterations. We now bound the individual terms of (5.5) ((5.6), respectively) and conclude the corollary using Theorem 5.
Note that we are using a constant parameter policy. Hence, \(\sigma _{k+1} = \sigma = \frac{4\,M}{\rho B\lambda _0}\) for all \(k \geqslant 0\). Substituting appropriate parameter values, we have
Substituting these bounds in Theorem 5, we conclude the proof. \(\square \)
Remark 3
We have the following remarks for NISPP method:
-
1.
In order to obtain a \(\nu \)-approximate solution of the subproblem of the NISPP method satisfying (5.1), we can use the Operator Extrapolation (OE) method (see Theorem 2.3 of [34]). The OE method outputs a solution \(u_{k+1}\) satisfying \(\Vert u_{k+1} -w^*_{k+1}\Vert _{}^{} \leqslant \zeta \) in \( \frac{L+\lambda _0}{\lambda _0}\ln (\frac{D}{\zeta })\) iterations, where \(w_{k+1}^*\) is an exact SVI solution of problem (5.1). Furthermore, we have for all \(w \in \mathcal {W}\),
$$\begin{aligned} 0&\leqslant \langle F(w^*_{k+1}) + \lambda _k(w^*_{k+1}-w_k),w-w^*_{k+1}\rangle \\&= \langle F(u_{k+1}) + F(w^*_{k+1}) - F(u_{k+1}) + \lambda _k (u_{k+1}- w_k)\\&\qquad + \lambda _k(w^*_{k+1}- u_{k+1}) ,{w -w^*_{k+1}}\rangle \\&\leqslant \langle F(u_{k+1}) + \lambda _k (u_{k+1}- w_k),w -w^*_{k+1}\rangle \\&\qquad + (L+\lambda _k)\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{}\Vert w -w^*_{k+1}\Vert _{}^{}\\&\leqslant \langle F(u_{k+1}) + \lambda _k (u_{k+1}- w_k),w -w^*_{k+1}\rangle \\&\qquad + (L+\lambda _k)D\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{}\\&= \langle F(u_{k+1}) + \lambda _k (u_{k+1}- w_k),w- u_{k+1} + u_{k+1}-w^*_{k+1}\rangle \\&\qquad + (L+\lambda _k)D\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{}\\&\leqslant \langle F(u_{k+1}) + \lambda _k (u_{k+1}- w_k),w- u_{k+1}\rangle + \Vert F(u_{k+1})\\&\qquad + \lambda _k (u_{k+1}- w_k)\Vert _{}^{}\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{}\\&\quad + (L+\lambda _k)D\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{}\\&\leqslant \langle F(u_{k+1}) + \lambda _k (u_{k+1}- w_k),w- u_{k+1}\rangle \\&\qquad + [LD+M+2\lambda _kD]\Vert u_{k+1}-w^*_{k+1}\Vert _{}^{} \end{aligned}$$Setting \(\zeta =\nu /[LD+M+2\lambda _kD]\), we obtain that \(u_{k+1}\) is a \(\nu \)-approximate solution satisfying (5.1). Using the convergence rate above, we require \(\frac{L+\lambda _0}{\lambda _0}\ln \frac{MD+LD^2+2\lambda _kD^2}{\nu }\) operator evaluations.
Note that since \(\lambda _0 \geqslant L\), we have \(\frac{L+\lambda _0}{\lambda _0} \leqslant 2\). Moreover,
$$\begin{aligned} \ln {\frac{MD+LD^2 + 2\lambda _kD^2}{\nu }}&\leqslant \ln {\frac{4\lambda _kD^2}{\nu }}\nonumber \\&= \ln \left( \frac{2\lambda _0^2D^2B^2}{M^2}\right) \nonumber \\&= \ln \left( n^{2/3}\max \left\{ n^{2/3}, \frac{d\ln (1/\eta )}{\varepsilon ^2}\right\} \max \left\{ 1, \frac{L^2D^2}{M^2}\right\} \right) \end{aligned}$$(5.23)Hence, each iteration of the NISPP method requires \(O(\log {n})\) iterations of the OE method for solving the subproblem. Moreover, each iteration of the OE method requires 2B stochastic operator evaluations. Hence, we require \(O(KB\log {n})\) stochastic operator evaluations in the entire run of NISPP (Algorithm 2). Noting that \(KB = n\), we conclude that this is a near-linear time algorithm which also performs only a single pass over the data in the stochastic outer loop. We provide the details of the OE method in Appendix B.
-
2.
For the non-DP version of the NISPP method, i.e., \(\sigma _k = 0\) for all k, we can easily obtain a population risk bound of \(O(\frac{MD}{\sqrt{n}})\) by setting \(\lambda _0 = \frac{M}{D}\sqrt{n}\), \(B = 1\) (or \(K = n\)) and \(\nu = \frac{MD}{\sqrt{n}}\) in Corollary 2.
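As a concrete illustration of item 1 above, the \(\lambda \)-strongly monotone subproblem of line 4 can be solved by any linearly convergent method. Below is a sketch using the classical extragradient method as a stand-in for the OE method of Appendix B (both converge linearly on this subproblem); all names are ours, and the example uses a toy skew-symmetric linear operator over a Euclidean ball:

```python
import numpy as np

def proj_ball(w, radius=1.0):
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def solve_prox_subproblem(F, w_k, lam, L, iters=300):
    """Sketch of the inexact prox step (line 4 of Algorithm 2): solve the
    lam-strongly-monotone VI with operator H(u) = F(u) + lam*(u - w_k) by
    the extragradient method, which converges linearly here."""
    H = lambda v: F(v) + lam * (v - w_k)
    eta = 1.0 / (2.0 * (L + lam))       # stepsize below 1/Lipschitz(H)
    u = proj_ball(w_k)
    for _ in range(iters):
        v = proj_ball(u - eta * H(u))   # extrapolation step
        u = proj_ball(u - eta * H(v))   # update step
    return u

# Toy subproblem: F(u) = A u with A skew-symmetric (monotone, L = ||A||_2).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A - A.T
L = np.linalg.norm(A, 2)
w_k = np.array([0.3, -0.2, 0.1])        # interior point of the unit ball
u = solve_prox_subproblem(lambda v: A @ v, w_k, lam=2.0 * L, L=L)
# The solution is interior, so it must satisfy A u + lam*(u - w_k) = 0.
assert np.linalg.norm(A @ u + 2.0 * L * (u - w_k)) < 1e-6
```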
In view of Corollary 2, it seems that running the NISPP method for \(n^{3/2}\) stochastic operator evaluations may provide optimal risk bounds. However, that many stochastic operator evaluations require multiple passes over the dataset, so, in principle, this would only yield bounds on the empirical risk. In order to bound the population risk of this multi-pass version, we analyze the stability of NISPP and provide generalization guarantees which result in optimal population risk.
6 Stability of NISPP and optimal risk for DP-SVI and DP-SSP
In this section, we develop a multi-pass variant of the NISPP method and prove its stability, which allows us to extrapolate its empirical performance to population risk bounds.
6.1 Stability of NISPP method
Let us start with two adjacent datasets \(\textbf{S}\simeq \textbf{S}'\), and suppose we run the NISPP method on both datasets starting from the same point \(w_0 \in \mathcal {W}\). In the following lemma, we bound how far apart the trajectories of these two runs can drift.
Lemma 3
Let \((u_{k+1}, w_{k+1})_{k\geqslant 0}\) and \((u'_{k+1}, w'_{k+1})_{k\geqslant 0}\) be two trajectories of the NISPP method (Algorithm 2) for adjacent datasets \(\textbf{S}\simeq \textbf{S}'\), whose batches are denoted by \(\textbf{B}_{k+1}\) and \(\textbf{B}'_{k+1}\), respectively. Moreover, denote \(\textbf{a}_{k+1}:= \Vert F_{\textbf{B}_{k+1}}(u_{k+1}) - F_{\textbf{B}'_{k+1}}(u_{k+1})\Vert _{}^{}\) and \(\delta _{k+1}:= \Vert u_{k+1}-u'_{k+1}\Vert _{}^{} (= \Vert w_{k+1}-w'_{k+1}\Vert _{}^{})\) for the k-th iteration of Algorithm 2. Then, if \(i = \inf \{k: \textbf{B}_{k+1} \ne \textbf{B}'_{k+1}\}\),
Proof
It is clear from the definition of i that \(\textbf{B}_j = \textbf{B}_j'\) for all \(j \leqslant i\). This implies \(u_j = u'_j\) and \(w_j = w_j'\) for all \(j \leqslant i\). Hence, we conclude the first case of (6.1).
Using (5.1) for the \(\nu \)-approximate strong VI solution, we have
Then, adding (6.2) with \(w = u'_{k+1}\) and (6.3) with \(w = u_{k+1}\), we have
Also note that
where the last inequality follows from the monotonicity of \(F_{\textbf{B}'_{k+1}}\). Using the above relation along with (6.4), we obtain
where we used the definition of \(\textbf{a}_{k+1}\) along with the Cauchy-Schwarz inequality. Solving the quadratic inequality in \(\delta _{k+1}\), we obtain the following recursion
which can be further simplified to
Solving this recursion and noting the base case \(\delta _i = 0\), we obtain (6.1). \(\square \)
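For later reference, unrolling the recursion with base case \(\delta_i = 0\) gives (a reconstruction consistent with the summation form used in the proof of Theorem 6 below):

```latex
\delta_{k+1} \;\leqslant\; \delta_{k} \;+\; \frac{2\mathbf{a}_{k+1}}{\lambda_{k+1}} \;+\; \sqrt{\frac{4\nu}{\lambda_{k+1}}}
\qquad\Longrightarrow\qquad
\delta_{j} \;\leqslant\; \sum_{k=i+1}^{j} \Big( \frac{2\mathbf{a}_{k}}{\lambda_{k}} + \sqrt{\frac{4\nu}{\lambda_{k}}} \Big), \quad j > i .
```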
Direct consequences of the previous analysis are in-expectation and high-probability uniform argument stability upper bounds for the sampling with replacement variant of Algorithm 2.
Theorem 6
Let \(\mathcal {A}\) denote the sampling with replacement NISPP method (Algorithm 2) where \(\textbf{B}_k\) is chosen uniformly at random from subsets of \(\textbf{S}\) of a given size \(B_k\). Then \(\mathcal {A}\) satisfies the following uniform argument stability bounds:
Furthermore, if \(|\textbf{B}_{k}|= B\) and \(\lambda _k= \lambda \) for all k (i.e., constant batch size and regularization parameter throughout iterations) then w.p. at least \(1-n\exp \{-KB/[4n]\}\) (over both sampling and noise addition)
Proof
Let \(\textbf{S}\simeq \textbf{S}^{\prime }\) and let \((u_{k+1})_k\), \((u_{k+1}^{\prime })_k\) be the trajectories of the algorithm on \(\textbf{S}\) and \(\textbf{S}^{\prime }\), respectively. By Lemma 3, letting \(\delta _{k+1}=\Vert {\tilde{w}}_{k+1}-{\tilde{w}}_{k+1}^{\prime }\Vert \), we get \(\delta _{j} \leqslant \sum _{k=1}^{j}\Big ( \frac{2\textbf{a}_k}{\lambda _k}+\sqrt{\frac{4\nu }{\lambda _k}} \Big ),\) where \(\textbf{a}_k=\Vert F_{\textbf{B}_{k+1}}(u_{k+1})-F_{\textbf{B}_{k+1}'}(u_{k+1}')\Vert _{}^{}\) is a random variable. By the law of total probability, \(\mathbb {E}[\textbf{a}_k] \leqslant \frac{|\textbf{B}_{k+1}|}{n}\frac{2\,M}{|\textbf{B}_{k+1}|}+\big (1-\frac{|\textbf{B}_{k+1}|}{n}\big )\cdot 0=\frac{2\,M}{n}.\) Hence, \(\mathbb {E}[\delta _{j}]\leqslant \sum _{k=1}^{j}\Big ( \frac{2\,M}{n\lambda _k}+\sqrt{\frac{4\nu }{\lambda _k}} \Big ) \leqslant \sum _{k=1}^{K}\Big ( \frac{2\,M}{n\lambda _k}+\sqrt{\frac{4\nu }{\lambda _k}} \Big ).\) Since \( \Vert \varPi _{\mathcal {W}}({\bar{w}}_K)-\varPi _{\mathcal {W}}({\bar{w}}_K^{\prime })\Vert _{}^{} \leqslant \Vert {\bar{w}}_K - {\bar{w}}'_K\Vert _{}^{} \leqslant \max _{k\in [K]}\delta _k\), and since \(\textbf{S}\simeq \textbf{S}^{\prime }\) are arbitrary,
We proceed now to the high-probability bound. Let \(\varvec{r}_k\sim \text{ Ber }(p)\), for \(k\in [K]\), with \(Kp< 1\). Then, for any \(0<\theta <1/2\),
Choosing \(\theta =\tau /[2Kp]<1/2\), we get that the probability above is upper bounded by \(\exp \{-\tau ^2/[4Kp]\}\). Finally, choosing \(\tau =Kp\), we get
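As a numeric sanity check of this tail estimate (a sketch only: we assume the bound takes the Chernoff form \(\mathbb {P}[\sum _k \varvec{r}_k \geqslant 2Kp] \leqslant \exp \{-Kp/4\}\), which matches the choices \(\theta =\tau /[2Kp]\) and \(\tau =Kp\) above), one can compare the exact binomial tail against the claimed exponential bound:

```python
from math import comb, exp, ceil

def binom_tail(K: int, p: float, t: int) -> float:
    """Exact P[X >= t] for X ~ Binomial(K, p)."""
    return sum(comb(K, j) * p**j * (1 - p)**(K - j) for j in range(t, K + 1))

K, p = 100, 0.005                          # Kp = 0.5 < 1, as required above
tail = binom_tail(K, p, ceil(2 * K * p))   # P[sum_k r_k >= 2Kp]
bound = exp(-K * p / 4)                    # claimed upper bound exp(-Kp/4)
assert tail <= bound
print(f"tail={tail:.4f} <= bound={bound:.4f}")
```

The exact tail (about 0.39) sits comfortably below the bound (about 0.88) for these illustrative values.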
Next, fix the coordinate i where \(\textbf{S}\simeq \textbf{S}^{\prime }\) may differ. Noticing that \(\textbf{a}_k\) is a.s. upper bounded by \((2M/B)\varvec{r}_k\) with \(\varvec{r}_k\sim \text{ Ber }(p)\), where \(p=B/n\), we get
In particular, w.p. at least \(1-\exp \{-\frac{KB}{4n}\}\), we have \( \textbf{a}_k \leqslant \frac{4\,M}{\lambda n}+\sqrt{\frac{4\nu }{\lambda }}.\) Using a union bound over \(i\in [n]\) (and noticing that averaging preserves the stability bound), we conclude that
Hence, we conclude the proof. \(\square \)
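The total-probability step \(\mathbb {E}[\textbf{a}_k] \leqslant \frac{2M}{n}\) in the proof above can be illustrated by a small Monte Carlo sketch (hypothetical parameter values; \(\textbf{a}_k\) is replaced by its a.s. upper bound \((2M/B)\cdot \mathbb {1}\{i \in \textbf{B}_k\}\)):

```python
import random

def expected_a_upper(n, B, M, trials=100_000, seed=0):
    """Monte Carlo estimate of E[(2M/B) * 1{differing index i is in the batch}]
    when the batch is B indices sampled uniformly without replacement from [n].
    Since P(i in batch) = B/n, the expectation equals (2M/B)*(B/n) = 2M/n."""
    rng = random.Random(seed)
    i = 0  # the single index where S and S' differ
    hits = sum(i in rng.sample(range(n), B) for _ in range(trials))
    return (2 * M / B) * hits / trials

n, B, M = 1000, 50, 1.0
est = expected_a_upper(n, B, M)
assert abs(est - 2 * M / n) < 5e-4   # matches the bound 2M/n = 0.002
```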
Remark 4
An important observation is that, for the high-probability guarantee to be useful, it is necessary that the algorithm be run for sufficiently many iterations; in particular, we require \(K=\omega (n/B)\). Whether this assumption can be avoided entirely is an interesting question. Nevertheless, as we will see in the next subsection, our policy for the DP-SVI and DP-SSP problems satisfies this requirement.
6.2 Optimal risk for DP-SVI and DP-SSP by the NISPP method
In the previous three sections, we provided bounds on the optimization error, the generalization error, and the value of \(\sigma \) required for \((\varepsilon ,\eta )\)-differential privacy. In this section, we specify a policy for selecting \(\lambda _k, B_k, \gamma _k, \sigma _k \) and \(\nu \) such that the requirements of the previous three sections are satisfied and we obtain optimal risk bounds while maintaining \((\varepsilon ,\eta )\)-privacy. In particular, consider the multi-pass NISPP method where each sample batch \(\textbf{B}_k\) is chosen uniformly at random from subsets of \(\textbf{S}\), with replacement. Then, we have the following theorem:
Theorem 7
Let \(\mathcal {A}\) be the multi-pass NISPP method (Algorithm 2). Set the following constant stepsize and batchsize policy for \(\mathcal {A}\):
Then, Algorithm 2 is \((\varepsilon , \eta )\)-differentially private. Moreover, the output \(\mathcal {A}(\textbf{S})\) satisfies the following bound on \(\mathbb {E}_{{\mathcal {A}}}[\text{ WeakGap}_{\textrm{VI}}(\mathcal {A}(\textbf{S}),F)]\) for the SVI problem (or \(\mathbb {E}_{{\mathcal {A}}}[\text{ WeakGap}_{\textrm{SP}}(\mathcal {A}(\textbf{S}),f)]\) for the SSP problem)
Moreover, such a solution is obtained in a total of \({\widetilde{O}}(n\sqrt{n})\) stochastic operator evaluations.
Proof
Note that since \(\nu \) satisfies the assumption in Proposition 9, the \(\ell _2\)-sensitivity of the update of \(u_{k+1}\) is \(\frac{4M}{\lambda _0B_{k+1}}\). Then, in view of Theorem 1 along with the value of \(\sigma _{k+1}\), we conclude that Algorithm 2 is \((\varepsilon , \eta )\)-differentially private.
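For illustration, the noise calibration in this step can be sketched as follows. Only the sensitivity \(4M/(\lambda _0 B_{k+1})\) is taken from the text; the Gaussian-mechanism constant and the parameter names are assumptions standing in for the paper's exact policy (which composes privacy across iterations via Theorem 1):

```python
from math import log, sqrt

def gaussian_noise_scale(M, lam0, batch_size, eps, eta):
    """Per-iteration Gaussian noise std for an update whose l2-sensitivity
    is 4M / (lam0 * batch_size), calibrated via the basic Gaussian mechanism
    sigma = sensitivity * sqrt(2 ln(1.25/eta)) / eps (illustrative constants;
    the paper's sigma_{k+1} accounts for composition over K iterations)."""
    sensitivity = 4 * M / (lam0 * batch_size)
    return sensitivity * sqrt(2 * log(1.25 / eta)) / eps

sigma = gaussian_noise_scale(M=1.0, lam0=10.0, batch_size=32, eps=1.0, eta=1e-5)
assert sigma > 0
```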
Now, for convergence, we first bound the empirical gap. Given that our bounds for (VI(F)) and (SP(f)) are analogous, we treat both problems without distinction. By Theorem 5, along with the fact that sampling with replacement is an unbiased stochastic oracle for the empirical operator, we have for any \(\textbf{S}\)
A similar claim can be made for the empirical gap of the (SP(f)) problem, \(\mathbb {E}_{{\mathcal {A}}}\big [ \text{ EmpGap }(\mathcal {A}, f_{\textbf{S}})\big ]\), where the output of \(\mathcal {A}\) is \((x(\textbf{S}), y(\textbf{S}))\).
Next, by Theorem 6, we have that \(\mathcal {A}(\textbf{S})\) (or \(x(\textbf{S})\) and \(y(\textbf{S})\) for the SSP case) are UAS with parameter
Hence, noting that the empirical gap plus a stability term upper bounds the weak VI or SP gap, i.e., using Proposition 1 or Proposition 2 (depending on whether the problem is an SSP or SVI, respectively), we have that the risk is upper bounded by the empirical risk plus \(M\delta \), where \(\delta \) is the UAS parameter of the algorithm; in particular, if \(\text{ WeakGap }(\mathcal {A};\textbf{S})\) is the (SVI or SSP, respectively) gap function for the expectation objective, then
A similar claim can be made for \(\text{ WeakGap}_{\textrm{SP}}(\mathcal {A},f)\).
Finally, we analyze the running time. As in Remark 3, the number of OE method iterations required to obtain a \(\nu \)-approximate solution is \(O\big (\frac{L+\lambda _0}{\lambda _0}\ln \big (\frac{LD^2+MD+\lambda _0D^2}{\nu }\big )\big )\). Now note that \(\frac{L+\lambda _0}{\lambda _0} \leqslant \frac{\sqrt{n}+1}{\sqrt{n}} \leqslant 2\) since \(n \geqslant 1\). Moreover, in view of (5.23), we have \(\ln \big (\frac{LD^2+MD+\lambda _0D^2}{\nu }\big ) \leqslant \ln \left( \frac{4\lambda _0D^2}{\nu }\right) \leqslant \ln \left( n^2\max \lbrace 1, \frac{L^2D^2}{M^2}\rbrace \max \lbrace n, \frac{d\ln (1/\eta )}{\varepsilon ^2}\rbrace \right) \). Each iteration of the OE method costs B stochastic operator evaluations, and we run the outer loop of the NISPP method K times. Hence, the total number of stochastic operator evaluations (ignoring the \(\ln \)-term) is \({\widetilde{O}}(KB) = {\widetilde{O}}(n\sqrt{n})\). This concludes the proof. \(\square \)
7 Lower bounds and optimality of our algorithms
In this section, we show the optimality of the rates obtained in Sects. 4.1 and 6.1. The first observation is that, since DP-SCO corresponds to a DP-SSP problem where \({{\mathcal {Y}}}\) is a singleton, the complexity of DP-SSP is lower bounded by \(\varOmega \big (MD \big ( \frac{1}{\sqrt{n}}+\min \big \{1,\frac{\sqrt{d}}{\varepsilon n}\big \} \big )\big )\): this is a known lower bound for DP-SCO [5]. It is important to note as well that this reduction applies to the weak generalization gap, as defined in (1.4), since in the case where \({{\mathcal {Y}}}=\{\bar{y}\}\) is a singleton:
which is simply the expected optimality gap. Using this reduction, together with a lower bound for DP-SCO [5], we conclude the following.
Proposition 11
Let \(n,d\in \mathbb {N}\), \(L,M,D,\varepsilon >0\) and \(\eta =o(1/n)\). The class of DP-SSP problems with gradient operators within the class \({\mathcal {M}}_{\mathcal {W}}^1(L,M)\), and domain \({\mathcal {W}}\) containing a Euclidean ball of diameter D/2, satisfies the lower bound
Next, we study the case of DP-SVI. The situation here is more subtle. Our approach is to first prove a reduction from the population weak VI gap to the empirical strong VI gap for the case where the operators are constant w.r.t. w. In fact, it seems unlikely that this reduction works for more general monotone operators; however, this suffices for our purposes, as we will later prove that a lower bound construction used for DP-ERM [7] leads to a lower bound on the strong VI gap with constant operators.
The formal reduction to the empirical version of the problem is presented in the following lemma. Its proof follows closely the reduction from DP-SCO to DP-ERM from [5]. Below, given a dataset \(\textbf{S}\in {{\mathcal {Z}}}^n\), let \({{\mathcal {P}}}_{\textbf{S}}=\frac{1}{n}\sum _{\varvec{\beta }\in \textbf{S}}\delta _{\varvec{\beta }}\) be the empirical distribution associated with \(\textbf{S}\).
Lemma 4
Let \({\mathcal {A}}\) be an \((\varepsilon /[4\log (1/\eta )], e^{-\varepsilon }\eta /[8\log (1/\eta )])\)-DP algorithm for SVI problems. Then there exists an \((\varepsilon ,\eta )\)-DP algorithm \({\mathcal {B}}\) such that for any empirical VI problem with constant operators,
Proof
Consider the algorithm \({\mathcal {B}}\) that does the following: first, it draws a sample \(\textbf{T}\sim ({{\mathcal {P}}}_{\textbf{S}})^n\); next, it executes \({\mathcal {A}}\) on \(\textbf{T}\); and finally, it outputs \({\mathcal {A}}(\textbf{T})\). We claim that this algorithm is \((\varepsilon ,\eta )\)-DP w.r.t. \(\textbf{S}\), which follows easily by bounding the number of repeated examples with high probability, together with the group privacy property applied to \({\mathcal {A}}\) (for a more detailed proof, see Appendix C in [5]). Now, given a constant operator \(F_{\varvec{\beta }}(w)\), let \(R(\varvec{\beta })\in {\mathbb {R}}^d\) be its unique evaluation. Similarly, let \(R_{\textbf{S}}\) be the unique evaluation of \(F_{\textbf{S}}\), and, given a distribution \({{\mathcal {P}}}\), let \(R_{{\mathcal {P}}}\) be the unique evaluation of \(F_{{{\mathcal {P}}}}(w)={\mathbb {E}}_{\varvec{\beta }\sim {{\mathcal {P}}}}[F_{\varvec{\beta }}(w)]\).
Noting that \({\mathbb {E}}_{\textbf{T}}[R_{\textbf{T}}]=R_{\textbf{S}},\) we have that
where the third equality holds since the optimal choice of w is independent of \(\textbf{T}\), and the last equality holds by the definition of the weak gap function and the fact that \(\textbf{T}\sim ({{\mathcal {P}}}_{\textbf{S}})^n\).
\(\square \)
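The resampling reduction in the proof of Lemma 4 can be sketched as follows (hypothetical interface: `solve_svi` stands in for the DP-SVI algorithm \({\mathcal {A}}\), which the paper does not specify at code level):

```python
import random

def reduction_B(solve_svi, S, seed=None):
    """Algorithm B from Lemma 4: draw T ~ (P_S)^n, i.e. n i.i.d. samples
    from the empirical distribution of S, then run the DP-SVI algorithm A
    on T and return its output A(T)."""
    rng = random.Random(seed)
    n = len(S)
    T = [rng.choice(S) for _ in range(n)]  # sampling from P_S = empirical dist.
    return solve_svi(T)

# Toy usage with a stand-in solver that just averages constant-operator data.
S = [(-1.0,), (1.0,), (1.0,), (-1.0,)]
out = reduction_B(lambda T: sum(t[0] for t in T) / len(T), S, seed=1)
assert -1.0 <= out <= 1.0
```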
Next, we prove a lower bound for empirical VI problems with constant operators.
Proposition 12
Let \(n,d\in \mathbb {N}\), \(L,M,D,\varepsilon >0\) and \(2^{-o(n)}\leqslant \eta \leqslant o(1/n)\). The class of DP empirical VI problems with constant operators within the class \({\mathcal {M}}_{\mathcal {W}}^1(L,M)\), and domain \({\mathcal {W}}\) containing a Euclidean ball of diameter D/2, satisfies the lower bound
Proof
Consider the following empirical VI problem: \(F_{\varvec{\beta }}(u)= M\varvec{\beta }\), \(\mathcal {W}={\mathcal {B}}(0,D)\), and a dataset \(\textbf{S}\) with points contained in \(\{-1/\sqrt{d},+1/\sqrt{d}\}^d\). Notice that, since the operator in this case is constant, the VI gap coincides with the excess risk of the associated convex optimization problem
Indeed, for any \(u\in {\mathcal {W}}\),
This, together with the lower bounds on the excess risk proved for this problem in [7, Appendix C] and [46, Theorem 5.1], shows that any \((\varepsilon ,\eta )\)-DP algorithm for this problem must incur a worst-case VI gap of \(\varOmega (MD\min \{1,\frac{\sqrt{d\log (1/\eta )}}{\varepsilon n}\})\), which proves the result. \(\square \)
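For concreteness, the identity behind this equivalence, written out from the definitions above with \({\mathcal {W}}={\mathcal {B}}(0,D)\) and constant evaluation \(R_{\textbf{S}}\), is:

```latex
\sup_{u\in\mathcal{W}} \langle R_{\mathbf{S}},\, w-u\rangle
\;=\; \langle R_{\mathbf{S}}, w\rangle \;+\; D\,\Vert R_{\mathbf{S}}\Vert
\;=\; \langle R_{\mathbf{S}}, w\rangle \;-\; \min_{u\in\mathcal{W}} \langle R_{\mathbf{S}}, u\rangle ,
```

so the strong VI gap of any \(w\) coincides with its excess risk for the linear objective \(u \mapsto \langle R_{\mathbf{S}}, u\rangle \).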
The two results above provide the claimed lower bound for the weak SVI gap of any differentially private algorithm.
Theorem 8
Let \(n,d\in \mathbb {N}\), \(L,M,D,\varepsilon >0\) and \(2^{-o(n)}\leqslant \eta \leqslant o(1/n)\). The class of DP-SVI problems with operators within the class \({\mathcal {M}}_{\mathcal {W}}^1(L,M)\), and domain \({\mathcal {W}}\) containing a Euclidean ball of diameter D/2, satisfies the following lower bound for the weak VI gap
Before proving the result, let us observe that the presented lower bound shows the optimality of our algorithms in the range \(M\geqslant LD\). Obtaining a matching lower bound for any choice of M, L, D is an interesting question which, unfortunately, our proof technique does not address: the limitation is that the lower bound is based on constant operators, whose Lipschitz constants are always zero.
Proof
Let \({\mathcal {A}}\) be any algorithm for SVI. By the classical (nonprivate) lower bounds for SVI [30, 40], the minimax attainable SVI gap is lower bounded by \(\varOmega (MD/\sqrt{n})\). On the other hand, using Lemma 4, the accuracy of any \((\varepsilon ,\eta )\)-DP algorithm for the weak SVI gap is lower bounded by the strong gap achieved by \((4\varepsilon \ln (1/\eta ),e^{\varepsilon }{\tilde{O}}(\eta ))\)-DP algorithms on empirical VI problems with constant operators. Finally, by Proposition 12, the latter class of problems enjoys a lower bound \(\varOmega (\min \{1,\sqrt{d\ln (1/[e^{\varepsilon }{\tilde{O}}(\eta )])}/[\varepsilon n\ln (1/\eta )]\})=\tilde{\varOmega }(\min \{1,\sqrt{d}/[n\varepsilon ]\})\), which implies a lower bound of this order on the former class. We conclude by combining the private and nonprivate lower bounds established above. \(\square \)
Notes
The denominations of weak and strong gap functions used in this paper are not standard, but we believe they are the most appropriate in this context. For example, [50] used the terms weak and strong generalization measure for (1.4) and (1.3), respectively, but it is clear that these quantities do not refer to standard generalization measures used in stochastic optimization.
Note that the probabilities in the definition of DP only involve the probability space of algorithmic randomization, and not of the datasets, which is emphasized by the notation \(\mathbb {P}_{\mathcal {A}}\). The datasets must be neighbors, but they are otherwise arbitrary, and this is crucial to certify the privacy for any user.
Here, we mean that for almost every \(\varvec{\beta }\), we have \(F_{\varvec{\beta }}\in {\mathcal {M}}_{\mathcal {W}}^1(M,L)\).
In our case we use uniform sampling on each iteration, as opposed to the Poisson sampling of [1]; however, it is possible to verify that similar moment estimates lead to our stated result.
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318. ACM (2016)
Asi, H., Feldman, V., Koren, T., Talwar, K.: Private stochastic convex optimization: optimal rates in l1 geometry. In: M. Meila, T. Zhang (eds.) Proceedings of the 38th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 139, pp. 393–403. PMLR (2021)
Auslender, A., Teboulle, M.: Interior projection-like methods for monotone variational inequalities. Math. Program. 104(1), 39–68 (2005)
Bassily, R., Feldman, V., Guzmán, C., Talwar, K.: Stability of stochastic gradient descent on nonsmooth convex losses. In: H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, H. Lin (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 4381–4391. Curran Associates, Inc. (2020). https://proceedings.neurips.cc/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Paper.pdf
Bassily, R., Feldman, V., Talwar, K., Guha Thakurta, A.: Private stochastic convex optimization with optimal rates. In: Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc. (2019)
Bassily, R., Guzman, C., Nandi, A.: Non-euclidean differentially private stochastic convex optimization. In: M. Belkin, S. Kpotufe (eds.) Proceedings of Thirty Fourth Conference on Learning Theory, Proceedings of Machine Learning Research, vol. 134, pp. 474–499. PMLR (2021)
Bassily, R., Smith, A., Thakurta, A.: Differentially private empirical risk minimization: efficient algorithms and tight error bounds. In: 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS). IEEE (2014)
Bousquet, O., Elisseeff, A.: Stability and generalization. J. Mach. Learn. Res. 2, 499–526 (2002). https://doi.org/10.1162/153244302760200704
Bousquet, O., Klochkov, Y., Zhivotovskiy, N.: Sharper bounds for uniformly stable algorithms. In: J. Abernethy, S. Agarwal (eds.) Proceedings of Thirty Third Conference on Learning Theory, Proceedings of Machine Learning Research, vol. 125, pp. 610–626. PMLR (2020)
Bun, M., Steinke, T.: Concentrated differential privacy: Simplifications, extensions, and lower bounds. In: Theory of Cryptography Conference, pp. 635–658. Springer (2016)
Chaudhuri, K., Monteleoni, C.: Privacy-preserving logistic regression. In: NIPS (2008)
Chaudhuri, K., Monteleoni, C., Sarwate, A.D.: Differentially private empirical risk minimization. J. Mach. Learn. Res. 12(Mar), 1069–1109 (2011)
Dong, J., Roth, A., Su, W.J., et al.: Gaussian differential privacy. J. R. Stat. Soc. B 84(1), 3–37 (2022)
Duchi, J.C., Ruan, F.: The right complexity measure in locally private estimation: it is not the fisher information. arXiv preprint arXiv:1806.05756 (2018)
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Proceedings of the 3rd Conference on Theory of Cryptography, TCC’06, pp. 265–284. Springer, Berlin (2006). https://doi.org/10.1007/11681878_14
Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4), 211–407 (2014). https://doi.org/10.1561/0400000042
Dwork, C., Rothblum, G.N., Vadhan, S.P.: Boosting and differential privacy. In: 51th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, October 23–26, 2010, Las Vegas, Nevada, USA, pp. 51–60. IEEE Computer Society (2010). https://doi.org/10.1109/FOCS.2010.12
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2007)
Feldman, V., Koren, T., Talwar, K.: Private stochastic convex optimization: optimal rates in linear time. In: Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, pp. 439–449. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3357713.3384335
Feldman, V., Mironov, I., Talwar, K., Thakurta, A.: Privacy amplification by iteration. In: M. Thorup (ed.) 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, Paris, France, October 7–9, 2018, pp. 521–532. IEEE Computer Society (2018). https://doi.org/10.1109/FOCS.2018.00056
Feldman, V., Vondrak, J.: High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In: A. Beygelzimer, D. Hsu (eds.) Proceedings of the Thirty-Second Conference on Learning Theory, Proceedings of Machine Learning Research, vol. 99, pp. 1270–1279. PMLR, Phoenix, USA (2019)
Gupta, A., Ligett, K., McSherry, F., Roth, A., Talwar, K.: Differentially private combinatorial optimization. In: M. Charikar (ed.) Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, pp. 1106–1125. SIAM (2010)
Hardt, M., Recht, B., Singer, Y.: Train faster, generalize better: Stability of stochastic gradient descent. In: M.F. Balcan, K.Q. Weinberger (eds.) Proceedings of The 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 48, pp. 1225–1234. PMLR, New York (2016)
Hsieh, Y., Iutzeler, F., Malick, J., Mertikopoulos, P.: On the convergence of single-call stochastic extra-gradient methods. In: H.M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E.B. Fox, R. Garnett (eds.) Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8–14 December 2019, Vancouver, BC, Canada, pp. 6936–6946 (2019). http://papers.nips.cc/paper/8917-on-the-convergence-of-single-call-stochastic-extra-gradient-methods
Iusem, A.N., Jofré, A., Oliveira, R.I., Thompson, P.: Variance-based extragradient methods with line search for stochastic variational inequalities. SIAM J. Optim. 29(1), 175–206 (2019). https://doi.org/10.1137/17M1144799
Jain, P., Kothari, P., Thakurta, A.: Differentially private online learning. In: 25th Annual Conference on Learning Theory (COLT), pp. 24.1–24.34 (2012)
Jain, P., Thakurta, A.: (near) dimension independent risk bounds for differentially private learning. In: ICML (2014)
Jalilzadeh, A., Shanbhag, U.V.: A proximal-point algorithm with variable sample-sizes (ppawss) for monotone stochastic variational inequality problems. In: 2019 Winter Simulation Conference (WSC), pp. 3551–3562 (2019). https://doi.org/10.1109/WSC40007.2019.9004836
Juditsky, A., Nemirovski, A.: First order methods for nonsmooth convex large-scale optimization, i: General Purpose Methods. MIT Press (2012)
Juditsky, A., Nemirovski, A.S., Tauvel, C.: Solving variational inequalities with Stochastic Mirror-Prox algorithm. Stoch. Syst. 1(1), 17–58 (2011). https://doi.org/10.1214/10-SSY011
Kasiviswanathan, S.P., Lee, H.K., Nissim, K., Raskhodnikova, S., Smith, A.: What can we learn privately? In: 2008 49th Annual IEEE Symposium on Foundations of Computer Science, pp. 531–540. IEEE (2008)
Kifer, D., Smith, A., Thakurta, A.: Private convex empirical risk minimization and high-dimensional regression. In: Conference on Learning Theory, pp. 25–1 (2012)
Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Kotsalis, G., Lan, G., Li, T.: Simple and optimal methods for stochastic variational inequalities, i: operator extrapolation (2020)
Lei, Y., Yang, Z., Yang, T., Ying, Y.: Stability and generalization of stochastic gradient methods for minimax problems. In: International Conference on Machine Learning, pp. 6175–6186. PMLR (2021)
Li, T., Sanjabi, M., Beirami, A., Smith, V.: Fair resource allocation in federated learning. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26–30, 2020. OpenReview.net (2020). https://openreview.net/forum?id=ByexElSYDr
Mironov, I.: Rényi differential privacy. In: Proceedings of 30th IEEE Computer Security Foundations Symposium (CSF), pp. 263–275 (2017). https://arxiv.org/abs/1702.07476
Nemirovski, A.: Prox-method with rate of convergence o(1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15(1), 229–251 (2004). https://doi.org/10.1137/S1052623403425629
Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)
Nemirovsky, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)
Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109(2), 319–344 (2007)
Neumann, Jv.: Zur theorie der gesellschaftsspiele. Math. Ann. 100, 295–320 (1928)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control. Optim. 14(5), 877–898 (1976)
Shalev-Shwartz, S., Shamir, O., Srebro, N., Sridharan, K.: Learnability, stability and uniform convergence. J. Mach. Learn. Res. 11, 2635–2670 (2010)
Sion, M.: On general minimax theorems. Pac. J. Math. 8(1), 171–176 (1958)
Steinke, T., Ullman, J.R.: Between pure and approximate differential privacy. J. Priv. Confident. 7(2) (2016)
Ullman, J.: Private multiplicative weights beyond linear queries. In: Proceedings of the 34th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pp. 303–312. ACM (2015)
Vershynin, R.: High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press (2018). https://doi.org/10.1017/9781108231596
Williamson, R., Menon, A.: Fairness risk measures. In: K. Chaudhuri, R. Salakhutdinov (eds.) Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 97, pp. 6786–6797. PMLR (2019)
Zhang, J., Hong, M., Wang, M., Zhang, S.: Generalization bounds for stochastic saddle point problems. In: Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 130, pp. 568–576. PMLR (2021)
Acknowledgements
CG would like to thank Roberto Cominetti for valuable discussions on stochastic variational inequalities and nonexpansive iterations. Part of this work was done while CG was at the University of Twente.
Funding
Open access funding provided by SCELC, Statewide California Electronic Library Consortium
Author information
Authors and Affiliations
Corresponding author
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
DB was partially supported by the NSF grants CCF 1909298, CCF 2245705. CG was partially supported by INRIA through the INRIA Associate Teams project, ANID—Millenium Science Initiative Program—NCN17_059, and FONDECYT 1210362 project.
Appendices
Proof of Proposition 2
Let \(\textbf{S}' = (\varvec{\beta }'_1, \dots , \varvec{\beta }'_n)\) be independent of \(\textbf{S}\). For \(i \in [n]\), we denote \(\textbf{S}^{i} = (\varvec{\beta }_1, \dots , \varvec{\beta }_{i-1}, \varvec{\beta }'_{i}, \varvec{\beta }_{i+1}, \dots , \varvec{\beta }_n)\). Then, for any \(w \in \mathcal {W}\), we have
Now, taking the supremum over \(w \in \mathcal {W}\) and the expectation over \(\mathcal {A}\), which is \(\delta \)-UAS, we have
Operator extrapolation method [34]
Suppose we want to solve the VI problem associated with the operator \(F_k(\cdot ) = F(\cdot )+ \lambda _k(\cdot -w_k)\), and let \(w_{k+1}^*\) denote its (unique) solution. It is clear that \(F_k\) is an \((L+\lambda _k)\)-Lipschitz continuous operator which is also \(\lambda _k\)-strongly monotone. Denote \(\kappa := \frac{\lambda _k}{L+\lambda _k} + 1\) and consider the following algorithm for solving this problem:
We have the following convergence guarantee for this algorithm:
In particular, in order to ensure that \(\Vert z_T-w_{k+1}^*\Vert _{}^{} \leqslant \frac{\nu }{LD^2+MD+ 2\lambda _kD^2}\), we require
iterations.
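As a rough illustration of this appendix subproblem, the regularized VI can be solved numerically as sketched below. This is a sketch only: the OE method of [34] is not reproduced; a plain projected extragradient loop (Korpelevich-style), with an assumed step size, is used as a stand-in for solving the strongly monotone subproblem \(F_k(\cdot )=F(\cdot )+\lambda _k(\cdot -w_k)\):

```python
import numpy as np

def solve_regularized_vi(F, w_k, lam, L, D, T):
    """Approximately solve the strongly monotone VI with operator
    F_k(w) = F(w) + lam * (w - w_k) over the ball B(0, D), using
    projected extragradient steps (a stand-in for the OE method)."""
    def proj(w):  # Euclidean projection onto B(0, D)
        nrm = np.linalg.norm(w)
        return w if nrm <= D else (D / nrm) * w

    def Fk(w):
        return F(w) + lam * (w - w_k)

    gamma = 0.5 / (L + lam)  # assumed step size, below 1/Lip(F_k)
    z = np.array(w_k, dtype=float)
    for _ in range(T):
        z_half = proj(z - gamma * Fk(z))   # extrapolation step
        z = proj(z - gamma * Fk(z_half))   # update step
    return z

# Toy check: F(w) = A w with a monotone, Lipschitz linear operator A.
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
F = lambda w: A @ w
w_star = solve_regularized_vi(F, w_k=np.ones(2), lam=1.0,
                              L=float(np.linalg.norm(A, 2)), D=10.0, T=200)
# For this toy instance the subproblem solution solves (A + I) w = w_k.
```

For the toy instance above, the subproblem solution is \((A+I)^{-1}w_k = (0.2, 0.6)\), and the iterates converge linearly to it by strong monotonicity.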
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Boob, D., Guzmán, C. Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems. Math. Program. 204, 255–297 (2024). https://doi.org/10.1007/s10107-023-01953-5
Keywords
- Variational inequalities
- Saddle-point problems
- Differential privacy
- Stochastic algorithms
- Algorithmic stability