Abstract
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator T and a single-valued monotone, Lipschitz continuous, and expectation-valued operator V. We draw motivation from the seminal work by Attouch and Cabot (Attouch in AMO 80:547–598, 2019, Attouch in MP 184: 243–287) on relaxed inertial methods for monotone inclusions and present a stochastic extension of the relaxed inertial forward–backward-forward method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that our method produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence, and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate while the oracle complexity of computing an \(\epsilon \)-solution improves to \({\mathcal {O}}(1/\epsilon )\). Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement allowing for far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of \({\mathcal {O}}(1/k)\) while the oracle complexity of computing a suitably defined \(\epsilon \)-solution is \({\mathcal {O}}(1/\epsilon ^{1+a})\) where \(a > 1\). Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to competitors.
1 Introduction
1.1 Problem formulation and motivation
A wide range of problems in areas such as optimization, variational inequalities, game theory, signal processing, or traffic theory can be reduced to solving inclusions involving set-valued operators in a Hilbert space \({\mathsf {H}}\), i.e. to finding a point \(x\in {\mathsf {H}}\) such that \(0\in F(x)\), where \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is a set-valued operator. In many applications such inclusion problems display a specific structure revealing that the operator F can be additively decomposed. This leads us to the main problem we consider in this paper.
Problem 1
Let \({\mathsf {H}}\) be a real separable Hilbert space with inner product \(\left\langle \cdot ,\cdot \right\rangle \) and associated norm \(\left\| \cdot \right\| =\sqrt{\left\langle \cdot ,\cdot \right\rangle }\). Let \(T:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) and \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) be maximally monotone operators, such that V is L-Lipschitz continuous. The problem is to find \(x\in {\mathsf {H}}\) such that \(0\in T(x)+V(x).\) (MI)
We assume that Problem 1 is well-posed:
Assumption 1
\({\mathsf {S}}\triangleq \mathsf {Zer}(F)\ne \varnothing \).
We are interested in the case where (MI) is solved by an iterative algorithm based on a stochastic oracle (SO) representation of the operator V. Specifically, when solving the problem, the algorithm makes calls to the SO. At each call, the SO receives as input a search point \(x\in {\mathsf {H}}\) generated by the algorithm on the basis of the information gathered so far, and returns the output \({\hat{V}}(x,\xi )\), where \(\xi \) is a random variable defined on some given probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), taking values in a measurable set \(\Xi \) with law \({{\mathsf {P}}}={\mathbb {P}}\circ \xi ^{-1}\). In most parts of this paper, and the vast majority of contributions on stochastic variational problems in general, it is assumed that the output of the SO is unbiased, i.e. \(V(x)={\mathbb {E}}_{\xi }[{\hat{V}}(x,\xi )]\quad \forall x\in {\mathsf {H}}.\) (1)
Such stochastic inclusion problems arise in numerous problems of fundamental importance in mathematical optimization and equilibrium problems, either directly or through an appropriate reformulation. An excellent survey on the existing techniques for solving problem (MI) can be found in [3] (in general Hilbert spaces) and [4] (in the finite-dimensional case).
1.2 Motivating examples
In what follows, we provide some motivating examples.
Example 1
(Stochastic Convex Optimization) Let \({\mathsf {H}}_{1},{\mathsf {H}}_{2}\) be separable Hilbert spaces. A large class of stochastic optimization problems, with a wide range of applications in signal processing, machine learning and control, is given by \(\min _{u\in {\mathsf {H}}_{1}}\;f(u)+h(u)+g(Lu),\) (2)
where \(h:{\mathsf {H}}_{1}\rightarrow {\mathbb {R}}\) is a convex differentiable function with a Lipschitz continuous gradient \(\nabla h\), represented as \(h(u)={\mathbb {E}}_{\xi }[{\hat{h}}(u,\xi )]\), \(f:{\mathsf {H}}_{1}\rightarrow (-\infty ,\infty ]\) and \(g:{\mathsf {H}}_{2}\rightarrow (-\infty ,\infty ]\) are proper, convex, lower semi-continuous functions, and \(L:{\mathsf {H}}_{1}\rightarrow {\mathsf {H}}_{2}\) is a bounded linear operator. Problem (2) gains particular relevance in machine learning, where usually h(u) is a convex data fidelity term (e.g. a population risk functional), and g(Lu) and f(u) embody penalty or regularization terms; see e.g. total variation [5], hierarchical variable selection [6, 7], and graph regularization [8, 9]. Applications in control and engineering are given in [10, 11]. We refer to (2) as the primal problem. Using Fenchel-Rockafellar duality [3, ch.19], the dual problem of (2) is given by \(\min _{v\in {\mathsf {H}}_{2}}\;(f+h)^{*}(-L^{*}v)+g^{*}(v),\) (3)
where \(g^{*}\) is the Fenchel conjugate of g and \((f+h)^{*}(w)=f^{*}\square h^{*}(w)=\inf _{u\in {\mathsf {H}}_{1}}\{f^{*}(u)+h^{*}(w-u)\}\) represents the infimal convolution of the conjugate functions \(f^{*}\) and \(h^{*}\). Combining the primal problem (2) with its dual (3), we obtain the saddle-point problem \(\min _{u\in {\mathsf {H}}_{1}}\max _{v\in {\mathsf {H}}_{2}}\;f(u)+h(u)+\left\langle Lu,v\right\rangle -g^{*}(v).\) (4)
Following classical Karush-Kuhn-Tucker theory [12], the primal-dual optimality conditions associated with (4) are concisely represented by the following monotone inclusion: Find \({\bar{x}}=({\bar{u}},{\bar{v}})\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}\equiv {\mathsf {H}}\) such that \(0\in \nabla h({\bar{u}})+\partial f({\bar{u}})+L^{*}{\bar{v}}\quad \text {and}\quad 0\in \partial g^{*}({\bar{v}})-L{\bar{u}}.\)
We may compactly summarize these conditions in terms of the zero-finding problem (MI) using the operators V and T, defined for \(x=(u,v)\) as \(V(x)\triangleq (\nabla h(u)+L^{*}v,\,-Lu)\quad \text {and}\quad T(x)\triangleq \partial f(u)\times \partial g^{*}(v).\)
Note that the operator \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is the sum of a maximally monotone and a skew-symmetric operator. Hence, in general, it is not cocoercive. Conditions on the data guaranteeing Assumption 1 are stated in [13].
Since h(u) is represented as an expected value, we need to appeal to simulation-based methods to evaluate its gradient. Also, significant computational speedups can be achieved if we are able to sample the skew-symmetric linear operator \((u,v)\mapsto (L^{*}v,-Lu)\) in an efficient way. Hence, we assume that there exists a SO that can provide unbiased estimators of the gradient \(\nabla h(u)\) and of the pair \((L^{*}v,-Lu)\). More specifically, given the current position \(x=(u,v)\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}\), the oracle will output the random estimators \({\hat{H}}(u,\xi ),{\hat{L}}_{u}(u,\xi ),{\hat{L}}_{v}(v,\xi )\) such that \({\mathbb {E}}_{\xi }[{\hat{H}}(u,\xi )]=\nabla h(u),\quad {\mathbb {E}}_{\xi }[{\hat{L}}_{u}(u,\xi )]=Lu,\quad {\mathbb {E}}_{\xi }[{\hat{L}}_{v}(v,\xi )]=L^{*}v.\)
This oracle feedback generates the random operator \({\hat{V}}(x,\xi )=({\hat{H}}(u,\xi )+{\hat{L}}_{v}(v,\xi ),-{\hat{L}}_{u}(u,\xi ))\), which allows us to approach the saddle-point problem (4) via simulation-based techniques.
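To make the oracle concrete, here is a minimal NumPy sketch; the least-squares fidelity term, the sampled data, and the decision to treat L as deterministic are all illustrative assumptions, not part of the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
Lmat = rng.normal(size=(m, n))        # bounded linear operator L
A_data = rng.normal(size=(100, n))    # samples a_xi defining h
b_data = rng.normal(size=100)

def oracle(u, v, idx):
    """SO output V_hat(x, xi) = (H_hat(u, xi) + L* v, -L u) for x = (u, v).

    Here h(u) = E_xi[(<a_xi, u> - b_xi)^2 / 2]; the gradient of h is
    estimated from the single sample `idx`, while L is treated as exact."""
    a, b = A_data[idx], b_data[idx]
    H_hat = (a @ u - b) * a           # single-sample gradient of h
    return np.concatenate([H_hat + Lmat.T @ v, -Lmat @ u])

u, v = rng.normal(size=n), rng.normal(size=m)
# averaging over the whole sample recovers the deterministic V(u, v)
est = np.mean([oracle(u, v, i) for i in range(100)], axis=0)
exact = np.concatenate([A_data.T @ (A_data @ u - b_data) / 100 + Lmat.T @ v,
                        -Lmat @ u])
assert np.allclose(est, exact)        # the oracle is unbiased on average
```

The averaging check mirrors the unbiasedness requirement (1) for this hypothetical data model.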
Example 2
(Stochastic variational inequality problems) There are a multitude of examples of monotone inclusion problems (MI) where the single-valued map V is not the gradient of a convex function. An important model class where this is the case is the stochastic variational inequality (SVI) problem. Due to their huge number of applications, SVIs have received enormous interest over the last several years from various communities [14,15,16,17]. This problem emerges when V(x) is represented as an expected value as in (1) and \(T(x)=\partial g(x)\) for some proper convex lower semi-continuous function \(g:{\mathsf {H}}\rightarrow (-\infty ,\infty ]\). In this case, the resulting structured monotone inclusion problem can be equivalently stated as \(\text {find }x\in {\mathsf {H}}\text { such that }0\in V(x)+\partial g(x).\) (6)
An important and frequently studied special case of (6) arises if g is the indicator function of a given closed and convex subset \({\mathsf {C}}\subset {\mathsf {H}}\). In this case the set-valued operator T becomes the normal cone map \(N_{{\mathsf {C}}}(x)\triangleq \{p\in {\mathsf {H}}\vert \left\langle p,y-x\right\rangle \le 0\;\forall y\in {\mathsf {C}}\}\) if \(x\in {\mathsf {C}}\), and \(N_{{\mathsf {C}}}(x)\triangleq \varnothing \) otherwise.
This formulation includes many fundamental problems including fixed point problems, Nash equilibrium problems and complementarity problems [4]. Consequently, the equilibrium condition (6) reduces to the classical variational inequality: find \(x\in {\mathsf {C}}\) such that \(\left\langle V(x),y-x\right\rangle \ge 0\) for all \(y\in {\mathsf {C}}\).
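In the normal-cone case, a standard fact is that x solves the variational inequality if and only if it is a fixed point of \(x\mapsto \Pi _{{\mathsf {C}}}(x-\lambda V(x))\) for any \(\lambda >0\). A small NumPy sketch illustrates this; the affine strongly monotone map and the box constraint are illustrative assumptions:

```python
import numpy as np

# VI over the box C = [0, 1]^n with an affine strongly monotone map
# V(x) = M x + q: x* solves the VI iff x* = proj_C(x* - lam * V(x*)).
rng = np.random.default_rng(1)
n = 4
B = rng.normal(size=(n, n))
M = np.eye(n) + 0.1 * (B @ B.T)       # strongly monotone (mu >= 1)
q = rng.normal(size=n)

proj_C = lambda x: np.clip(x, 0.0, 1.0)
V = lambda x: M @ x + q
Lip = np.linalg.norm(M, 2)            # Lipschitz constant of V
lam = 1.0 / Lip**2                    # small enough to get a contraction

x = np.zeros(n)
for _ in range(2000):                 # projected forward (fixed-point) iteration
    x = proj_C(x - lam * V(x))

residual = np.linalg.norm(x - proj_C(x - lam * V(x)))
assert residual < 1e-8                # fixed point <=> VI solution
```

The fixed-point iteration used here is only for illustration; the paper's interest is precisely in schemes that avoid the cocoercivity/strong-monotonicity requirements this simple iteration relies on.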
1.3 Contributions
Despite the advances in stochastic optimization and variational inequalities, the algorithmic treatment of general monotone inclusion problems under stochastic uncertainty is a largely unexplored field. This is rather surprising given the vast amount of applications of maximally monotone inclusions in control and engineering, encompassing distributed computation of generalized Nash equilibria [18,19,20], traffic systems [21,22,23], and PDE-constrained optimization [24]. The first major aim of this manuscript is to introduce and investigate a relaxed inertial stochastic forward-backward-forward (RISFBF) method, building on an operator splitting scheme originally due to Paul Tseng [25]. RISFBF produces three sequences \(\{(X_{k},Y_{k},Z_{k});k\in {\mathbb {N}}\}\), defined as \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\quad Y_{k}=J_{\lambda _{k}T}(Z_{k}-\lambda _{k}A_{k}(Z_{k})),\quad X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}\left[ Y_{k}+\lambda _{k}(A_{k}(Z_{k})-B_{k}(Y_{k}))\right] .\)
The data involved in this scheme are explained as follows:
- \(A_k(Z_{k})\) and \(B_k(Y_{k})\) are random estimators of V obtained by consulting the SO at search points \(Z_k\) and \(Y_k\), respectively;
- \((\alpha _{k})_{k\in {\mathbb {N}}}\) is a sequence of non-negative numbers regulating the memory, or inertia, of the method;
- \((\lambda _{k})_{k\in {\mathbb {N}}}\) is a positive sequence of step-sizes;
- \((\rho _{k})_{k\in {\mathbb {N}}}\) is a non-negative relaxation sequence.
If \(\alpha _{k}=0\) and \(\rho _{k}=1\) the above scheme reduces to the stochastic forward-backward-forward method developed in [26, 27], with important applications in Gaussian communication networks [16] and dynamic user equilibrium problems [28]. However, even more connections to existing methods can be made.
Stochastic Extragradient If \(T = \{0\}\), the resolvent is the identity and we obtain the inertial extragradient method \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\quad Y_{k}=Z_{k}-\lambda _{k}A_{k}(Z_{k}),\quad X_{k+1}=Z_{k}-\rho _{k}\lambda _{k}B_{k}(Y_{k}).\)
If \(\alpha _{k}=0\), this reduces to a generalized extragradient method \(Y_{k}=X_{k}-\lambda _{k}A_{k}(X_{k}),\quad X_{k+1}=X_{k}-\rho _{k}\lambda _{k}B_{k}(Y_{k}),\)
recently introduced in [29].
Proximal Point Method If \(V=0\), the method reduces to the well-known deterministic proximal point algorithm [2], overlaid by inertial and relaxation effects. The scheme reads explicitly as \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\quad X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}J_{\lambda _{k}T}(Z_{k}).\)
The list of our contributions reads as follows:
(i) Wide Applicability A key argument in favor of Tseng’s operator splitting method is that it is provably convergent when solving structured monotone inclusions of the type (MI), without imposing cocoercivity of the single-valued part V. This is a remarkable advantage relative to the perhaps more familiar and direct forward-backward splitting methods (aka projected (stochastic) gradient descent in the potential case). In particular, our scheme is applicable to the primal-dual splitting described in Example 1.
(ii) Asymptotic guarantees We show that under suitable assumptions on the relaxation sequence \((\rho _k)_{k\in {\mathbb {N}}}\), the non-decreasing inertial sequence \((\alpha _k)_{k\in {\mathbb {N}}}\), and the step-size sequence \((\lambda _k)_{k\in {\mathbb {N}}}\), the generated stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly, almost surely, to a random variable with values in \({\mathsf {S}}\). Assuming demiregularity of the operators yields strong convergence in the real (possibly infinite-dimensional) Hilbert space.
(iii) Non-asymptotic linear rate under strong monotonicity of V When V is strongly monotone, strong convergence of the last iterate is shown and the sequence admits a non-asymptotic linear rate of convergence without requiring conditional unbiasedness of the SO. In particular, we show that the iteration and oracle complexity of computing an \(\epsilon \)-solution is no worse than \({\mathcal {O}}(\log (\tfrac{1}{\epsilon }))\) and \({\mathcal {O}}(\tfrac{1}{\epsilon })\), respectively.
(iv) Non-asymptotic sublinear rate under monotonicity of V When V is monotone, by leveraging the Fitzpatrick function [3, 30, 31] associated with the structured operator \(F=T+V\), we propose a restricted gap function. We then prove that the expected gap of an averaged sequence diminishes at the rate of \({\mathcal {O}}(\tfrac{1}{k})\). This allows us to derive an \({\mathcal {O}}(\tfrac{1}{\epsilon })\) upper bound on the iteration complexity, and an \({\mathcal {O}}(\tfrac{1}{\epsilon ^{2+\delta }})\) upper bound (for \(\delta >0)\) on the oracle complexity for computing an \(\epsilon \)-solution.
The above listed contributions shed new light on a set of open questions, which we summarize below:
(i) Absence of rigorous asymptotics So far, no asymptotic convergence guarantees have been available for relaxed inertial FBF schemes in which T is maximally monotone and V is a single-valued monotone expectation-valued map.
(ii) Unavailability of rate statements We are not aware of any known non-asymptotic rate guarantees for algorithms solving (MI) under stochastic uncertainty. A key barrier to developing such statements in monotone stochastic regimes has been the availability of a suitable residual function. Some recent progress in the special stochastic variational inequality case has been made by [26, 32, 33], but the general Hilbert-space setting involving set-valued operators seems to be largely unexplored (we will say more in Sect. 1.4).
(iii) Bias requirements A standard assumption in stochastic optimization is that the SO generates signals which are unbiased estimators of the deterministic operator V(x). Of course, the requirement that the noise process is unbiased may often fail to hold in practice. In the present Hilbert space setting this is in some sense even expected to be the rule rather than the exception, since most operators are derived from complicated dynamical systems or the optimization method is applied to discretized formulations of the original problem. See the recent work [34, 35] for an interesting illustration in the context of PDE-constrained optimization. Some of our results go beyond the standard unbiasedness assumption.
1.4 Related research
Understanding the role of inertial and relaxation effects in numerical schemes is a line of research which has received enormous interest over the last two decades. Below, we give a brief overview of related algorithms.
Inertial, Relaxation, and Proximal schemes
In the context of convex optimization, Polyak [36] introduced the Heavy-ball method. This is a two-step method for minimizing a smooth convex function f. The algorithm reads as \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\quad X_{k+1}=Z_{k}-\lambda _{k}\nabla f(Z_{k}).\) (HB)
The difference from the gradient method is that the base point of the gradient descent step is taken to be the extrapolated point \(Z_{k}\), instead of \(X_{k}\). This small difference has the surprising consequence that (HB) attains optimal complexity guarantees for strongly convex functions with Lipschitz continuous gradients. Hence, (HB) is an optimal method [37]. The acceleration effects can be explained by writing the process entirely in terms of a single updating equation as \(X_{k+1}=X_{k}+\alpha _{k}(X_{k}-X_{k-1})-\lambda _{k}\nabla f(Z_{k}).\)
Choosing \(\alpha _{k}=1-a_{k}\delta _{k}\) and \(\lambda _{k}=\gamma _{k}\delta ^{2}_{k}\) for \(\delta _{k}\) a small parameter, we arrive at \(\frac{X_{k+1}-2X_{k}+X_{k-1}}{\delta _{k}^{2}}+a_{k}\frac{X_{k}-X_{k-1}}{\delta _{k}}+\gamma _{k}\nabla f(Z_{k})=0.\)
This can be seen as a discrete-time approximation of the second-order dynamical system \(\ddot{x}(t)+a(t){\dot{x}}(t)+\gamma (t)\nabla f(x(t))=0,\)
introduced by [38]. Since then, it has received significant attention in the potential, as well as in the non-potential case (see e.g [39,40,41] for an appetizer). As pointed out in [42], if \(\gamma (t)=1\), the above system reduces to a continuous version of Nesterov’s fast gradient method [43]. Recently, [44] defined a stochastic version of the Heavy-ball method.
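The acceleration can be observed numerically. The sketch below runs a heavy-ball-type iteration with the gradient evaluated at the extrapolated point, as described above, against plain gradient descent on a simple quadratic; the step-size and momentum values are standard textbook choices for this setting, not taken from the paper:

```python
import numpy as np

# Inertial iteration on f(x) = x' Q x / 2 versus plain gradient descent.
Q = np.diag([1.0, 10.0])              # strongly convex, condition number 10
grad = lambda x: Q @ x
mu, Lip = 1.0, 10.0
lam = 1.0 / Lip                       # step-size
alpha = (np.sqrt(Lip / mu) - 1) / (np.sqrt(Lip / mu) + 1)  # constant momentum

x_prev = x = np.array([1.0, 1.0])     # inertial iterates
g = np.array([1.0, 1.0])              # plain gradient-descent iterate
for _ in range(100):
    z = x + alpha * (x - x_prev)      # extrapolation step
    x_prev, x = x, z - lam * grad(z)  # gradient step from extrapolated point
    g = g - lam * grad(g)

assert np.linalg.norm(x) < 1e-8       # momentum: rate ~ 1 - 1/sqrt(kappa)
assert np.linalg.norm(g) > 1e-6       # plain GD: rate ~ 1 - 1/kappa
```

After 100 iterations the inertial iterate is many orders of magnitude closer to the minimizer, consistent with the geometric-rate improvement from \(1-1/\kappa \) to \(1-1/\sqrt{\kappa }\).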
Motivated by the development of such fast methods for convex optimization, Attouch and Cabot [1] studied a relaxed-inertial forward-backward algorithm, reading as
If \(V=0\), this reduces to a relaxed inertial proximal point method analyzed by Attouch and Cabot [2]. If \(\rho _{k}=1\), an inertial forward-backward splitting method is recovered, first studied by Lorenz and Pock [45].
Convergence guarantees for the forward-backward splitting rely on the cocoercivity (inverse strong monotonicity) of the single-valued operator V. Example 1, in which V is given by a monotone plus a skew-symmetric linear operator, illustrates an important instance for which this assumption is not satisfied (see [46] for further examples). A general-purpose operator splitting framework, relaxing the cocoercivity property, is the forward-backward-forward (FBF) method due to Tseng [25]. Inertial [47] and relaxed-inertial [48] versions of FBF have been developed. An all-encompassing numerical scheme can be compactly described as
Weak and strong convergence under appropriate conditions on the involved operators and parameter sequences are established in [48], but no rate statements are given.
Related work on stochastic approximation Efforts in extending stochastic approximation methods to variational inequality problems have considered standard projection schemes [14] for Lipschitz and strongly monotone operators. Extragradient and (more generally) mirror-prox algorithms [49, 50] can contend with merely monotone operators, while iterative smoothing schemes [51] can cope with the lack of Lipschitz continuity. It is worth noting that extragradient schemes have recently assumed relevance in the training of generative adversarial networks (GANs) [52, 53]. Rate analyses for stochastic extragradient (SEG) have led to optimal rates for Lipschitz and monotone operators [50], as well as extensions to non-Lipschitzian [51] and pseudomonotone settings [32, 54]. To alleviate the computational complexity, single-projection schemes, such as the stochastic forward-backward-forward (SFBF) method [26, 27], as well as subgradient-extragradient and projected reflected algorithms [55], have been studied as well.
SFBF has been shown to be nearly optimal in terms of iteration and oracle complexity, displaying significant empirical improvements compared to SEG. While the role of inertia in optimization is well documented, in stochastic splitting problems, the only contribution we are aware of is the work by Rosasco et al. [56]. In that paper asymptotic guarantees for an inertial stochastic forward-backward (SFB) algorithm are presented under the hypothesis that the operators V and T are maximally monotone and the single-valued operator V is cocoercive.
Variance reduction approaches Variance-reduction schemes address the deterioration in convergence rate and the resulting poorer practical behavior via two commonly adopted avenues:
-
(i)
If the single-valued part V appears as a finite-sum (see e.g. [52, 57]), variance-reduction ideas from machine learning [58] can be used.
-
(ii)
Mini-batch schemes that employ an increasing batch-size of gradients [59] lead to deterministic rates of convergence for stochastic strongly convex [60], convex [61], and nonconvex optimization [62], as well as for pseudo-monotone SVIs via extragradient [32], and splitting schemes [26].
In terms of run-time, improvements in iteration complexities achieved by mini-batch approaches are significant; e.g. in strongly monotone regimes, the iteration complexity improves from \({\mathcal {O}}(\tfrac{1}{\epsilon })\) to \({\mathcal {O}}(\ln (\tfrac{1}{\epsilon }))\) [27, 55]. Beyond run-time advantages, such avenues provide asymptotic and rate guarantees under possibly weaker assumptions on the problem as well as the oracle; in particular, mini-batch schemes allow for possibly biased oracles and state-dependency of the noise [55]. Concerns about the sampling burdens are, in our opinion, often overstated, since such schemes are meant to provide \(\epsilon \)-solutions; e.g. if \(\epsilon =10^{-3}\) and the obtained rate is \({\mathcal {O}}(1/k)\), then with the batch-size policy \(m_k = \lfloor k^a \rfloor \), \(a > 1\), the terminal batch-sizes are \({\mathcal {O}}(10^{3a})\), a relatively modest requirement given the advances in computing.
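A quick back-of-the-envelope computation illustrates this; the exponent a = 1.5 is an arbitrary illustrative choice:

```python
# Mini-batch policy m_k = floor(k^a): with an O(1/k) rate, eps = 1e-3 needs
# roughly K = 1/eps iterations; the terminal batch size is then ~ 10^{3a}.
eps, a = 1e-3, 1.5                    # a > 1, as the analysis requires
K = int(1 / eps)                      # iterations for an eps-solution
batches = [int(k**a) for k in range(1, K + 1)]
total_calls = 2 * sum(batches)        # two oracle calls (A_k, B_k) per iteration

print(batches[-1])                    # terminal batch ~ 10^{4.5}, i.e. 31622
print(total_calls)                    # total oracle calls, O(1/eps^{1+a})
assert batches[-1] == int(K**a)
assert total_calls <= 4 * (K + 1)**(a + 1) / (a + 1)  # sum k^a <= integral bound
```

The last inequality is the elementary integral comparison \(\sum _{k\le K}k^{a}\le \int _{0}^{K+1}x^{a}\,dx\), matching the \({\mathcal {O}}(1/\epsilon ^{1+a})\) oracle complexity quoted in the abstract.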
Outline The remainder of the paper is organized in five sections. After dispensing with the preliminaries in Sect. 2, we present the (RISFBF) scheme in Sect. 3. Asymptotic and rate statements are developed in Sect. 4 and preliminary numerics are presented in Sect. 5. We conclude with some brief remarks in Sect. 6. Technical results are collected in Appendix 1.
2 Preliminaries
Throughout, \({\mathsf {H}}\) is a real separable Hilbert space with scalar product \(\left\langle \cdot ,\cdot \right\rangle \), norm \(\left\| \cdot \right\| \), and Borel \(\sigma \)-algebra \({\mathcal {B}}\). The symbols \(\rightarrow \) and \(\rightharpoonup \) denote strong and weak convergence, respectively. \({{\,\mathrm{Id}\,}}:{\mathsf {H}}\rightarrow {\mathsf {H}}\) denotes the identity operator on \({\mathsf {H}}\). Stochastic uncertainty is modeled on a complete probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), endowed with a filtration \({{\mathbb {F}}}=({\mathcal {F}}_{k})_{k\in {\mathbb {N}}_{0}}\). By means of the Kolmogorov extension theorem, we assume that \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) is large enough so that all random variables we work with are defined on this space. An \({\mathsf {H}}\)-valued random variable is a measurable function \(X:(\Omega ,{\mathcal {F}})\rightarrow ({\mathsf {H}},{\mathcal {B}})\). Let \({\mathcal {G}}\subset {\mathcal {F}}\) be a given sub-sigma algebra. The conditional expectation of the random variable X is denoted by \({\mathbb {E}}(X\vert {\mathcal {G}})\). If \({\mathcal {A}}\subset {\mathcal {G}}\subset {\mathcal {F}}\), the tower property says that \({\mathbb {E}}({\mathbb {E}}(X\vert {\mathcal {G}})\vert {\mathcal {A}})={\mathbb {E}}(X\vert {\mathcal {A}}).\)
We denote by \(\ell ^{0}({{\mathbb {F}}})\) the set of sequences of real-valued random variables \((\xi _{k})_{k\in {\mathbb {N}}}\) such that, for every \(k\in {\mathbb {N}}\), \(\xi _{k}\) is \({\mathcal {F}}_{k}\)-measurable. For \(p\in [1,\infty )\), we set \(\ell ^{p}({{\mathbb {F}}})\triangleq \{(\xi _{k})_{k\in {\mathbb {N}}}\in \ell ^{0}({{\mathbb {F}}})\,\vert \,\sum _{k\in {\mathbb {N}}}\vert \xi _{k}\vert ^{p}<\infty \;\;{\mathbb {P}}\text {-a.s.}\}.\)
We denote the set of summable non-negative sequences by \(\ell ^{1}_{+}({\mathbb {N}})\).
We now collect some concepts from monotone operator theory. For more details, we refer the reader to [3]. Let \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) be a set-valued operator. Its domain and graph are defined as \({{\,\mathrm{dom}\,}}F\triangleq \{x\in {\mathsf {H}}\vert F(x)\ne \varnothing \},\text { and }{{\,\mathrm{gr}\,}}(F)\triangleq \{(x,u)\in {\mathsf {H}}\times {\mathsf {H}}\vert u\in F(x)\},\) respectively. A single-valued operator \(C:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is cocoercive if there exists \(\beta >0\) such that \(\left\langle C(x)-C(y),x-y\right\rangle \ge \beta \left\| C(x)-C(y)\right\| ^{2}\) for all \(x,y\in {\mathsf {H}}\). A set-valued operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is called monotone if \(\left\langle u-v,x-y\right\rangle \ge 0\quad \forall (x,u),(y,v)\in {{\,\mathrm{gr}\,}}(F).\)
The set of zeros of F, denoted by \(\mathsf {Zer}(F)\), is defined as \(\mathsf {Zer}(F)\triangleq \{x\in {\mathsf {H}}\vert 0\in F(x)\}\). The inverse of F is \(F^{-1}:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}},u\mapsto F^{-1}(u)=\{x\in {\mathsf {H}}\vert u\in F(x)\}\). The resolvent of F is \(J_{F}\triangleq ({{\,\mathrm{Id}\,}}+ F)^{-1}.\) If F is maximally monotone, then \(J_{F}\) is a single-valued map. We also need the classical notion of demiregularity of an operator.
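For a concrete resolvent, take \(T=\partial g\) with g the \(\ell _{1}\)-norm (an illustrative choice, not an example from the paper): the resolvent is the soft-thresholding map. The NumPy sketch below checks its single-valuedness by formula, its nonexpansiveness, and the defining inclusion \(x\in J_{\lambda T}(x)+\lambda T(J_{\lambda T}(x))\):

```python
import numpy as np

# Resolvent J_{lam T} = (Id + lam T)^{-1} for T = subdifferential of |.|_1:
# it is the single-valued soft-thresholding map.
def resolvent_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
x, y, lam = rng.normal(size=6), rng.normal(size=6), 0.5
Jx, Jy = resolvent_l1(x, lam), resolvent_l1(y, lam)

# resolvents of maximally monotone operators are (firmly) nonexpansive
assert np.linalg.norm(Jx - Jy) <= np.linalg.norm(x - y) + 1e-12

# u = (x - Jx)/lam must lie in the subdifferential of |.|_1 at Jx
u = (x - Jx) / lam
assert np.all(np.abs(u) <= 1 + 1e-12)               # |u_i| <= 1 everywhere
assert np.allclose(u[Jx != 0], np.sign(Jx[Jx != 0]))  # u_i = sign on support
```

The two subdifferential checks verify exactly the inclusion \(Z_{k}-\lambda A_{k}\in Y_{k}+\lambda T(Y_{k})\) that the analysis later exploits.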
Definition 1
An operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is demiregular at \(x\in {{\,\mathrm{dom}\,}}(F)\) if for every sequence \(\{(x_{n},u_{n})\}_{n\in {\mathbb {N}}}\subset {{\,\mathrm{gr}\,}}(F)\) and every \(u\in F(x)\) such that \(x_{n}\rightharpoonup x\) and \(u_{n}\rightarrow u\), we have \(x_{n}\rightarrow x\).
The notion of demiregularity captures various properties typically used to establish strong convergence of dynamical systems. [10] exhibits a large class of possibly set-valued operators F which are demiregular. In particular, demiregularity holds if F is uniformly or strongly monotone, or when F is the subdifferential of a uniformly convex lower semi-continuous function f. We often use the Young inequality \(\left\langle x,y\right\rangle \le \frac{c}{2}\left\| x\right\| ^{2}+\frac{1}{2c}\left\| y\right\| ^{2}\quad \forall x,y\in {\mathsf {H}},\;\forall c>0.\)
3 Algorithm
Our aim is to solve the monotone inclusion problem (MI) under the following assumption:
Assumption 2
Consider Problem 1. The set-valued operator \(T:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is maximally monotone with an efficiently computable resolvent. The single-valued operator \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is maximally monotone and L-Lipschitz continuous (\(L>0)\) with full domain \({{\,\mathrm{dom}\,}}V={\mathsf {H}}\).
Assumption 2 guarantees that the operator \(F=T+V\) is maximally monotone [3, Corollary 24.4].
For numerical tractability, we make a finite-dimensional noise assumption, common to stochastic optimization problems in (possibly infinite-dimensional) Hilbert spaces [63].
Assumption 3
(Finite-dimensional noise) All randomness can be described via a finite dimensional random variable \(\xi :(\Omega ,{\mathcal {F}})\rightarrow (\Xi ,{\mathcal {E}})\), where \(\Xi \subseteq {\mathbb {R}}^{d}\) is a measurable set with Borel sigma algebra \({\mathcal {E}}\). The law of the random variable \(\xi \) is denoted by \({{\mathsf {P}}}\), i.e. \({{\mathsf {P}}}(\Gamma )\triangleq {\mathbb {P}}(\{\omega \in \Omega \vert \xi (\omega )\in \Gamma \})\) for all \(\Gamma \in {\mathcal {E}}\).
To access new information about the values of the operator V(x), we adopt a stochastic approximation (SA) approach where samples are accessed iteratively and online: at each iteration, we assume to have access to a stochastic oracle (SO) which generates some estimate of the value of the deterministic operator V(x) when the current position is x. This information is obtained by drawing iid samples from the law \({{\mathsf {P}}}\). These fresh samples are then used in the numerical algorithm after an initial extrapolation step delivering the point \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1})\), for some extrapolation coefficient \(\alpha _{k}\in [0,1]\). Departing from \(Z_{k}\), we call the SO to retrieve the mini-batch estimator with sample rate \(m_{k}\in {\mathbb {N}}\): \(A_{k}(Z_{k})\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\hat{V}}(Z_{k},\xi _{k}^{(t)}).\)
\(\xi _{k}\triangleq (\xi _{k}^{(1)},\ldots ,\xi _{k}^{(m_{k})})\) is the data sample employed by the SO to return the estimator \(A_{k}(Z_{k})\). Subsequently we perform a forward-backward update with step size \(\lambda _{k}>0\): \(Y_{k}=J_{\lambda _{k}T}(Z_{k}-\lambda _{k}A_{k}(Z_{k})).\)
In the final updates, a second independent call of the SO is made, using the data set \(\eta _{k}=(\eta ^{(1)}_{k},\ldots ,\eta ^{(m_{k})}_{k})\), yielding the estimator \(B_{k}(Y_{k})\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\hat{V}}(Y_{k},\eta _{k}^{(t)}),\)
and the new state \(X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}\left[ Y_{k}+\lambda _{k}(A_{k}(Z_{k})-B_{k}(Y_{k}))\right] .\)
This iterative procedure generates a stochastic process \(\{(Z_{k},Y_{k},X_{k})\}_{k\in {\mathbb {N}}}\), defining the relaxed inertial stochastic forward-backward-forward (RISFBF) scheme. A pseudocode is given as Algorithm 1 below.
Note that RISFBF is still conceptual since we have not explained how the sequences \((\alpha _{k})_{k\in {\mathbb {N}}},(\lambda _{k})_{k\in {\mathbb {N}}}\) and \((\rho _{k})_{k\in {\mathbb {N}}}\) should be chosen. We will make this precise in our complexity analysis, starting in Sect. 4.
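As a concrete illustration of the updates, the following NumPy sketch runs the scheme on a toy instance; the affine map, the box constraint (whose normal cone gives T, with projection as resolvent), the Gaussian oracle noise, and all parameter values are illustrative assumptions, not prescriptions from the analysis:

```python
import numpy as np

# Toy instance: V(x) = M x + q (monotone, Lipschitz), T = normal cone of the
# box C = [0, 1]^n (resolvent = projection), mini-batch Gaussian oracle noise.
rng = np.random.default_rng(3)
n = 4
B = rng.normal(size=(n, n))
M = np.eye(n) + 0.1 * (B @ B.T)       # strongly monotone for a clear test
q = rng.normal(size=n)
sigma = 1.0
Lip = np.linalg.norm(M, 2)

V = lambda x: M @ x + q
J = lambda x: np.clip(x, 0.0, 1.0)    # resolvent of lam * T, for any lam > 0

def oracle(x, m):
    """Mini-batch estimator: average of m unbiased noisy evaluations of V."""
    return V(x) + rng.normal(scale=sigma, size=(m, n)).mean(axis=0)

lam, alpha, rho = 0.2 / Lip, 0.1, 1.0  # lam < 1/(4L), mild inertia
x_prev = x = np.zeros(n)
for k in range(1, 200):
    m_k = k**2                         # polynomially increasing batch size
    z = x + alpha * (x - x_prev)       # inertial extrapolation
    a = oracle(z, m_k)                 # first oracle call A_k(Z_k)
    y = J(z - lam * a)                 # forward-backward step
    b = oracle(y, m_k)                 # second oracle call B_k(Y_k)
    x_prev, x = x, (1 - rho) * z + rho * (y + lam * (a - b))  # relaxed update

res = np.linalg.norm(x - J(x - lam * V(x)))  # residual merit function
assert res < 5e-2
```

The residual used in the final check anticipates the merit function \(\mathsf {res}_{\lambda }\) introduced in Sect. 4; with increasing batch sizes the iterates settle near the solution despite the oracle noise.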
3.1 Equivalent form of RISFBF
We can collect the sequential updates of RISFBF as the fixed-point iteration
where \(\Phi _{k,\lambda }:{\mathsf {H}}\times \Omega \rightarrow {\mathsf {H}}\) is the time-varying map given by
Formulating the algorithm in this specific way establishes the connection between RISFBF and the heavy-ball system. Indeed, combining the iterations in (15) into one, we obtain a second-order difference equation closely resembling the structure present in (HB):
It also reveals the Markovian nature of the process \((X_{k})_{k\in {\mathbb {N}}}\): it is clear from the formulation (15) that \(X_{k}\) is Markov with respect to the sigma-algebra \(\sigma (\{X_{0},\ldots ,X_{k-1}\})\).
3.2 Assumptions on the stochastic oracle
In order to tame the stochastic uncertainty in RISFBF, we need to impose some assumptions on the distributional properties of the random fields \((A_{k}(x))_{k\in {\mathbb {N}}}\) and \((B_{k}(x))_{k\in {\mathbb {N}}}\). One crucial statistic we need to control is the SO variance. Define the oracle error at a point \(x \in {\mathsf {H}}\) as \(\varepsilon (x,\xi )\triangleq {\hat{V}}(x,\xi )-V(x).\)
Assumption 4
(Oracle Noise) We say that the SO
(i) is conditionally unbiased if \({\mathbb {E}}_{\xi }[\varepsilon (x,\xi ) \vert x]=0\) for all \(x\in {\mathsf {H}}\);
(ii) enjoys a uniform variance bound: \({\mathbb {E}}_{\xi }[\left\| \varepsilon (x,\xi )\right\| ^{2}\vert x]\le \sigma ^{2}\) for some \(\sigma > 0\) and all \(x\in {\mathsf {H}}\).
Define \(U_{k}\triangleq A_{k}(Z_{k})-V(Z_{k})\quad \text {and}\quad W_{k}\triangleq B_{k}(Y_{k})-V(Y_{k}).\)
The introduction of these two processes allows us to decompose the random estimators into a mean component and a residual, so that \(A_{k}(Z_{k})=V(Z_{k})+U_{k}\) and \(B_{k}(Y_{k})=V(Y_{k})+W_{k}\).
If Assumption 4(i) holds true, then \({\mathbb {E}}[W_{k}\vert {\hat{{\mathcal {F}}}}_{k}]={\mathbb {E}}[U_{k}\vert {\mathcal {F}}_{k}]=0\). Hence, under conditional unbiasedness, the processes \(\{(U_{k},{\mathcal {F}}_{k});k\in {\mathbb {N}}\}\) and \(\{(W_{k},{\hat{{\mathcal {F}}}}_{k});k\in {\mathbb {N}}\}\) are martingale difference sequences, where the filtrations are defined as \({\mathcal {F}}_{0}\triangleq {\hat{{\mathcal {F}}}}_{0}\triangleq {\mathcal {F}}_{1}\triangleq \sigma (X_{0},X_{1})\), and iteratively, for \(k\ge 1\), \({\hat{{\mathcal {F}}}}_{k}\triangleq \sigma ({\mathcal {F}}_{k}\cup \sigma (\xi _{k}))\quad \text {and}\quad {\mathcal {F}}_{k+1}\triangleq \sigma ({\hat{{\mathcal {F}}}}_{k}\cup \sigma (\eta _{k})).\)
Observe that \({\mathcal {F}}_{k}\subseteq {\hat{{\mathcal {F}}}}_{k}\subseteq {\mathcal {F}}_{k+1}\) for all \(k\ge 1\). The uniform variance bound, Assumption 4(ii), ensures that the processes \(\{(U_{k},{\mathcal {F}}_{k});k\in {\mathbb {N}}\}\) and \(\{(W_{k},{\hat{{\mathcal {F}}}}_{k});k\in {\mathbb {N}}\}\) have finite second moments.
Remark 1
For deriving the stochastic estimates in the analysis to come, it is important to emphasize that \(X_{k}\) is \({\mathcal {F}}_{k}\)-measurable for all \(k\ge 0\), and \(Y_{k}\) is \({\hat{{\mathcal {F}}}}_{k}\)-measurable.
The mini-batch sampling technique implies an online variance reduction effect, summarized in the next lemma, whose simple proof we omit.
Lemma 1
(Variance of the SO) Suppose Assumption 4 holds. Then for \(k \ge 1\), \({\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\sigma ^{2}}{m_{k}}\quad \text {and}\quad {\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\hat{{\mathcal {F}}}}_{k}]\le \frac{\sigma ^{2}}{m_{k}}.\)
We see that larger sampling rates lead to more precise point estimates of the single-valued operator. This comes at the cost of more evaluations of the stochastic operator. Hence, any mini-batch approach faces a trade-off between the oracle complexity and the iteration complexity. We want to use mini-batch estimators to achieve an online variance reduction scheme, motivating the next assumption.
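The variance-reduction effect of Lemma 1 is easy to verify by simulation; in the sketch below the noise level and dimensions are arbitrary illustrative choices:

```python
import numpy as np

# Empirical check: the averaged oracle error over a batch of size m has
# second moment ~ (n * sigma^2) / m for iid N(0, sigma^2 I_n) noise.
rng = np.random.default_rng(4)
sigma, n, trials = 2.0, 3, 5000

def mean_sq_error(m):
    """Average ||mean of m iid noise draws||^2 over many independent trials."""
    eps = rng.normal(scale=sigma, size=(trials, m, n)).mean(axis=1)
    return np.mean(np.sum(eps**2, axis=1))

full = n * sigma**2                   # second moment at batch size m = 1
assert abs(mean_sq_error(1) - full) / full < 0.1
assert abs(mean_sq_error(100) - full / 100) / (full / 100) < 0.1
```

The batch of size 100 shrinks the mean-squared error by a factor of 100, exactly the \(\sigma ^{2}/m_{k}\) scaling stated in the lemma.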
Assumption 5
(Batch Size) The batch size sequence \((m_{k})_{k\in {\mathbb {N}}}\) is non-decreasing and satisfies \(\sum _{k=1}^{\infty }\frac{1}{m_{k}}<\infty \).
4 Analysis
This section is organized into three subsections. The first subsection derives asymptotic convergence guarantees, while the second and third subsections provide linear and sublinear rate statements in strongly monotone and monotone regimes, respectively.
4.1 Asymptotic convergence
Given \(\lambda >0\), we define the residual function for the monotone inclusion (MI) as \(\mathsf {res}_{\lambda }(x)\triangleq \left\| x-J_{\lambda T}(x-\lambda V(x))\right\| .\)
Clearly, for every \(\lambda >0\), \(x\in {\mathsf {S}}\Leftrightarrow \mathsf {res}_{\lambda }(x)=0\). Hence, \(\mathsf {res}_{\lambda }(\cdot )\) is a merit function for the monotone inclusion problem. To put this merit function into context, let us consider the special case where T is the subdifferential of a lower semi-continuous convex function \(g:{\mathsf {H}}\rightarrow (-\infty ,\infty ]\), i.e. \(T=\partial g\). In this case, the resolvent \(J_{\lambda T}\) reduces to the well-known proximal operator \({{\,\mathrm{prox}\,}}_{\lambda g}(x)\triangleq \mathop {\mathrm{argmin}}\limits _{u\in {\mathsf {H}}}\left\{ g(u)+\frac{1}{2\lambda }\left\| u-x\right\| ^{2}\right\} .\)
In the potential case, where \(V(x)=\nabla f(x)\) for some smooth convex function \(f:{\mathsf {H}}\rightarrow {\mathbb {R}}\), the residual function is thus seen to be a constant multiple of the norm of the so-called gradient mapping \(\left\| x-{{\,\mathrm{prox}\,}}_{\lambda g}(x-\lambda V(x))\right\| \), which is a standard merit function in convex [64] and stochastic [65, 66] optimization. We use this function to quantify the per-iteration progress of RISFBF. The main result of this subsection is the following.
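For a concrete merit-function check, take \(V=\nabla f\) with \(f(x)=\tfrac{1}{2}\left\| x-c\right\| ^{2}\) and \(g=\left\| \cdot \right\| _{1}\), an illustrative lasso-type instance (not an example from the paper); the minimizer is known in closed form, and the residual vanishes exactly there:

```python
import numpy as np

# res_lam(x) = ||x - prox_{lam g}(x - lam * V(x))|| as a merit function for
# min f + g with f(x) = ||x - c||^2 / 2 (so V = grad f) and g = |.|_1.
soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

c = np.array([2.0, -0.3, 0.7])
V = lambda x: x - c                    # gradient of f
lam = 0.5
res = lambda x: np.linalg.norm(x - soft(x - lam * V(x), lam))

x_star = soft(c, 1.0)                  # closed-form argmin of f + g
assert res(x_star) < 1e-12             # the merit function vanishes at solutions
assert res(np.zeros(3)) > 0.1          # ... and is positive elsewhere
```

Note that the value of \(\lambda \) only rescales the merit function near a solution; any fixed \(\lambda >0\) certifies optimality, which is what makes \(\mathsf {res}_{\lambda }\) convenient for tracking the per-iteration progress of RISFBF.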
Theorem 2
(Asymptotic Convergence) Let \({\bar{\alpha }},{\bar{\varepsilon }}\in (0,1)\) be fixed parameters. Suppose that Assumptions 1–5 hold true. Let \((\alpha _{k})_{k\in {\mathbb {N}}}\) be a non-decreasing sequence such that \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\). Let \((\lambda _{k})_{k\in {\mathbb {N}}}\) be a convergent sequence in \((0,\frac{1}{4L})\) such that \(\lim _{k\rightarrow \infty }\lambda _{k}=\lambda \in (0,\frac{1}{4L})\). If \(\rho _{k}=\frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(2\alpha _{k}^{2}-\alpha _{k}+1)(1+L\lambda _{k})}\) for all \(k\ge 1\), then
-
(i)
\(\lim _{k\rightarrow \infty }\mathsf {res}_{\lambda _{k}}(Z_{k})=0\) in \(L^{2}({\mathbb {P}})\);
-
(ii)
the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) generated by algorithm RISFBF weakly converges to a \({\mathsf {S}}\)-valued limiting random variable X;
-
(iii)
\(\sum _{k=1}^{\infty }\left[ (1-\alpha _{k})\left( \frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}-1\right) -2\alpha _{k}^{2}\right] \left\| X_{k}-X_{k-1}\right\| ^{2}<\infty \quad {\mathbb {P}}\)-a.s.
We prove this theorem via a sequence of technical lemmas.
Lemma 3
For all \(k\ge 1\), we have
Proof
By definition,
where the last inequality uses the non-expansivity property of the resolvent operator. Rearranging terms gives the claimed result. \(\square \)
Next, for a given pair \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we define the stochastic processes \((\Delta M_{k})_{k\in {\mathbb {N}}}, (\Delta N_{k}(p,p^{*}))_{k\in {\mathbb {N}}}\), and \((\mathtt {e}_k)_{k\in {\mathbb {N}}}\) as
Key to our analysis is the following energy bound on the evolution of the anchor sequence \(\left( \left\| X_{k}-p\right\| ^{2}\right) _{k\in {\mathbb {N}}}\).
Lemma 4
(Fundamental Recursion) Let \((X_{k})_{k\in {\mathbb {N}}}\) be the stochastic process generated by RISFBF with \(\alpha _{k}\in (0,1)\), \(0\le \rho _{k}<\frac{5}{4(1+L\lambda _{k})}\), and \(\lambda _{k}\in (0,1/4L)\). For all \(k\ge 1\) and \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we have
Proof
To simplify the notation, let us call \(A_{k}\equiv A_{k}(Z_{k})\) and \(B_{k}\equiv B_{k}(Y_{k})\). We also introduce the intermediate update \(R_{k}\triangleq Y_{k}+\lambda _{k}(A_{k}-B_{k})\). For all \(k\ge 0\), it holds true that
Since
Introducing the process \((\mathtt {e}_{k})_{k\in {\mathbb {N}}}\) from eq. (22), the aforementioned set of inequalities reduces to
Hence,
But \(Y_{k}+\lambda _{k}T(Y_{k})\ni Z_{k}-\lambda _{k}A_{k}\), implying that
Pick \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), so that \(p^{*}-V(p)\in T(p)\). Then, the monotonicity of T yields the estimate
This is equivalent to
This implies that
Hence, we obtain the following,
Rearranging terms, we arrive at the following bound on \(\left\| R_{k}-p\right\| ^{2}\):
Next, we observe that \(\left\| X_{k+1}-p\right\| ^{2}\) may be bounded as follows.
We may then derive a bound on the expression in (25),
By invoking (19), we arrive at the estimate
Furthermore,
which implies that
Multiplying both sides by \(\frac{\rho _{k}(1/2-2L\lambda _{k})}{1+L\lambda _{k}}\), a positive scalar since \(\lambda _k \in (0,\tfrac{1}{4L})\), we obtain
Rearranging terms, and noting that \((1/2-2L\lambda _{k})(1+L\lambda _{k})\le 1/2-2L^{2}\lambda ^{2}_{k}\), the above estimate becomes
Substituting this bound into the first majorization of the anchor process \(\left\| X_{k+1}-p\right\| ^{2}\), we see
Observe that
and Lemma 16 gives
By hypothesis, \(\alpha _{k},\rho _{k},\lambda _{k}\) are defined such that \(\frac{5/2-2\rho _{k}(1+L\lambda _{k})}{2\rho _{k}(1+L\lambda _{k})}>0\). Then, using both of these relations in the last estimate for \(\left\| X_{k+1}-p\right\| ^{2}\), we arrive at
Using the respective definitions of the stochastic increments \(\Delta M_{k},\Delta N_{k}(p,p^{*})\) in (20) and (21), we arrive at
\(\square \)
Recall that \(Y_{k}\) is \({\hat{{\mathcal {F}}}}_{k}\)-measurable. By the law of iterated expectations, we therefore see
for all \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\). Observe that if we choose \((p,0)\in {{\,\mathrm{gr}\,}}(F)\), meaning that \(p\in {\mathsf {S}}\), then \(\Delta N_{k}(p,0)\equiv \Delta N_{k}(p)\) is a martingale difference sequence. Furthermore, for all \(k\ge 1\),
where \(\mathtt {a}_{k}\triangleq \lambda ^{2}_{k}\left( \frac{10\rho _{k}}{1+L\lambda _{k}}+\frac{\rho _{k}}{2}\right) \).
To prove the a.s. convergence of the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\), we rely on the following preparations. Motivated by the analysis of deterministic inertial schemes, we are interested in a regime under which \(\alpha _{k}\) is non-decreasing.
For a fixed reference point \(p\in {\mathsf {H}}\), define the anchor sequences \(\phi _{k}(p)\triangleq \frac{1}{2}\left\| X_{k}-p\right\| ^{2}\), and the energy sequence \(\Delta _{k}\triangleq \frac{1}{2}\left\| X_{k}-X_{k-1}\right\| ^{2}.\) In terms of these sequences, we can rearrange the fundamental recursion from Lemma 4 to obtain
For a given pair \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), define
Then, in terms of the sequence
and using the monotonicity of V, guaranteeing that \(\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \ge 0\), we get
Defining
we arrive at
Our aim is to use \(Q_{k}(p)\) as a suitable energy function for RISFBF. For that to work, we need to identify a specific parameter sequence pair \((\rho _{k},\alpha _{k})\) so that \(\beta _{k}\ge 0\) and \(\theta _{k} \ge 0\), taking the following design criteria into account:
-
1.
\(\alpha _{k}\in (0,{\bar{\alpha }}]\subset (0,1)\) for all \(k\ge 1\);
-
2.
\(\alpha _{k}\) is non-decreasing with
$$\begin{aligned} \sup _{k\ge 1}\alpha _{k}={\bar{\alpha }}\quad \text {and}\quad \inf _{k\ge 1}\alpha _{k}>0. \end{aligned}$$(38)
Incorporating these two restrictions on the inertia parameter \(\alpha _{k}\), we are left with the following constraints:
To identify a constellation of parameters \((\alpha _{k},\rho _{k})\) satisfying these two conditions, define
Then,
which gives
Solving this condition for \(\rho _{k}\) reveals that \(\frac{1}{\rho _{k}}\ge \frac{4(2\alpha ^{2}_{k}-\alpha _{k}+1)(1+L\lambda _{k})}{5(1-\alpha _{k})^{2}}.\) Using the design condition \(\alpha _{k}\le {\bar{\alpha }}<1\), we need to choose the relaxation parameter \(\rho _{k}\) so that \(\rho _{k}\le \frac{5(1-\alpha _k)^{2}}{4(1+L\lambda _{k})(2\alpha ^{2}_{k}-\alpha _{k}+1)}\). This suggests using the relaxation sequence \(\rho _{k}=\rho _k(\alpha _k,\lambda _k)\triangleq \frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(1+L\lambda _{k})(2\alpha ^{2}_{k}-\alpha _{k}+1)}\). It remains to verify that with this choice we can guarantee \(\beta _{k}\ge 0.\) This can be deduced as follows: Recalling (40), we get
In particular, we note that if \(f(\alpha ) \triangleq {\tfrac{(1-\alpha )(2\alpha ^2-\alpha +1)}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}}+\alpha -1\), then
We consider two cases:
Case 1: \({\bar{\alpha }}\le 1/2\). In this case
Case 2: \(1/2<{\bar{\alpha }}<1\). In this case
Thus, \(f(\alpha )\) is decreasing in \(\alpha \in (0,{\bar{\alpha }}]\), where \(0<{\bar{\alpha }}<1\).
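The monotonicity of \(f\) established by this case analysis can also be checked numerically. The following sketch (grid resolution and parameter pairs chosen arbitrarily) evaluates \(f\) on a grid over \((0,{\bar{\alpha }}]\) for several combinations of \({\bar{\alpha }}\) and \({\bar{\varepsilon }}\):

```python
# Numerical sanity check (illustrative; parameters chosen arbitrarily):
# f(alpha) = (1-alpha)(2*alpha^2 - alpha + 1) / ((1-eps)*(1-abar)^2) + alpha - 1
# should be decreasing on (0, abar] for any abar in (0,1) and eps in (0,1).

def f(alpha, abar, eps):
    return ((1 - alpha) * (2 * alpha**2 - alpha + 1)
            / ((1 - eps) * (1 - abar)**2) + alpha - 1)

def is_decreasing(abar, eps, n=1000):
    grid = [abar * (i + 1) / n for i in range(n)]   # grid over (0, abar]
    vals = [f(a, abar, eps) for a in grid]
    return all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:]))

checks = [is_decreasing(abar, eps)
          for abar in (0.1, 0.3, 0.5, 0.7, 0.9)
          for eps in (0.05, 0.5, 0.95)]
```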
Using these relations, we see that (37) reduces to
where \(\theta _{k}\ge 0\). This is the basis for our proof of Theorem 2.
Proof of Theorem 2
We start with (i). Consider (42), with the special choice \(p^{*}=0\), so that \(p\in {\mathsf {S}}\). Taking conditional expectations on both sides of this inequality, we arrive at
where \(\psi _{k}\triangleq \frac{\mathtt {a}_{k}\sigma ^{2}}{2m_{k}}\). By design of the relaxation sequence \(\rho _{k}\), we see that
Since \(\lim _{k\rightarrow \infty }\lambda _{k}=\lambda \in (0,1/4L)\), and \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\in (0,1)\), we conclude that the sequence \((\mathtt {a}_{k})_{k\in {\mathbb {N}}}\) is bounded. Consequently, thanks to Assumption 5, the sequence \((\psi _{k})_{k\in {\mathbb {N}}}\) is in \(\ell ^{1}_{+}({\mathbb {N}})\). We next claim that \(Q_{k}(p)\ge 0\). To verify this, note that
where the first and second inequalities use \({\bar{\varepsilon }}<1\) and \(\alpha _{k}\le {\bar{\alpha }}\in (0,1)\), the third inequality makes use of the Young inequality: \(\frac{1-a}{2a}\left\| X_{k}-p\right\| ^{2}+\frac{a}{2(1-a)}\left\| X_{k}-X_{k-1}\right\| ^{2}\ge \left\| X_{k}-p\right\| \cdot \left\| X_{k}-X_{k-1}\right\| \). Finally, the fourth inequality uses the triangle inequality \(\left\| X_{k-1}-p\right\| \le \left\| X_{k}-X_{k-1}\right\| +\left\| X_{k}-p\right\| \). Lemma 17 readily yields the existence of an a.s. finite limiting random variable \(Q_{\infty }(p)\) such that \(Q_{k}(p)\rightarrow Q_{\infty }(p)\), \({\mathbb {P}}\)-a.s., and \((\theta _{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({{\mathbb {F}}})\). Since \(\lambda _{k}\rightarrow \lambda \), we get \(\lim _{k\rightarrow \infty }\rho _{k}=\frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(1+L\lambda )(2{\bar{\alpha }}^{2}+1-{\bar{\alpha }})}\). Hence,
\({\mathbb {P}}\)-a.s. We conclude that \(\lim _{k\rightarrow \infty }\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})=0\), \({\mathbb {P}}\)-a.s.
To prove (ii) observe that, since \({\bar{\varepsilon }}\in (0,1)\) and \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\), it follows
Consequently, \(\lim _{k\rightarrow \infty }\left\| X_{k}-X_{k-1}\right\| ^2 =0\), \({\mathbb {P}}\)-a.s., and \(\left( \phi _{k}(p)-\alpha _{k}\phi _{k-1}(p)\right) _{k\in {\mathbb {N}}}\) is almost surely bounded. Hence, for each \(\omega \in \Omega \), there exists a bounded random variable \(C_{1}(\omega )\in [0,\infty )\) such that
Iterating this relation, using the fact that \({\bar{\alpha }}\in [0,1)\), we easily derive
Hence, \((\phi _{k}(p))_{k\in {\mathbb {N}}}\) is a.s. bounded, which implies that \((X_{k})_{k\in {\mathbb {N}}}\) is bounded \({\mathbb {P}}\)-a.s. We next claim that \((\left\| X_{k}-p\right\| )_{k\in {\mathbb {N}}}\) converges to a \([0,\infty )\)-valued random variable \({\mathbb {P}}\)-a.s. Indeed, take \(\omega \in \Omega \) such that \(\phi _{k}(p,\omega )\equiv \phi _{k}(\omega )\) is bounded. Suppose there exists \(\mathtt {t}_{1}(\omega )\in [0,\infty ),\mathtt {t}_{2}(\omega )\in [0,\infty )\), and subsequences \((\phi _{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\) and \((\phi _{l_{j}}(\omega ))_{j\in {\mathbb {N}}}\) such that \(\phi _{k_{j}}(\omega )\rightarrow \mathtt {t}_{1}(\omega )\) and \(\phi _{l_{j}}(\omega )\rightarrow \mathtt {t}_{2}(\omega )>\mathtt {t}_{1}(\omega )\). Then, \(\lim _{j\rightarrow \infty }Q_{k_{j}}(p)(\omega )=Q_{\infty }(p)(\omega )=(1-{\bar{\alpha }})\mathtt {t}_{1}(\omega )<(1-{\bar{\alpha }})\mathtt {t}_{2}(\omega )=\lim _{j\rightarrow \infty }Q_{l_{j}}(p)(\omega )=Q_{\infty }(p)(\omega )\), a contradiction. It follows that \(\mathtt {t}_{1}(\omega )=\mathtt {t}_{2}(\omega )\) and, in turn, \(\phi _{k}(\omega )\rightarrow \mathtt {t}(\omega )\). Thus, for each \(p\in {\mathsf {S}}\), \(\phi _{k}(p)\rightarrow \mathtt {t}\) \({\mathbb {P}}\)-a.s.
Since we assume that \({\mathsf {H}}\) is separable, [67, Prop 2.3(iii)] guarantees that there exists a set \(\Omega _{0}\in {\mathcal {F}}\) with \({\mathbb {P}}(\Omega _{0})=1\), and, for every \(\omega \in \Omega _{0}\) and every \(p\in {\mathsf {S}}\), the sequence \((\left\| X_{k}(\omega )-p\right\| )_{k\in {\mathbb {N}}}\) converges.
We next show that all weak limit points of \((X_{k})_{k\in {\mathbb {N}}}\) are contained in \({\mathsf {S}}\). Let \(\omega \in \Omega \) such that \((X_{k}(\omega ))_{k\in {\mathbb {N}}}\) is bounded. Thanks to [3, Lemma 2.45], we can find a weakly convergent subsequence \((X_{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\) with limit \(\chi (\omega )\), i.e. for all \(u\in {\mathsf {H}}\) we have \(\lim _{j\rightarrow \infty }\left\langle X_{k_{j}}(\omega ),u\right\rangle =\left\langle \chi (\omega ),u\right\rangle \). This implies
showing that \(Z_{k_{j}}(\omega )\rightharpoonup \chi (\omega )\). Along this weakly converging subsequence, define
Clearly, \(\mathsf {res}_{\lambda _{k_{j}}}(Z_{k_{j}}(\omega ))=\left\| r_{k_{j}}(\omega )\right\| \), so that \(\lim _{j\rightarrow \infty }r_{k_{j}}(\omega )=0\). By definition
Since V and \(F=T+V\) are maximally monotone, their graphs are sequentially closed in the weak-strong topology \({\mathsf {H}}^{\text {weak}}\times {\mathsf {H}}^{\text {strong}}\) [3, Prop. 20.33(ii)]. Therefore, by the strong convergence of the sequence \((r_{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\), we deduce weak convergence of the sequence \((Z_{k_{j}}(\omega )-r_{k_{j}}(\omega ),Z_{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\rightharpoonup (\chi (\omega ),\chi (\omega ))\). Therefore \(\frac{1}{\lambda }r_{k_{j}}(\omega )-V(Z_{k_{j}}(\omega ))+V\left( Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\right) \rightarrow 0\). Hence, \(0\in (T+V)(\chi (\omega ))\), showing that \(\chi (\omega )\in {\mathsf {S}}\). Invoking [67, Prop 2.3(iv)], we conclude that \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s to an \({\mathsf {S}}\)-valued random variable.
We now establish (iii). Let \(q_{k}\triangleq {\mathbb {E}}[Q_{k}(p)]\), so that (42) yields the recursion
By Assumption 5, and the definition of all sequences involved, we see that \(\sum _{k=1}^{\infty }\psi _{k}<\infty \). Hence, a telescoping argument gives
Hence, for all \(k\ge 1\), rearranging the above reveals
Letting \(k\rightarrow \infty \), we conclude \(\left( {\mathbb {E}}[\theta _{k}]\right) _{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\). Classically, this implies \(\theta _{k}\rightarrow 0\) \({\mathbb {P}}\)-a.s. By a simple majorization argument, we deduce that \({\mathbb {P}}\)-a.s.
\(\square \)
Remark 2
The above result gives some indication of the balance between the inertial effect and the relaxation effect. Our analysis revealed that the relaxation parameter must satisfy \(\rho \le \frac{5(1-{\bar{\varepsilon }})(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}-\alpha +1)}\). This is closely aligned with the maximal relaxation value exhibited in Remark 2.13 of [2]. Specifically, the function \(\rho _{m}(\alpha ,\varepsilon )=\frac{5(1-\varepsilon )(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}-\alpha +1)}\) is decreasing in \(\alpha \). For this choice of parameters, one observes that as \(\alpha \rightarrow 0\) we get \(\rho \rightarrow \frac{5(1-\varepsilon )}{4(1+L\lambda )}\), while as \(\alpha \rightarrow 1\) we get \(\rho \rightarrow 0\).
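The qualitative behaviour described in this remark is easy to verify numerically; the following sketch (the values of L, \(\lambda \), and \(\varepsilon \) are arbitrary illustrative choices) confirms that \(\rho _{m}(\cdot ,\varepsilon )\) is decreasing and recovers both limiting values:

```python
# Illustrative check of Remark 2 (L, lam, eps chosen arbitrarily):
# rho_m(alpha) = 5(1-eps)(1-alpha)^2 / (4(1+L*lam)(2*alpha^2 - alpha + 1))
# is decreasing in alpha, tends to 5(1-eps)/(4(1+L*lam)) as alpha -> 0,
# and tends to 0 as alpha -> 1.

L, lam, eps = 2.0, 0.1, 0.1

def rho_m(alpha):
    return (5 * (1 - eps) * (1 - alpha)**2
            / (4 * (1 + L * lam) * (2 * alpha**2 - alpha + 1)))

grid = [i / 1000 for i in range(1, 1000)]      # alpha ranging over (0, 1)
vals = [rho_m(a) for a in grid]
decreasing = all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))
limit_at_zero = 5 * (1 - eps) / (4 * (1 + L * lam))
```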
As an immediate corollary of Theorem 2, we obtain a convergence result when all parameter sequences are constant.
Corollary 5
(Asymptotic convergence under constant inertia and relaxation) Let the same Assumptions as in Theorem 2 hold. Consider Algorithm RISFBF with the constant parameter sequences \(\alpha _{k}\equiv \alpha \in (0,1),\lambda _{k}\equiv \lambda \in (0,\tfrac{1}{4L})\) and \(\rho _{k}=\rho <\frac{5(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}+1-\alpha )}\). Then \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s. to a limiting random variable with values in \({\mathsf {S}}\).
In fact, almost sure convergence still holds for a larger steplength \(\lambda _k\), as shown in the following corollary.
Corollary 6
(Asymptotic convergence under larger steplength) Let the same Assumptions as in Theorem 2 hold. Consider Algorithm RISFBF with the constant parameter sequences \(\alpha _{k}\equiv \alpha \in (0,1),\lambda _{k}\equiv \lambda \in (0,\tfrac{1-\nu }{2L})\) and \(\rho _{k}=\rho <\frac{(3-\nu )(1-\alpha )^{2}}{2(1+L\lambda )(2\alpha ^{2}+1-\alpha )}\), where \(0<\nu <1\). Then \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s. to a limiting random variable with values in \({\mathsf {S}}\).
Proof
First, we slightly modify (27) so that the following relation holds for \(0<\nu <1\)
Then, analogously to (29), we multiply both sides of (28) by \(\frac{\rho _{k}((1-\nu )-2L\lambda _{k})}{1+L\lambda _{k}}\), which is positive since \(\lambda _k \in (0,\tfrac{1-\nu }{2L})\). The convergence follows in a similar fashion to Theorem 2. \(\square \)
Another corollary of Theorem 2 is a strong convergence result, assuming that F is demiregular (cf. Definition 1).
Corollary 7
(Strong Convergence under demiregularity) Let the same Assumptions as in Theorem 2 hold. If \(F=T+V\) is demiregular, then \((X_{k})_{k\in {\mathbb {N}}}\) converges strongly \({\mathbb {P}}\)-a.s. to a \({\mathsf {S}}\)-valued random variable.
Proof
Set \(y_{k_{j}}(\omega )\triangleq Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\), and \(u_{k_{j}}(\omega )\triangleq \frac{1}{\lambda }r_{k_{j}}(\omega )-V(Z_{k_{j}}(\omega ))+V(Z_{k_{j}}(\omega )-r_{k_{j}}(\omega ))\). We know from the proof of Theorem 2 that \(y_{k_{j}}(\omega )\rightharpoonup \chi (\omega )\) and \(u_{k_{j}}(\omega )\rightarrow 0\). If \(F=T+V\) is demiregular then \(y_{k_{j}}(\omega )\rightarrow \chi (\omega )\). Since we know \(r_{k_{j}}(\omega )\rightarrow 0\), we conclude \(Z_{k_{j}}(\omega )\rightarrow \chi (\omega )\). Since \(Z_{k}\) and \(X_{k}\) have the same limit points, it follows \(X_{k}\rightarrow \chi \). \(\square \)
4.2 Linear convergence
In this section, we derive a linear convergence rate and prove strong convergence of the last iterate in the case where the single-valued operator V is strongly monotone. Various linear convergence results in the context of stochastic approximation algorithms for solving fixed-point problems are reported in [68] in the context of random sweeping processes. In a general structured monotone inclusion setting, the authors of [69] derive rate statements for cocoercive mean operators in the context of forward-backward splitting methods. More recently, Cui and Shanbhag [27] provide linear and sublinear rates of convergence for a variance-reduced inexact proximal-point scheme for both strongly monotone and monotone inclusion problems. However, to the best of our knowledge, our results are the first published for a stochastic operator splitting algorithm featuring relaxation and inertial effects. Notably, this result does not require imposing Assumption 4(i) (i.e., that the noise process is conditionally unbiased). Instead, our derivations hold true under the weaker notion of an asymptotically unbiased SO.
Assumption 6
(Asymptotically unbiased SO) There exists a constant \(\mathtt {s}>0\) such that
for all \(k\ge 1\).
This definition is rather mild and is imposed in many simulation-based optimization schemes in finite dimensions. Amongst the most important of these is the simultaneous perturbation stochastic approximation (SPSA) method pioneered by Spall [70, 71]. In this scheme, it is required that the gradient estimator satisfies an asymptotic unbiasedness requirement; in particular, the bias in the gradient estimator needs to diminish at a suitable rate to ensure asymptotic convergence. In fact, this setting has been investigated in detail in the context of stochastic Nash games [72]. Further examples of stochastic approximation schemes in a Hilbert-space setting obeying Assumption 6 are [73, 74] and [35]. We now discuss an example that further clarifies the requirements on the estimator.
Example 3
Let \(\{{\hat{V}}_{k}(x,\xi )\}_{k\in {\mathbb {N}}}\) be a collection of independent random \({\mathsf {H}}\)-valued vector fields of the form \({\hat{V}}_{k}(x,\xi )=V(x)+\varepsilon _{k}(x,\xi )\) such that
where \({\hat{\sigma }}>0\) and \({\hat{b}} > 0\) are constants such that \((B_{k})_{k\in {\mathbb {N}}}\) is an \({\mathsf {H}}\)-valued sequence satisfying \(\Vert B_k\Vert ^2 \le {\hat{b}}^2\) in an a.s. sense. These statistics can be obtained as
Setting \(\mathtt {s}^2 \triangleq {\hat{\sigma }}^{2}+{\hat{b}}^{2}\), we see that condition (43) holds. A similar estimate holds for the random noise \(\left\| W_{k}\right\| ^{2}\).
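The role of the mini-batch size in this example can be made concrete with a quick simulation (everything below — the toy operator, the bias level, and the noise level — is invented for illustration): averaging \(m\) iid oracle samples shrinks the variance contribution roughly like \({\hat{\sigma }}^{2}/m\), while a fixed bias contributes \({\hat{b}}^{2}\) regardless of \(m\).

```python
import random

random.seed(0)  # deterministic illustration

# Toy scalar oracle (invented): hat_V(x) = V(x) + bias + noise, with V(x) = 2x,
# a fixed bias b_hat, and zero-mean Gaussian noise of standard deviation sigma_hat.
def V(x):
    return 2.0 * x

sigma_hat, b_hat, x0 = 1.0, 0.05, 1.5

def minibatch_mse(m, trials):
    """Empirical mean squared error of the m-sample mini-batch average at x0."""
    total = 0.0
    for _ in range(trials):
        avg = sum(V(x0) + b_hat + random.gauss(0.0, sigma_hat)
                  for _ in range(m)) / m
        total += (avg - V(x0))**2
    return total / trials

# The MSE behaves (empirically) like sigma_hat^2 / m + b_hat^2: the variance
# term is damped by the batch size, the squared bias is not.
mse_small = minibatch_mse(10, 2000)
mse_large = minibatch_mse(1000, 500)
```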
Assumption 7
\(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is \(\mu \)-strongly monotone (\(\mu >0\)), i.e.
Combined with Assumption 1, strong monotonicity implies that \({\mathsf {S}}=\{{\bar{x}}\}\) for some \({\bar{x}}\in {\mathsf {H}}\).
Remark 3
In the context of a structured operator \(F=T+V\), the assumption that the single-valued part V is strongly monotone can be made without loss of generality. Indeed, if instead T is assumed to be \(\mu \)-strongly monotone, then \(T-\mu {{\,\mathrm{Id}\,}}\) remains maximally monotone, while \({{\tilde{V}}} \triangleq V+\mu {{\,\mathrm{Id}\,}}\) is a \(\mu \)-strongly monotone and Lipschitz continuous operator, and \(F=(T-\mu {{\,\mathrm{Id}\,}})+{{\tilde{V}}}\).
Our first result establishes a “perturbed linear convergence” rate on the anchor sequence \(\left( \left\| X_{k}-{\bar{x}}\right\| ^{2}\right) _{k\in {\mathbb {N}}}\), similar to the one derived in [68, Corollary 3.2] in the context of randomized fixed point iterations.
Theorem 8
(Perturbed linear convergence) Consider RISFBF with \(X_{0}=X_{1}\). Suppose Assumptions 1-3, 6, and 7 hold. Let \({\mathsf {S}}=\{{\bar{x}}\}\) denote the unique solution of (MI). Suppose \(\lambda _k\equiv \lambda \le \min \left\{ \tfrac{a}{2\mu },b\mu ,\tfrac{1-a}{2{\tilde{L}}}\right\} \), where \(0<a,b<1\), \({\tilde{L}}^2\triangleq L^2+\tfrac{1}{2}\), \(\eta _k\equiv \eta \triangleq (1-b)\lambda \mu \). Define \(\Delta M_{k}\triangleq 2\rho _{k}\left\| W_{k}\right\| ^{2}+\frac{(3-a)\rho _{k}\lambda _{k}^{2}}{1+{\tilde{L}}\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}\). Let \((\alpha _k)_{k\in {\mathbb {N}}}\) be a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\), and define \(\rho _k\triangleq \tfrac{(3-a)(1-\alpha _k)^2}{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}\) for every \(k \in {\mathbb {N}}\). Set
where \(q=1-\rho \eta \in (0,1)\), \(\rho =\frac{16(3-a)(1-{\bar{\alpha }})^{2}}{31(1+{\tilde{L}}\lambda )}\). Then the following hold:
-
(i)
\(({\bar{c}}_{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\).
-
(ii)
For all \(k\ge 1\)
$$\begin{aligned} {\mathbb {E}}[H_{k+1}\vert {\mathcal {F}}_{1}]\le q^{k}H_{1}+{\bar{c}}_{k}. \end{aligned}$$(46)In particular, this implies a perturbed linear rate of the sequence \((\Vert X_{k}-{\bar{x}}\Vert ^2)_{k\in {\mathbb {N}}}\) as
$$\begin{aligned} {\mathbb {E}}[\left\| X_{k+1}-{\bar{x}}\right\| ^{2}\vert {\mathcal {F}}_{0}]\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\tfrac{2}{1-{\bar{\alpha }}}{\bar{c}}_{k}. \end{aligned}$$(47) -
(iii)
\(\sum _{k=1}^{\infty }(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda )}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2<\infty \) \({\mathbb {P}}\text {-a.s.}\).
Proof
Our point of departure for the analysis under the stronger Assumption 7 is eq. (23), which becomes
Repeating the analysis of the previous section with reference point \(p={\bar{x}}\) and \(p^{*}=0\), the unique solution of (MI), yields the bound
The triangle inequality \(\left\| Z_{k}-{\bar{x}}\right\| ^{2}\le 2\left\| Y_{k}-Z_{k}\right\| ^{2}+2\left\| Y_{k}-{\bar{x}}\right\| ^{2}\) gives
By (9), we have for all \(c>0\)
Observe that this estimate is crucial in weakening the requirement of conditional unbiasedness. Choose \(c=\frac{\lambda _{k}}{2}\) to get
Assume that \(\lambda _{k}\mu \le \frac{a}{2}<1\). Then,
where \({\tilde{L}}^2\triangleq L^2+1/2\). Moreover, choosing \(\lambda _{k}\le b\mu \), we see
Using these bounds, we readily deduce for \(0<\lambda _{k}\le \min \{\frac{a}{2\mu },b\mu \}\), that
Proceeding as in the derivation of eq. (30), one sees first that
and therefore,
Define \(\eta _{k}=(1-b)\lambda _k\mu \). Using the equality (25),
with stochastic error term \(\Delta M_{k}\triangleq 2\rho _{k}\left\| W_{k}\right\| ^{2}+\frac{(3-a)\rho _{k}\lambda _{k}^{2}}{1+{\tilde{L}}\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}\). From here, it follows that
Since \(\lambda _k = \lambda \), and \(\rho _k=\tfrac{(3-a)(1-\alpha _k)^2}{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}\), we claim that \(\rho _k \le \tfrac{1-\alpha _k}{(1+4\alpha _k)\eta }\) for \(\eta \equiv (1-b)\lambda \mu \). Indeed (cf. footnote 2),
In particular, this implies \(\eta \rho _{k}\in (0,1)\) for all \(k\in {\mathbb {N}}\). We then have
Next, we show that \(H_k\ge \tfrac{1-{\bar{\alpha }}}{2}\Vert X_k-{\bar{x}}\Vert ^2\), for \(H_{k}\) defined in (45). This can be seen from the next string of inequalities:
In this derivation we have used (9) to estimate \(\frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}+\frac{2\alpha _k^{2}}{1-\alpha _k}\left\| X_{k}-X_{k-1}\right\| ^{2}\ge 2\alpha _k\left\| X_{k}-{\bar{x}}\right\| \cdot \left\| X_{k}-X_{k-1}\right\| ,\) and the specific choice \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )}\).
By recalling (51) and invoking (52), we are left with the stochastic recursion
where \(q_k \triangleq 1-\rho _k \eta \) and \({\tilde{b}}_k \triangleq \alpha _k \rho _k \eta _k\Vert X_k-{\bar{x}}\Vert ^2.\) Since \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )} \ge \rho =\frac{16(3-a)(1-{\bar{\alpha }})^{2}}{31(1+{\tilde{L}}\lambda )}\) for every k, we have that \(q_k \le q =1-\eta \rho \) for every k. Furthermore, \(1>\eta \rho _{k}\ge \eta \rho \), so that \(q \in (0,1)\). Taking conditional expectations on both sides on (53), we get
using the notation \(c_{k}\triangleq {\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}]\). Applying the operator \({\mathbb {E}}[\cdot \vert {\mathcal {F}}_{k-1}]\) and using the tower property of conditional expectations, this gives
Proceeding inductively, we see that
This establishes eq. (46). To validate eq. (47), recall that we assume \(X_{1}=X_{0}\), so that \(H_{1}=(1-\alpha _{1})\left\| X_{1}-{\bar{x}}\right\| ^{2}\). Furthermore, \(H_{k+1}\ge \frac{1-{\bar{\alpha }}}{2}\left\| X_{k+1}-{\bar{x}}\right\| ^{2}\), so that
We now show that \(({\bar{c}}_{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\). Simple algebra, combined with Assumption 6, gives
Hence, since \((\rho _{k})_{k\in {\mathbb {N}}}\) is bounded, Assumption 5 gives \(\lim _{k\rightarrow \infty }c_{k}=0\) a.s. Using again the tower property, we see \({\mathbb {E}}[c_{k}\vert {\mathcal {F}}_{1}]={\mathbb {E}}\left[ {\mathbb {E}}(c_{k}\vert {\mathcal {F}}_{k})\vert {\mathcal {F}}_{1}\right] \le \kappa \frac{\rho _{k}\mathtt {s}^{2}}{m_{k}} \le \kappa \frac{{\bar{\rho }}\mathtt {s}^{2}}{m_{k}}\), where \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )} \le {\bar{\rho }} =\frac{3-a}{2(1+{\tilde{L}}\lambda )} \) for every k. Consequently, the discrete convolution \(\left( \sum _{i=1}^{k-1}q^{k-i}{\mathbb {E}}[c_{i}\vert {\mathcal {F}}_{1}]\right) _{k\in {\mathbb {N}}}\) is summable. Therefore \(\sum _{k\ge 1}{\mathbb {E}}[H_{k}]<\infty \) and \(\sum _{k\ge 1}{\mathbb {E}}[{\tilde{b}}_{k}]<\infty \). Clearly, this implies \(\lim _{k\rightarrow \infty }{\mathbb {E}}[{{\tilde{b}}}_{k}]=0,\) and consequently the following two implications hold as well:
\(\square \)
Remark 4
It is worth remarking that the above proof does not rely on unbiasedness of the random estimators. The reason we can lift this rather typical assumption lies in our application of Young's inequality in the estimate (48). The only assumption needed for the above result is a summable oracle variance, as formulated in Assumption 6.
Remark 5
The above result again nicely illustrates the well-known trade-off between relaxation and inertial effects (cf. Remark 2). Indeed, up to constant factors, the coupling between inertia and relaxation is expressed by the function \(\alpha \mapsto \frac{(1-\alpha )^{2}}{2\alpha ^{2}-\frac{1}{2}\alpha +1}\). Basic calculus reveals that this function is decreasing in \(\alpha \). In the extreme case when \(\alpha \uparrow 1\), it is necessary to let \(\rho \downarrow 0\), and vice versa. When \(\alpha \rightarrow 0\), the limiting value of our specific relaxation policy is \(\frac{3-a}{2(1+{\tilde{L}}\lambda )}\). In practical applications, it is advisable to choose b small in order to make q large. The value a must be calibrated in a disciplined way in order to allow for a sufficiently large step size \(\lambda \). This requires some knowledge of the condition number \(\mu /L\) of the problem. As a heuristic, anticipating that b should be close to 0, a good strategy is to set \(\frac{a}{2\mu }=\frac{1-a}{2{\tilde{L}}}\). This means \(a=\frac{\mu }{{\tilde{L}}+\mu }\).
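The closing heuristic can be checked directly: with \(a=\frac{\mu }{{\tilde{L}}+\mu }\), the two step-size caps \(\frac{a}{2\mu }\) and \(\frac{1-a}{2{\tilde{L}}}\) coincide. A minimal sketch, with arbitrary illustrative values of \(\mu \) and L:

```python
import math

# Calibration heuristic from the remark (mu and L are invented illustrative
# values): with L_tilde = sqrt(L^2 + 1/2) and a = mu / (L_tilde + mu), the caps
# a/(2*mu) and (1-a)/(2*L_tilde) on the step size lambda coincide.
mu, L = 0.7, 5.0
L_tilde = math.sqrt(L**2 + 0.5)
a = mu / (L_tilde + mu)

cap_strong = a / (2 * mu)                 # cap from lambda <= a/(2*mu)
cap_lipschitz = (1 - a) / (2 * L_tilde)   # cap from lambda <= (1-a)/(2*L_tilde)
# Both equal 1 / (2*(L_tilde + mu)).
```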
We obtain a full linear rate of convergence when a more aggressive sample rate is employed in the SO. We achieve such global linear rates, together with tuneable iteration and oracle complexity estimates in two settings: First, we consider an aggressive simulation strategy, where the sample size grows over time geometrically. Such a sampling frequency can be quite demanding in some applications. As an alternative, we then move on and consider a more modest simulation strategy under which only polynomial growth of the batch size is required. Whatever simulation strategy is adopted, key to the assessment of the iteration and oracle complexity is to bound the stopping time
In order to understand the definition of this stopping time, recall that RISFBF computes the iterate \(X_{k+1}\) by extrapolating between the current base point \(Z_{k}\) and the correction step involving \(Y_{k}+\lambda _{k}(A_{k}-B_{k})\), which requires \(2 m_{k}\) iid realizations from the law \({{\mathsf {P}}}\). In total, when executing the algorithm until the terminal time \(K_{\epsilon }\), we therefore need to simulate \(2\sum _{k=1}^{K_{\epsilon }}m_{k}\) random variables. We now estimate the integer \(K_{\epsilon }\) under a geometric sampling strategy.
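As a rough illustration of this simulation cost (the rate p and the horizon K below are invented values), the total number of oracle calls \(2\sum _{k=1}^{K}m_{k}\) under the geometric schedule \(m_{k}=\lfloor p^{-k}\rfloor \) grows geometrically in K and is dominated, up to a constant factor, by the final batch:

```python
# Oracle-call count for the geometric sampling schedule m_k = floor(p^{-k})
# (p and the horizon K are arbitrary illustrative values).
p, K = 0.5, 20
batches = [int(p**(-k)) for k in range(1, K + 1)]   # m_1, ..., m_K
total_calls = 2 * sum(batches)                      # 2 * sum_k m_k samples drawn
last_batch = batches[-1]
# For p = 1/2: sum_{k=1}^{K} 2^k = 2^{K+1} - 2, so total_calls = 2^{K+2} - 4.
```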
Proposition 9
(Non-asymptotic linear convergence under geometric sampling) Suppose the conditions of Theorem 8 hold. Let \(p\in (0,1),\mathtt {B}= 2{\bar{\rho }}\mathtt {s}^{2}\left( 1+\frac{2(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) ,\) and choose the sampling rate \(m_{k}=\lfloor p^{-k}\rfloor \). Let \({\hat{p}}\in (p,1)\), and define
Then, whenever \(p\ne q\), we see that
and whenever \(p=q\),
In particular, the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) converges strongly and \({\mathbb {P}}\)-a.s. to the unique solution \({\bar{x}}\) at a linear rate.
Proof
Departing from (53), ignoring the positive term \({\tilde{b}}_{k}\) from the right-hand side, and taking expectations on both sides leads to
where the equality follows from \(c_k\) being deterministic. The sequence \((c_{k})_{k\in {\mathbb {N}}}\) is further upper bounded by the following considerations: First, the relaxation sequence is bounded by \(\rho _{k}\le {\bar{\rho }}=\frac{3-a}{2(1+{\tilde{L}}\lambda )}\); Second, the sample rate is bounded by \(m_{k}= \lfloor p^{-k} \rfloor \ge \left\lceil \tfrac{1}{2}p^{-k} \right\rceil \ge \tfrac{1}{2}p^{-k}\). Using these facts, eq. (54) yields
where \(\mathtt {B}=2{\bar{\rho }}\mathtt {s}^{2}\left( 1+\frac{2(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) \). Iterating the recursion above, one readily sees that
Consequently, by recalling that \(h_1 = (1-\alpha _1)\Vert X_1-{\bar{x}}\Vert ^2\) and \(h_{k}\ge \frac{1-{\bar{\alpha }}}{2}{\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})\), the bound (59) allows us to derive the recursion
We consider three cases.
-
(i)
\(0<q<p<1\): Defining \(\mathtt {c}_{1} \triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})(1-q/p)}\), we obtain from (61)
$$\begin{aligned} {\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\sum _{i=1}^{k}(q/p)^{k-i}p^{k}\le \mathtt {c}_{1}p^{k}. \end{aligned}$$ -
(ii)
\(0<p<q<1\). Akin to (i) and defining \(\mathtt {c}_{2}\triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})(1-p/q)}\), we arrive as above at the bound \({\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})\le q ^{k}\mathtt {c}_{2}\).
-
(iii)
\(p=q<1\). Choose \({\hat{p}} \in (q,1)\) and \(\mathtt {c}_{3}\triangleq \tfrac{1}{\exp (1)\ln ({\hat{p}}/q)}\), so that Lemma 18 yields \(kq^{k}\le \mathtt {c}_{3}{\hat{p}}^{k}\) for all \(k\ge 1\). Therefore, plugging this estimate in eq. (61), we see
$$\begin{aligned} {\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})&\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\sum _{i=1}^{k}q^{k}\\&\le {\hat{p}}^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\mathtt {c}_{3}{\hat{p}}^{k}\\&=\mathtt {c}_{4}{\hat{p}}^{k}, \end{aligned}$$after setting \(\mathtt {c}_{4}\triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{4\mathtt {B}\mathtt {c}_{3}}{1-{\bar{\alpha }}}\). Collecting these three cases together verifies the first part of the proposition.
\(\square \)
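The three-case analysis above admits a quick numerical sanity check. The following Python snippet is purely illustrative: the initial value \(e_1\) and constant \(\mathtt {B}\) are hypothetical stand-ins for the constants appearing in (61), and the scalar recursion \(e_{k+1}=qe_{k}+\mathtt {B}p^{k}\) serves as a proxy for that bound.

```python
# Illustrative check (hypothetical constants e1 and B standing in for those
# of the recursion (61)): unroll e_{k+1} = q*e_k + B*p^k and verify the
# geometric envelopes obtained in cases (i)-(iii) above.
import math

def unroll(e1, q, p, B, K):
    e = [e1]
    for k in range(1, K + 1):
        e.append(q * e[-1] + B * p ** k)
    return e

def envelope(e1, q, p, B, K):
    if p != q:                                    # cases (i) and (ii)
        r, s = max(p, q), min(p, q)
        c = e1 + B / (1 - s / r)
        return [c * r ** k for k in range(K + 1)]
    p_hat = (1 + q) / 2                           # case (iii): pick p_hat in (q,1)
    c3 = 1.0 / (math.e * math.log(p_hat / q))     # Lemma 18: k*q^k <= c3*p_hat^k
    return [(e1 + B * c3) * p_hat ** k for k in range(K + 1)]

for q, p in [(0.5, 0.8), (0.8, 0.5), (0.7, 0.7)]:  # cases (i), (ii), (iii)
    e, env = unroll(1.0, q, p, 2.0, 60), envelope(1.0, q, p, 2.0, 60)
    assert all(ek <= bk + 1e-9 for ek, bk in zip(e, env))
```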
Proposition 10
(Oracle and Iteration Complexity under geometric sampling) Given \(\epsilon > 0\), define the stopping time \(K_{\epsilon }\) as in eq. (55). Define
and suppose the same hypotheses as in Theorem 8 hold. Then \(K_{\epsilon }\le \tau _{\epsilon }(p,q)={\mathcal {O}}(\ln (\epsilon ^{-1}))\). The corresponding oracle complexity of RISFBF is upper bounded as \(2\sum _{i=1}^{\tau _{\epsilon }(p,q)} m_i = {\mathcal {O}}\left( (1/\epsilon )^{1+\delta (p,q)}\right) \), where
Proof
First, let us recall that the total oracle complexity of the method is assessed by
If \(p\ne q\) define \(\tau _{\epsilon }\equiv \tau _{\epsilon }(p,q)=\lceil \frac{\ln (C(p,q)\epsilon ^{-1})}{\ln (1/\max \{p,q\})}\rceil \). Then, \({\mathbb {E}}(\left\| X_{\tau _{\epsilon }+1}-{\bar{x}}\right\| ^{2})\le \epsilon \), and hence \(K_{\epsilon }\le \tau _{\epsilon }\). We now compute
This gives the oracle complexity bound
If \(p=q\), we can replicate this calculation after setting \(\tau _{\epsilon }=\lceil \frac{\ln (\epsilon ^{-1}{\hat{C}})}{\ln (1/{\hat{p}})}\rceil \). After this many iterations, we are ensured that \({\mathbb {E}}(\left\| X_{\tau _{\epsilon }+1}-{\bar{x}}\right\| ^{2})\le \epsilon \), with an oracle complexity
\(\square \)
To the best of our knowledge, this non-asymptotic linear convergence guarantee appears to be among the first for relaxed and inertial splitting algorithms. In particular, by leveraging the increasing nature of the mini-batches, this result no longer requires the unbiasedness assumption on the SO, a crucial benefit of the proposed scheme.
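To make the complexity count of Proposition 10 concrete, the following sketch tallies the oracle cost \(2\sum _{i=1}^{\tau _{\epsilon }}m_{i}\) for a hypothetical geometric batch schedule \(m_{k}=\lceil (1/p)^{k}\rceil \), taking the constant \(C(p,q)=1\); both choices are illustrative assumptions, not the constants of the proof.

```python
# Illustrative tally of the oracle cost 2*sum_{i<=tau_eps} m_i under the
# hypothetical geometric batch schedule m_k = ceil((1/p)^k), with the
# constant C(p,q) = 1 in tau_eps = ceil(ln(C/eps)/ln(1/max{p,q})).
import math

def oracle_cost(eps, p, q, C=1.0):
    tau = math.ceil(math.log(C / eps) / math.log(1.0 / max(p, q)))
    return 2 * sum(math.ceil((1.0 / p) ** k) for k in range(1, tau + 1))

# When q < p we have delta(p,q) = 0 and the cost scales like 1/eps:
# shrinking eps by 10 inflates the cost by roughly a factor of 10.
ratio = oracle_cost(1e-4, 0.7, 0.5) / oracle_cost(1e-3, 0.7, 0.5)
assert 5 < ratio < 30
```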
There may be settings where geometric growth of \(m_k\) is challenging to adopt. To address this, we provide a result where the sampling rate is polynomial rather than geometric. A polynomial sampling rate arises if \(m_{k}=\lceil a_{k}(k+k_{0})^{\theta }+b_{k}\rceil \) for some parameters \(a_{k},b_{k},\theta >0\). Such a regime has been adopted in related mini-batch approaches [75, 76], and it allows the growth rate to be modulated by changing the exponent in the sampling rate. We begin by providing a supporting result, making the specific choice \(a_{k}=b_{k}=1\) for all \(k\ge 1\) and \(k_{0}=0\), which leaves essentially the exponent \(\theta >0\) as a free parameter in the design of the stochastic oracle.
Proposition 11
(Polynomial rate of convergence under polynomially increasing \(m_k\)) Suppose the conditions of Theorem 8 hold. Choose the sampling rate \(m_{k}=\lfloor k^{\theta }\rfloor \) where \(\theta > 0\). Then, for any \(k\ge 1\),
Proof
From the relation (60), we obtain
A standard bound based on the integral criterion for series with non-negative summands gives
The upper bounding integral can be evaluated using integration-by-parts, as follows:
Note that \(\frac{\theta }{t\ln (1/q)}\le \frac{1}{2}\) when \(t\ge \lceil 2\theta /\ln (1/q)\rceil \). Therefore, we can attain a simpler bound from the above by
Consequently,
Furthermore,
Note that \((1/q)^{2\theta / \ln (1/q)}=\left( \exp (\ln (1/q))\right) ^{ 2\theta / \ln (1/q)}=\exp (2\theta )\). Hence,
Plugging this into the opening string of inequalities shows
Since \(h_{1}=(1-\alpha _{1})\left\| X_{1}-{\bar{x}}\right\| ^{2}\) and \(h_{k+1}\ge \frac{1-{\bar{\alpha }}}{2}{\mathbb {E}}\left( \left\| X_{k+1}-{\bar{x}}\right\| ^{2}\right) \), we finally arrive at the desired expression (63). \(\square \)
Proposition 12
(Oracle and Iteration complexity under polynomial sampling) Let all assumptions of Theorem 8 hold. Given \(\epsilon > 0\), define \(K_{\epsilon }\) as in (55). Then the iteration and oracle complexity to obtain an \(\epsilon \)-solution are \({\mathcal {O}}(\theta \epsilon ^{-1/\theta })\) and \({\mathcal {O}}(\exp (\theta )\theta ^{\theta }(1/\epsilon )^{1+1/\theta })\), respectively.
Proof
We first note that \((k+1)^{-\theta }\le k^{-\theta }\) for all \(k\ge 1\). Hence, the bound established in Proposition 11 yields
Consider the function \(\psi (t)\triangleq t^{\theta }q^{t}\) for \(t>0\). Then, straightforward calculus shows that \(\psi (t)\) is unimodal on \((0,\infty )\), with unique maximum \(t^{*}=\frac{\theta }{\ln (1/q)}\) and associated value \(\psi (t^{*})=\exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }\). Hence, for all \(t>0\), we have \(t^{\theta }q^{t}\le \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }\), and consequently, \(q^{k}\le \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }k^{-\theta }\) for all \(k\ge 1\). This allows us to conclude
where
Then, for any \(k\ge K_{\epsilon }\triangleq \lceil (\mathtt {c}_{q,\theta }/\epsilon )^{1/\theta }\rceil \), we are ensured that \({\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le \varepsilon \). Since \((\mathtt {c}_{q,\theta })^{1/\theta }={\mathcal {O}}(\exp (-1)\theta )\), we conclude that \(K_{\epsilon }={\mathcal {O}}(\theta \epsilon ^{-1/\theta })\). The corresponding oracle complexity is bounded as follows:
\(\square \)
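The unimodality estimate at the heart of the preceding proof, \(k^{\theta }q^{k}\le \exp (-\theta )\left( \tfrac{\theta }{\ln (1/q)}\right) ^{\theta }\) for all \(k\ge 1\), can be verified numerically; the parameter values below are arbitrary test choices.

```python
# Numerical check of k^theta * q^k <= exp(-theta)*(theta/ln(1/q))^theta,
# which follows from maximizing psi(t) = t^theta * q^t at t* = theta/ln(1/q).
import math

theta, q = 2.5, 0.9                      # arbitrary test values
bound = math.exp(-theta) * (theta / math.log(1.0 / q)) ** theta
assert all(k ** theta * q ** k <= bound + 1e-9 for k in range(1, 2000))
```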
Remark 6
It may be observed that if \(\theta = 1\), i.e. \(m_k = k\), there is a worsening of the rate and complexity statements relative to their counterparts under a geometric sampling rate; in particular, the iteration complexity worsens from \({\mathcal {O}}(\ln (\tfrac{1}{\epsilon }))\) to \({\mathcal {O}}(\tfrac{1}{\epsilon })\), while the oracle complexity degrades from the optimal level of \({\mathcal {O}}(\tfrac{1}{\epsilon })\) to \({\mathcal {O}}(\tfrac{1}{\epsilon ^2})\). But this deterioration comes with the advantage of a far slower sampling rate, which may be of significant consequence in some applications.
4.3 Rates in terms of merit functions
In this subsection we estimate the iteration and oracle complexity of RISFBF with the help of a suitably defined gap function. Generally, a gap function associated with the monotone inclusion problem (MI) is a function \(\mathsf {Gap}:{\mathsf {H}}\rightarrow {\mathbb {R}}\) such that (i) \(\mathsf {Gap}\) is sign restricted on \({\mathsf {H}}\); and (ii) \(\mathsf {Gap}(x) = 0\) if and only if \(x\in {\mathsf {S}}\). The Fitzpatrick function [3, 30, 31, 77] is a useful tool to construct gap functions associated with a set-valued operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\). It is defined as the function \(G_{F}:{\mathsf {H}}\times {\mathsf {H}}\rightarrow [-\infty ,\infty ]\) given by
This function allows us to recover the operator F, by means of the following result (cf. [3, Prop. 20.58]): If \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is maximally monotone, then \(G_{F}(x,x^{*})\ge \left\langle x,x^{*}\right\rangle \) for all \((x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\), with equality if and only if \((x,x^{*})\in {{\,\mathrm{gr}\,}}(F)\). In particular, \({{\,\mathrm{gr}\,}}(F)=\{(x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\vert \;G_{F}(x,x^{*})\le \left\langle x,x^{*}\right\rangle \}\). In fact, it can be shown that the Fitzpatrick function is minimal in the family of convex functions \(f:{\mathsf {H}}\times {\mathsf {H}}\rightarrow (-\infty ,\infty ]\) such that \(f(x,x^{*})\ge \left\langle x,x^{*}\right\rangle \) for all \((x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\), with equality if \((x,x^{*})\in {{\,\mathrm{gr}\,}}(F)\) [77].
Our gap function for the structured monotone operator \(F=V+T\) is derived from its Fitzpatrick function by setting \(\mathsf {Gap}(x)\triangleq G_{F}(x,0)\) for \(x\in {\mathsf {H}}\). This reads explicitly as
It immediately follows from the definition that \(\mathsf {Gap}(x)\ge 0\) for all \(x\in {\mathsf {H}}\). It is also clear that \(x\mapsto \mathsf {Gap}(x)\) is convex and lower semi-continuous, and that \(\mathsf {Gap}(x)=0\) if and only if \(x\in {\mathsf {S}}=\mathsf {Zer}(F)\). Let us give some concrete formulae for the gap function.
Example 4
(Variational Inequalities) We reconsider the problem described in Example 2. Let \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) be a maximally monotone and L-Lipschitz continuous map, and \(T(x)={{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}(x)\) the normal cone of a given closed convex set \({\mathsf {C}}\subset {\mathsf {H}}\). Then, by [77, Prop. 3.3], the gap function (66) reduces to the well-known dual gap function, due to [78],
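This dual gap function reads, in our notation, \(\mathsf {Gap}(x)=\sup _{y\in {\mathsf {C}}}\left\langle V(y),x-y\right\rangle \). For a small illustration, consider the one-dimensional VI with \(V(y)=y-\tfrac{1}{2}\) on \({\mathsf {C}}=[0,1]\), whose solution is \({\bar{x}}=\tfrac{1}{2}\). The sketch below evaluates the supremum on a grid; in practice the supremum is a nonlinear program, so the grid is for illustration only.

```python
# Grid evaluation of the dual gap Gap(x) = sup_{y in C} <V(y), x - y> for
# the 1-D VI with V(y) = y - 0.5 on C = [0,1]; the solution is x_bar = 0.5.

def dual_gap(x, V, grid):
    return max(V(y) * (x - y) for y in grid)

V = lambda y: y - 0.5
grid = [i / 1000 for i in range(1001)]       # discretization of C = [0,1]

assert abs(dual_gap(0.5, V, grid)) < 1e-9    # vanishes at the solution
assert dual_gap(0.8, V, grid) > 0.01         # positive away from it
```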
Example 5
(Convex Optimization) Reconsider the general non-smooth convex optimization problem in Example 1, with primal objective function \({\mathsf {H}}_{1}\ni u\mapsto f(u)+g(Lu)+h(u)\). Let us introduce the convex-concave function
Define
It is easy to check that \(\Gamma (x')\ge 0\), and equality holds only for a primal-dual pair (saddle-point) \({\bar{x}}\in {\mathsf {S}}\). Hence, \(\Gamma (\cdot )\) is a gap function for the monotone inclusion derived from the Karush-Kuhn-Tucker conditions (5). In fact, the function (67) is a standard merit function for saddle-point problems (see e.g. [79]). To relate this gap function to the Fitzpatrick function, we exploit the maximally monotone operators V and T introduced in Example 1. In terms of these mappings, first observe that for \(p=({\tilde{u}},{\tilde{v}}),x=(u,v)\) we have
Since h is convex differentiable, the classical gradient inequality reads as \(h(u)-h({\tilde{u}})\ge \left\langle \nabla h({\tilde{u}}),u-{\tilde{u}}\right\rangle \). Using this estimate in the previous display shows
For \(p^{*}=({\tilde{u}}^{*},{\tilde{v}}^{*})\in T(p)\), we again employ convexity to get
Hence,
Therefore, we see
Hence,
It is clear from the definition that a convex gap function can be extended-valued and its domain is contingent on the boundedness properties of \({{\,\mathrm{dom}\,}}T\). In the setting where T(x) is bounded for all \(x\in {\mathsf {H}}\), the gap function is clearly globally defined. However, the case where \({{\,\mathrm{dom}\,}}T\) is unbounded has to be handled with more care. There are potentially two approaches to cope with such a situation: One would be to introduce a perturbation-based termination criterion as defined in [80], and recently used in [81] to solve a class of structured stochastic variational inequality problems. The other solution strategy is based on the notion of restricted merit functions, first introduced in [82], and later on adopted in [83]. We follow the latter strategy.
Let \(x^{s}\in {{\,\mathrm{dom}\,}}T\) denote an arbitrary reference point and \(D>0\) a suitable constant. Define the closed set \({\mathsf {C}}\triangleq {{\,\mathrm{dom}\,}}T\cap \{x\in {\mathsf {H}}\vert \; \left\| x-x^{s}\right\| \le D\}\), and the restricted gap function
Clearly, \(\mathsf {Gap}(x\vert {{\,\mathrm{dom}\,}}T)=\mathsf {Gap}(x)\). The following result explains in a precise way the meaning of the restricted gap function. It extends the variational case in [82, Lemma 1] and [83, Lemma 3] to the general monotone inclusion case.
Lemma 13
Let \({\mathsf {C}}\subset {\mathsf {H}}\) be nonempty closed and convex. The function \({\mathsf {H}}\ni x\mapsto \mathsf {Gap}(x\vert {\mathsf {C}})\) is well-defined and convex on \({\mathsf {H}}\). For any \(x\in {\mathsf {C}}\) we have \(\mathsf {Gap}(x\vert {\mathsf {C}})\ge 0\). Moreover, if \({\bar{x}}\in {\mathsf {C}}\) is a solution to (MI), then \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\). Moreover, if \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\) for some \({\bar{x}}\in {{\,\mathrm{dom}\,}}T\) such that \(\left\| {\bar{x}}-x^{s}\right\| <D\), then \({\bar{x}}\in {\mathsf {S}}\).
Proof
The convexity, and the non-negativity for \(x\in {\mathsf {C}}\), of the restricted function are clear. Since \(\mathsf {Gap}(x\vert {\mathsf {C}})\le \mathsf {Gap}(x)\) for all \(x\in {\mathsf {H}}\), we see
To show the converse implication, suppose \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\) for some \({\bar{x}}\in {\mathsf {C}}\) with \(\left\| {\bar{x}}-x^{s}\right\| <D\). Without loss of generality we can choose \({\bar{x}}\in {\mathsf {C}}\) in this particular way, since we may choose the radius of the ball as large as desired. It follows that \(\left\langle y^{*},{\bar{x}}-y\right\rangle \le 0\) for all \(y\in {\mathsf {C}},y^{*}\in F(y)\). Hence, \({\bar{x}}\in {\mathsf {C}}\) is a Minty solution to the Generalized Variational inequality with maximally monotone operator \(F(x)+{{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}(x)\). Since F is upper semi-continuous and monotone, Minty solutions coincide with Stampacchia solutions, implying that there exists \({\bar{x}}^{*}\in F({\bar{x}})\) such that \(\left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \ge 0\) for all \(y\in {\mathsf {C}}\) (see e.g. [84]). Consider now the gap program
This program is solved at \(y={\bar{x}}\), which is a point for which \(\left\| {\bar{x}}-x^{s}\right\| <D\). Hence, the constraint can be removed, and we conclude \(\left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \ge 0\) for all \(y\in {{\,\mathrm{dom}\,}}(F)\). By monotonicity of F, it follows
Hence, \(\mathsf {Gap}({\bar{x}})=0\) and we conclude \({\bar{x}}\in {\mathsf {S}}\). \(\square \)
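The restriction mechanism can be illustrated on a toy instance: take \(T=0\) (so \({{\,\mathrm{dom}\,}}T={\mathbb {R}}\)) and the single-valued \(V(y)=y-1\), whose zero is \({\bar{x}}=1\); with \(x^{s}=0\) and \(D=3\) the solution lies in the interior of the ball, so by Lemma 13 the restricted gap vanishes exactly at \({\bar{x}}\). A grid-based sketch (the grid again replaces the supremum for illustration):

```python
# Restricted gap on C = dom T intersected with {|y - x_s| <= D}, for T = 0,
# V(y) = y - 1, x_s = 0, D = 3; the solution x_bar = 1 is interior to the ball.

def restricted_gap(x, V, grid):
    return max(V(y) * (x - y) for y in grid)

V = lambda y: y - 1.0
grid = [-3.0 + 6.0 * i / 1200 for i in range(1201)]  # grid on [-3, 3]

assert abs(restricted_gap(1.0, V, grid)) < 1e-9      # zero at x_bar (Lemma 13)
assert restricted_gap(2.0, V, grid) > 0.1            # positive off the solution
```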
In order to state and prove our complexity results in terms of the proposed merit function, we start with the first preliminary result.
Lemma 14
Consider the sequence \((X_{k})_{k\in {\mathbb {N}}}\) generated by RISFBF with the initial condition \(X_{0}=X_{1}\). Suppose \(\lambda _k=\lambda \in (0,1/(2L))\) for every \(k \in {\mathbb {N}}\). Moreover, suppose \((\alpha _k)_{k\in {\mathbb {N}}}\) is a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\), \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda )}\) for every \(k \in {\mathbb {N}}\). Define
and for \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we define \(\Delta N_{k}(p,p^{*})\) as in (21). Then, for all \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we have
Proof
For \((p,p^{*})\in {{\,\mathrm{gr}\,}}(V+T)\), we know from eq. (23)
where the last inequality uses the monotonicity of V. We first derive a recursion which is similar to the fundamental recursion in Lemma 4. Invoking (25) and (26), we get
Multiplying both sides of (28) by \(1+L\lambda _{k}\) and noting that \((1-2L\lambda _{k})(1+L\lambda _{k})\le 1-2L^{2}\lambda ^{2}_{k}\), we obtain the following inequality
Inserting the above inequality into (71) and arguing in the same fashion as in the derivation of (33), we arrive at
Invoking the monotonicity of V and rearranging (72), it follows that
We define \(\beta _{k+1}\) as
and similarly with (36), we can show \(\{\beta _k\}\) is non-increasing by choosing \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda _k)}\) and \(\lambda _k \equiv \lambda \). Thus, \((1-\alpha _{k+1})\left( \tfrac{3}{2\rho _{k+1}(1+L\lambda _{k+1})}-1\right) \le (1-\alpha _{k})\left( \tfrac{3}{2\rho _{k}(1+L\lambda _k)}-1\right) \). Together with \(\alpha _{k+1}\ge \alpha _{k}\), the last inequality gives
Recall that \(\Delta N_{k}(p,p^{*})=\Delta N_{k}(p,0)+2\rho _{k}\lambda \left\langle p^{*},p-Y_{k}\right\rangle \). Hence, after setting \(\Delta N_{k}(p,0)=\Delta N_{k}(p)\), rearranging the expression given in the previous display shows that
Summing over \(k=1,\ldots ,K\), we obtain
where we notice \(X_1=X_0\) in the last inequality. \(\square \)
Next, we derive a rate statement in terms of the gap function, using the averaged sequence
Theorem 15
(Rate and oracle complexity under monotonicity of V) Consider the sequence \((X_{k})_{k\in {\mathbb {N}}}\) generated by RISFBF. Suppose Assumptions 1-5 hold. Suppose \(m_k \triangleq \lfloor k^a\rfloor \) with \(a > 1\), and \(\lambda _k=\lambda \in (0,1/(2L))\) for every \(k \in {\mathbb {N}}\). Suppose \((\alpha _k)_{k\in {\mathbb {N}}}\) is a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\) and \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda )}\) for every \(k \in {\mathbb {N}}\). Then the following hold for any \(K\in {\mathbb {N}}\):
-
(i)
\({\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})] \le {\mathcal {O}}\left( \tfrac{1}{K}\right) .\)
-
(ii)
Given \(\varepsilon >0\), define \(K_{\varepsilon }\triangleq \min \{k\in {\mathbb {N}}\vert \;{\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{k}\vert {\mathsf {C}})]\le \varepsilon \}\); then \(\sum _{k=1}^{K_{\varepsilon }} m_k \le {\mathcal {O}}\left( \tfrac{1}{\varepsilon ^{1+a}}\right) .\)
The proof of this Theorem builds on an idea which is frequently used in the analysis of stochastic approximation algorithms, and can at least be traced back to the robust stochastic approximation approach of [49]. In order to bound the expectation of the gap function, we construct an auxiliary process which allows us to majorize the gap via a quantity which is independent of the reference points. Once this is achieved, a simple variance bound completes the result.
Proof of Theorem 15
We define an auxiliary process \((\Psi _{k})_{k\in {\mathbb {N}}}\) such that
Then,
so that
Introducing the iterate \(Y_{k}\), the above implies
As \(\Delta N_{k}(p)=2\rho _{k}\lambda _{k}\left\langle W_{k},p-Y_{k}\right\rangle \), this implies via a telescoping sum argument
Using Lemma 14 and setting \(\lambda _k \equiv \lambda \), for any \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\) it holds true that
Defining \(\mathtt {c}_{1}\triangleq (1-\alpha _{1})\left\| X_{1}-p\right\| ^{2}\), dividing both sides by \(\sum _{k=1}^{K}\rho _{k}\), and using our definition of the ergodic average (73), this gives
Using the bound established in eq. (75), it follows
Choosing \(\Psi _{1},p\in {\mathsf {C}}\) and introducing \(\mathtt {c}_{2}\triangleq \mathtt {c}_{1}+4D^{2}\), we see that the above can be bounded by a random quantity which is independent of p:
Taking the supremum over pairs \((p,p^{*})\) such that \(p\in {\mathsf {C}}\) and \(p^{*}\in F(p)\), it follows
In order to proceed, we bound the first moment of the process \(\Delta M_{k}\) in the same way as in (34), in order to get
Next, we take expectations on both sides of inequality (76), and use the bound (17), and \({\mathbb {E}}[\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle ]={\mathbb {E}}\left[ {\mathbb {E}}\left( \left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle \vert {\hat{{\mathcal {F}}}}_{k}\right) \right] =0.\) This yields
Since \(\alpha _{k}\uparrow {\bar{\alpha }}\in (0,1)\), we know that \(\rho _{k}\ge {\tilde{\rho }}\triangleq \frac{3(1-{\bar{\alpha }})^{2}}{2(1+L\lambda )(2{\bar{\alpha }}^{2}+1)}\). Similarly, since \(2\alpha ^{2}_{k}-\alpha _{k}+1\ge 7/8\) for all k, it follows \(\rho _{k}\le {\bar{\rho }}\triangleq \frac{12(1-{\bar{\alpha }})^{2}}{7}\). Using this upper and lower bound on the relaxation sequence, we also see that \(\mathtt {a}_{k}\le \lambda ^{2}\left( \frac{12{\bar{\rho }}}{1+L\lambda }+\frac{{\bar{\rho }}}{2}\right) \equiv {\bar{\mathtt {a}}}\), so that
where \(\mathtt {c}_{3}\triangleq \frac{\mathtt {c}_{2}}{{\tilde{\rho }}}+\frac{1}{{\tilde{\rho }}}\left( {\bar{\mathtt {a}}}\sigma ^{2}+{\bar{\rho }}\lambda ^{2}\sigma ^{2}\right) \sum _{k=1}^{\infty }\frac{1}{m_{k}}\). Hence, defining the deterministic stopping time \(K_{\varepsilon }=\min \{k\in {\mathbb {N}}\vert \;{\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{k}\vert {\mathsf {C}})]\le \varepsilon \}\), we see \(K_{\varepsilon }\le \frac{\mathtt {c}_{3}}{2\lambda \varepsilon }=\frac{\mathtt {c}_{4}}{\varepsilon }\).
(ii). Suppose \(m_k=\lfloor k^a\rfloor \), for \(a>1\). Then the oracle complexity to compute an \({\bar{X}}_K\) such that \({\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})] \le \epsilon \) is bounded as
\(\square \)
Remark 7
In the prior result, we employ a sampling rate \(m_k = \lfloor k^a \rfloor \) where \(a > 1\), which achieves the optimal rate of convergence. In contrast, the authors in [32] employ a sampling rate loosely given by \(m_k = \lfloor k^{1+a} (\ln (k))^{1+b} \rfloor \) where \(a > 0, b \ge -1\) or \(a = 0, b > 0\). We observe that when \(a > 0\) and \(b \ge -1\), this mini-batch size grows faster than our proposed \(m_k\), while it is comparable in the other case.
5 Applications
In this section, we compare the proposed scheme with its SA counterparts on a class of monotone two-stage stochastic variational inequality problems (Sec. 5.1) and on a supervised learning problem (Sec. 5.2), and discuss the resulting performance.
5.1 Two-stage stochastic variational inequality problems
In this section, we describe some preliminary computational results obtained from Algorithm 1 when applied to a class of two-stage stochastic variational inequality problems, recently introduced by [85].
Consider an imperfectly competitive market with N firms playing a two-stage game. In the first stage, the firms decide upon their capacity level \(x_{i}\in [l_{i},u_{i}]\), anticipating the expected revenues to be obtained in the second stage in which they compete by choosing quantities à la Cournot. The second-stage market is characterized by uncertainty, as the per-unit cost \(h_{i}(\xi _{i})\) is realized on the spot and cannot be anticipated. To compute an equilibrium in this game, we assume that each player is able to take stochastic recourse by determining production levels \(y_i(\xi )\), contingent on random convex costs and capacity levels \(x_i\). In order to bring this into the terminology for our problem, let us define the feasible set for capacity decisions of firm i as \({\mathcal {X}}_{i}\triangleq [l_{i},u_{i}]\subset {\mathbb {R}}_{+}\). The joint profile of capacity decisions is denoted by an N-tuple \(x=(x_{1},\ldots ,x_{N})\in {\mathcal {X}}\triangleq \prod _{i=1}^{N}{\mathcal {X}}_{i}\). The capacity choice of player i is then determined as a solution to the parametrized problem (Play\(_i(x_{-i})\))
where \(c_i: {\mathcal {X}}_i \rightarrow {\mathbb {R}}_+\) is a \({\tilde{L}}^c_i\)-smooth and convex cost function and \(p(\cdot )\) denotes the inverse-demand function defined as \(p(X)= d-rX\), \(d, r > 0\). The function \({\mathcal {Q}}_i(\cdot ,\xi )\) denotes the optimal cost function of firm i in scenario \(\xi \in \Xi \), assuming a value \({\mathcal {Q}}_i(x_i,\xi )\) when the capacity level \(x_{i}\) is chosen. The recourse function \({\mathbb {E}}_{\xi }[{\mathcal {Q}}_i(\cdot ,\xi )]\) denotes the expectation of the optimal value of the player i’s second stage problem and is defined as
A Nash equilibrium of this game is given by a tuple \((x^{*}_1, \ldots , x^{*}_N)\) where \( x^{*}_i \text{ solves } (\text{ Play}_i(x_{-i}^*))\) for each \(i=1,2,\ldots ,N\). A simple computation shows that \({\mathcal {Q}}_{i}(x_{i},\xi )=\min \{0,h_{i}(\xi )x_{i}\}\), which is nonsmooth. In order to obtain a smoothed variant, we introduce \({\mathcal {Q}}_i^{\epsilon }(\cdot ,\xi _i)\), defined as
This is the value function of a quadratic program, requiring the maximization of an \(\epsilon \)-strongly concave function. Hence, \({\mathcal {Q}}_{i}^{\epsilon }(x_{i},\xi )\) is single-valued and \(\nabla _{x_i}{\mathcal {Q}}^{\epsilon }_i(\cdot ,\xi )\) is \(\tfrac{1}{\epsilon }\)-Lipschitz and \(\epsilon \)-strongly monotone [86, Prop.12.60] for all \(\xi \in \Xi \). The latter is explicitly given by
Employing this smoothing strategy in our two-stage noncooperative game yields the individual decision problem
The necessary and sufficient equilibrium conditions of this \(\epsilon \)-smoothed game can be compactly represented as
and C, R, and \(D^\epsilon \) are single-valued maps given by
We note that the interchange between the expectation and the gradient operator can be invoked based on smoothness requirements (cf. [87, Th. 7.47]). The problem (SGE\(^{\epsilon }\)) aligns perfectly with the structured inclusion (MI), in which T is a maximal monotone map and V is an expectation-valued maximally monotone map. In addition, we can quantify the Lipschitz constant of V as \(L_V = L_C + L_R + L_D^{\epsilon }\), where \(L_C = \max _{1\le i\le N} {\tilde{L}}^c_i\), \(L_R = r\left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| _2 = r(N+1)\) and \(L_D^{\epsilon } = \tfrac{1}{\epsilon }\). Here, \({{\,\mathrm{Id}\,}}\) is the \(N\times N\) identity matrix, and \({\mathbf {1}}\) is the \(N\times 1\) vector consisting only of ones.
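The identity \(\Vert {{\,\mathrm{Id}\,}}+{\mathbf {1}}{\mathbf {1}}^{\top }\Vert _2=N+1\) is easy to confirm numerically, since this symmetric matrix has eigenvalues \(N+1\) (once) and 1 (with multiplicity \(N-1\)). A dependency-free check via power iteration:

```python
# Verify ||Id + 1 1^T||_2 = N + 1 by power iteration on the matrix-vector
# product v -> v + (sum v)*1, so that L_R = r*(N + 1) = 1.1 for r = 0.1.
N, r = 10, 0.1

def matvec(v):                                   # (Id + 1 1^T) v
    s = sum(v)
    return [vi + s for vi in v]

v = [1.0] + [0.0] * (N - 1)
for _ in range(100):                             # power iteration
    w = matvec(v)
    nrm = sum(x * x for x in w) ** 0.5
    v = [x / nrm for x in w]
lam = sum(x * y for x, y in zip(v, matvec(v)))   # Rayleigh quotient

assert abs(lam - (N + 1)) < 1e-8
assert abs(r * lam - 1.1) < 1e-8                 # L_R as used in this section
```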
Problem parameters for 2-stage SVI. Our numerics are based on specifying \(N=10\), \(r = 0.1\), and \(d = 1\). We consider four problem settings with \(L_V\) ranging from \(10\) to \(10^4\) (see Table 1). For each setting, the problem parameters are defined as follows.
-
(i)
Specification of \(h_i(\xi )\). The cost parameters are \(h_i(\xi _i) \triangleq \xi _i\), where \(\xi _i \sim {\mathtt {Uniform}}[-5,0]\) for \(i = 1, \ldots , N\).
-
(ii)
Specification of \(L_V, L_R,\) \(L_D^{\epsilon }\), \(L_C\), and \({\hat{b}}_1\). Since \(\left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| _2 = 11\) when \(N = 10\), \(L_R = r \left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| = 1.1\). Let \(\epsilon \) be defined as \(\epsilon =\frac{10}{L_{V}}\) and \(L_D^{\epsilon } = \tfrac{1}{\epsilon } = \tfrac{L_V}{10}\). It follows that \(L_C = L_V-L_R-L_D^{\epsilon }\) and \({\hat{b}}_1= L_C\).
-
(iii)
Specification of \(c_i(x_i)\). The cost function \(c_i\) is defined as \(c_i(x_i) = \tfrac{1}{2}{\hat{b}}_i x_i^2+a_i{x_i}\) where \(a_1, \ldots , a_N \sim {\mathtt {Uniform}}[2,3]\) and \({\hat{b}}_2, \cdots , {\hat{b}}_N \sim {\mathtt {Uniform}}[0,{\hat{b}}_1].\) Further, \(a \triangleq [a_1,\dots ,a_N]^{\top }\in {\mathbb {R}}^N\) and \(B\triangleq {{\,\mathrm{diag}\,}}({\hat{b}}_1,\dots ,{\hat{b}}_N)\) is a diagonal matrix with nonnegative elements.
Algorithm specifications We compare Algorithm 1 (RISFBF) with a stochastic forward-backward (SFB) scheme and a stochastic forward-backward-forward (SFBF) scheme. Solution quality is compared by estimating the residual function \(\mathsf {res}(x)=\Vert x-\Pi _{\mathcal {X}}(x-\lambda V^\epsilon (x))\Vert \). All of the schemes were implemented in MATLAB on a PC with 16GB RAM and 6-Core Intel Core i7 processor (2.6GHz).
(i) (SFB): The (SFB) scheme is defined as the recursion
where \(V^{\epsilon }(X_k) = {\mathbb {E}}_{\xi }[{\widehat{V}}^{\epsilon }(X_k,\xi )]\) and \(\lambda _k = \tfrac{1}{\sqrt{k}}\). The operator \(\Pi _{{\mathcal {X}}}[\cdot ]\) denotes the orthogonal projection onto the set \({\mathcal {X}}\). The initial point \(X_0\) is randomly generated in \([0,1]^N\).
(ii) (SFBF): The variance-reduced stochastic forward-backward-forward scheme we employ is defined by the updates
where \(A_{k}(X_{k})=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\widehat{V}}^{\epsilon }(X_k,\xi ^{(t)}_k)\), \(B_{k}(Y_k)=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\widehat{V}}^{\epsilon }(Y_k,\eta ^{(t)}_k)\). We choose a constant \(\lambda _k\equiv \lambda =\tfrac{1}{4L_V}\), and take \(m_k=\lfloor k^{1.01}\rfloor \) for merely monotone problems and \(m_k=\lfloor 1.01^{k}\rfloor \) for strongly monotone problems.
(iii) (RISFBF): In the implementation of Algorithm 1 we choose a constant steplength \(\lambda _k\equiv \lambda =\tfrac{1}{4L_V}\). In merely monotone settings, we utilize an increasing sequence \(\alpha _k=\alpha _0(1-\tfrac{1}{k+1})\), where \(\alpha _0=0.1\), the relaxation parameter sequence \(\rho _k\) defined as \(\rho _k=\tfrac{3(1-\alpha _0)^2}{2(2\alpha _k^2-\alpha _k+1)(1+L_V\lambda )}\), and \(m_k=\lfloor k^{1.01}\rfloor \). In strongly monotone regimes, we choose a constant inertial parameter \(\alpha _k\equiv \alpha =0.1\), a constant relaxation parameter \(\rho _k\equiv \rho =1\), and \(m_k=\lfloor 1.01^k\rfloor \).
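Algorithm 1 is not restated here, so the following Python snippet should be read as a hedged sketch of a generic relaxed inertial stochastic FBF update consistent with the parameter choices above, run on a hypothetical toy instance (\(V(x)=x\) with \(L_V=1\), \(T={{\,\mathrm{{\mathsf {N}}}\,}}_{[-1,1]}\), solution \({\bar{x}}=0\)) with zero-mean oracle noise and geometrically increasing mini-batches:

```python
# Hedged sketch of a relaxed inertial stochastic FBF iteration on a toy
# strongly monotone instance: V(x) = x (L_V = 1), T = N_[-1,1], x_bar = 0,
# with lambda = 1/(4 L_V), alpha = 0.1, rho = 1 and m_k ~ 1.01^k as above.
import random

random.seed(0)
proj = lambda x: max(-1.0, min(1.0, x))    # resolvent of T = N_[-1,1]
V = lambda x: x

def oracle(x, m):                          # mini-batch oracle with m noisy samples
    return V(x) + sum(random.gauss(0, 1) for _ in range(m)) / m

lam, alpha, rho = 0.25, 0.1, 1.0
x_prev = x = 0.9
for k in range(1, 501):
    m = int(1.01 ** k) + 1                 # geometric mini-batch growth
    z = x + alpha * (x - x_prev)           # inertial extrapolation
    a = oracle(z, m)
    y = proj(z - lam * a)                  # forward-backward step
    fbf = y + lam * (a - oracle(y, m))     # Tseng-type correction
    x_prev, x = x, (1 - rho) * z + rho * fbf

assert abs(x) < 0.5                        # iterate settles near x_bar = 0
```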
In Fig. 1, we compare the three schemes under maximal monotonicity and strong monotonicity, respectively, and examine their sensitivities to the inertial and relaxation parameters. Both sets of plots are based on selecting \(L_V=10^2\).
Key insights Several insights may be drawn from Table 1 and Figure 1.
-
(a)
First, from Table 1, one may conclude that on this class of problems, (RISFBF) and (SFBF) significantly outperform the (SFB) scheme, which is not surprising given that both employ increasing mini-batch sizes, leading to performance akin to that seen in deterministic schemes. We should note that when \({\mathcal {X}}\) is somewhat more complicated, the difference in run-times between SA schemes and mini-batch variants becomes more pronounced; in this instance, the set \({\mathcal {X}}\) is relatively simple to project onto and there is little difference in run-time across the three schemes.
-
(b)
Second, we observe that both (SFBF) and (RISFBF) can contend with poorly conditioned problems: as \(L_V\) grows, their performance does not degrade significantly in terms of empirical error. However, in both monotone and strongly monotone regimes, (RISFBF) provides consistently better solutions in terms of empirical error than (SFBF). Figure 1 displays the range of trajectories obtained for differing relaxation and inertial parameters, and in the instances considered, (RISFBF) shows consistent benefits over (SFBF).
-
(c)
Third, since such schemes display geometric rates of convergence for strongly monotone inclusion problems, this improvement is reflected in the empirical errors for strongly monotone versus merely monotone regimes.
5.2 Supervised learning with group variable selection
Our second numerical example considers the following population risk formulation of a composite absolute penalty (CAP) problem arising in supervised statistical learning [7]
where the feasible set \({\mathcal {W}}\subseteq {\mathbb {R}}^{d}\) is a Euclidean ball \({\mathcal {W}}\triangleq \{w \in {\mathbb {R}}^{d}\mid \Vert w\Vert _2 \le D\}\), and \(\xi =(a,b)\in {\mathbb {R}}^{d}\times {\mathbb {R}}\) denotes the random variable consisting of a set of predictors a and output b. The parameter vector w is the sparse linear hypothesis to be learned. The sparsity structure of w is represented by the group structure \({\mathcal {S}}\in 2^{\{1,\dots ,l\}}\). When the groups in \({\mathcal {S}}\) do not overlap, \(\sum _{\textit{g}\in {\mathcal {S}}} \Vert w_\textit{g}\Vert _2\) is referred to as the group lasso penalty [6, 88]. When the groups in \({\mathcal {S}}\) overlap, \(\sum _{\textit{g}\in {\mathcal {S}}} \Vert w_\textit{g}\Vert _2\) is still a norm, but one afflicted by singularities when some components \(w_\textit{g}\) are equal to zero. For any \(\textit{g} \in \{1, \cdots , l\}\), \(w_{\textit{g}}\) is a sparse vector constructed from the components of w whose indices are in \(\textit{g}\), i.e., \(w_\textit{g} := (w_i)_{i\in \textit{g}}\) with few non-zero components in \(w_\textit{g}\). Here, we assume that each group \(\textit{g} \in {\mathcal {S}}\) consists of k elements. Introduce the linear operator \(L:{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{k}\underbrace{\times \cdots \times }_{l-\text {times}}{\mathbb {R}}^{k}\), given by \(Lw=[\eta w_{g_{1}},\ldots ,\eta w_{g_{l}}]\). Let us also define
where \(\delta _{{\mathcal {W}}}(\cdot )\) denotes the indicator function with respect to the set \({\mathcal {W}}\). Then (CAP) becomes
This is clearly seen to be a special instance of the convex programming problem (2). Specifically, we let \({\mathsf {H}}_{1}={\mathbb {R}}^{d}\) with the standard Euclidean norm, and \({\mathsf {H}}_{2}={\mathbb {R}}^{k}\underbrace{\times \cdots \times }_{l-\text {times}}{\mathbb {R}}^{k}\) with the product norm
Since
the Fenchel-dual takes the form (3). Accordingly, a primal-dual pair for (CAP) is a root of the monotone inclusion (MI) with
involving \(d+kl\) variables.
Problem parameters for (CAP) We simulated data with \(d = 82\), covered by 10 groups of 10 variables with 2 variables of overlap between two successive groups: \(\{1,\dots ,10\},\{9,\dots ,18\},\dots ,\{73,\dots ,82\}\). We assume the nonzero entries of \(w_{\mathrm{true}}\) lie in the union of groups 4 and 5 and are sampled from i.i.d. Gaussian variables. The operator V(w, v) is estimated by the mini-batch estimator using \(m_{k}\) i.i.d. copies of the random input-output pair \(\xi =(a,b)\in {\mathbb {R}}^{d}\times {\mathbb {R}}\). Specifically, we draw each coordinate of the random vector a from the standard Gaussian distribution \({\mathtt {N}}(0,1)\) and generate \(b=a^{\top }w_{\mathrm{true}}+\varepsilon \) for \(\varepsilon \sim {\mathtt {N}}(0,\sigma ^{2}_{\varepsilon })\). In the concrete experiment reported here, the noise level is taken as \(\sigma _{\varepsilon }=0.1\). In all instances, the regularization parameter is chosen as \(\eta = 10^{-4}\). The accuracy of the feature extraction for an algorithm output w is evaluated by the relative error to the ground truth, defined as
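The simulated-data setup just described can be sketched as follows, assuming the standard relative error \(\Vert w-w_{\mathrm{true}}\Vert _2/\Vert w_{\mathrm{true}}\Vert _2\) as the accuracy metric:

```python
# Sketch of the (CAP) data generation: d = 82, ten groups of ten variables
# overlapping by two, ground truth supported on the union of groups 4 and 5.
import random

random.seed(1)
d, sigma_eps = 82, 0.1
groups = [list(range(8 * j, 8 * j + 10)) for j in range(10)]   # 0-based indices
assert groups[0] == list(range(0, 10)) and groups[-1] == list(range(72, 82))

support = set(groups[3] + groups[4])          # groups 4 and 5 (1-based labels)
w_true = [random.gauss(0, 1) if i in support else 0.0 for i in range(d)]

def sample(m):                                # m i.i.d. pairs (a, b)
    out = []
    for _ in range(m):
        a = [random.gauss(0, 1) for _ in range(d)]
        b = sum(ai * wi for ai, wi in zip(a, w_true)) + random.gauss(0, sigma_eps)
        out.append((a, b))
    return out

def rel_err(w):                               # assumed relative-error metric
    num = sum((wi - ti) ** 2 for wi, ti in zip(w, w_true)) ** 0.5
    return num / sum(ti ** 2 for ti in w_true) ** 0.5

assert len(sample(3)) == 3
assert rel_err(w_true) == 0.0 and rel_err([0.0] * d) == 1.0
```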
Algorithm specifications We compare (RISFBF) with the stochastic extragradient (SEG) and stochastic forward-backward-forward (SFBF) schemes and specify their algorithm parameters. Again, all schemes are run in MATLAB 2018b on a PC with 16 GB RAM and a 6-core Intel Core i7 processor (2.6 GHz).
-
(i)
(SEG): Set \({\mathcal {X}}\triangleq {\mathcal {W}}\times {{\,\mathrm{dom}\,}}(g^{*})\). The (SEG) scheme [32] utilizes the updates
$$\begin{aligned} \begin{aligned} Y_{k}&:= \Pi _{{\mathcal {X}}} \left[ X_k - \lambda _k A_{k}(X_k)\right] ,\\ X_{k+1}&:= \Pi _{{\mathcal {X}}} \left[ X_k - \lambda _k B_{k}(Y_{k})\right] , \end{aligned} \end{aligned}$$(SEG)where \(A_{k}(X_{k})=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}V(X_k,\xi _k^{(t)})\) and \(B_{k}(Y_k)=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}V(Y_k,\eta _k^{(t)})\) are mini-batch estimators built from \(m_k\) i.i.d. samples \(\xi _k^{(t)}\) and \(\eta _k^{(t)}\). In this scheme, \(\lambda _k\equiv \lambda \) is chosen to be \(\tfrac{1}{4L_{V}}\), where \(L_{V}\) is the Lipschitz constant of V. We assume \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).
-
(ii)
(SFBF): We use the algorithm parameters from (i). Specifically, we choose a constant step-length \(\lambda _k\equiv \lambda =\tfrac{1}{4L_{V}}\) and \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).
-
(iii)
(RISFBF): Here, we employ a constant step-length \(\lambda _k\equiv \lambda =\tfrac{1}{4 L_{V}}\), an increasing sequence \(\alpha _k=\alpha _0(1-\tfrac{1}{k+1})\), where \(\alpha _0=0.85\), a relaxation parameter sequence \(\rho _k=\tfrac{3(1-\alpha _0)^2}{2(2\alpha _k^2-\alpha _k+1)(1+L_{V}\lambda )}\), and assume \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).
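For concreteness, the (SEG) iteration displayed in item (i) can be sketched as follows for a Euclidean-ball feasible set. This is our own schematic implementation, not the authors' code; `batch_fn` stands in for the mini-batch sampler and `V` for the stochastic operator.

```python
import numpy as np

def proj_ball(x, D):
    # Euclidean projection onto {x : ||x||_2 <= D}
    n = np.linalg.norm(x)
    return x if n <= D else (D / n) * x

def seg(V, X0, lam, D, num_iters, batch_fn):
    # Y_k     = Pi_X[X_k - lam * A_k(X_k)]
    # X_{k+1} = Pi_X[X_k - lam * B_k(Y_k)]
    # with A_k, B_k mini-batch averages of V over fresh i.i.d. samples.
    X = X0
    for k in range(num_iters):
        xi = batch_fn(k)                           # samples for A_k
        A = np.mean([V(X, s) for s in xi], axis=0)
        Y = proj_ball(X - lam * A, D)
        eta = batch_fn(k)                          # fresh samples for B_k
        B = np.mean([V(Y, s) for s in eta], axis=0)
        X = proj_ball(X - lam * B, D)
    return X
```

Roughly speaking, the same skeleton yields a forward-backward-forward variant if the second projection is replaced by Tseng's correction step \(X_{k+1}=Y_k+\lambda (A_k(X_k)-B_k(Y_k))\).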
Insights We compare the performance of the schemes in Table 2 and observe that (RISFBF) outperforms its competitors in extracting the underlying features of the datasets. In Fig. 2, trajectories for (RISFBF), (SFBF) and (SEG) are presented, where a consistent benefit of employing (RISFBF) can be seen for a range of choices of \(\alpha _0\).
6 Conclusion
In a general structured monotone inclusion setting in Hilbert spaces, we introduce a relaxed inertial stochastic algorithm based on Tseng’s forward-backward-forward splitting method. Motivated by the gaps in convergence claims and rate statements in both deterministic and stochastic regimes, we develop a variance-reduced framework and make the following contributions: (i) Asymptotic convergence guarantees are provided under both increasing and constant mini-batch sizes, the latter requiring somewhat stronger assumptions on V; (ii) When V is monotone, rate statements provided in terms of a restricted gap function, inspired by the Fitzpatrick function for inclusions, show that the expected gap of an averaged sequence diminishes at the rate of \({\mathcal {O}}(1/k)\) and the oracle complexity of computing an \(\epsilon \)-solution is \({\mathcal {O}}(1/\epsilon ^{1+a})\) where \(a> 1\); (iii) When V is strongly monotone, a non-asymptotic linear rate statement can be proven with an oracle complexity of \({\mathcal {O}}(\log (1/\epsilon ))\) for computing an \(\epsilon \)-solution. In addition, a perturbed linear rate is also developed. It is worth emphasizing that the rate statements in the strongly monotone regime accommodate the possibility of a biased stochastic oracle. Unfortunately, the growth rates in batch-size may be onerous in some situations, motivating the analysis of a polynomial growth rate in sample-size which is easily modulated. This leads to an associated polynomial rate of convergence.
Various open questions arise from our analysis. First, we exclusively focused on a variance reduction technique based on increasing mini-batches. From the point of view of computations and oracle complexity, this approach can become quite costly. Exploiting different variance reduction techniques, perhaps taking special structure of the single-valued operator V into account (as in [57]), has the potential to improve the computational complexity of our proposed method. At the same time, this would complicate the analysis of the variance of the stochastic estimators considerably; consequently, we leave this as an important question for future research.
Second, our analysis needs knowledge about the Lipschitz constant L. While in deterministic regimes, line search techniques have obviated such a need, such avenues are far more challenging to adopt in stochastic regimes. Efforts to address this in variational regimes have centered around leveraging empirical process theory [33]. This remains a goal of future research. Another avenue emerges in applications where we can gain a reasonably good estimate about this quantity via some pre-processing of the data (see e.g. Section 6 in [62]). Developing such an adaptive framework robust to noise is an important topic for future research.
Availability of data materials
All data generated or analysed during this study are included in this published article.
Notes
Our analysis does not rely on this assumption. It is made here only for concreteness and because it is the most prevalent one in applications.
To wit, the function \(x\mapsto 2x^{2}-0.5x+1\) attains a global minimum at \(x=1/8\), which gives the global lower bound 31/32. Furthermore, the function \(x\mapsto (1-x)(1+4x)\) attains a global maximum at \(x=3/8\), with corresponding value 25/16.
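Both stationary values can be checked with exact rational arithmetic:

```python
from fractions import Fraction as F

# f(x) = 2x^2 - 0.5x + 1 has its stationary point at x = 1/8, with value 31/32
x = F(1, 8)
assert 2 * x**2 - F(1, 2) * x + 1 == F(31, 32)

# g(x) = (1 - x)(1 + 4x) has its stationary point at x = 3/8, with value 25/16
x = F(3, 8)
assert (1 - x) * (1 + 4 * x) == F(25, 16)
```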
References
Attouch, H., Cabot, A.: Convergence of a relaxed inertial forward-backward algorithm for structured monotone inclusions. Appl. Math. Optim. 80(3), 547–598 (2019). https://doi.org/10.1007/s00245-019-09584-z
Attouch, H., Cabot, A.: Convergence of a relaxed inertial proximal algorithm for maximally monotone operators. Math. Program. 184(1), 243–287 (2020). https://doi.org/10.1007/s10107-019-01412-0
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, (2016)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems - Volume I and Volume II. Springer, (2003)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1), 259–268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F
Jacob, L., Obozinski, G., Vert, J.-P.: Group lasso with overlap and graph lasso. Proceedings of the 26th annual international conference on machine learning, pp. 433–440 (2009)
Zhao, P., Rocha, G., Yu, B.: The composite absolute penalties family for grouped and hierarchical variable selection. Ann. Stat. 37(6A), 3468–3497 (2009). https://doi.org/10.1214/07-AOS584
Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., Knight, K.: Sparsity and smoothness via the fused lasso. J. Royal Statistical Soc.: Series B (Statistical Methodology) 67(1), 91–108 (2005). https://doi.org/10.1111/j.1467-9868.2005.00490.x
Tibshirani, R.J., Taylor, J.: The solution path of the generalized lasso. Ann. Statist. 39(3), 1335–1371 (2011). https://doi.org/10.1214/11-AOS878
Attouch, H., Briceno-Arias, L.M., Combettes, P.L.: A parallel splitting method for coupled monotone inclusions. SIAM J. Control. Optim. 48(5), 3246–3270 (2010). https://doi.org/10.1137/090754297
Latafat, P., Freris, N.M., Patrinos, P.: A new randomized block-coordinate primal-dual proximal algorithm for distributed optimization. IEEE Trans. Autom. Control 64(10), 4050–4065 (2019). https://doi.org/10.1109/TAC.2019.2906924
Rockafellar, R.T.: Conjugate Duality and Optimization. Society for Industrial and Applied Mathematics (1974)
Combettes, P.L., Pesquet, J.-C.: Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued and variational analysis 20(2), 307–330 (2012)
Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53(6), 1462–1475 (2008). https://doi.org/10.1109/TAC.2008.925853
Shanbhag, U.V.: Chapter 5. Stochastic Variational Inequality Problems: Applications, Analysis, and Algorithms, pp. 71–107 (2013). https://doi.org/10.1287/educ.2013.0120
Staudigl, M., Mertikopoulos, P.: Convergent noisy forward-backward-forward algorithms in non-monotone variational inequalities. IFAC-PapersOnLine 52(3), 120–125 (2019)
Mertikopoulos, P., Staudigl, M.: Convergence to Nash Equilibrium in Continuous Games with Noisy First-order Feedback. In: 56th IEEE Conference on Decision and Control (2017)
Briceno-Arias, L.M., Combettes, P.L.: Monotone operator methods for Nash equilibria in non-potential games, pp. 143–159. Springer (2013)
Yi, P., Pavel, L.: An operator splitting approach for distributed generalized Nash equilibria computation. Automatica 102, 111–121 (2019). https://doi.org/10.1016/j.automatica.2019.01.008
Franci, B., Staudigl, M., Grammatico, S.: Distributed forward-backward (half) forward algorithms for generalized nash equilibrium seeking. In: 2020 European Control Conference (ECC), pp. 1274–1279 (2020). https://doi.org/10.23919/ECC51009.2020.9143676
Friesz, T.L., Bernstein, D., Smith, T.E., Tobin, R.L., Wie, B.W.: Variational inequality formulation of the dynamic network user equilibrium. Oper. Res. 41(1), 179–191 (1993)
Fukushima, M.: The primal Douglas-Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Math. Program. 72(1), 1–15 (1996). https://doi.org/10.1007/BF02592328
Han, K., Eve, G., Friesz, T.L.: Computing dynamic user equilibria on large-scale networks with software implementation. Netw. Spat. Econ. 19(3), 869–902 (2019). https://doi.org/10.1007/s11067-018-9433-y
Börgens, E., Kanzow, C.: ADMM-type methods for generalized Nash equilibrium problems in Hilbert spaces. SIAM J. Optim., 377–403 (2021). https://doi.org/10.1137/19M1284336
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control. Optim. 38(2), 431–446 (2000). https://doi.org/10.1137/S0363012998338806
Boţ, R.I., Mertikopoulos, P., Staudigl, M., Vuong, P.T.: Minibatch forward-backward-forward methods for solving stochastic variational inequalities. Stochastic Syst. (2021). https://doi.org/10.1287/stsy.2019.0064
Cui, S., Shanbhag, U.V.: On the computation of equilibria in monotone and potential stochastic hierarchical game. arXiv preprint arXiv:2104.07860 (2021)
Thong, D.V., Gibali, A., Staudigl, M., Vuong, P.T.: Computing dynamic user equilibrium on large-scale networks without knowing global parameters. Netw. Spat. Econ. 21, 735–768 (2021)
Diakonikolas, J., Daskalakis, C., Jordan, M.: Efficient methods for structured nonconvex-nonconcave min-max optimization. International Conference on Artificial Intelligence and Statistics, pp. 2746–2754 (2021)
Fitzpatrick, S.: Representing monotone operators by convex functions. In: Workshop/Miniconference on Functional Analysis and Optimization, pp. 59–65 (1988). Centre for Mathematics and its Applications, Mathematical Sciences Institute
Simons, S., Zalinescu, C.: A new proof for Rockafellar’s characterization of maximal monotone operators. Proc. Am. Math. Soc. 132(10), 2969–2972 (2004)
Iusem, A., Jofré, A., Oliveira, R.I., Thompson, P.: Extragradient method with variance reduction for stochastic variational inequalities. SIAM J. Optim. 27(2), 686–724 (2017)
Iusem, A.N., Jofré, A., Oliveira, R.I., Thompson, P.: Variance-based Extragradient methods with line search for stochastic variational inequalities. SIAM J. Optim. 29(1), 175–206 (2019). https://doi.org/10.1137/17M1144799
Geiersbach, C., Pflug, G.C.: Projected stochastic gradients for convex constrained problems in Hilbert spaces. SIAM J. Optim. 29(3), 2079–2099 (2019). https://doi.org/10.1137/18M1200208
Geiersbach, C., Wollner, W.: A stochastic gradient method with mesh refinement for PDE-constrained optimization under uncertainty. SIAM J. Sci. Comput. 42(5), 2750–2772 (2020). https://doi.org/10.1137/19M1263297
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964). https://doi.org/10.1016/0041-5553(64)90137-5
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic Publishers, (2004)
Polyak, B.T.: Introduction to Optimization. Optimization Software, (1987)
Attouch, H., Maingé, P.-E.: Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects. ESAIM: Control, Opt. Calculus of Variations 17(3), 836–857 (2011)
Boţ, R.I., Csetnek, E.: Second order forward-backward dynamical systems for monotone inclusion problems. SIAM J. Control. Optim. 54(3), 1423–1443 (2016). https://doi.org/10.1137/15M1012657
Attouch, H., Peypouquet, J.: Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Math. Program. 174(1), 391–432 (2019). https://doi.org/10.1007/s10107-018-1252-x
Su, W., Boyd, S., Candes, E.J.: A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. J. Mach. Learn. Res. (2016)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(o(1/k^{2})\). Soviet Math. Doklady 27(2), 372–376 (1983)
Gadat, S., Panloup, F., Saadane, S.: Stochastic heavy ball. Electron. J. Statistics 12(1), 461–529 (2018). https://doi.org/10.1214/18-EJS1395
Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imag. Vis. 51(2), 311–325 (2015). https://doi.org/10.1007/s10851-014-0523-2
Briceño-Arias, L.M., Combettes, P.L.: A monotone+skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 21(4), 1230–1250 (2011). https://doi.org/10.1137/10081602X
Bot, R.I., Csetnek, E.R.: An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Num. Algorithms 71(3), 519–540 (2016). https://doi.org/10.1007/s11075-015-0007-5
Bot, R.I., Sedlmayer, M., Vuong, P.T.: A relaxed inertial forward-backward-forward algorithm for solving monotone inclusions with application to GANS. arXiv preprint arXiv:2003.07886 (2020)
Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)
Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst. 1(1), 17–58 (2011). https://doi.org/10.1214/10-SSY011
Yousefian, F., Nedić, A., Shanbhag, U.V.: On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems. Math. Program. 165(1), 391–431 (2017). https://doi.org/10.1007/s10107-017-1175-y
Gidel, G., Berard, H., Vignoud, G., Vincent, P., Lacoste-Julien, S.: A variational inequality perspective on generative adversarial networks. arXiv preprint arXiv:1802.10551 (2018)
Mishchenko, K., Kovalev, D., Shulgin, E., Richtárik, P., Malitsky, Y.: Revisiting stochastic extragradient. In: International Conference on Artificial Intelligence and Statistics, pp. 4573–4582 (2020). PMLR
Kannan, A., Shanbhag, U.V.: Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Comput. Optim. Appl. 74(3), 779–820 (2019)
Cui, S., Shanbhag, U.V.: On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems. Set-Valued and Variational Analysis (to appear) (2021)
Rosasco, L., Villa, S., Vũ, B.C.: A stochastic inertial forward-backward splitting algorithm for multivariate monotone inclusions. Optimization 65(6), 1293–1314 (2016). https://doi.org/10.1080/02331934.2015.1127371
Palaniappan, B., Bach, F.: Stochastic variance reduction methods for saddle-point problems. In: Advances in Neural Information Processing Systems, pp. 1416–1424 (2016)
Gower, R.M., Schmidt, M., Bach, F., Richtárik, P.: Variance-reduced methods for machine learning. Proc. IEEE 108(11), 1968–1983 (2020). https://doi.org/10.1109/JPROC.2020.3028013
Friedlander, M.P., Schmidt, M.: Hybrid deterministic-stochastic methods for data fitting. SIAM J. Sci. Comput. 34(3), 1380–1405 (2012)
Jalilzadeh, A., Shanbhag, U.V., Blanchet, J.H., Glynn, P.W.: Smoothed variable sample-size accelerated proximal methods for nonsmooth stochastic convex programs. arXiv preprint arXiv:1803.00718 (2018)
Jofré, A., Thompson, P.: On variance reduction for stochastic smooth convex optimization with multiplicative noise. Math. Program. 174(1–2), 253–292 (2019)
Ghadimi, S., Lan, G., Zhang, H.: Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Math. Program. 155(1–2), 267–305 (2016)
Gunzburger, M.D., Webster, C.G., Zhang, G.: Stochastic finite element methods for partial differential equations with random input data. Acta Numer. 23, 521–650 (2014)
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Dordrecht (2004)
Davis, D., Drusvyatskiy, D.: Stochastic model-based minimization of weakly convex functions. SIAM J. Optim. 29(1), 207–239 (2019)
Lan, G.: First-order and Stochastic Optimization Methods for Machine Learning. Springer Series in the Data Sciences. Springer, (2020)
Combettes, P.L., Pesquet, J.-C.: Stochastic Quasi-Fejér block-coordinate fixed point iterations with random sweeping. SIAM J. Optim. 25(2), 1221–1248 (2015). https://doi.org/10.1137/140971233
Combettes, P.L., Pesquet, J.-C.: Stochastic Quasi-Fejér block-coordinate fixed point iterations with random sweeping ii: mean-square and linear convergence. Math. Program. 174(1), 433–451 (2019). https://doi.org/10.1007/s10107-018-1296-y
Rosasco, L., Villa, S., Vũ, B.C.: Stochastic Forward-Backward splitting for monotone inclusions. J. Optim. Theory Appl. 169(2), 388–406 (2016). https://doi.org/10.1007/s10957-016-0893-2
Spall, J.C.: Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Trans. Autom. Control 37(3), 332–341 (1992)
Spall, J.C.: A one-measurement form of simultaneous perturbation stochastic approximation. Automatica 33(1), 109–112 (1997)
Duvocelle, B., Mertikopoulos, P., Staudigl, M., Vermeulen, D.: Learning in time-varying games. arXiv preprint arXiv:1809.03066 (2018)
Barty, K., Roy, J.-S., Strugarek, C.: Hilbert-valued perturbed subgradient algorithms. Math. Oper. Res. 32(3), 551–562 (2007). https://doi.org/10.1287/moor.1070.0253
Barty, K., Roy, J.-S., Strugarek, C.: A stochastic gradient type algorithm for closed-loop problems. Math. Program. 119(1), 51–78 (2009). https://doi.org/10.1007/s10107-007-0201-x
Lei, J., Shanbhag, U.V.: Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems over graphs. arXiv:1811.11246 (2019)
Lei, J., Shanbhag, U.V.: Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes. Optimization Methods and Software, pp. 1–31 (2020)
Borwein, J.M., Dutta, J.: Maximal monotone inclusions and Fitzpatrick functions. J. Optim. Theory Appl. 171(3), 757–784 (2016)
Auslender, A., Gourgand, M., Guillet, A.: Resolution numerique d’inegalites variationnelles. In: Lecture Notes in Economics and Mathematical Systems (Mathematical Economics) (1974)
Chen, Y., Lan, G., Ouyang, Y.: Optimal primal-dual methods for a class of saddle point problems. SIAM J. Optim. 24(4), 1779–1814 (2014). https://doi.org/10.1137/130919362
Monteiro, R.D.C., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20(6), 2755–2787 (2010). https://doi.org/10.1137/090753127
Chen, Y., Lan, G., Ouyang, Y.: Accelerated schemes for a class of variational inequalities. Math. Program. (2017). https://doi.org/10.1007/s10107-017-1161-4
Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109(2), 319–344 (2007). https://doi.org/10.1007/s10107-006-0034-z
Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. 184(1), 383–410 (2020). https://doi.org/10.1007/s10107-019-01416-w
Burachik, R.S., Millán, R.D.: A projection algorithm for non-monotone variational inequalities. Set-Valued and Variational Anal. 28(1), 149–166 (2020). https://doi.org/10.1007/s11228-019-00517-0
Rockafellar, R.T., Wets, R.J.: Stochastic variational inequalities: single-stage to multistage. Math. Program. 165(1), 331–360 (2017). https://doi.org/10.1007/s10107-016-0995-5
Rockafellar, T.R., Wets, R.J.-B.: Variational Analysis. Springer, (1998)
Shapiro, A., Dentcheva, D., Ruszczyński, A..X.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, (2009)
Friedman, J., Hastie, T., Tibshirani, R.: The Elements of Statistical Learning vol. 1. Springer, (2001)
Acknowledgements
M. Staudigl acknowledges support from the COST Action CA16228 “European Network for Game Theory”.
Author information
Authors and Affiliations
Contributions
All authors contributed equally to the completion of this manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A Auxiliary results
Lemma 16
For \(x,y\in {\mathsf {H}}\) and scalars \(\alpha ,\beta \ge 0\) with \(\alpha +\beta =1\), it holds that
$$\begin{aligned} \Vert \alpha x+\beta y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+\beta \Vert y\Vert ^{2}-\alpha \beta \Vert x-y\Vert ^{2}. \end{aligned}$$
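Assuming the lemma refers to the standard convex-combination identity \(\Vert \alpha x+\beta y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+\beta \Vert y\Vert ^{2}-\alpha \beta \Vert x-y\Vert ^{2}\) for \(\alpha +\beta =1\), a quick numerical sanity check reads:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
alpha = 0.3
beta = 1 - alpha

# ||alpha*x + beta*y||^2  vs  alpha*||x||^2 + beta*||y||^2 - alpha*beta*||x - y||^2
lhs = np.linalg.norm(alpha * x + beta * y) ** 2
rhs = (alpha * np.linalg.norm(x) ** 2 + beta * np.linalg.norm(y) ** 2
       - alpha * beta * np.linalg.norm(x - y) ** 2)
assert abs(lhs - rhs) < 1e-12
```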
We recall the Minkowski inequality: for \(X,Y\in L^{p}(\Omega ,{\mathcal {F}},{\mathbb {P}};{\mathsf {H}})\), a sub-\(\sigma \)-algebra \({\mathcal {G}}\subseteq {\mathcal {F}}\), and \(p\in [1,\infty ]\),
$$\begin{aligned} {\mathbb {E}}\left[ \Vert X+Y\Vert ^{p}\mid {\mathcal {G}}\right] ^{1/p}\le {\mathbb {E}}\left[ \Vert X\Vert ^{p}\mid {\mathcal {G}}\right] ^{1/p}+{\mathbb {E}}\left[ \Vert Y\Vert ^{p}\mid {\mathcal {G}}\right] ^{1/p}. \end{aligned}$$
In the convergence analysis, we use the Robbins-Siegmund Lemma [38, Lemma 11, pg. 50].
Lemma 17
(Robbins-Siegmund) Let \((\Omega ,{\mathcal {F}},{{\mathbb {F}}}=({\mathcal {F}}_{n})_{n\ge 0},{\mathbb {P}})\) be a discrete stochastic basis. Let \((v_{n})_{n\ge 1},(u_{n})_{n\ge 1}\in \ell ^{0}_{+}({{\mathbb {F}}})\) and \((\theta _{n})_{n\ge 1},(\beta _{n})_{n\ge 1}\in \ell ^{1}_{+}({{\mathbb {F}}})\) be such that for all \(n \ge 0\),
$$\begin{aligned} {\mathbb {E}}[v_{n+1}\mid {\mathcal {F}}_{n}]\le (1+\theta _{n})v_{n}-u_{n}+\beta _{n}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$
Then \((v_{n})_{n\ge 0}\) converges a.s. to a random variable v, and \((u_{n})_{n\ge 1}\in \ell ^{1}_{+}({{\mathbb {F}}})\).
Lemma 18
Let \(0<q<p<1\). If \(D\ge \frac{1}{\exp (1)\ln (p/q)}\), then \(zq^{z}\le D p^{z}\) for all \(z\ge 0\).
Proof
Since the claim is trivial at \(z=0\), it suffices to show \(zq^{z}\le Dp^{z}\) for all \(z>0\). Taking logarithms, this is equivalent to \(f(z)\ge 0\) for all \(z>0\), where \(f(z)\triangleq \ln (D)-\ln (z)-z\ln (q/p)\). Define the extended-valued function \(f:[0,\infty )\rightarrow [-\infty ,\infty ]\) by this formula for \(z>0\), and \(f(0)=\infty \). For all \(z>0\), simple calculus shows \(f'(z)=-1/z-\ln (q/p)\) and \(f''(z)=1/z^{2}>0\). Hence, f is a convex function with a unique minimizer \(z_{\min }=\frac{1}{\ln (p/q)}>0\) (note that \(\ln (p/q)>0\) since \(0<q<p<1\)) and a corresponding function value \(f(z_{\min })=\ln (D)+\ln (\ln (p/q))+1\). Consequently, for \(D\ge D_{\min }\triangleq \frac{1}{\exp (1)\ln (p/q)}\), we see that \(f(z_{\min })\ge 0\), so that \(f(z)\ge 0\) for all \(z>0\), and thus \(zq^{z}\le D p^{z}\) for all \(z\ge 0\). \(\square \)
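A numerical sanity check of the bound in Lemma 18, for the illustrative (our own) choice \(p=0.9\), \(q=0.5\):

```python
import math

p, q = 0.9, 0.5
D = 1 / (math.e * math.log(p / q))   # D_min from Lemma 18

# Verify z * q^z <= D * p^z on a fine grid of z-values
for i in range(5000):
    z = i / 100.0
    assert z * q**z <= D * p**z + 1e-12

# The bound is tight at z_min = 1 / ln(p/q): both sides coincide there
z_min = 1 / math.log(p / q)
assert abs(z_min * q**z_min - D * p**z_min) < 1e-12
```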
About this article
Cite this article
Cui, S., Shanbhag, U., Staudigl, M. et al. Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces. Comput Optim Appl 83, 465–524 (2022). https://doi.org/10.1007/s10589-022-00399-3