1 Introduction

1.1 Problem formulation and motivation

A wide range of problems in areas such as optimization, variational inequalities, game theory, signal processing, or traffic theory can be reduced to solving inclusions involving set-valued operators in a Hilbert space \({\mathsf {H}}\), i.e. to finding a point \(x\in {\mathsf {H}}\) such that \(0\in F(x)\), where \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is a set-valued operator. In many applications such inclusion problems display a specific structure revealing that the operator F can be additively decomposed. This leads us to the main problem we consider in this paper.

Problem 1

Let \({\mathsf {H}}\) be a real separable Hilbert space with inner product \(\left\langle \cdot ,\cdot \right\rangle \) and associated norm \(\left\| \cdot \right\| =\sqrt{\left\langle \cdot ,\cdot \right\rangle }\). Let \(T:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) and \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) be maximally monotone operators, such that V is L-Lipschitz continuous. The problem is to

$$\begin{aligned} \text { find }x\in {\mathsf {H}}\text { such that } 0\in F(x)\triangleq V(x)+T(x). \end{aligned}$$
(MI)

We assume that Problem 1 is well-posed:

Assumption 1

\({\mathsf {S}}\triangleq \mathsf {Zer}(F)\ne \varnothing \).

We are interested in the case where (MI) is solved by an iterative algorithm based on a stochastic oracle (SO) representation of the operator V. Specifically, when solving the problem, the algorithm makes calls to the SO. At each call, the SO receives as input a search point \(x\in {\mathsf {H}}\) generated by the algorithm on the basis of the information accumulated so far, and returns the output \({\hat{V}}(x,\xi )\), where \(\xi \) is a random variable defined on some given probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), taking values in a measurable set \(\Xi \) with law \({{\mathsf {P}}}={\mathbb {P}}\circ \xi ^{-1}\). In most parts of this paper, as in the vast majority of contributions on stochastic variational problems, it is assumed that the output of the SO is unbiased,

$$\begin{aligned} V(x) ={\mathbb {E}}_{\xi }[{\hat{V}}(x,\xi )]=\int _{\Xi }{\hat{V}}(x,z)\,d{{\mathsf {P}}}(z)\qquad \forall x\in {\mathsf {H}}. \end{aligned}$$
(1)

Such stochastic inclusion problems arise, either directly or through an appropriate reformulation, in numerous settings of fundamental importance in mathematical optimization and equilibrium theory. An excellent survey of the existing techniques for solving problem (MI) can be found in [3] (in general Hilbert spaces) and [4] (in the finite-dimensional case).

1.2 Motivating examples

In what follows, we provide some motivating examples.

Example 1

(Stochastic Convex Optimization) Let \({\mathsf {H}}_{1},{\mathsf {H}}_{2}\) be separable Hilbert spaces. A large class of stochastic optimization problems, with a wide range of applications in signal processing, machine learning and control, is given by

$$\begin{aligned} \min _{u\in {\mathsf {H}}_{1}} \{f(u)+h(u)+g(Lu)\} \end{aligned}$$
(2)

where \(h:{\mathsf {H}}_{1}\rightarrow {\mathbb {R}}\) is a convex differentiable function with a Lipschitz continuous gradient \(\nabla h\), represented as \(h(u)={\mathbb {E}}_{\xi }[{\hat{h}}(u,\xi )]\). \(f:{\mathsf {H}}_{1}\rightarrow (-\infty ,\infty ]\) and \(g:{\mathsf {H}}_{2}\rightarrow (-\infty ,\infty ]\) are proper, convex lower semi-continuous functions, and \(L:{\mathsf {H}}_{1}\rightarrow {\mathsf {H}}_{2}\) is a bounded linear operator. Problem (2) gains particular relevance in machine learning, where usually h(u) is a convex data fidelity term (e.g. a population risk functional), and g(Lu) and f(u) embody penalty or regularization terms; see e.g. total variation [5], hierarchical variable selection [6, 7], and graph regularization [8, 9]. Applications in control and engineering are given in [10, 11]. We refer to (2) as the primal problem. Using Fenchel-Rockafellar duality [3, ch.19], the dual problem of (2) is given by

$$\begin{aligned} \min _{v\in {\mathsf {H}}_{2}}\{(f+h)^{*}(-L^{*}v)+g^{*}(v)\}, \end{aligned}$$
(3)

where \(g^{*}\) is the Fenchel conjugate of g and \((f+h)^{*}(w)=f^{*}\square h^{*}(w)=\inf _{u\in {\mathsf {H}}_{1}}\{f^{*}(u)+h^{*}(w-u)\}\) is the infimal convolution of the conjugate functions \(f^{*}\) and \(h^{*}\). Combining the primal problem (2) with its dual (3), we obtain the saddle-point problem

$$\begin{aligned} \inf _{u\in {\mathsf {H}}_{1}}\sup _{v\in {\mathsf {H}}_{2}}\{f(u)+h(u)-g^{*}(v)+\left\langle Lu,v\right\rangle \}. \end{aligned}$$
(4)

Following classical Karush-Kuhn-Tucker theory [12], the primal-dual optimality conditions associated with (4) are concisely represented by the following monotone inclusion: Find \({\bar{x}}=({\bar{u}},{\bar{v}})\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}\equiv {\mathsf {H}}\) such that

$$\begin{aligned} -L^{*}{\bar{v}}\in \partial f({\bar{u}})+\nabla h({\bar{u}}),\text { and }L{\bar{u}}\in \partial g^{*}({\bar{v}}). \end{aligned}$$
(5)

We may compactly summarize these conditions in terms of the zero-finding problem (MI) using the operators V and T, defined as

$$\begin{aligned} V(u,v)\triangleq ( \nabla h(u)+L^{*}v, -Lu ) \text { and } T(u,v)\triangleq \partial f(u)\times \partial g^{*}(v). \end{aligned}$$

Note that the operator \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is the sum of a maximally monotone and a skew-symmetric operator. Hence, in general, it is not cocoercive. Conditions on the data guaranteeing Assumption 1 are stated in [13].

Since h(u) is represented as an expected value, we need to appeal to simulation-based methods to evaluate its gradient. Also, significant computational speedups can be achieved if we are able to sample the skew-symmetric linear operator \((u,v)\mapsto (L^{*}v,-Lu)\) in an efficient way. Hence, we assume that there exists an SO that provides unbiased estimators of the gradient \(\nabla h(u)\) and of the pair \((L^{*}v,-Lu)\). More specifically, given the current position \(x=(u,v)\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}\), the oracle will output the random estimators \({\hat{H}}(u,\xi ),{\hat{L}}_{u}(u,\xi ),{\hat{L}}_{v}(v,\xi )\) such that

$$\begin{aligned} {\mathbb {E}}_{\xi }[{\hat{H}}(u,\xi )]=\nabla h(u),\; {\mathbb {E}}_{\xi }[{\hat{L}}_{u}(u,\xi )]=Lu,\text { and }{\mathbb {E}}_{\xi }[{\hat{L}}_{v}(v,\xi )]=L^{*}v. \end{aligned}$$

This oracle feedback generates the random operator \({\hat{V}}(x,\xi )=({\hat{H}}(u,\xi )+{\hat{L}}_{v}(v,\xi ),-{\hat{L}}_{u}(u,\xi ))\), which allows us to approach the saddle-point problem (4) via simulation-based techniques.
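For concreteness, the following minimal Python sketch instantiates such an oracle on a hypothetical finite-dimensional instance; the least-squares data model for h, the matrix L, and the column/row subsampling estimators are illustrative assumptions and not part of the formulation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (assumptions for illustration): h(u) = E_xi[ (1/2)(<a_xi,u> - b_xi)^2 ],
# and a fixed n2 x n1 matrix L accessed through unbiased column/row subsampling.
n1, n2 = 5, 3
L = rng.standard_normal((n2, n1))

def oracle_V(u, v):
    """One unbiased draw of V_hat((u,v), xi) = (H_hat(u,xi) + Lv_hat(v,xi), -Lu_hat(u,xi))."""
    a = rng.standard_normal(n1)                  # least-squares sample (a_xi, b_xi)
    b = rng.standard_normal()
    H_hat = (a @ u - b) * a                      # E[H_hat] = grad h(u)
    j = rng.integers(n1)
    Lu_hat = n1 * u[j] * L[:, j]                 # E[Lu_hat] = L u
    i = rng.integers(n2)
    Ltv_hat = n2 * v[i] * L[i, :]                # E[Ltv_hat] = L^* v
    return np.concatenate([H_hat + Ltv_hat, -Lu_hat])
```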

Example 2

(Stochastic variational inequality problems) There are a multitude of examples of monotone inclusion problems (MI) where the single-valued map V is not the gradient of a convex function. An important model class where this is the case is the stochastic variational inequality (SVI) problem. Due to their large number of applications, SVIs have received enormous interest over the last several years from various communities [14,15,16,17]. This problem emerges when V(x) is represented as an expected value as in (1) and \(T(x)=\partial g(x)\) for some proper, convex, lower semi-continuous function \(g:{\mathsf {H}}\rightarrow (-\infty ,\infty ]\). In this case, the resulting structured monotone inclusion problem can be equivalently stated as

$$\begin{aligned} \text {find }{\bar{x}}\in {\mathsf {H}}\text { s.t. }\left\langle V({\bar{x}}),x-{\bar{x}}\right\rangle +g(x)-g({\bar{x}})\ge 0\quad \forall x\in {\mathsf {H}}. \end{aligned}$$
(6)

An important and frequently studied special case of (6) arises if g is the indicator function of a given closed and convex subset \({\mathsf {C}}\subset {\mathsf {H}}\). In this case the set-valued operator T becomes the normal cone map

$$\begin{aligned} T(x)={{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}(x)\triangleq \left\{ \begin{array}{ll} \left\{ p\in {\mathsf {H}}\vert \sup _{y\in {\mathsf {C}}}\left\langle y-x,p\right\rangle \le 0\right\} &{} \text {if }x\in {\mathsf {C}},\\ \varnothing &{} \text {else}. \end{array} \right. \end{aligned}$$
(7)

This formulation covers many fundamental problems, including fixed point problems, Nash equilibrium problems and complementarity problems [4]. Consequently, the equilibrium condition (6) reduces to

$$\begin{aligned} \text {find }{\bar{x}}\in {\mathsf {C}}\text { s.t. }\left\langle V({\bar{x}}),x-{\bar{x}}\right\rangle \ge 0\quad \forall x\in {\mathsf {C}}. \end{aligned}$$

1.3 Contributions

Despite the advances in stochastic optimization and variational inequalities, the algorithmic treatment of general monotone inclusion problems under stochastic uncertainty is a largely unexplored field. This is rather surprising given the vast amount of applications of maximally monotone inclusions in control and engineering, encompassing distributed computation of generalized Nash equilibria [18,19,20], traffic systems [21,22,23], and PDE-constrained optimization [24]. The first major aim of this manuscript is to introduce and investigate a relaxed inertial stochastic forward-backward-forward (RISFBF) method, building on an operator splitting scheme originally due to Paul Tseng [25]. RISFBF produces three sequences \(\{(X_{k},Y_{k},Z_{k});k\in {\mathbb {N}}\}\), defined as

$$\begin{aligned} \begin{aligned} Z_{k}&=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ Y_{k}&=J_{\lambda _{k}T}(Z_{k}-\lambda _{k}A_k(Z_{k})),\\ X_{k+1}&=(1-\rho _{k})Z_{k}+\rho _{k}[Y_{k}+\lambda _{k}(A_k(Z_{k})-B_k(Y_{k}))]. \end{aligned} \end{aligned}$$
(RISFBF)

The data involved in this scheme are explained as follows:

  • \(A_k(Z_{k})\) and \(B_k(Y_{k})\) are random estimators of V obtained by consulting the SO at search points \(Z_k\) and \(Y_k\), respectively;

  • \((\alpha _{k})_{k\in {\mathbb {N}}}\) is a sequence of non-negative numbers regulating the memory, or inertia of the method;

  • \((\lambda _{k})_{k\in {\mathbb {N}}}\) is a positive sequence of step-sizes;

  • \((\rho _{k})_{k\in {\mathbb {N}}}\) is a non-negative relaxation sequence.

If \(\alpha _{k}=0\) and \(\rho _{k}=1\) the above scheme reduces to the stochastic forward-backward-forward method developed in [26, 27], with important applications in Gaussian communication networks [16] and dynamic user equilibrium problems [28]. However, even more connections to existing methods can be made.

Stochastic Extragradient If \(T = \{0\}\), we obtain the inertial extragradient method

$$\begin{aligned} Z_{k}&=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ Y_{k}&=Z_{k}-\lambda _{k}A_{k}(Z_{k}),\\ X_{k+1}&=Z_{k}-\rho _{k}\lambda _{k}B_{k}(Y_{k}). \end{aligned}$$

If \(\alpha _{k}=0\), this reduces to a generalized extragradient method

$$\begin{aligned} Y_{k}&=X_{k}-\lambda _{k}A_{k}(X_{k}),\\ X_{k+1}&=X_{k}-\lambda _{k}\rho _{k}B_{k}(Y_{k}), \end{aligned}$$

recently introduced in [29].

Proximal Point Method If \(V=0\), the method reduces to the well-known deterministic proximal point algorithm [2], overlaid by inertial and relaxation effects. The scheme reads explicitly as

$$\begin{aligned}&Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\&X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}J_{\lambda _{k}T}(Z_{k}). \end{aligned}$$

The list of our contributions reads as follows:

  1. (i)

    Wide Applicability A key argument in favor of Tseng’s operator splitting method is that it is provably convergent when solving structured monotone inclusions of the type (MI), without imposing cocoercivity of the single-valued part V. This is a remarkable advantage relative to the perhaps more familiar and direct forward-backward splitting methods (aka projected (stochastic) gradient descent in the potential case). In particular, our scheme is applicable to the primal-dual splitting described in Example 1.

  2. (ii)

    Asymptotic guarantees We show that under suitable assumptions on the relaxation sequence \((\rho _k)_{k\in {\mathbb {N}}}\), the non-decreasing inertial sequence \((\alpha _k)_{k\in {\mathbb {N}}}\), and step-length sequence \((\lambda _k)_{k\in {\mathbb {N}}}\), the generated stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) weakly almost surely converges to a random variable with values in \({\mathsf {S}}\). Assuming demiregularity of the operators yields strong convergence in the real (possibly infinite-dimensional) Hilbert space.

  3. (iii)

Non-asymptotic linear rate under strong monotonicity of V When V is strongly monotone, strong convergence of the last iterate is shown and the sequence admits a non-asymptotic linear rate of convergence without requiring conditional unbiasedness of the SO. In particular, we show that the iteration and oracle complexity of computing an \(\epsilon \)-solution is no worse than \({\mathcal {O}}(\log (\tfrac{1}{\epsilon }))\) and \({\mathcal {O}}(\tfrac{1}{\epsilon })\), respectively.

  4. (iv)

    Non-asymptotic sublinear rate under monotonicity of V When V is monotone, by leveraging the Fitzpatrick function [3, 30, 31] associated with the structured operator \(F=T+V\), we propose a restricted gap function. We then prove that the expected gap of an averaged sequence diminishes at the rate of \({\mathcal {O}}(\tfrac{1}{k})\). This allows us to derive an \({\mathcal {O}}(\tfrac{1}{\epsilon })\) upper bound on the iteration complexity, and an \({\mathcal {O}}(\tfrac{1}{\epsilon ^{2+\delta }})\) upper bound (for \(\delta >0)\) on the oracle complexity for computing an \(\epsilon \)-solution.

The above listed contributions shed new light on a set of open questions, which we summarize below:

  1. (i)

Absence of rigorous asymptotics So far, no asymptotic convergence guarantees have been available for relaxed inertial FBF schemes in which T is maximally monotone and V is a single-valued monotone expectation-valued map.

  2. (ii)

Unavailability of rate statements We are not aware of any known non-asymptotic rate guarantees for algorithms solving (MI) under stochastic uncertainty. A key barrier to developing such statements in monotone and stochastic regimes has been the lack of a suitable residual function. Some recent progress in the special stochastic variational inequality case has been made by [26, 32, 33], but the general Hilbert-space setting involving set-valued operators seems to be largely unexplored (we will say more in Sect. 1.4).

  3. (iii)

Bias requirements A standard assumption in stochastic optimization is that the SO generates signals which are unbiased estimators of the deterministic operator V(x). Of course, this unbiasedness requirement may often fail to hold in practice. In the present Hilbert space setting, biased oracles are in some sense expected to be the rule rather than the exception, since most operators are derived from complicated dynamical systems, or the optimization method is applied to discretized formulations of the original problem. See the recent work [34, 35] for an interesting illustration in the context of PDE-constrained optimization. Some of our results go beyond the standard unbiasedness assumption.

1.4 Related research

Understanding the role of inertial and relaxation effects in numerical schemes is a line of research which has received enormous interest over the last two decades. Below, we give a brief overview of related algorithms.

Inertial, Relaxation, and Proximal schemes

In the context of convex optimization, Polyak [36] introduced the Heavy-ball method. This is a two-step method for minimizing a smooth convex function f. The algorithm reads as

$$\begin{aligned} \left\{ \begin{array}{l} Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ X_{k+1}=Z_{k}-\lambda _{k}\nabla f(X_{k}) \end{array}\right. \end{aligned}$$
(HB)

The difference from the gradient method is that the gradient step is taken from the extrapolated point \(Z_{k}\) rather than from \(X_{k}\), while the gradient itself is still evaluated at \(X_{k}\). This small difference has the surprising consequence that (HB) attains optimal complexity guarantees for strongly convex functions with Lipschitz continuous gradients; in this sense, (HB) is an optimal method [37]. The acceleration effect can be explained by writing the process entirely in terms of a single updating equation as

$$\begin{aligned} X_{k+1}-2X_{k}+X_{k-1}+(1-\alpha _{k})(X_{k}-X_{k-1})+\lambda _{k}\nabla f(X_{k})=0. \end{aligned}$$

Choosing \(\alpha _{k}=1-a_{k}\delta _{k}\) and \(\lambda _{k}=\gamma _{k}\delta ^{2}_{k}\) for \(\delta _{k}\) a small parameter, we arrive at

$$\begin{aligned} \frac{1}{\delta _{k}^{2}}(X_{k+1}-2X_{k}+X_{k-1})+\frac{a_{k}}{\delta _{k}}(X_{k}-X_{k-1})+\gamma _{k}\nabla f(X_{k})=0. \end{aligned}$$

This can be seen as a discrete-time approximation of the second-order dynamical system

$$\begin{aligned} \ddot{x}(t)+\frac{a}{t}{\dot{x}}(t)+\gamma (t)\nabla f(x(t))=0, \end{aligned}$$

introduced by [38]. Since then, it has received significant attention in the potential, as well as in the non-potential case (see e.g. [39,40,41] for an appetizer). As pointed out in [42], if \(\gamma (t)=1\), the above system reduces to a continuous version of Nesterov’s fast gradient method [43]. Recently, [44] defined a stochastic version of the Heavy-ball method.

Motivated by the development of such fast methods for convex optimization, Attouch and Cabot [1] studied a relaxed-inertial forward-backward algorithm, reading as

$$\begin{aligned} \left\{ \begin{array}{l} Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ Y_k = J_{\lambda _{k}T}(Z_{k}-\lambda _{k}V(Z_{k})) \\ X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k} Y_{k}. \end{array}\right. \end{aligned}$$
(RIFB)

If \(V=0\), this reduces to a relaxed inertial proximal point method analyzed by Attouch and Cabot [2]. If \(\rho _{k}=1\), an inertial forward-backward splitting method is recovered, first studied by Lorenz and Pock [45].

Convergence guarantees for the forward-backward splitting rely on the cocoercivity (inverse strong monotonicity) of the single-valued operator V. Example 1, in which V is given by a monotone plus a skew-symmetric linear operator, illustrates an important instance for which this assumption is not satisfied (see [46] for further examples). A general-purpose operator splitting framework, relaxing the cocoercivity property, is the forward-backward-forward (FBF) method due to Tseng [25]. Inertial [47] and relaxed-inertial [48] versions of FBF have been developed. An all-encompassing numerical scheme can be compactly described as

$$\begin{aligned} \left\{ \begin{array}{l} Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ Y_{k} =J_{\lambda _{k}T}(Z_{k}-\lambda _{k}V(Z_{k})),\\ X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}[Y_{k}-\lambda _{k}(V(Y_{k})-V(Z_{k}))]. \end{array}\right. \end{aligned}$$
(RIFBF)

Weak and strong convergence under appropriate conditions on the involved operators and parameter sequences are established in [48], but no rate statements are given.

Related work on stochastic approximation Efforts in extending stochastic approximation methods to variational inequality problems have considered standard projection schemes [14] for Lipschitz and strongly monotone operators. Extragradient and (more generally) mirror-prox algorithms [49, 50] can contend with merely monotone operators, while iterative smoothing [51] schemes can cope with the lack of Lipschitz continuity. It is worth noting that extragradient schemes have recently assumed relevance in the training of generative adversarial networks (GANs) [52, 53]. Rate analysis for stochastic extragradient (SEG) has led to optimal rates for Lipschitz and monotone operators [50], as well as extensions to non-Lipschitzian [51] and pseudomonotone settings [32, 54]. To alleviate the per-iteration computational cost, single-projection schemes, such as the stochastic forward-backward-forward (SFBF) method [26, 27], as well as subgradient-extragradient and projected reflected algorithms [55], have also been studied.

SFBF has been shown to be nearly optimal in terms of iteration and oracle complexity, displaying significant empirical improvements compared to SEG. While the role of inertia in optimization is well documented, in stochastic splitting problems, the only contribution we are aware of is the work by Rosasco et al. [56]. In that paper asymptotic guarantees for an inertial stochastic forward-backward (SFB) algorithm are presented under the hypothesis that the operators V and T are maximally monotone and the single-valued operator V is cocoercive.

Variance reduction approaches Variance-reduction schemes address the deterioration in convergence rate and the resulting poorer practical behavior via two commonly adopted avenues:

  1. (i)

    If the single-valued part V appears as a finite-sum (see e.g. [52, 57]), variance-reduction ideas from machine learning [58] can be used.

  2. (ii)

    Mini-batch schemes that employ an increasing batch-size of gradients [59] lead to deterministic rates of convergence for stochastic strongly convex [60], convex [61], and nonconvex optimization [62], as well as for pseudo-monotone SVIs via extragradient [32], and splitting schemes [26].

In terms of run-time, improvements in iteration complexities achieved by mini-batch approaches are significant; e.g. in strongly monotone regimes, the iteration complexity improves from \({\mathcal {O}}(\tfrac{1}{\epsilon })\) to \({\mathcal {O}}(\ln (\tfrac{1}{\epsilon }))\) [27, 55]. Beyond run-time advantages, such avenues provide asymptotic and rate guarantees under possibly weaker assumptions on the problem as well as the oracle; in particular, mini-batch schemes allow for possibly biased oracles and state-dependency of the noise [55]. Concerns about the sampling burdens are, in our opinion, often overstated since such schemes are meant to provide \(\epsilon \)-solutions; e.g. if \(\epsilon =10^{-3}\) and the obtained rate is \({\mathcal {O}}(1/k)\), then with batch-sizes \(m_k = \lfloor k^a \rfloor \) for some \(a > 1\), the largest batches are of order \({\mathcal {O}}(10^{3a})\), a relatively modest requirement given the advances in computing.

Outline The remainder of the paper is organized in five sections. After dispensing with the preliminaries in Sect. 2, we present the (RISFBF) scheme in Sect. 3. Asymptotic and rate statements are developed in Sect. 4 and preliminary numerics are presented in Sect. 5. We conclude with some brief remarks in Sect. 6. Technical results are collected in Appendix 1.

2 Preliminaries

Throughout, \({\mathsf {H}}\) is a real separable Hilbert space with scalar product \(\left\langle \cdot ,\cdot \right\rangle \), norm \(\left\| \cdot \right\| \), and Borel \(\sigma \)-algebra \({\mathcal {B}}\). The symbols \(\rightarrow \) and \(\rightharpoonup \) denote strong and weak convergence, respectively. \({{\,\mathrm{Id}\,}}:{\mathsf {H}}\rightarrow {\mathsf {H}}\) denotes the identity operator on \({\mathsf {H}}\). Stochastic uncertainty is modeled on a complete probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), endowed with a filtration \({{\mathbb {F}}}=({\mathcal {F}}_{k})_{k\in {\mathbb {N}}_{0}}\). By means of the Kolmogorov extension theorem, we assume that \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) is large enough so that all random variables we work with are defined on this space. An \({\mathsf {H}}\)-valued random variable is a measurable function \(X:(\Omega ,{\mathcal {F}})\rightarrow ({\mathsf {H}},{\mathcal {B}})\). Let \({\mathcal {G}}\subset {\mathcal {F}}\) be a given sub-sigma algebra. The conditional expectation of the random variable X is denoted by \({\mathbb {E}}(X\vert {\mathcal {G}})\). If \({\mathcal {A}}\subset {\mathcal {G}}\subset {\mathcal {F}}\), the tower property states that

$$\begin{aligned} {\mathbb {E}}\left[ {\mathbb {E}}(X\vert {\mathcal {G}})\vert {\mathcal {A}}\right] ={\mathbb {E}}\left[ {\mathbb {E}}(X\vert {\mathcal {A}})\vert {\mathcal {G}}\right] ={\mathbb {E}}(X\vert {\mathcal {A}}). \end{aligned}$$

We denote by \(\ell ^{0}({{\mathbb {F}}})\) the set of sequences of real-valued random variables \((\xi _{k})_{k\in {\mathbb {N}}}\) such that, for every \(k\in {\mathbb {N}}\), \(\xi _{k}\) is \({\mathcal {F}}_{k}\)-measurable. For \(p\in [1,\infty ]\), we set

$$\begin{aligned} \ell ^{p}({{\mathbb {F}}})\triangleq \left\{ (\xi _{k})_{k\in {\mathbb {N}}}\in \ell ^{0}({{\mathbb {F}}})\vert \sum _{k\ge 1}\left| \xi _{k}\right| ^{p}<\infty \quad {\mathbb {P}}\text {-a.s.}\right\} . \end{aligned}$$

We denote the set of summable non-negative sequences by \(\ell ^{1}_{+}({\mathbb {N}})\).

We now collect some concepts from monotone operator theory. For more details, we refer the reader to [3]. Let \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) be a set-valued operator. Its domain and graph are defined as \({{\,\mathrm{dom}\,}}F\triangleq \{x\in {\mathsf {H}}\vert F(x)\ne \varnothing \},\text { and }{{\,\mathrm{gr}\,}}(F)\triangleq \{(x,u)\in {\mathsf {H}}\times {\mathsf {H}}\vert u\in F(x)\},\) respectively. A single-valued operator \(C:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is cocoercive if there exists \(\beta >0\) such that \(\left\langle C(x)-C(y),x-y\right\rangle \ge \beta \left\| C(x)-C(y)\right\| ^{2}\) for all \(x,y\in {\mathsf {H}}\). A set-valued operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is called monotone if

$$\begin{aligned} \left\langle v-w,x-y\right\rangle \ge 0\qquad \forall (x,v),(y,w)\in {{\,\mathrm{gr}\,}}(F). \end{aligned}$$
(8)

The set of zeros of F, denoted by \(\mathsf {Zer}(F)\), is defined as \(\mathsf {Zer}(F)\triangleq \{x\in {\mathsf {H}}\vert 0\in F(x)\}\). The inverse of F is \(F^{-1}:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}},u\mapsto F^{-1}(u)=\{x\in {\mathsf {H}}\vert u\in F(x)\}\). The resolvent of F is \(J_{F}\triangleq ({{\,\mathrm{Id}\,}}+ F)^{-1}.\) If F is maximally monotone, then \(J_{F}\) is a single-valued map. We also need the classical notion of demiregularity of an operator.

Definition 1

An operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is demiregular at \(y\in {{\,\mathrm{dom}\,}}(F)\) if for every sequence \(\{(y_{n},v_{n})\}_{n\in {\mathbb {N}}}\subset {{\,\mathrm{gr}\,}}(F)\) and every \(v\in F(y)\), we have

$$\begin{aligned}{}[y_{n}\rightharpoonup y,v_{n}\rightarrow v]\Rightarrow y_{n}\rightarrow y. \end{aligned}$$

The notion of demiregularity captures various properties typically used to establish strong convergence of dynamical systems. [10] exhibits a large class of possibly set-valued operators F which are demiregular. In particular, demiregularity holds if F is uniformly or strongly monotone, or when F is the subdifferential of a uniformly convex lower semi-continuous function f. We often use the Young inequality

$$\begin{aligned} ab\le \frac{a^{2}}{2\varepsilon }+\frac{\varepsilon b^{2}}{2}\quad (a,b\in {\mathbb {R}}). \end{aligned}$$
(9)

3 Algorithm

Our aim is to solve the monotone inclusion problem (MI) under the following assumption:

Assumption 2

Consider Problem 1. The set-valued operator \(T:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is maximally monotone with an efficiently computable resolvent. The single-valued operator \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is maximally monotone and L-Lipschitz continuous (\(L>0)\) with full domain \({{\,\mathrm{dom}\,}}V={\mathsf {H}}\).

Assumption 2 guarantees that the operator \(F=T+V\) is maximally monotone [3, Corollary 24.4].

For numerical tractability, we make a finite-dimensional noise assumption, common to stochastic optimization problems in (possibly infinite-dimensional) Hilbert spaces [63].

Assumption 3

(Finite-dimensional noise) All randomness can be described via a finite dimensional random variable \(\xi :(\Omega ,{\mathcal {F}})\rightarrow (\Xi ,{\mathcal {E}})\), where \(\Xi \subseteq {\mathbb {R}}^{d}\) is a measurable set with Borel sigma algebra \({\mathcal {E}}\). The law of the random variable \(\xi \) is denoted by \({{\mathsf {P}}}\), i.e. \({{\mathsf {P}}}(\Gamma )\triangleq {\mathbb {P}}(\{\omega \in \Omega \vert \xi (\omega )\in \Gamma \})\) for all \(\Gamma \in {\mathcal {E}}\).

To access new information about the values of the operator V(x), we adopt a stochastic approximation (SA) approach where samples are accessed iteratively and online: At each iteration, we assume to have access to a stochastic oracle (SO) which generates some estimate of the value of the deterministic operator V(x) when the current position is x. This information is obtained by drawing an iid sample from the law \({{\mathsf {P}}}\). These fresh samples are then used in the numerical algorithm after an initial extrapolation step delivering the point \(Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1})\), for some extrapolation coefficient \(\alpha _{k}\in [0,1]\). Departing from \(Z_{k}\), we call the SO to retrieve the mini-batch estimator with sample rate \(m_{k}\in {\mathbb {N}}\):

$$\begin{aligned} A_{k}(Z_{k},\omega )\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\hat{V}}(Z_{k},\xi _{k}^{(t)}(\omega )). \end{aligned}$$
(10)

\(\xi _{k}\triangleq (\xi _{k}^{(1)},\ldots ,\xi _{k}^{(m_{k})})\) is the data sample employed by the SO to return the estimator \(A_{k}(Z_{k})\). Subsequently we perform a forward-backward update with step size \(\lambda _{k}>0\):

$$\begin{aligned} Y_{k}=J_{\lambda _{k}T}\left( Z_{k}-\lambda _{k}A_{k}(Z_{k})\right) . \end{aligned}$$
(11)

In the final updates, a second independent call of the SO is made, using the data set \(\eta _{k}=(\eta ^{(1)}_{k},\ldots ,\eta ^{(m_{k})}_{k})\), yielding the estimator

$$\begin{aligned} B_{k}(Y_{k},\omega )\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\hat{V}}(Y_{k},\eta _{k}^{(t)}(\omega )), \end{aligned}$$
(12)

and the new state

$$\begin{aligned} X_{k+1}=(1-\rho _{k})Z_{k}+\rho _{k}\left[ Y_{k}+\lambda _{k}(A_{k}(Z_{k})-B_{k}(Y_{k}))\right] \end{aligned}$$
(13)

This iterative procedure generates a stochastic process \(\{(Z_{k},Y_{k},X_{k})\}_{k\in {\mathbb {N}}}\), defining the relaxed inertial stochastic forward-backward-forward (RISFBF) scheme. A pseudocode is given as Algorithm 1 below.

(Algorithm 1: pseudocode of the RISFBF method)
$$\begin{aligned} \bar{X}_{K} = \frac{\sum _{k=1}^{K}\rho _{k}Y_{k}}{\sum _{k=1}^{K}\rho _{k}} \end{aligned}$$
(14)

Note that RISFBF is still conceptual since we have not explained how the sequences \((\alpha _{k})_{k\in {\mathbb {N}}},(\lambda _{k})_{k\in {\mathbb {N}}}\) and \((\rho _{k})_{k\in {\mathbb {N}}}\) should be chosen. We will make this precise in our complexity analysis, starting in Sect. 4.
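To complement the pseudocode, the following minimal Python sketch implements one run of the updates (10)–(13); the function and argument names are ours, and the resolvent of T, the stochastic oracle, and the parameter schedules must be supplied by the user.

```python
import numpy as np

def risfbf(x0, resolvent, oracle, alpha, lam, rho, batch, n_iters, rng):
    """Minimal sketch of (RISFBF), cf. (10)-(13).

    resolvent(lam, z): evaluates J_{lam T}(z)
    oracle(x, rng)   : one stochastic sample of V at x
    alpha, lam, rho  : callables k -> alpha_k, lambda_k, rho_k
    batch            : callable k -> mini-batch size m_k
    """
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, n_iters + 1):
        m_k = batch(k)
        z = x + alpha(k) * (x - x_prev)                            # extrapolation step
        A = np.mean([oracle(z, rng) for _ in range(m_k)], axis=0)  # A_k(Z_k), eq. (10)
        y = resolvent(lam(k), z - lam(k) * A)                      # forward-backward step, eq. (11)
        B = np.mean([oracle(y, rng) for _ in range(m_k)], axis=0)  # B_k(Y_k), eq. (12)
        x_prev, x = x, (1 - rho(k)) * z + rho(k) * (y + lam(k) * (A - B))  # relaxed update, eq. (13)
    return x
```

For instance, if \(T={{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}\) for a closed convex set \({\mathsf {C}}\), the resolvent argument is simply the metric projection onto \({\mathsf {C}}\).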

3.1 Equivalent form of RISFBF

We can collect the sequential updates of RISFBF as the fixed-point iteration

$$\begin{aligned} \left\{ \begin{array}{l} Z_{k}=X_{k}+\alpha _{k}(X_{k}-X_{k-1}),\\ X_{k+1}=Z_{k}-\rho _{k}\Phi _{k,\lambda _{k}}(Z_{k}) \end{array}\right. \end{aligned}$$
(15)

where \(\Phi _{k,\lambda }:{\mathsf {H}}\times \Omega \rightarrow {\mathsf {H}}\) is the time-varying map given by

$$\begin{aligned} \Phi _{k,\lambda }(x,\omega )\triangleq x-\lambda A_{k}(x,\omega )- ({{\,\mathrm{Id}\,}}_{{\mathsf {H}}}-\lambda B_{k}(\cdot ,\omega ))\circ J_{\lambda T}\circ ({{\,\mathrm{Id}\,}}_{{\mathsf {H}}}-\lambda A_{k}(\cdot ,\omega ))(x). \end{aligned}$$

Formulating the algorithm in this specific way establishes the connection between RISFBF and the heavy-ball system. Indeed, combining the iterations in (15) into one, we get a second-order difference equation, closely resembling the structure present in (HB):

$$\begin{aligned} \frac{1}{\rho _{k}}(X_{k+1}-2X_{k}+X_{k-1})+\frac{(1-\alpha _{k})}{\rho _{k}}(X_{k}-X_{k-1})+\Phi _{k,\lambda _{k}}(X_{k}+\alpha _{k}(X_{k}-X_{k-1}))=0. \end{aligned}$$

Also, it reveals the Markovian nature of the process \((X_{k})_{k\in {\mathbb {N}}}\): it is clear from the formulation (15) that \(X_{k}\) is Markov with respect to the sigma-algebra \(\sigma (\{X_{0},\ldots ,X_{k-1}\})\).

3.2 Assumptions on the stochastic oracle

In order to tame the stochastic uncertainty in RISFBF, we need to impose some assumptions on the distributional properties of the random fields \((A_{k}(x))_{k\in {\mathbb {N}}}\) and \((B_{k}(x))_{k\in {\mathbb {N}}}\). One crucial statistic we need to control is the SO variance. Define the oracle error at a point \(x \in {\mathsf {H}}\) as

$$\begin{aligned} \varepsilon (x,\xi )\triangleq {\hat{V}}(x,\xi )-V(x). \end{aligned}$$
(16)

Assumption 4

(Oracle Noise) We say that the SO

  1. (i)

    is conditionally unbiased if \({\mathbb {E}}_{\xi }[\varepsilon (x,\xi ) \vert x]=0\) for all \(x\in {\mathsf {H}}\);

  2. (ii)

    enjoys a uniform variance bound: \({\mathbb {E}}_{\xi }[\left\| \varepsilon (x,\xi )\right\| ^{2}\vert x]\le \sigma ^{2}\) for some \(\sigma > 0\) and all \(x\in {\mathsf {H}}\).

Define

$$\begin{aligned} U_{k}(\omega )\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}\varepsilon (Z_{k}(\omega ),\xi _{k}^{(t)}(\omega )),\text { and }W_{k}(\omega )\triangleq \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}\varepsilon (Y_{k}(\omega ),\eta ^{(t)}_{k}(\omega )). \end{aligned}$$

The introduction of these two processes allows us to decompose the random estimator into a mean component and a residual, so that

$$\begin{aligned} A_{k}(Z_{k})&=V(Z_{k})+U_{k},\text { and }B_{k}(Y_{k})=V(Y_{k})+W_{k} \end{aligned}$$

If Assumption 4(i) holds true, then \({\mathbb {E}}[W_{k}\vert {\hat{{\mathcal {F}}}}_{k}]=0\) and \({\mathbb {E}}[U_{k}\vert {\mathcal {F}}_{k}]=0\). Hence, under conditional unbiasedness, the processes \(\{(U_{k},{\mathcal {F}}_{k});k\in {\mathbb {N}}\}\) and \(\{(W_{k},{\hat{{\mathcal {F}}}}_{k});k\in {\mathbb {N}}\}\) are martingale difference sequences, where the filtrations are defined as \({\mathcal {F}}_{0}\triangleq {\hat{{\mathcal {F}}}}_{0}\triangleq {\mathcal {F}}_{1}\triangleq \sigma (X_{0},X_{1})\), and iteratively, for \(k\ge 1\),

$$\begin{aligned} {\hat{{\mathcal {F}}}}_{k}\triangleq \sigma (X_{0},X_{1},\xi _{1},\eta _{1},\ldots ,\eta _{k-1},\xi _{k}),\; {\mathcal {F}}_{k+1}\triangleq \sigma (X_{0},X_{1},\xi _{1},\eta _{1},\ldots ,\xi _{k},\eta _{k}). \end{aligned}$$

Observe that \({\mathcal {F}}_{k}\subseteq {\hat{{\mathcal {F}}}}_{k}\subseteq {\mathcal {F}}_{k+1}\) for all \(k\ge 1\). The uniform variance bound, Assumption 4(ii), ensures that the processes \(\{(U_{k},{\mathcal {F}}_{k});k\in {\mathbb {N}}\}\) and \(\{(W_{k},{\hat{{\mathcal {F}}}}_{k});k\in {\mathbb {N}}\}\) have finite second moments.

Remark 1

For deriving the stochastic estimates in the analysis to come, it is important to emphasize that \(X_{k}\) is \({\mathcal {F}}_{k}\)-measurable for all \(k\ge 0\), and \(Y_{k}\) is \({\hat{{\mathcal {F}}}}_{k}\)-measurable.

The mini-batch sampling technique yields an online variance reduction effect, summarized in the next lemma, whose simple proof we omit.

Lemma 1

(Variance of the SO) Suppose Assumption 4 holds. Then for \(k \ge 1\),

$$\begin{aligned} {\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\sigma ^{2}}{m_{k}}\text { and }{\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\sigma ^{2}}{m_{k}}, \qquad {\mathbb {P}}-\text {a.s.} \end{aligned}$$
(17)
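As a quick numerical illustration of Lemma 1, the following toy Python check (which assumes, purely for illustration, a scalar Gaussian oracle error with standard deviation \(\sigma \)) reproduces the \(1/m_{k}\) decay of the mini-batch variance bound (17).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 2.0, 20000

for m in (1, 10, 100):
    # mini-batch average of m i.i.d. oracle errors; plays the role of U_k (or W_k)
    U = rng.normal(0.0, sigma, size=(trials, m)).mean(axis=1)
    print(m, np.mean(U**2), sigma**2 / m)   # empirical second moment vs. bound sigma^2 / m
```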

We see that larger sampling rates lead to more precise point estimates of the single-valued operator. This comes at the cost of more evaluations of the stochastic operator. Hence, any mini-batch approach faces a trade-off between the oracle complexity and the iteration complexity. We want to use mini-batch estimators to achieve an online variance reduction scheme, motivating the next assumption.

Assumption 5

(Batch Size) The batch size sequence \((m_{k})_{k\in {\mathbb {N}}}\) is non-decreasing and satisfies \(\sum _{k=1}^{\infty }\frac{1}{m_{k}}<\infty \).

4 Analysis

This section is organized into three subsections. The first subsection derives asymptotic convergence guarantees, while the second and third subsections provide linear and sublinear rate statements in strongly monotone and monotone regimes, respectively.

4.1 Asymptotic convergence

Given \(\lambda >0\), we define the residual function for the monotone inclusion (MI) as

$$\begin{aligned} \mathsf {res}_{\lambda }(x)\triangleq \left\| x-J_{\lambda T}(x-\lambda V(x))\right\| . \end{aligned}$$
(18)

Clearly, for every \(\lambda >0\), \(x\in {\mathsf {S}}\Leftrightarrow \mathsf {res}_{\lambda }(x)=0\). Hence, \(\mathsf {res}_{\lambda }(\cdot )\) is a merit function for the monotone inclusion problem. To put this merit function into context, let us consider the special case where T is the subdifferential of a lower semi-continuous convex function \(g:{\mathsf {H}}\rightarrow (-\infty ,\infty ]\), i.e. \(T=\partial g\). In this case, the resolvent \(J_{\lambda T}\) reduces to the well-known proximal operator

$$\begin{aligned} {{\,\mathrm{prox}\,}}_{\lambda g}(x)\triangleq \mathop {\mathrm {argmin}}\limits _{u\in {\mathsf {H}}}\{\lambda g(u)+\frac{1}{2}\left\| u-x\right\| ^{2}\}. \end{aligned}$$
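For a concrete illustration, the next Python sketch evaluates the residual (18) in the special case \(g=\left\| \cdot \right\| _{1}\), for which the proximal operator is the componentwise soft-thresholding map; the affine monotone operator V used below (a positive semidefinite plus a skew-symmetric matrix) is an assumed toy example, not an instance taken from this paper.

```python
import numpy as np

def soft_threshold(lam, x):
    """prox_{lam*||.||_1}(x), i.e. the resolvent J_{lam T} for T = subdifferential of ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def residual(lam, x, V):
    """res_lam(x) = || x - J_{lam T}(x - lam V(x)) ||, cf. (18)."""
    return np.linalg.norm(x - soft_threshold(lam, x - lam * V(x)))

# toy monotone (not cocoercive) affine operator V(x) = (S + K) x + q, S PSD, K skew-symmetric
rng = np.random.default_rng(2)
S0 = rng.standard_normal((4, 4)); S = S0 @ S0.T
K0 = rng.standard_normal((4, 4)); K = K0 - K0.T
q = rng.standard_normal(4)
V = lambda x: (S + K) @ x + q

print(residual(0.1, np.zeros(4), V))   # residual at the origin
```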

In the potential case, where \(V(x)=\nabla f(x)\) for some smooth convex function \(f:{\mathsf {H}}\rightarrow {\mathbb {R}}\), the residual function is thus seen to be a constant multiple of the norm of the so-called gradient mapping \(\left\| x-{{\,\mathrm{prox}\,}}_{\lambda g}(x-\lambda V(x))\right\| \), which is a standard merit function in convex [64] and stochastic [65, 66] optimization. We use this function to quantify the per-iteration progress of RISFBF. The main result of this subsection is the following.

Theorem 2

(Asymptotic Convergence) Let \({\bar{\alpha }},{\bar{\varepsilon }}\in (0,1)\) be fixed parameters. Suppose that Assumptions 1–5 hold true. Let \((\alpha _{k})_{k\in {\mathbb {N}}}\) be a non-decreasing sequence such that \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\). Let \((\lambda _{k})_{k\in {\mathbb {N}}}\) be a converging sequence in \((0,\frac{1}{4L})\) such that \(\lim _{k\rightarrow \infty }\lambda _{k}=\lambda \in (0,\frac{1}{4L})\). If \(\rho _{k}=\frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(2\alpha _{k}^{2}-\alpha _{k}+1)(1+L\lambda _{k})}\) for all \(k\ge 1\), then

  1. (i)

    \(\lim _{k\rightarrow \infty }\mathsf {res}_{\lambda _{k}}(Z_{k})=0\) in \(L^{2}({\mathbb {P}})\);

  2. (ii)

    the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) generated by algorithm RISFBF weakly converges to a \({\mathsf {S}}\)-valued limiting random variable X;

  3. (iii)

\(\sum _{k=1}^{\infty }\left[ (1-\alpha _{k})\left( \frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}-1\right) -2\alpha _{k}^{2}\right] \left\| X_{k}-X_{k-1}\right\| ^{2}<\infty \quad {\mathbb {P}}\)-a.s.
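For concreteness, the coupling between the parameter sequences required by Theorem 2 can be evaluated numerically as in the following small Python helper; the chosen values of \({\bar{\alpha }}\), \({\bar{\varepsilon }}\), L and \(\lambda _{k}\) are illustrative only.

```python
def rho_schedule(alpha_k, lambda_k, L, bar_alpha, bar_eps):
    """Relaxation parameter of Theorem 2:
    rho_k = 5(1-bar_eps)(1-bar_alpha)^2 / (4(2*alpha_k^2 - alpha_k + 1)(1 + L*lambda_k))."""
    return (5 * (1 - bar_eps) * (1 - bar_alpha) ** 2
            / (4 * (2 * alpha_k ** 2 - alpha_k + 1) * (1 + L * lambda_k)))

# example: constant inertia alpha_k = bar_alpha and step-size just below 1/(4L)
L = 2.0
print(rho_schedule(alpha_k=0.3, lambda_k=0.9 / (4 * L), L=L, bar_alpha=0.3, bar_eps=0.1))
```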

We prove this Theorem via a sequence of technical Lemmas.

Lemma 3

For all \(k\ge 1\), we have

$$\begin{aligned} -\left\| Z_{k}-Y_{k}\right\| ^{2}\le \lambda ^{2}_{k}\left\| U_{k}\right\| ^{2}-\frac{1}{2}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k}). \end{aligned}$$
(19)

Proof

By definition,

$$\begin{aligned} \frac{1}{2}\mathsf {res}_{\lambda _{k}}^{2}(Z_{k})&=\frac{1}{2}\left\| Z_{k}-J_{\lambda _{k}T}(Z_{k}-\lambda _{k}V(Z_{k}))\right\| ^{2}\\&=\frac{1}{2}\left\| Z_{k}-Y_{k}+J_{\lambda _{k}T}(Z_{k}-\lambda _{k}A_{k}(Z_{k}))-J_{\lambda _{k}T}(Z_{k}-\lambda _{k}V(Z_{k}))\right\| ^{2}\\&\le \left\| Z_{k}-Y_{k}\right\| ^{2}+\left\| J_{\lambda _{k}T}(Z_{k}-\lambda _{k}A_{k}(Z_{k}))-J_{\lambda _{k}T}(Z_{k}-\lambda _{k}V(Z_{k}))\right\| ^{2}\\&\le \left\| Z_{k}-Y_{k}\right\| ^{2}+\lambda ^{2}_{k}\left\| U_{k}\right\| ^{2}, \end{aligned}$$

where the first inequality uses \(\frac{1}{2}\left\| a+b\right\| ^{2}\le \left\| a\right\| ^{2}+\left\| b\right\| ^{2}\), and the last inequality uses the non-expansivity of the resolvent operator. Rearranging terms gives the claimed result. \(\square \)

Next, for a given pair \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we define the stochastic processes \((\Delta M_{k})_{k\in {\mathbb {N}}}, (\Delta N_{k}(p,p^{*}))_{k\in {\mathbb {N}}}\), and \((\mathtt {e}_k)_{k\in {\mathbb {N}}}\) as

$$\begin{aligned}&\Delta M_{k}\triangleq \frac{5\rho _{k}\lambda _{k}^{2}}{2(1+L\lambda _{k})} \left\| \mathtt {e}_{k}\right\| ^{2}+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}, \end{aligned}$$
(20)
$$\begin{aligned}&\Delta N_{k}(p,p^{*})\triangleq 2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},p-Y_{k}\right\rangle ,\text { and } \end{aligned}$$
(21)
$$\begin{aligned}&\mathtt {e}_{k}\triangleq W_{k}-U_{k}. \end{aligned}$$
(22)

Key to our analysis is the following energy bound on the evolution of the anchor sequence \(\left( \left\| X_{k}-p\right\| ^{2}\right) _{k\in {\mathbb {N}}}\).

Lemma 4

(Fundamental Recursion) Let \((X_{k})_{k\in {\mathbb {N}}}\) be the stochastic process generated by RISFBF with \(\alpha _{k}\in (0,1)\), \(0\le \rho _{k}<\frac{5}{4(1+L\lambda _{k})}\), and \(\lambda _{k}\in (0,1/4L)\). For all \(k\ge 1\) and \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we have

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le (1+\alpha _{k})\left\| X_{k}-p\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-p\right\| ^{2}-\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})\\&\quad +\Delta M_{k}+\Delta N_{k}(p,p^{*})-2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \\&\quad +\alpha _{k}\left\| X_{k}-X_{k-1}\right\| ^{2}\left( 2\alpha _{k}+\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \\&\quad -(1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \left\| X_{k+1}-X_{k}\right\| ^{2}. \end{aligned}$$

Proof

To simplify the notation, let us call \(A_{k}\equiv A_{k}(Z_{k})\) and \(B_{k}\equiv B_{k}(Y_{k})\). We also introduce the intermediate update \(R_{k}\triangleq Y_{k}+\lambda _{k}(A_{k}-B_{k})\). For all \(k\ge 0\), it holds true that

$$\begin{aligned} \left\| Z_{k}-p\right\| ^{2}= & {} \left\| Z_{k}-Y_{k}+Y_{k}-R_{k}+R_{k}-p\right\| ^{2}\\= & {} \left\| Z_{k}-Y_{k}\right\| ^{2}+\left\| Y_{k}-R_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}+2\left\langle Z_{k}-Y_{k},Y_{k}-p\right\rangle \\&+2\left\langle Y_{k}-R_{k},R_{k}-p\right\rangle \\= & {} \left\| Z_{k}-Y_{k}\right\| ^{2}+\left\| Y_{k}-R_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}+2\left\langle Z_{k}-Y_{k},Y_{k}-p\right\rangle \\&+2\left\langle Y_{k}-R_{k},Y_{k}-p\right\rangle +2\left\langle Y_{k}-R_{k},R_k-Y_{k}\right\rangle \\= & {} \left\| Z_{k}-Y_{k}\right\| ^{2}+\left\| Y_{k}-R_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}+2\left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle \\&+2\left\langle Y_{k}-R_{k},R_k-Y_{k}\right\rangle \\= & {} \left\| Z_{k}-Y_{k}\right\| ^{2}-\left\| Y_{k}-R_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}+2\left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle . \end{aligned}$$

Observe that

$$\begin{aligned} \left\| Y_{k}-R_{k}\right\| ^{2}&=\lambda ^{2}_{k}\left\| B_{k}(Y_{k})-A_{k}(Z_{k})\right\| ^{2}\\&=\lambda _{k}^{2}\left\| V(Y_{k})-V(Z_{k})+W_{k}-U_{k}\right\| ^{2}\\&=\lambda _{k}^{2}\left\| V(Y_{k})-V(Z_{k})\right\| ^{2}+\lambda _{k}^{2}\left\| W_{k}-U_{k}\right\| ^{2}+2\lambda _{k}^{2}\left\langle V(Y_{k})-V(Z_{k}),W_{k}-U_{k}\right\rangle \\&\le L^{2}\lambda _{k}^{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+\lambda _{k}^{2}\left\| W_{k}-U_{k}\right\| ^{2}+2\lambda _{k}^{2}\left\langle V(Y_{k})-V(Z_{k}),W_{k}-U_{k}\right\rangle \\&\le 2L^{2}\lambda _{k}^{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda _{k}^{2}\left\| W_{k}-U_{k}\right\| ^{2}. \end{aligned}$$

Introducing the process \((\mathtt {e}_{k})_{k\in {\mathbb {N}}}\) from eq. (22), the aforementioned set of inequalities reduces to

$$\begin{aligned} \left\| Y_{k}-R_{k}\right\| ^{2}\le 2L^{2}\lambda _{k}^{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda _{k}^{2}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$

Hence,

$$\begin{aligned} \left\| Z_{k}-p\right\| ^{2}\ge (1-2L^{2}\lambda _{k}^{2})\left\| Z_{k}-Y_{k}\right\| ^{2}-2\lambda _{k}^{2}\left\| \mathtt {e}_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}+2\left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle . \end{aligned}$$

But \(Y_{k}+\lambda _{k}T(Y_{k})\ni Z_{k}-\lambda _{k}A_{k}\), implying that

$$\begin{aligned} \frac{1}{\lambda _{k}}(Z_{k}-Y_{k}-\lambda _{k}A_{k})\in T(Y_{k}). \end{aligned}$$

Pick \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), so that \(p^{*}-V(p)\in T(p)\). Then, the monotonicity of T yields the estimate

$$\begin{aligned} \left\langle \frac{1}{\lambda _{k}}(Z_{k}-Y_{k}-\lambda _{k}A_{k})-p^{*}+V(p),Y_{k}-p\right\rangle \ge 0. \end{aligned}$$

This is equivalent to

$$\begin{aligned}&\left\langle \frac{1}{\lambda _{k}}(Z_{k}-R_{k}-\lambda _{k}B_{k})-p^{*}+V(p),Y_{k}-p\right\rangle \ge 0, \nonumber \\ \text{ or }&\left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle \ge \lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle +\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle . \end{aligned}$$
(23)

In particular, choosing \((p,p^{*})=(x^{*},0)\) for some \(x^{*}\in {\mathsf {S}}\) and invoking the monotonicity of V, this implies that

$$\begin{aligned} \left\langle Z_{k}-R_{k},Y_{k}-x^{*}\right\rangle \ge \lambda _{k}\left\langle W_{k},Y_{k}-x^{*}\right\rangle . \end{aligned}$$

Inserting (23) into the lower bound on \(\left\| Z_{k}-p\right\| ^{2}\) derived above, we obtain, for general \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\),

$$\begin{aligned} \left\| Z_{k}-p\right\| ^{2}&\ge (1-2L^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}+\left\| R_{k}-p\right\| ^{2}-2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}\\&+2\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle +2\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle . \end{aligned}$$

Rearranging terms, we arrive at the following bound on \(\left\| R_{k}-p\right\| ^{2}\):

$$\begin{aligned} \left\| R_{k}-p\right\| ^{2}\le&\left\| Z_{k}-p\right\| ^{2}-(1-2L^{2}\lambda _{k}^{2})\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\lambda _{k}\left\langle W_{k}+p^{*},p-Y_{k}\right\rangle \nonumber \\&+2\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle \end{aligned}$$
(24)

Next, we observe that \(\left\| X_{k+1}-p\right\| ^{2}\) may be bounded as follows.

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&=\left\| (1-\rho _{k})Z_{k}+\rho _{k}R_{k}-p\right\| ^{2}\nonumber \\&=\left\| (1-\rho _{k})(Z_{k}-p)+\rho _{k}(R_{k}-p)\right\| ^{2}\nonumber \\&=(1-\rho _{k})^2\left\| Z_{k}-p\right\| ^{2}+\rho _{k}^2\left\| R_{k}-p\right\| ^{2}+ 2\rho _k(1-\rho _k) \left\langle Z_k-p,R_k-p\right\rangle \nonumber \\&=(1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}-\rho _{k}(1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}\nonumber \\&\quad +\rho _{k}\left\| R_{k}-p\right\| ^{2}-\rho _{k}(1-\rho _{k})\left\| R_{k}-p\right\| ^2 \nonumber \\&\quad + 2\rho _k(1-\rho _k) \left\langle Z_k-p,R_k-p\right\rangle \nonumber \\&=(1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}+\rho _{k}\left\| R_{k}-p\right\| ^{2}-\rho _{k}(1-\rho _{k})\left\| R_{k}-Z_{k}\right\| ^{2}\nonumber \\&=(1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}+\rho _{k}\left\| R_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}. \end{aligned}$$
(25)

We may then derive a bound on the expression in (25),

$$\begin{aligned}&(1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}+\rho _{k}\left\| R_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}\nonumber \\&\quad \le \left\| Z_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}-\rho _{k}(1-2L^{2}\lambda ^{2}_{k})\left\| Z_{k}-Y_{k}\right\| ^{2}\nonumber \\&\qquad +2\lambda ^{2}_{k}\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}-2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle +2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle \end{aligned}$$
(26)
$$\begin{aligned}&\quad =\left\| Z_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}-\rho _{k}(1/2-2L^{2}\lambda ^{2}_{k})\left\| Z_{k}-Y_{k}\right\| ^{2}\nonumber \\&\qquad +2\lambda ^{2}_{k}\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}-2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle \nonumber \\&\qquad -\frac{\rho _{k}}{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle . \end{aligned}$$
(27)

By invoking (19), we arrive at the estimate

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le \left\| Z_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}+2\lambda ^{2}\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}\\&\quad -2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle \nonumber \\&\quad -\rho _{k}(1/2-2L^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2} -\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})\\&\quad +\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle . \end{aligned}$$

Furthermore,

$$\begin{aligned} \frac{1}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\|&=\left\| R_{k}-Z_{k}\right\| \le \left\| R_{k}-Y_{k}\right\| +\left\| Y_{k}-Z_{k}\right\| \\&\le \lambda _{k}\left\| B_{k}-A_{k}\right\| +\left\| Y_{k}-Z_{k}\right\| \\&\le (1+ L\lambda _{k})\left\| Y_{k}-Z_{k}\right\| +\lambda _{k}\left\| \mathtt {e}_{k}\right\| , \end{aligned}$$

which implies that

$$\begin{aligned} \frac{1}{2\rho ^{2}_{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}\le (1+L \lambda _{k})^{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$
(28)

Multiplying both sides by \(\frac{\rho _{k}(1/2-2L\lambda _{k})}{1+L\lambda _{k}}\), a positive scalar since \(\lambda _k \in (0,\tfrac{1}{4L})\), we obtain

$$\begin{aligned} \frac{1/2-2L\lambda _{k}}{2\rho _{k}(1+L\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}\le &\, \rho _{k}(1/2-2L\lambda _{k})(1+L\lambda _{k})\left\| Y_{k}-Z_{k}\right\| ^{2}\nonumber \\&\quad +\frac{\lambda ^{2}_{k}\rho _{k}(1/2-2L\lambda _{k})}{1+L\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$
(29)

Rearranging terms, and noting that \((1/2-2L\lambda _{k})(1+L\lambda _{k})\le 1/2-2L^{2}\lambda ^{2}_{k}\), the above estimate becomes

$$\begin{aligned} \nonumber -\rho _{k}(1/2-2L^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}&\le -\frac{1/2-2L\lambda _{k}}{2\rho _{k}(1+L\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}\\&\quad +\frac{\rho _{k}\lambda ^{2}_{k}(1/2-2L\lambda _{k})}{1+L\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$
(30)

Substituting this bound into the first majorization of the anchor process \(\left\| X_{k+1}-p\right\| ^{2}\), we see

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le \left\| Z_{k}-p\right\| ^{2}-\left( \frac{1-\rho _{k}}{\rho _{k}}+\frac{1/2-2L\lambda _{k}}{2\rho _{k}(1+L\lambda _{k})}\right) \left\| X_{k+1}-Z_{k}\right\| ^{2}\\&\quad +\rho _{k}\lambda _{k}^{2}\left\| \mathtt {e}_{k}\right\| ^{2}\left( 2+\frac{1/2-2L\lambda _{k}}{1+L\lambda _{k}}\right) +2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle \\&\quad -2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle -\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}\\&=\left\| Z_{k}-p\right\| ^{2}-\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}-2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle \\&\quad -\frac{5/2-2\rho _{k}(1+L\lambda _{k})}{2\rho _{k}(1+L\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}\\&\quad +\frac{5\rho _{k}\lambda _{k}^{2}}{2(1+L\lambda _{k})} \left\| \mathtt {e}_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle . \end{aligned}$$

Observe that, using \(2\alpha _{k}\left\langle X_{k+1}-X_{k},X_{k}-X_{k-1}\right\rangle \le \alpha _{k}\left\| X_{k+1}-X_{k}\right\| ^{2}+\alpha _{k}\left\| X_{k}-X_{k-1}\right\| ^{2}\),

$$\begin{aligned} \left\| X_{k+1}-Z_{k}\right\| ^{2}&=\left\| (X_{k+1}-X_{k})-\alpha _{k}(X_{k}-X_{k-1})\right\| ^{2} \nonumber \\&\ge (1-\alpha _{k})\left\| X_{k+1}-X_{k}\right\| ^{2}+(\alpha ^{2}_{k}-\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}, \end{aligned}$$
(31)

and Lemma 16 gives

$$\begin{aligned} \left\| Z_{k}-p\right\| ^{2}=(1+\alpha _{k})\left\| X_{k}-p\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-p\right\| ^{2}+\alpha _{k}(1+\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}. \end{aligned}$$
(32)

By hypothesis, \(\alpha _{k},\rho _{k},\lambda _{k}\) are defined such that \(\frac{5/2-2\rho _{k}(1+L\lambda _{k})}{2\rho _{k}(1+L\lambda _{k})}>0\). Then, using both of these relations in the last estimate for \(\left\| X_{k+1}-p\right\| ^{2}\), we arrive at

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le (1+\alpha _{k})\left\| X_{k}-p\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-p\right\| ^{2}+\alpha _{k}(1+\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle \\&\quad -\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})+\frac{5\rho _{k}\lambda _{k}^{2}}{2(1+L\lambda _{k})}\left\| \mathtt {e}_{k}\right\| ^{2}\\&\quad +\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle \\&\quad -\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \left[ (1-\alpha _{k})\left\| X_{k+1}-X_{k}\right\| ^{2}+(\alpha ^{2}_{k}-\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}\right] . \end{aligned}$$

Using the respective definitions of the stochastic increments \(\Delta M_{k},\Delta N_{k}(p,p^{*})\) in (20) and (21), we arrive at

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le (1+\alpha _{k})\left\| X_{k}-p\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-p\right\| ^{2}-\frac{\rho _{k}}{4}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})\nonumber \\&\quad +\Delta M_{k}+\Delta N_{k}(p,p^{*})-2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \nonumber \\&\quad +\alpha _{k}\left\| X_{k}-X_{k-1}\right\| ^{2}\left( 2\alpha _{k}+\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \nonumber \\&\quad -(1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \left\| X_{k+1}-X_{k}\right\| ^{2}. \end{aligned}$$
(33)

\(\square \)

Recall that \(Y_{k}\) is \({\hat{{\mathcal {F}}}}_{k}\)-measurable. By the law of iterated expectations, we therefore see

$$\begin{aligned} {\mathbb {E}}[\Delta N_{k}(p,p^{*})\vert {\mathcal {F}}_{k}]={\mathbb {E}}\left\{ {\mathbb {E}}[\Delta N_{k}(p,p^{*})\vert {\hat{{\mathcal {F}}}}_{k}]\vert {\mathcal {F}}_{k}\right\} =2\rho _{k}\lambda _{k} {\mathbb {E}}[\left\langle p^{*},p-Y_{k}\right\rangle \vert {\mathcal {F}}_{k}], \end{aligned}$$

for all \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\). Observe that if we choose \((p,0)\in {{\,\mathrm{gr}\,}}(F)\), meaning that \(p\in {\mathsf {S}}\), then \(\Delta N_{k}(p,0)\equiv \Delta N_{k}(p)\) is a martingale difference sequence. Furthermore, for all \(k\ge 1\),

$$\begin{aligned} {\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}]\le \frac{5\rho _{k}\lambda _{k}^2}{1+L\lambda _{k}}{\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]+\lambda ^{2}_{k}\left( \frac{5\rho _{k}}{1+L\lambda _{k}}+\frac{\rho _{k}}{2}\right) {\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\mathtt {a}_{k}\sigma ^{2}}{m_{k}}, \end{aligned}$$
(34)

where \(\mathtt {a}_{k}\triangleq \lambda ^{2}_{k}\left( \frac{10\rho _{k}}{1+L\lambda _{k}}+\frac{\rho _{k}}{2}\right) \).

To prove the a.s. convergence of the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\), we rely on the following preparations. Motivated by the analysis of deterministic inertial schemes, we are interested in a regime under which \(\alpha _{k}\) is non-decreasing.

For a fixed reference point \(p\in {\mathsf {H}}\), define the anchor sequence \(\phi _{k}(p)\triangleq \frac{1}{2}\left\| X_{k}-p\right\| ^{2}\), and the energy sequence \(\Delta _{k}\triangleq \frac{1}{2}\left\| X_{k}-X_{k-1}\right\| ^{2}.\) In terms of these sequences, we can rearrange the fundamental recursion from Lemma 4 to obtain

$$\begin{aligned} \phi _{k+1}(p)-&\alpha _{k}\phi _{k}(p)+(1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \Delta _{k+1}\le \phi _{k}(p)-\alpha _{k}\phi _{k-1}(p)\\&\quad +(1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \Delta _{k}+\frac{1}{2}\Delta M_{k}+\frac{1}{2}\Delta N_{k}(p,p^{*})\\&\quad -\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle +\Delta _{k}\left[ 2\alpha ^{2}_{k}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \right] \\&\quad -\frac{\rho _{k}}{8}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k}). \end{aligned}$$

For a given pair \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), define

$$\begin{aligned} Q_{k}(p)\triangleq \phi _{k}(p)-\alpha _{k}\phi _{k-1}(p)+(1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \Delta _{k}. \end{aligned}$$
(35)

Then, in terms of the sequence

$$\begin{aligned} \beta _{k+1}\triangleq (1-\alpha _{k})\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) -(1-\alpha _{k+1})\left( \frac{5}{4\rho _{k+1}(1+L\lambda _{k+1})}-1\right) , \end{aligned}$$
(36)

and using the monotonicity of V, guaranteeing that \(\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \ge 0\), we get

$$\begin{aligned} Q_{k+1}(p)&\le Q_{k}(p)-\frac{\rho _{k}}{8}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})+\frac{1}{2}\Delta M_{k}+\frac{1}{2}\Delta N_{k}(p,p^{*})+(\alpha _{k}-\alpha _{k+1})\phi _{k}(p)\\&\quad +\left[ 2\alpha ^{2}_{k}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \right] \Delta _{k}-\beta _{k+1}\Delta _{k+1}. \end{aligned}$$

Defining

$$\begin{aligned} \theta _{k}\triangleq \frac{\rho _{k}}{8}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})-\left[ 2\alpha ^{2}_{k}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \right] \Delta _{k}, \end{aligned}$$

we arrive at

$$\begin{aligned} Q_{k+1}(p)\le Q_{k}(p)-\theta _{k}+\frac{1}{2}\Delta M_{k}+\frac{1}{2}\Delta N_{k}(p,p^{*})+(\alpha _{k}-\alpha _{k+1})\phi _{k}(p)-\beta _{k+1}\Delta _{k+1}. \end{aligned}$$
(37)

Our aim is to use \(Q_{k}(p)\) as a suitable energy function for RISFBF. For that to work, we need to identify a specific parameter sequence pair \((\rho _{k},\alpha _{k})\) so that \(\beta _{k}\ge 0\) and \(\theta _{k} \ge 0\), taking the following design criteria into account:

  1. 1.

    \(\alpha _{k}\in (0,{\bar{\alpha }}]\subset (0,1)\) for all \(k\ge 1\);

  2. 2.

    \(\alpha _{k}\) is non-decreasing with

    $$\begin{aligned} \sup _{k\ge 1}\alpha _{k}={\bar{\alpha }},\text { and} \inf _{k\ge 1}\alpha _{k}>0. \end{aligned}$$
    (38)

Incorporating these two restrictions on the inertia parameter \(\alpha _{k}\), we are left with the following constraints:

$$\begin{aligned} \beta _{k}\ge 0\text { and }2\alpha _{k}^{2}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \le 0. \end{aligned}$$
(39)

To identify a constellation of parameters \((\alpha _{k},\rho _{k})\) satisfying these two conditions, define

$$\begin{aligned} h_{k}(x,y)\triangleq (1-x)\left( \frac{5}{4y(1+L\lambda _{k})}-1\right) . \end{aligned}$$
(40)

Then,

$$\begin{aligned} 0&\ge 2\alpha _{k}^{2}-(1-\alpha _{k})\left( h_{k}(\alpha _{k},\rho _{k})+(1-\alpha _{k})-1\right) \\&=2\alpha ^{2}_{k}+\alpha _{k}(1-\alpha _{k})-(1-\alpha _{k})h_{k}(\alpha _{k},\rho _{k})\\&=\alpha _{k}(1+\alpha _{k})-(1-\alpha _{k})h_{k}(\alpha _{k},\rho _{k}), \end{aligned}$$

which gives

$$\begin{aligned} h_{k}(\alpha _{k},\rho _{k})\ge \frac{\alpha _{k}(1+\alpha _{k})}{1-\alpha _{k}}. \end{aligned}$$
(41)

Solving this condition for \(\rho _{k}\) reveals that \(\frac{1}{\rho _{k}}\ge \frac{4(2\alpha ^{2}_{k}-\alpha _{k}+1)(1+L\lambda _{k})}{5(1-\alpha _{k})^{2}}\), i.e. we need to choose the relaxation parameter \(\rho _{k}\) so that \(\rho _{k}\le \frac{5(1-\alpha _k)^{2}}{4(1+L\lambda _{k})(2\alpha ^{2}_{k}-\alpha _{k}+1)}\). Since \(\alpha _{k}\le {\bar{\alpha }}<1\), this suggests using the relaxation sequence \(\rho _{k}=\rho _k(\alpha _k,\lambda _k)\triangleq \frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(1+L\lambda _{k})(2\alpha ^{2}_{k}-\alpha _{k}+1)}\), which obeys this bound with the additional safety factor \(1-{\bar{\varepsilon }}\in (0,1)\). It remains to verify that with this choice we can guarantee \(\beta _{k}\ge 0\). This can be deduced as follows: Recalling (40), we get

$$\begin{aligned} h_k(\alpha _k,\rho _k) = (1-\alpha _{k})\left( \tfrac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) =\tfrac{(1-\alpha _k)(2\alpha _k^2-\alpha _k+1)}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}+\alpha _{k}-1. \end{aligned}$$

In particular, we note that if \(f(\alpha ) \triangleq {\tfrac{(1-\alpha )(2\alpha ^2-\alpha +1)}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}}+\alpha -1\), then

$$\begin{aligned} f'(\alpha )&= \tfrac{(1-\alpha )(4\alpha -1)-(2\alpha ^2-\alpha +1)}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}+1= \tfrac{-6\alpha ^2+6\alpha -2+(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2} = \tfrac{-6(\alpha -\frac{1}{2})^2-\frac{1}{2}+(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2} \end{aligned}$$

We consider two cases:

Case 1: \({\bar{\alpha }}\le 1/2\). In this case

$$\begin{aligned} f'(\alpha )\le \tfrac{-6({\bar{\alpha }}-\frac{1}{2})^2-\frac{1}{2}+(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}\le \frac{-5{\bar{\alpha }}^{2}+4{\bar{\alpha }}-1}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^2}<0. \end{aligned}$$

Case 2: \(1/2<{\bar{\alpha }}<1\). In this case

$$\begin{aligned} f'(\alpha )\le \frac{-6(1/2-1/2)^{2}-1/2+(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}\le \frac{-1/2+(1-{\bar{\varepsilon }})(1-1/2)^{2}}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}<0. \end{aligned}$$

Thus, \(f(\alpha )\) is decreasing in \(\alpha \in (0,{\bar{\alpha }}]\), where \(0<{\bar{\alpha }}<1\). Since, for the chosen relaxation sequence, \(h_{k}(\alpha _{k},\rho _{k})=f(\alpha _{k})\) and \((\alpha _{k})_{k\in {\mathbb {N}}}\) is non-decreasing, it follows that \(\beta _{k+1}=f(\alpha _{k})-f(\alpha _{k+1})\ge 0\) for all \(k\ge 1\).

Using these relations, we see that (37) reduces to

$$\begin{aligned} Q_{k+1}(p)\le Q_{k}(p)-\theta _{k}+\frac{1}{2}\Delta M_{k}+\frac{1}{2}\Delta N_{k}(p,p^{*}), \end{aligned}$$
(42)

where \(\theta _{k}\ge 0\). This is the basis for our proof of Theorem 2.
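Before turning to the proof, we note that the feasibility of this parameter design is easy to check numerically. The following Python sketch (purely illustrative; the values of \(L\), \(\lambda _{k}\), \({\bar{\varepsilon }}\) and the inertial schedule are arbitrary choices, not prescribed by the analysis) evaluates the relaxation policy \(\rho _{k}=\rho _{k}(\alpha _{k},\lambda _{k})\) for a non-decreasing inertial sequence and verifies the two design conditions behind (42): the bracket multiplying \(\Delta _{k}\) in the definition of \(\theta _{k}\) is non-positive, and \(\beta _{k}\ge 0\).

import numpy as np

# Illustrative constants; none of these values are prescribed by the analysis.
L, lam = 2.0, 0.1                 # Lipschitz constant and constant steplength lambda_k
eps_bar, alpha_bar = 0.1, 0.6     # safety factor bar-epsilon and inertial cap bar-alpha

K = 50
alphas = alpha_bar * (1.0 - 0.5 ** np.arange(1, K + 1))   # non-decreasing, bounded by alpha_bar
lams = np.full(K, lam)

def rho(alpha, lam_k):
    # Relaxation policy rho_k(alpha_k, lambda_k) suggested by the analysis.
    return 5 * (1 - eps_bar) * (1 - alpha_bar) ** 2 / (
        4 * (1 + L * lam_k) * (2 * alpha ** 2 - alpha + 1))

def h(alpha, lam_k):
    # h_k(alpha_k, rho_k) from (40), evaluated at the chosen rho_k.
    return (1 - alpha) * (5 / (4 * rho(alpha, lam_k) * (1 + L * lam_k)) - 1)

rhos = rho(alphas, lams)
# theta_k >= 0 requires the bracket multiplying Delta_k to be <= 0, cf. (39).
bracket = 2 * alphas ** 2 + (1 - alphas) * (1 - 5 * (1 - alphas) / (4 * rhos * (1 + L * lams)))
# beta_{k+1} >= 0, cf. (36), amounts to h_k(alpha_k, rho_k) being non-increasing in k.
beta = h(alphas[:-1], lams[:-1]) - h(alphas[1:], lams[1:])

print("max bracket (should be <= 0):", bracket.max())
print("min beta    (should be >= 0):", beta.min())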

Proof of Theorem 2

We start with (i). Consider (42), with the special choice \(p^{*}=0\), so that \(p\in {\mathsf {S}}\). Taking conditional expectations on both sides of this inequality, we arrive at

$$\begin{aligned} {\mathbb {E}}[Q_{k+1}(p)\vert {\mathcal {F}}_{k}]\le Q_{k}(p)-\theta _{k}+\psi _{k}, \end{aligned}$$

where \(\psi _{k}\triangleq \frac{\mathtt {a}_{k}\sigma ^{2}}{2m_{k}}\). By design of the relaxation sequence \(\rho _{k}\), we see that

$$\begin{aligned} \mathtt {a}_{k}&=\lambda ^{2}_{k}\left( \frac{10\rho _{k}}{1+L\lambda _{k}}+\frac{\rho _{k}}{2}\right) =\lambda ^{2}_{k}\left( \frac{10}{1+L\lambda _{k}}+\frac{1}{2}\right) \frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(2\alpha ^{2}_{k}-\alpha _{k}+1)(1+L\lambda _{k})}. \end{aligned}$$

Since \(\lim _{k\rightarrow \infty }\lambda _{k}=\lambda \in (0,1/4L)\), and \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\in (0,1)\), we conclude that the sequence \((\mathtt {a}_{k})_{k\in {\mathbb {N}}}\) is bounded. Consequently, thanks to Assumption 5, the sequence \((\psi _{k})_{k\in {\mathbb {N}}}\) is in \(\ell ^{1}_{+}({\mathbb {N}})\). We next claim that \(Q_{k}(p)\ge 0\). To verify this, note that

$$\begin{aligned} Q_{k}(p)&=\frac{1}{2}\left\| X_{k}-p\right\| ^{2}-\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}+\frac{(1-\alpha _{k})}{2}\left( \frac{5}{4\rho _{k}(1+L\lambda _{k})}-1\right) \left\| X_{k}-X_{k-1}\right\| ^{2}\\&=\frac{1}{2}\left\| X_{k}-p\right\| ^{2}+\left( \frac{(1-\alpha _{k})(2\alpha _{k}^{2}+1-\alpha _{k})}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}-1+\alpha _{k}\right) \frac{1}{2}\left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}\\&\ge \frac{1}{2}\left\| X_{k}-p\right\| ^{2}+\left( \frac{(1-\alpha _{k})(\alpha _{k}^{2}+1-\alpha _{k})}{(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}-1+\alpha _{k}\right) \frac{1}{2}\left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}\\&\ge \frac{1}{2}\left\| X_{k}-p\right\| ^{2}+\left( \frac{(1-\alpha _{k})(\alpha _{k}^{2}+1-\alpha _{k})}{(1-\alpha _{k})^{2}}-1+\alpha _{k}\right) \frac{1}{2}\left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}\\&=\frac{\alpha _{k}+(1-\alpha _{k})}{2}\left\| X_{k}-p\right\| ^{2}+\frac{1}{2}\left( \alpha _{k}+\frac{\alpha ^{2}_{k}}{1-\alpha _{k}}\right) \left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}\\&\ge \frac{\alpha _{k}}{2}\left( \left\| X_{k}-p\right\| ^{2}+\left\| X_{k}-X_{k-1}\right\| ^{2}\right) \\&\quad -\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}+\alpha _{k}\left\| X_{k}-p\right\| \cdot \left\| X_{k}-X_{k-1}\right\| \\&\ge \frac{\alpha _{k}}{2}\left( \left\| X_{k}-p\right\| +\left\| X_{k}-X_{k-1}\right\| \right) ^{2}-\frac{\alpha _{k}}{2}\left\| X_{k-1}-p\right\| ^{2}\ge 0. \end{aligned}$$

where the first inequality discards the nonnegative term \(\alpha _{k}^{2}\) in the factor \(2\alpha _{k}^{2}+1-\alpha _{k}\), the second inequality uses \({\bar{\varepsilon }}<1\) together with \(\alpha _{k}\le {\bar{\alpha }}\in (0,1)\), the third inequality makes use of the Young inequality \(\frac{1-a}{2a}\left\| X_{k}-p\right\| ^{2}+\frac{a}{2(1-a)}\left\| X_{k}-X_{k-1}\right\| ^{2}\ge \left\| X_{k}-p\right\| \cdot \left\| X_{k}-X_{k-1}\right\| \) (applied with \(a=\alpha _{k}\) and multiplied by \(\alpha _{k}\)), and the fourth inequality uses the triangle inequality \(\left\| X_{k-1}-p\right\| \le \left\| X_{k}-X_{k-1}\right\| +\left\| X_{k}-p\right\| \). Lemma 17 readily yields the existence of an a.s. finite limiting random variable \(Q_{\infty }(p)\) such that \(Q_{k}(p)\rightarrow Q_{\infty }(p)\), \({\mathbb {P}}\)-a.s., and \((\theta _{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({{\mathbb {F}}})\). Since \(\lambda _{k}\rightarrow \lambda \), we get \(\lim _{k\rightarrow \infty }\rho _{k}=\frac{5(1-{\bar{\varepsilon }})(1-{\bar{\alpha }})^{2}}{4(1+L\lambda )(2{\bar{\alpha }}^{2}+1-{\bar{\alpha }})}\). Hence,

$$\begin{aligned}&\lim _{k\rightarrow \infty }\left[ (1-\alpha _{k})\left( \tfrac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}-1\right) -2\alpha _{k}^2\right] \left\| X_{k}(\omega )-X_{k-1}(\omega )\right\| ^2=0,\text { and }\\&\lim _{k\rightarrow \infty }\frac{\rho _{k}}{8}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k}(\omega ))=0, \end{aligned}$$

\({\mathbb {P}}\)-a.s. We conclude that \(\lim _{k\rightarrow \infty }\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})=0\), \({\mathbb {P}}\)-a.s.

To prove (ii) observe that, since \({\bar{\varepsilon }}\in (0,1)\) and \(\lim _{k\rightarrow \infty }\alpha _{k}={\bar{\alpha }}\), it follows

$$\begin{aligned} \left[ 2\alpha _{k}^{2}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \right] \le \frac{-{\bar{\varepsilon }}}{1-{\bar{\varepsilon }}}(2{\bar{\alpha }}^{2}+1-{\bar{\alpha }})<0. \end{aligned}$$

Consequently, \(\lim _{k\rightarrow \infty }\left\| X_{k}-X_{k-1}\right\| ^2 =0\), \({\mathbb {P}}\)-a.s., and \(\left( \phi _{k}(p)-\alpha _{k}\phi _{k-1}(p)\right) _{k\in {\mathbb {N}}}\) is almost surely bounded. Hence, for \({\mathbb {P}}\)-almost every \(\omega \in \Omega \), there exists \(C_{1}(\omega )\in [0,\infty )\) such that

$$\begin{aligned} \phi _{k}(p,\omega )\le C_{1}(\omega )+\alpha _{k}\phi _{k-1}(p,\omega )\le C_{1}(\omega )+{\bar{\alpha }}\phi _{k-1}(p,\omega )\qquad \forall k\ge 1. \end{aligned}$$

Iterating this relation and using the fact that \({\bar{\alpha }}\in (0,1)\), we easily derive

$$\begin{aligned} \phi _{k}(p,\omega )\le \frac{C_{1}(\omega )}{1-{\bar{\alpha }}}+{\bar{\alpha }}^{k-1}\phi _{1}(p,\omega ). \end{aligned}$$

Hence, \((\phi _{k}(p))_{k\in {\mathbb {N}}}\) is a.s. bounded, which implies that \((X_{k})_{k\in {\mathbb {N}}}\) is bounded \({\mathbb {P}}\)-a.s. We next claim that \((\left\| X_{k}-p\right\| )_{k\in {\mathbb {N}}}\) converges to a \([0,\infty )\)-valued random variable \({\mathbb {P}}\)-a.s. Indeed, take \(\omega \in \Omega \) such that \(\phi _{k}(p,\omega )\equiv \phi _{k}(\omega )\) is bounded. Suppose there exists \(\mathtt {t}_{1}(\omega )\in [0,\infty ),\mathtt {t}_{2}(\omega )\in [0,\infty )\), and subsequences \((\phi _{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\) and \((\phi _{l_{j}}(\omega ))_{j\in {\mathbb {N}}}\) such that \(\phi _{k_{j}}(\omega )\rightarrow \mathtt {t}_{1}(\omega )\) and \(\phi _{l_{j}}(\omega )\rightarrow \mathtt {t}_{2}(\omega )>\mathtt {t}_{1}(\omega )\). Then, \(\lim _{j\rightarrow \infty }Q_{k_{j}}(p)(\omega )=Q_{\infty }(p)(\omega )=(1-{\bar{\alpha }})\mathtt {t}_{1}(\omega )<(1-{\bar{\alpha }})\mathtt {t}_{2}(\omega )=\lim _{j\rightarrow \infty }Q_{l_{j}}(p)(\omega )=Q_{\infty }(p)(\omega )\), a contradiction. It follows that \(\mathtt {t}_{1}(\omega )=\mathtt {t}_{2}(\omega )\) and, in turn, \(\phi _{k}(\omega )\rightarrow \mathtt {t}(\omega )\). Thus, for each \(p\in {\mathsf {S}}\), \(\phi _{k}(p)\rightarrow \mathtt {t}\) \({\mathbb {P}}\)-a.s.

Since we assume that \({\mathsf {H}}\) is separable, [67, Prop 2.3(iii)] guarantees that there exists a set \(\Omega _{0}\in {\mathcal {F}}\) with \({\mathbb {P}}(\Omega _{0})=1\) such that, for every \(\omega \in \Omega _{0}\) and every \(p\in {\mathsf {S}}\), the sequence \((\left\| X_{k}(\omega )-p\right\| )_{k\in {\mathbb {N}}}\) converges.

We next show that all weak limit points of \((X_{k})_{k\in {\mathbb {N}}}\) are contained in \({\mathsf {S}}\). Let \(\omega \in \Omega \) such that \((X_{k}(\omega ))_{k\in {\mathbb {N}}}\) is bounded. Thanks to [3, Lemma 2.45], we can find a weakly convergent subsequence \((X_{k_{j}}(\omega ))_{j\in {\mathbb {N}}}\) with limit \(\chi (\omega )\), i.e. for all \(u\in {\mathsf {H}}\) we have \(\lim _{j\rightarrow \infty }\left\langle X_{k_{j}}(\omega ),u\right\rangle =\left\langle \chi (\omega ),u\right\rangle \). This implies

$$\begin{aligned} \lim _{j\rightarrow \infty }\left\langle Z_{k_{j}}(\omega ),u\right\rangle =\lim _{j\rightarrow \infty }\left\langle X_{k_{j}}(\omega ),u\right\rangle +\lim _{j\rightarrow \infty }\alpha _{k_{j}}\left\langle X_{k_{j}}(\omega )-X_{k_{j-1}}(\omega ),u\right\rangle =\left\langle \chi (\omega ),u\right\rangle , \end{aligned}$$

showing that \(Z_{k_{j}}(\omega )\rightharpoonup \chi (\omega )\). Along this weakly converging subsequence, define

$$\begin{aligned} r_{k_{j}}(\omega )\triangleq Z_{k_{j}}(\omega )-J_{\lambda _{k_{j}}T}(Z_{k_{j}}(\omega )-\lambda _{k_{j}}V(Z_{k_{j}}(\omega ))). \end{aligned}$$

Clearly, \(\mathsf {res}_{\lambda _{k_{j}}}(Z_{k_{j}}(\omega ))=\left\| r_{k_{j}}(\omega )\right\| \), so that \(\lim _{j\rightarrow \infty }r_{k_{j}}(\omega )=0\). By definition

$$\begin{aligned} \frac{1}{\lambda _{k_{j}}}r_{k_{j}}(\omega )-V(Z_{k_{j}}(\omega ))+V\left( Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\right) \in F(Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )). \end{aligned}$$

Since V and \(F=T+V\) are maximally monotone, their graphs are sequentially closed in the weak-strong topology \({\mathsf {H}}^{\text {weak}}\times {\mathsf {H}}^{\text {strong}}\) [3, Prop. 20.33(ii)]. By the strong convergence \(r_{k_{j}}(\omega )\rightarrow 0\), we have \(Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\rightharpoonup \chi (\omega )\); moreover, the Lipschitz continuity of V and \(\lambda _{k_{j}}\rightarrow \lambda >0\) yield \(\frac{1}{\lambda _{k_{j}}}r_{k_{j}}(\omega )-V(Z_{k_{j}}(\omega ))+V\left( Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\right) \rightarrow 0\) strongly. Hence, \(0\in F(\chi (\omega ))=(T+V)(\chi (\omega ))\), showing that \(\chi (\omega )\in {\mathsf {S}}\). Invoking [67, Prop 2.3(iv)], we conclude that \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s. to an \({\mathsf {S}}\)-valued random variable.

We now establish (iii). Let \(q_{k}\triangleq {\mathbb {E}}[Q_{k}(p)]\), so that (42) yields the recursion

$$\begin{aligned} q_{k}\le q_{k-1}-{\mathbb {E}}[\theta _{k}]+\psi _{k}. \end{aligned}$$

By Assumption 5, and the definition of all sequences involved, we see that \(\sum _{k=1}^{\infty }\psi _{k}<\infty \). Hence, a telescopian argument gives

$$\begin{aligned} q_{k}-q_{0}=\sum _{i=1}^{k}(q_{i}-q_{i-1})\le -\sum _{i=1}^{k}{\mathbb {E}}[\theta _{i}]+\sum _{i=1}^{k}\psi _{i}\le -\sum _{i=1}^{k}{\mathbb {E}}[\theta _{i}]+\sum _{i=1}^{\infty }\psi _{i}. \end{aligned}$$

Hence, for all \(k\ge 1\), rearranging the above reveals

$$\begin{aligned} \sum _{i=1}^{k}{\mathbb {E}}[\theta _{i}]\le q_{0}+\sum _{i=1}^{\infty }\psi _{i}<\infty . \end{aligned}$$

Letting \(k\rightarrow \infty \), we conclude \(\left( {\mathbb {E}}[\theta _{k}]\right) _{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\). Classically, this implies \(\theta _{k}\rightarrow 0\) \({\mathbb {P}}\)-a.s. By a simple majorization argument, we deduce that \({\mathbb {P}}\)-a.s.

$$\begin{aligned} \infty&>\sum _{k=1}^{\infty }\left\{ \frac{\rho _{k}}{8}\mathsf {res}^{2}_{\lambda _{k}}(Z_{k})-\left[ 2\alpha ^{2}_{k}+(1-\alpha _{k})\left( 1-\frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}\right) \right] \Delta _{k}\right\} \\&\ge \sum _{k=1}^{\infty }\left[ (1-\alpha _{k})\left( \frac{5(1-\alpha _{k})}{4\rho _{k}(1+L\lambda _{k})}-1\right) -2\alpha ^{2}_{k}\right] \Delta _{k}. \end{aligned}$$

\(\square \)

Remark 2

The above result gives some indication of the balance between the inertial effect and the relaxation effect. Our analysis revealed that the maximal admissible value of the relaxation parameter is \(\rho \le \frac{5(1-{\bar{\varepsilon }})(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}-\alpha +1)}\). This is closely aligned with the maximal relaxation value exhibited in Remark 2.13 of [2]. Specifically, define the function \(\rho _{m}(\alpha ,\varepsilon )\triangleq \frac{5(1-\varepsilon )(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}-\alpha +1)}\); this function is decreasing in \(\alpha \). For this choice of parameters, one observes that \(\rho _{m}(\alpha ,\varepsilon )\rightarrow \frac{5(1-\varepsilon )}{4(1+L\lambda )}\) as \(\alpha \rightarrow 0\), and \(\rho _{m}(\alpha ,\varepsilon )\rightarrow 0\) as \(\alpha \rightarrow 1\).
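To make this trade-off concrete, the following short Python sketch simply tabulates \(\rho _{m}(\alpha ,\varepsilon )\) over a grid of inertial values (the constants \(L\), \(\lambda \) and \(\varepsilon \) below are arbitrary illustrative choices, not outputs of the analysis):

# Illustrative evaluation of the maximal relaxation value rho_m(alpha, eps) from Remark 2.
L, lam, eps = 2.0, 0.1, 0.1       # arbitrary example values

def rho_m(alpha):
    return 5 * (1 - eps) * (1 - alpha) ** 2 / (4 * (1 + L * lam) * (2 * alpha ** 2 - alpha + 1))

for alpha in (0.0, 0.25, 0.5, 0.75, 0.99):
    print(f"alpha = {alpha:4.2f}  ->  rho_m = {rho_m(alpha):.4f}")
# rho_m decreases from 5(1-eps)/(4(1+L*lam)) at alpha -> 0 towards 0 as alpha -> 1.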

As an immediate corollary of Theorem 2, we obtain a convergence result when all parameter sequences are constant.

Corollary 5

(Asymptotic convergence under constant inertia and relaxation) Let the same Assumptions as in Theorem 2 hold. Consider Algorithm RISFBF with the constant parameter sequences \(\alpha _{k}\equiv \alpha \in (0,1),\lambda _{k}\equiv \lambda \in (0,\tfrac{1}{4L})\) and \(\rho _{k}=\rho <\frac{5(1-\alpha )^{2}}{4(1+L\lambda )(2\alpha ^{2}+1-\alpha )}\). Then \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s. to a limiting random variable with values in \({\mathsf {S}}\).
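For readers who prefer pseudocode, the following Python sketch records one possible reading of an RISFBF iteration with the constant parameters of Corollary 5, assembled from the quantities used in our analysis: the inertial extrapolation \(Z_{k}\), a resolvent step producing \(Y_{k}\), the forward correction \(R_{k}=Y_{k}+\lambda (A_{k}-B_{k})\), and the relaxation step \(X_{k+1}=(1-\rho )Z_{k}+\rho R_{k}\). The oracle V_hat, the resolvent J_lam_T, the toy problem data and the batch schedule are assumptions made purely for illustration; the formal statement of RISFBF given earlier in the paper remains the authoritative description.

import numpy as np

rng = np.random.default_rng(0)

def risfbf_sketch(x0, J_lam_T, V_hat, alpha, lam, rho, batch, n_iter):
    # One plausible reading of the RISFBF iteration with constant parameters.
    # J_lam_T: resolvent J_{lam T} of the maximally monotone operator T.
    # V_hat(x, m): mini-batch estimate of V(x) built from m oracle samples.
    x_prev, x = x0.copy(), x0.copy()
    for k in range(n_iter):
        z = x + alpha * (x - x_prev)             # inertial extrapolation Z_k
        a = V_hat(z, batch(k))                   # first oracle call, estimate of V(Z_k)
        y = J_lam_T(z - lam * a)                 # resolvent (backward) step Y_k
        b = V_hat(y, batch(k))                   # second oracle call, estimate of V(Y_k)
        r = y + lam * (a - b)                    # forward correction R_k
        x_prev, x = x, (1 - rho) * z + rho * r   # relaxation step X_{k+1}
    return x

# Toy instance (an assumption, for illustration only): V(x) = Q x with Q positive diagonal,
# observed through additive Gaussian noise whose variance scales like 1/m; T is the normal
# cone of the box [-1, 1]^d, whose resolvent is the projection onto that box.
d = 5
Q = np.diag(np.linspace(0.5, 2.0, d))
V_hat = lambda x, m: Q @ x + rng.normal(scale=1.0 / np.sqrt(m), size=d)
J_lam_T = lambda u: np.clip(u, -1.0, 1.0)

x_out = risfbf_sketch(np.ones(d), J_lam_T, V_hat, alpha=0.3, lam=0.1, rho=0.5,
                      batch=lambda k: (k + 1) ** 2, n_iter=200)
print("distance to the solution x_bar = 0:", np.linalg.norm(x_out))

With these illustrative constants, \(\lambda =0.1<\tfrac{1}{4L}\) and \(\rho =0.5\) lies below the bound of Corollary 5 evaluated at \(\alpha =0.3\), while the quadratically growing batch size keeps the summed oracle variance finite.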

In fact, almost sure convergence is preserved under a larger steplength \(\lambda _k\), as shown in the following corollary.

Corollary 6

(Asymptotic convergence under larger steplength) Let the same Assumptions as in Theorem 2 hold. Consider Algorithm RISFBF with the constant parameter sequences \(\alpha _{k}\equiv \alpha \in (0,1),\lambda _{k}\equiv \lambda \in (0,\tfrac{1-\nu }{2L})\) and \(\rho _{k}=\rho <\frac{(3-\nu )(1-\alpha )^{2}}{2(1+L\lambda )(2\alpha ^{2}+1-\alpha )}\), where \(0<\nu <1\). Then \((X_{k})_{k\in {\mathbb {N}}}\) converges weakly \({\mathbb {P}}\)-a.s. to a limiting random variable with values in \({\mathsf {S}}\).

Proof

First, we slightly modify (27) so that the following relation holds for \(0<\nu <1\):

$$\begin{aligned} \nonumber&\quad (1-\rho _{k})\left\| Z_{k}-p\right\| ^{2}+\rho _{k}\left\| R_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}\\ \nonumber&\le \left\| Z_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}-\rho _{k}((1-\nu )-2L^{2}\lambda ^{2}_{k})\left\| Z_{k}-Y_{k}\right\| ^{2}\\ \nonumber&\quad +2\lambda ^{2}_{k}\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}-2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle -\rho _{k}\nu \left\| Y_{k}-Z_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle . \end{aligned}$$

Then, similarly to (29), we multiply both sides of (28) by \(\frac{\rho _{k}((1-\nu )-2L\lambda _{k})}{1+L\lambda _{k}}\), which is positive since \(\lambda _k \in (0,\tfrac{1-\nu }{2L})\). The convergence then follows in the same fashion as in Theorem 2. \(\square \)

Another corollary of Theorem 2 is a strong convergence result, assuming that F is demiregular (cf. Definition 1).

Corollary 7

(Strong Convergence under demiregularity) Let the same Assumptions as in Theorem 2 hold. If \(F=T+V\) is demiregular, then \((X_{k})_{k\in {\mathbb {N}}}\) converges strongly \({\mathbb {P}}\)-a.s. to a \({\mathsf {S}}\)-valued random variable.

Proof

Set \(y_{k_{j}}(\omega )\triangleq Z_{k_{j}}(\omega )-r_{k_{j}}(\omega )\), and \(u_{k_{j}}(\omega )\triangleq \frac{1}{\lambda _{k_{j}}}r_{k_{j}}(\omega )-V(Z_{k_{j}}(\omega ))+V(Z_{k_{j}}(\omega )-r_{k_{j}}(\omega ))\). We know from the proof of Theorem 2 that \(y_{k_{j}}(\omega )\rightharpoonup \chi (\omega )\), \(u_{k_{j}}(\omega )\in F(y_{k_{j}}(\omega ))\) and \(u_{k_{j}}(\omega )\rightarrow 0\). If \(F=T+V\) is demiregular, then \(y_{k_{j}}(\omega )\rightarrow \chi (\omega )\) strongly. Since we know \(r_{k_{j}}(\omega )\rightarrow 0\), we conclude \(Z_{k_{j}}(\omega )\rightarrow \chi (\omega )\). As \(\left\| X_{k}-X_{k-1}\right\| \rightarrow 0\), the sequences \(Z_{k}\) and \(X_{k}\) have the same limit points, and since \((\left\| X_{k}-\chi \right\| )_{k\in {\mathbb {N}}}\) converges \({\mathbb {P}}\)-a.s., it follows that \(X_{k}\rightarrow \chi \) strongly \({\mathbb {P}}\)-a.s. \(\square \)

4.2 Linear convergence

In this section, we derive a linear convergence rate and prove strong convergence of the last iterate in the case where the single-valued operator V is strongly monotone. Various linear convergence results for stochastic approximation algorithms solving fixed-point problems are reported in [68] in the context of random sweeping processes. In a general structured monotone inclusion setting, [69] derive rate statements for cocoercive mean operators in the context of forward-backward splitting methods. More recently, Cui and Shanbhag [27] provide linear and sublinear rates of convergence for a variance-reduced inexact proximal-point scheme for both strongly monotone and monotone inclusion problems. However, to the best of our knowledge, our results are the first published for a stochastic operator splitting algorithm featuring relaxation and inertial effects. Notably, this result does not require imposing Assumption 4(i) (i.e. that the noise process is conditionally unbiased). Instead, our derivations hold under the weaker notion of an asymptotically unbiased SO.

Assumption 6

(Asymptotically unbiased SO) There exists a constant \(\mathtt {s}>0\) such that

$$\begin{aligned} {\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\mathtt {s}^{2}}{m_{k}}\text { and }{\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\le \frac{\mathtt {s}^{2}}{m_{k}}, \qquad {\mathbb {P}}-\text {a.s.} \end{aligned}$$
(43)

for all \(k\ge 1\).

This assumption is rather mild and is imposed in many simulation-based optimization schemes in finite dimensions. Among the most important of these is the simultaneous perturbation stochastic approximation (SPSA) method pioneered by Spall [70, 71]. In this scheme, it is required that the gradient estimator satisfies an asymptotic unbiasedness requirement; in particular, the bias in the gradient estimator needs to diminish at a suitable rate to ensure asymptotic convergence. In fact, this setting has been investigated in detail in the context of stochastic Nash games [72]. Further examples of stochastic approximation schemes in a Hilbert-space setting obeying Assumption 6 are given in [73, 74] and [35]. We now discuss an example that further clarifies the requirements on the estimator.

Example 3

Let \(\{{\hat{V}}_{k}(x,\xi )\}_{k\in {\mathbb {N}}}\) be a collection of independent random \({\mathsf {H}}\)-valued vector fields of the form \({\hat{V}}_{k}(x,\xi )=V(x)+\varepsilon _{k}(x,\xi )\) such that

$$\begin{aligned} {\mathbb {E}}_{\xi }[\varepsilon _{k}(x,\xi )\vert x]=\frac{B_{k}}{\sqrt{m_k}}\text { and }{\mathbb {E}}_{\xi }[\left\| \varepsilon _{k}(x,\xi )\right\| ^{2}\vert x]\le {\hat{\sigma }}^{2}\qquad {\mathbb {P}}-\text {a.s.}, \end{aligned}$$

where \({\hat{\sigma }}>0\) and \({\hat{b}} > 0\) are constants such that the \({\mathsf {H}}\)-valued sequence \((B_{k})_{k\in {\mathbb {N}}}\) satisfies \(\Vert B_k\Vert ^2 \le {\hat{b}}^2\) almost surely. Under this model, the conditional second moment of \(U_{k}\) can be estimated as

$$\begin{aligned} {\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]&={\mathbb {E}}\left[ \left\| \frac{1}{m_{k}}\sum _{t=1}^{m_{k}}\varepsilon _{t}(Z_{k})\right\| ^{2}\vert {\mathcal {F}}_{k}\right] \\&=\frac{1}{m_{k}^{2}}\sum _{t=1}^{m_{k}}{\mathbb {E}}[\left\| \varepsilon _{t}(Z_{k})\right\| ^{2}\vert {\mathcal {F}}_{k}]+\frac{2}{m_{k}^{2}}\sum _{t=1}^{m_{k}}\sum _{l>t}{\mathbb {E}}[\left\langle \varepsilon _{t}(Z_{k}),\varepsilon _{l}(Z_{k})\right\rangle \vert {\mathcal {F}}_{k}]\\&\le \frac{{\hat{\sigma }}^{2}}{m_{k}}+\frac{(m_{k}-1)}{m_{k}}\frac{\left\| B_{k}\right\| ^{2}}{m_k} \le \frac{{\hat{\sigma }}^2 + {\hat{b}}^2}{m_k} \qquad {\mathbb {P}}-\text {a.s.} \end{aligned}$$

Setting \(\mathtt {s}^2 \triangleq {\hat{\sigma }}^{2}+{\hat{b}}^{2}\), we see that condition (43) holds. A similar estimate holds for the random noise \(\left\| W_{k}\right\| ^{2}\).
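The bound in Example 3 is straightforward to reproduce numerically. The following sketch (with an arbitrary toy noise model; the dimension, the bias direction \(B\) and the bound \({\hat{\sigma }}^{2}\) are illustrative assumptions) forms mini-batch averages of a biased estimator and compares the empirical second moment of \(U_{k}\) with \(({\hat{\sigma }}^{2}+{\hat{b}}^{2})/m_{k}\):

import numpy as np

rng = np.random.default_rng(1)
d = 3
B = np.array([0.5, -0.5, 1.0])     # bias direction, ||B||^2 <= b_hat^2
sigma_hat2 = 1.0                    # assumed bound on E||eps_t||^2
b_hat2 = float(B @ B)

trials = 5000
for m in (10, 100, 400):
    # eps_t = B/sqrt(m) + zero-mean noise, so that E[eps_t] = B/sqrt(m) as in Example 3;
    # the noise variance is chosen so that E||eps_t||^2 equals sigma_hat2 exactly.
    noise_var = sigma_hat2 - b_hat2 / m
    eps = B / np.sqrt(m) + rng.normal(scale=np.sqrt(noise_var / d), size=(trials, m, d))
    U = eps.mean(axis=1)            # U_k = (1/m) sum_t eps_t
    emp = np.mean(np.sum(U ** 2, axis=1))
    print(f"m = {m:4d}:  E||U_k||^2 ~ {emp:.5f}   bound (s^2)/m = {(sigma_hat2 + b_hat2) / m:.5f}")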

Assumption 7

\(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) is \(\mu \)-strongly monotone (\(\mu >0\)), i.e.

$$\begin{aligned} \left\langle V(x)-V(y),x-y\right\rangle \ge \mu \left\| x-y\right\| ^{2}\qquad \forall x,y\in {{\,\mathrm{dom}\,}}V={\mathsf {H}}. \end{aligned}$$
(44)

Combined with Assumption 1, strong monotonicity implies that \({\mathsf {S}}=\{{\bar{x}}\}\) for some \({\bar{x}}\in {\mathsf {H}}\).

Remark 3

In the context of a structured operator \(F=T+V\), the assumption that the single-valued part V is strongly monotone can be made without loss of generality. Indeed, if instead T is assumed to be \(\mu \)-strongly monotone, then \(T-\mu {{\,\mathrm{Id}\,}}\) is still maximally monotone, while \({{\tilde{V}}} \triangleq V+\mu {{\,\mathrm{Id}\,}}\) is \((L+\mu )\)-Lipschitz continuous and \(\mu \)-strongly monotone, so that the decomposition \(F={{\tilde{V}}}+(T-\mu {{\,\mathrm{Id}\,}})\) again fits our framework.

Our first result establishes a “perturbed linear convergence” rate on the anchor sequence \(\left( \left\| X_{k}-{\bar{x}}\right\| ^{2}\right) _{k\in {\mathbb {N}}}\), similar to the one derived in [68, Corollary 3.2] in the context of randomized fixed point iterations.

Theorem 8

(Perturbed linear convergence) Consider RISFBF with \(X_{0}=X_{1}\). Suppose Assumptions 1-3, Assumption 6 and Assumption 7 hold. Let \({\mathsf {S}}=\{{\bar{x}}\}\) denote the unique solution of (MI). Suppose \(\lambda _k\equiv \lambda \le \min \left\{ \tfrac{a}{2\mu },b\mu ,\tfrac{1-a}{2{\tilde{L}}}\right\} \), where \(0<a,b<1\), \({\tilde{L}}^2\triangleq L^2+\tfrac{1}{2}\), \(\eta _k\equiv \eta \triangleq (1-b)\lambda \mu \). Define \(\Delta M_{k}\triangleq 2\rho _{k}\left\| W_{k}\right\| ^{2}+\frac{(3-a)\rho _{k}\lambda _{k}^{2}}{1+{\tilde{L}}\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}\). Let \((\alpha _k)_{k\in {\mathbb {N}}}\) be a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\), and define \(\rho _k\triangleq \tfrac{(3-a)(1-\alpha _k)^2}{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}\) for every \(k \in {\mathbb {N}}\). Set

$$\begin{aligned}&H_{k}\triangleq \left\| X_{k}-{\bar{x}}\right\| ^2+(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda )}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2-\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^2, \nonumber \\&c_{k}\triangleq {\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}],\text { and }{\bar{c}}_{k}\triangleq \sum _{i=1}^{k}q^{k-i}{\mathbb {E}}[c_{i}\vert {\mathcal {F}}_{1}], \end{aligned}$$
(45)

where \(q=1-\rho \eta \in (0,1)\), \(\rho =\frac{16(3-a)(1-{\bar{\alpha }})^{2}}{31(1+{\tilde{L}}\lambda )}\). Then the following hold:

  1. (i)

    \(({\bar{c}}_{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\).

  2. (ii)

    For all \(k\ge 1\)

    $$\begin{aligned} {\mathbb {E}}[H_{k+1}\vert {\mathcal {F}}_{1}]\le q^{k}H_{1}+{\bar{c}}_{k}. \end{aligned}$$
    (46)

    In particular, this implies a perturbed linear rate of the sequence \((\Vert X_{k}-{\bar{x}}\Vert ^2)_{k\in {\mathbb {N}}}\) as

    $$\begin{aligned} {\mathbb {E}}[\left\| X_{k+1}-{\bar{x}}\right\| ^{2}\vert {\mathcal {F}}_{1}]\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\tfrac{2}{1-{\bar{\alpha }}}{\bar{c}}_{k}. \end{aligned}$$
    (47)
  3. (iii)

    \(\sum _{k=1}^{\infty }(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda )}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2<\infty \) \({\mathbb {P}}\text {-a.s.}\).

Proof

Our point of departure for the analysis under the stronger Assumption 7 is eq. (23), which becomes

$$\begin{aligned} \left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle \ge \lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle +\lambda _{k}\mu \left\| Y_{k}-p\right\| ^{2}\quad \forall (p,p^{*})\in {{\,\mathrm{gr}\,}}(F). \end{aligned}$$

Repeating the analysis of the previous section with reference point \(p={\bar{x}}\), the unique solution of (MI), and \(p^{*}=0\) yields the bound

$$\begin{aligned} \left\| R_{k}-{\bar{x}}\right\| ^{2}&\le \left\| Z_{k}-{\bar{x}}\right\| ^{2}-(1-2L^{2}\lambda _{k}^{2})\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}\\&\quad +2\lambda _{k}\left\langle W_{k},{\bar{x}}-Y_{k}\right\rangle -2\lambda _{k}\mu \left\| {\bar{x}}-Y_{k}\right\| ^{2}. \end{aligned}$$

The triangle inequality \(\left\| Z_{k}-{\bar{x}}\right\| ^{2}\le 2\left\| Y_{k}-Z_{k}\right\| ^{2}+2\left\| Y_{k}-{\bar{x}}\right\| ^{2}\) gives

$$\begin{aligned} \left\| R_{k}-{\bar{x}}\right\| ^{2}&\le \left\| Z_{k}-{\bar{x}}\right\| ^{2}-(1-2L^{2}\lambda _{k}^{2})\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\lambda _{k}\left\langle W_{k},{\bar{x}}-Y_{k}\right\rangle \\&\quad +2\lambda _{k}\mu \left\| Y_{k}-Z_{k}\right\| ^{2}-\lambda _{k}\mu \left\| Z_{k}-{\bar{x}}\right\| ^{2}. \end{aligned}$$

By (9), we have for all \(c>0\)

$$\begin{aligned} \left\langle W_{k},{\bar{x}}-Y_{k}\right\rangle&\le \frac{1}{2c}\left\| W_{k}\right\| ^{2}+\frac{c}{2}\left\| Y_{k}-{\bar{x}}\right\| ^{2}\nonumber \\&\le \frac{1}{2c}\left\| W_{k}\right\| ^{2}+c\left( \left\| Z_{k}-{\bar{x}}\right\| ^{2}+\left\| Z_{k}-Y_{k}\right\| ^{2}\right) . \end{aligned}$$
(48)

Observe that this estimate is crucial in weakening the requirement of conditional unbiasedness. Choose \(c=\frac{\lambda _{k}}{2}\) to get

$$\begin{aligned} \left\| R_{k}-{\bar{x}}\right\| ^{2}&\le \left\| Z_{k}-{\bar{x}}\right\| ^{2}-(1-2L^{2}\lambda _{k}^{2})\left\| Y_{k}-Z_{k}\right\| ^{2}+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\left\| W_{k}\right\| ^{2}+\lambda ^{2}_{k}\left\| {\bar{x}}-Z_{k}\right\| ^{2}\\&\quad +2\lambda _{k}\mu \left\| Y_{k}-Z_{k}\right\| ^{2}-\lambda _{k}\mu \left\| Z_{k}-{\bar{x}}\right\| ^{2}+\lambda _{k}^{2}\left\| Z_{k}-Y_{k}\right\| ^{2}\\&=(1+\lambda _{k}^{2}-\lambda _{k}\mu )\left\| Z_{k}-{\bar{x}}\right\| ^{2}+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\left\| W_{k}\right\| ^{2}\\&\quad -(1-2L^{2}\lambda ^{2}_{k}-2\lambda _{k}\mu -\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}. \end{aligned}$$

Assume that \(\lambda _{k}\mu \le \frac{a}{2}<1\). Then,

$$\begin{aligned} 1-2L^{2}\lambda ^{2}_{k}-2\lambda _{k}\mu -\lambda _{k}^{2}\ge (1-a)-2L^{2}\lambda _{k}^{2}-\lambda _{k}^{2}=(1-a)-2{\tilde{L}}^{2}\lambda ^{2}_{k}, \end{aligned}$$

where \({\tilde{L}}^2\triangleq L^2+1/2\). Moreover, choosing \(\lambda _{k}\le b\mu \), we see

$$\begin{aligned} 1+\lambda ^{2}_{k}-\lambda _{k}\mu \le 1-(1-b)\lambda _k\mu . \end{aligned}$$

Using these bounds, we readily deduce for \(0<\lambda _{k}\le \min \{\frac{a}{2\mu },b\mu \}\), that

$$\begin{aligned} \left\| R_{k}-{\bar{x}}\right\| ^{2}&\le \left( 1-(1-b)\lambda _k\mu \right) \left\| Z_{k}-{\bar{x}}\right\| ^{2}-\left( (1-a)-2{\tilde{L}}^{2}\lambda ^{2}_{k}\right) \left\| Y_{k}-Z_{k}\right\| ^{2} \nonumber \\&+2\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\left\| W_{k}\right\| ^{2}. \end{aligned}$$
(49)

Proceeding as in the derivation of eq. (30), one sees first that

$$\begin{aligned} \frac{1}{2\rho _{k}^{2}}\left\| X_{k+1}-Z_{k}\right\| ^{2}\le (1+{\tilde{L}}\lambda _{k})^{2}\left\| Y_{k}-Z_{k}\right\| ^{2}+\lambda ^{2}_{k}\left\| \mathtt {e}_{k}\right\| ^{2}, \end{aligned}$$

and therefore,

$$\begin{aligned} -\rho _{k}((1-a)-2{\tilde{L}}^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}&\le -\frac{(1-a)-2{\tilde{L}}\lambda _{k}}{2\rho _{k}(1+{\tilde{L}}\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}\nonumber \\&\quad +\frac{\rho _{k}\lambda ^{2}_{k}((1-a)-2{\tilde{L}}\lambda _{k})}{1+{\tilde{L}}\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$
(50)

Define \(\eta _{k}=(1-b)\lambda _k\mu \). Using the equality (25),

$$\begin{aligned}&\left\| X_{k+1}-{\bar{x}}\right\| ^{2}=(1-\rho _{k})\left\| Z_{k}-{\bar{x}}\right\| ^{2}+\rho _{k}\left\| R_{k}-{\bar{x}}\right\| ^{2}-\tfrac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}\\&\quad \overset{(49)}{\le } (1-\rho _k\eta _k)\left\| Z_{k}-{\bar{x}}\right\| ^{2}-\tfrac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}-\rho _{k}((1-a)-2{\tilde{L}}^{2}\lambda ^{2}_{k})\left\| Z_{k}-Y_{k}\right\| ^{2}\\&\qquad +2\lambda ^{2}_k\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}+2\rho _k\left\| W_{k}\right\| ^2\\&\quad \overset{(50)}{\le } (1-\rho _k\eta _k)\left\| Z_{k}-{\bar{x}}\right\| ^{2}-\tfrac{(3-a)-2\rho _{k}(1+{\tilde{L}}\lambda _{k})}{2\rho _{k}(1+{\tilde{L}}\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}+2\rho _{k}\left\| W_{k}\right\| ^{2}+\tfrac{(3-a)\rho _{k}\lambda _{k}^{2}}{1+{\tilde{L}}\lambda _{k}} \left\| \mathtt {e}_{k}\right\| ^{2} \\&\quad \overset{(32),(31)}{\le } (1-\rho _k\eta _k)[(1+\alpha _{k})\left\| X_{k}-{\bar{x}}\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-{\bar{x}}\right\| ^{2}+\alpha _{k}(1+\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}] \\&\qquad -\tfrac{(3-a)-2\rho _{k}(1+{\tilde{L}}\lambda _{k})}{2\rho _{k}(1+{\tilde{L}}\lambda _{k})}[(1-\alpha _{k})\left\| X_{k+1}-X_{k}\right\| ^{2}+(\alpha ^{2}_{k}-\alpha _{k})\left\| X_{k}-X_{k-1}\right\| ^{2}] \\&\qquad +2\rho _{k}\left\| W_{k}\right\| ^{2}+\tfrac{(3-a)\rho _{k}\lambda _{k}^{2}}{(1+{\tilde{L}}\lambda _{k})} \left\| \mathtt {e}_{k}\right\| ^{2} \\&\quad \le (1+\alpha _k)(1-\rho _k\eta _k)\Vert X_k-{\bar{x}}\Vert ^2-\alpha _k(1-\rho _k\eta _k)\Vert X_{k-1}-{\bar{x}}\Vert ^2+\Delta M_{k} \\&\qquad +\alpha _k\left\| X_k-X_{k-1}\right\| ^2\left[ (1+\alpha _k)(1-\rho _k\eta _k)+(\alpha _k-1)+\tfrac{(3-a)(1-\alpha _k)}{2\rho _k(1+{\tilde{L}}\lambda _k)}\right] \\&\qquad -(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1\right) \left\| X_{k+1}-X_k\right\| ^2, \end{aligned}$$

with stochastic error term \(\Delta M_{k}\triangleq 2\rho _{k}\left\| W_{k}\right\| ^{2}+\frac{(3-a)\rho _{k}\lambda _{k}^{2}}{1+{\tilde{L}}\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}\). From here, it follows that

$$\begin{aligned}&\left\| X_{k+1}-{\bar{x}}\right\| ^2+(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1\right) \left\| X_{k+1}-X_k\right\| ^2-\alpha _k\Vert X_k-{\bar{x}}\Vert ^2 \nonumber \\&\quad \le (1-\rho _k\eta _k)\left[ \left\| X_{k}-{\bar{x}}\right\| ^2+(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2-\alpha _k\Vert X_{k-1}-{\bar{x}}\Vert ^2\right] \nonumber \\&\qquad -\left[ (1-\rho _k\eta _k)(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1\right) \right. \nonumber \\&\qquad \left. -\alpha _k\left( (1+\alpha _k)(1-\rho _k\eta _k)+(\alpha _k-1)+\tfrac{(3-a)(1-\alpha _k)}{2\rho _k(1+{\tilde{L}}\lambda _k)}\right) \right] \left\| X_{k}-X_{k-1}\right\| ^2 \nonumber \\&\qquad -\alpha _k\rho _k\eta _k\left\| X_{k}-{\bar{x}}\right\| ^2+\Delta M_{k} \nonumber \\&\quad = (1-\rho _k\eta _k)\left[ \left\| X_{k}-{\bar{x}}\right\| ^2+(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2-\alpha _k\Vert X_{k-1}-{\bar{x}}\Vert ^2\right] \nonumber \\&\qquad -\underbrace{\left[ (1-\alpha _k-\rho _k\eta _k)\left( \tfrac{(3-a)(1-\alpha _k)}{2\rho _k(1+{\tilde{L}}\lambda _k)}-1 \right) -\alpha _k^2(2-\rho _k\eta _k) \right] }_{\triangleq \tilde{I}}\left\| X_{k}-X_{k-1}\right\| ^2-\alpha _k\rho _k\eta _k\left\| X_{k}-{\bar{x}}\right\| ^2+\Delta M_{k}. \end{aligned}$$
(51)

Since \(\lambda _k = \lambda \) and \(\rho _k=\tfrac{(3-a)(1-\alpha _k)^2}{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}\), we claim that \(\rho _k \le \tfrac{1-\alpha _k}{(1+4\alpha _k)\eta }\) for \(\eta \equiv (1-b)\lambda \mu \). Indeed,

$$\begin{aligned} \tfrac{\tfrac{1-\alpha _k}{(1+4\alpha _k)\eta }}{\rho _k}=\tfrac{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}{(3-a)(1-\alpha _k)(1+4\alpha _k)\eta } \ge \tfrac{2(2\alpha _k^2-0.5\alpha _k+1)(1+{\tilde{L}}\lambda )}{(3-a)(1-\alpha _k)(1+4\alpha _k)\tfrac{a}{2}(1-b)} \ge \tfrac{2\cdot \tfrac{31}{32}\cdot 1}{\tfrac{25}{16}\cdot 1} =\frac{31}{25}> 1. \end{aligned}$$

In particular, this implies \(\eta \rho _{k}\in (0,1)\) for all \(k\in {\mathbb {N}}\). We then have

$$\begin{aligned} {{\tilde{I}}}&=(1-\alpha _k-\rho _k\eta )\left( \tfrac{(3-a)(1-\alpha _k)}{2\rho _k(1+{\tilde{L}}\lambda )}-1 \right) -\alpha _k^2(2-\rho _k\eta )\nonumber \\&\ge \left( 1-\alpha _k-\tfrac{1-\alpha _k}{1+4\alpha _k}\right) \left( \tfrac{2\alpha _k^2-0.5\alpha _k+1}{1-\alpha _k}-1 \right) -2\alpha _k^2 \nonumber \\&=\tfrac{(1-\alpha _k)4\alpha _k}{1+4\alpha _k}\cdot \tfrac{2\alpha _k^2+0.5\alpha _k}{1-\alpha _k}-\tfrac{2\alpha _k^2(1+4\alpha _k)}{1+4\alpha _k}\nonumber \\&=0. \end{aligned}$$
(52)

Next, we show that \(H_k\ge \tfrac{1-{\bar{\alpha }}}{2}\Vert X_k-{\bar{x}}\Vert ^2\), for \(H_{k}\) defined in (45). This can be seen from the next string of inequalities:

$$\begin{aligned} H_{k}&=\left\| X_{k}-{\bar{x}}\right\| ^{2}-\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}+(1-\alpha _k)\left( \frac{3-a}{2\rho _k(1+{\tilde{L}}\lambda )}-1\right) \Vert X_k-X_{k-1}\Vert ^2\\&\ge \left\| X_{k}-{\bar{x}}\right\| ^{2}+\left( \frac{(1-\alpha _k)(2\alpha _k^{2}+1-0.5\alpha _k)}{(1-\alpha _k)^{2}}-1+\alpha _k\right) \left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}\\&\ge \left\| X_{k}-{\bar{x}}\right\| ^{2}+\left( \frac{(1-\alpha _k)(2\alpha _k^{2}+1-\alpha _k)}{(1-\alpha _k)^{2}}-1+\alpha _k\right) \left\| X_{k}-X_{k-1}\right\| ^{2}\\&\quad -\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}\\&=\left( \alpha _k+\frac{1-\alpha _k}{2}\right) \left\| X_{k}-{\bar{x}}\right\| ^{2}+\left( \alpha _k+\frac{2\alpha _k^{2}}{1-\alpha _k}\right) \left\| X_{k}-X_{k-1}\right\| ^{2}-\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}\\&\quad +\frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}\\&\ge \alpha _k\left( \left\| X_{k}-{\bar{x}}\right\| ^{2}+\left\| X_{k}-X_{k-1}\right\| ^{2}\right) -\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}\\&\quad +2\alpha _k\left\| X_{k}-{\bar{x}}\right\| \cdot \left\| X_{k}-X_{k-1}\right\| +\frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}\\&\ge \alpha _k\left( \left\| X_{k}-{\bar{x}}\right\| +\left\| X_{k}-X_{k-1}\right\| \right) ^{2}-\alpha _k\left\| X_{k-1}-{\bar{x}}\right\| ^{2}\\&\quad +\frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2} \ge \frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}\\&\ge \frac{1-{\bar{\alpha }}}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}. \end{aligned}$$

In this derivation we have used (9) to estimate \(\frac{1-\alpha _k}{2}\left\| X_{k}-{\bar{x}}\right\| ^{2}+\frac{2\alpha _k^{2}}{1-\alpha _k}\left\| X_{k}-X_{k-1}\right\| ^{2}\ge 2\alpha _k\left\| X_{k}-{\bar{x}}\right\| \cdot \left\| X_{k}-X_{k-1}\right\| ,\) and the specific choice \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )}\).

By recalling (51) and invoking (52), we are left with the stochastic recursion

$$\begin{aligned} H_{k+1}\le q_k H_{k}-{{\tilde{b}}}_{k}+\Delta M_{k}. \end{aligned}$$
(53)

where \(q_k \triangleq 1-\rho _k \eta \) and \({\tilde{b}}_k \triangleq \alpha _k \rho _k \eta _k\Vert X_k-{\bar{x}}\Vert ^2.\) Since \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )} \ge \rho =\frac{16(3-a)(1-{\bar{\alpha }})^{2}}{31(1+{\tilde{L}}\lambda )}\) for every k, we have that \(q_k \le q =1-\eta \rho \) for every k. Furthermore, \(1>\eta \rho _{k}\ge \eta \rho \), so that \(q \in (0,1)\). Taking conditional expectations on both sides on (53), we get

$$\begin{aligned} {\mathbb {E}}[H_{k+1}\vert {\mathcal {F}}_{k}]+{{\tilde{b}}}_{k}\le q H_{k}+c_{k}\quad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

using the notation \(c_{k}\triangleq {\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}]\). Applying the operator \({\mathbb {E}}[\cdot \vert {\mathcal {F}}_{k-1}]\) and using the tower property of conditional expectations, this gives

$$\begin{aligned} {\mathbb {E}}[H_{k+1}\vert {\mathcal {F}}_{k-1}]\le q^{2}H_{k-1}-q{\mathbb {E}}[{\tilde{b}}_{k-1}\vert {\mathcal {F}}_{k-1}]-{\mathbb {E}}[{\tilde{b}}_{k}\vert {\mathcal {F}}_{k-1}]+q{\mathbb {E}}[c_{k-1}\vert {\mathcal {F}}_{k-1}]+{\mathbb {E}}[c_{k}\vert {\mathcal {F}}_{k-1}]. \end{aligned}$$

Proceeding inductively, we see that

$$\begin{aligned} {\mathbb {E}}[H_{k+1}\vert {\mathcal {F}}_{1}]\le q^{k}H_{1}+\sum _{i=1}^{k}q^{k-i}{\mathbb {E}}[c_{i}\vert {\mathcal {F}}_{1}]=q^{k}H_{1}+{\bar{c}}_{k}. \end{aligned}$$

This establishes eq. (46). To validate eq. (47), recall that we assume \(X_{1}=X_{0}\), so that \(H_{1}=(1-\alpha _{1})\left\| X_{1}-{\bar{x}}\right\| ^{2}\). Furthermore, \(H_{k+1}\ge \frac{1-{\bar{\alpha }}}{2}\left\| X_{k+1}-{\bar{x}}\right\| ^{2}\), so that

$$\begin{aligned} {\mathbb {E}}[\left\| X_{k+1}-{\bar{x}}\right\| ^{2}\vert {\mathcal {F}}_{1}]\le q^{k}\left( \frac{2(1-\alpha _{1})}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{2}{1-{\bar{\alpha }}}{\bar{c}}_{k}. \end{aligned}$$

We now show that \(({\bar{c}}_{k})_{k\in {\mathbb {N}}}\in \ell ^{1}_{+}({\mathbb {N}})\). Simple algebra, combined with Assumption 6, gives

$$\begin{aligned} c_{k}&={\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}]\le 2\rho _{k}\left( 1+\frac{(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) {\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]+\frac{2(3-a)\rho _k\lambda ^{2}}{1+{\tilde{L}}\lambda }{\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\nonumber \\&\le \frac{2\mathtt {s}^{2}\rho _{k}}{m_{k}}\left( 1+\frac{2(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) \equiv \frac{\rho _{k}\mathtt {s}^{2}}{m_{k}}\kappa . \end{aligned}$$
(54)

Hence, since \((\rho _{k})_{k\in {\mathbb {N}}}\) is bounded, Assumption 5 gives \(\lim _{k\rightarrow \infty }c_{k}=0\) a.s. Using again the tower property, we see \({\mathbb {E}}[c_{k}\vert {\mathcal {F}}_{1}]={\mathbb {E}}\left[ {\mathbb {E}}(c_{k}\vert {\mathcal {F}}_{k})\vert {\mathcal {F}}_{1}\right] \le \kappa \frac{\rho _{k}\mathtt {s}^{2}}{m_{k}} \le \kappa \frac{{\bar{\rho }}\mathtt {s}^{2}}{m_{k}}\), where \(\rho _k = \frac{(3-a)(1-\alpha _k)^{2}}{2(2\alpha _k^{2}-\frac{1}{2}\alpha _k+1)(1+{\tilde{L}}\lambda )} \le {\bar{\rho }} =\frac{3-a}{2(1+{\tilde{L}}\lambda )} \) for every k. Consequently, the discrete convolution \(\left( \sum _{i=1}^{k}q^{k-i}{\mathbb {E}}[c_{i}\vert {\mathcal {F}}_{1}]\right) _{k\in {\mathbb {N}}}\) is summable. Therefore \(\sum _{k\ge 1}{\mathbb {E}}[H_{k}]<\infty \) and \(\sum _{k\ge 1}{\mathbb {E}}[{\tilde{b}}_{k}]<\infty \). Clearly, this implies \(\lim _{k\rightarrow \infty }{\mathbb {E}}[{{\tilde{b}}}_{k}]=0\), and consequently the following two statements hold as well:

$$\begin{aligned}&\lim _{k\rightarrow \infty }\left\| X_{k}-{\bar{x}}\right\| =0\quad {\mathbb {P}}\text {-a.s.},\quad \text { and }\\&\sum _{k=1}^{\infty }(1-\alpha _k)\left( \tfrac{3-a}{2\rho _k(1+{\tilde{L}}\lambda )}-1\right) \left\| X_{k}-X_{k-1}\right\| ^2<\infty \quad {\mathbb {P}}\text {-a.s.}. \end{aligned}$$

\(\square \)

Remark 4

It is worth remarking that the above proof does not rely on unbiasedness of the random estimators. The reason why we can lift this rather typical assumption lies in our application of Young's inequality in the estimate (48). The only requirement needed to make the above result work is the summable oracle variance bound formulated in Assumption 6.

Remark 5

The above result again nicely illustrates the well-known trade-off between relaxation and inertial effects (cf. Remark 2). Indeed, up to constant factors, the coupling between inertia and relaxation is expressed by the function \(\alpha \mapsto \frac{(1-\alpha )^{2}}{2\alpha ^{2}-\frac{1}{2}\alpha +1}\). Basic calculus reveals that this function is decreasing in \(\alpha \). In the extreme case when \(\alpha \uparrow 1\), it is necessary to let \(\rho \downarrow 0\), and vice versa. When \(\alpha \rightarrow 0\), the limiting value of our specific relaxation policy is \(\frac{3-a}{2(1+{\tilde{L}}\lambda )}\). In practical applications, it is advisable to choose b small in order to make \(\eta =(1-b)\lambda \mu \) large and hence the contraction factor \(q=1-\rho \eta \) small. The value a must be calibrated in a disciplined way in order to allow for a sufficiently large step size \(\lambda \). This requires some knowledge of the condition number \(\mu /L\) of the problem. As a heuristic, a good strategy, anticipating that b should be close to 0, is to balance the two remaining constraints on \(\lambda \) by setting \(\frac{a}{2\mu }=\frac{1-a}{2{\tilde{L}}}\). This means \(a=\frac{\mu }{{\tilde{L}}+\mu }\).
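As a concrete, purely illustrative instance of this calibration, the following sketch fixes \(\mu \), L and \({\bar{\alpha }}\) arbitrarily, sets \(a=\mu /({\tilde{L}}+\mu )\) as suggested above, picks a small b, and reports the resulting steplength, relaxation value and contraction factor \(q=1-\rho \eta \) (all constants below are assumptions, not outputs of the theory):

import numpy as np

# Illustrative problem constants (assumptions, not taken from the paper).
mu, L, alpha_bar, b = 2.0, 2.0, 0.3, 0.1

L_tilde = np.sqrt(L ** 2 + 0.5)
a = mu / (L_tilde + mu)                                       # heuristic from Remark 5
lam = min(a / (2 * mu), b * mu, (1 - a) / (2 * L_tilde))      # admissible steplength
eta = (1 - b) * lam * mu
rho = 16 * (3 - a) * (1 - alpha_bar) ** 2 / (31 * (1 + L_tilde * lam))  # constant rho of Theorem 8
q = 1 - rho * eta                                             # contraction factor

print(f"a = {a:.3f}, lambda = {lam:.4f}, eta = {eta:.4f}, rho = {rho:.4f}, q = {q:.4f}")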

We obtain a full linear rate of convergence when a more aggressive sampling rate is employed in the SO. We achieve such global linear rates, together with tuneable iteration and oracle complexity estimates, in two settings: First, we consider an aggressive simulation strategy, where the sample size grows geometrically over time. Such a sampling frequency can be quite demanding in some applications. As an alternative, we then consider a more modest simulation strategy under which only polynomial growth of the batch size is required. Whatever simulation strategy is adopted, the key to assessing the iteration and oracle complexity is to bound the stopping time

$$\begin{aligned} K_{\epsilon } \triangleq \inf \{ k \in {\mathbb {N}}\vert \; {\mathbb {E}}\left( \left\| X_{k+1} - {\bar{x}}\right\| ^2\right) \le \epsilon \}. \end{aligned}$$
(55)

In order to understand the definition of this stopping time, recall that RISFBF computes the next iterate \(X_{k+1}\) by extrapolating between the current base point \(Z_{k}\) and the correction step involving \(Y_{k}+\lambda _{k}(A_{k}-B_{k})\), which requires \(2 m_{k}\) iid realizations from the law \({{\mathsf {P}}}\). In total, when executing the algorithm until the terminal time \(K_{\epsilon }\), we therefore need to simulate \(2\sum _{k=1}^{K_{\epsilon }}m_{k}\) random variables. We now estimate the integer \(K_{\epsilon }\) under a geometric sampling strategy.
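As a simple illustration of this accounting, the following sketch tallies the total number of SO samples \(2\sum _{k=1}^{K}m_{k}\) consumed by a run of length K under the two batch schedules analysed below (the values of p, \(\theta \) and K are arbitrary illustrative choices):

import math

def oracle_calls(m, K):
    # Total number of SO samples 2 * sum_{k=1}^K m_k for a batch schedule m(k).
    return 2 * sum(m(k) for k in range(1, K + 1))

p, theta, K = 0.9, 2.0, 50
geometric = lambda k: math.floor(p ** (-k))       # m_k = floor(p^{-k}), cf. Proposition 9
polynomial = lambda k: math.floor(k ** theta)     # m_k = floor(k^theta), cf. Proposition 11

print("geometric schedule :", oracle_calls(geometric, K))
print("polynomial schedule:", oracle_calls(polynomial, K))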

Proposition 9

(Non-asymptotic linear convergence under geometric sampling) Suppose the conditions of Theorem 8 hold. Let \(p\in (0,1),\mathtt {B}= 2{\bar{\rho }}\mathtt {s}^{2}\left( 1+\frac{2(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) ,\) and choose the sampling rate \(m_{k}=\lfloor p^{-k}\rfloor \). Let \({\hat{p}}\in (p,1)\), and define

$$\begin{aligned} C(p,q)&\triangleq \tfrac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})(1-\min \{p/q,q/p\})}\quad \text {if }p\ne q,\text { and }\end{aligned}$$
(56)
$$\begin{aligned} {\hat{C}}&\triangleq \tfrac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})\exp (1)\ln ({\hat{p}}/q)}\quad \text {if }p=q. \end{aligned}$$
(57)

Then, whenever \(p\ne q\), we see that

$$\begin{aligned} {\mathbb {E}}\left( \left\| X_{k+1}-{\bar{x}}\right\| ^{2}\right) \le C(p,q)\max \{p,q\}^{k}, \end{aligned}$$

and whenever \(p=q\),

$$\begin{aligned} {\mathbb {E}}\left( \left\| X_{k+1}-{\bar{x}}\right\| ^{2}\right) \le {\hat{C}}{\hat{p}}^{k}. \end{aligned}$$

In particular, the stochastic process \((X_{k})_{k\in {\mathbb {N}}}\) converges strongly and \({\mathbb {P}}\)-a.s. to the unique solution \({\bar{x}}\) at a linear rate.

Proof

Departing from (53), ignoring the positive term \({\tilde{b}}_{k}\) from the right-hand side, and taking expectations on both sides leads to

$$\begin{aligned} \frac{1-{\bar{\alpha }}}{2}{\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le h_{k+1}\triangleq {\mathbb {E}}(H_{k+1})\le q{\mathbb {E}}(H_{k})+c_{k}=q h_{k}+c_{k}, \end{aligned}$$
(58)

where the equality follows from \(c_k\) being deterministic. The sequence \((c_{k})_{k\in {\mathbb {N}}}\) is further upper bounded by the following considerations: First, the relaxation sequence is bounded by \(\rho _{k}\le {\bar{\rho }}=\frac{3-a}{2(1+{\tilde{L}}\lambda )}\); Second, the sample rate is bounded by \(m_{k}= \lfloor p^{-k} \rfloor \ge \left\lceil \tfrac{1}{2}p^{-k} \right\rceil \ge \tfrac{1}{2}p^{-k}\). Using these facts, eq. (54) yields

$$\begin{aligned} c_{k}\le \frac{\rho _{k}\mathtt {s}^{2}\kappa }{m_{k}}\le 2\mathtt {B}p^{k}\qquad \forall k\ge 1, \end{aligned}$$
(59)

where \(\mathtt {B}=2{\bar{\rho }}\mathtt {s}^{2}\left( 1+\frac{2(3-a)\lambda ^{2}}{1+{\tilde{L}}\lambda }\right) \). Iterating the recursion above, one readily sees that

$$\begin{aligned} h_{k+1}\le q^{k}h_{1}+\sum _{i=1}^{k}q^{k-i}c_{i}\quad \forall k\ge 1. \end{aligned}$$
(60)

Consequently, by recalling that \(h_1 = (1-\alpha _1)\Vert X_1-{\bar{x}}\Vert ^2\) and \(h_{k}\ge \frac{1-{\bar{\alpha }}}{2}{\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})\), the bound (59) allows us to derive the recursion

$$\begin{aligned} {\mathbb {E}}\left( \left\| X_{k+1}-{\bar{x}}\right\| ^{2}\right) \le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\sum _{i=1}^{k}q^{k-i}p^{i}. \end{aligned}$$
(61)

We consider three cases.

  1. (i)

    \(0<q<p<1\): Defining \(\mathtt {c}_{1} \triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})(1-q/p)}\), we obtain from (61)

    $$\begin{aligned} {\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\sum _{i=1}^{k}(q/p)^{k-i}p^{k}\le \mathtt {c}_{1}p^{k}. \end{aligned}$$
  2. (ii)

    \(0<p<q<1\). Akin to (i) and defining \(\mathtt {c}_{2}\triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\tfrac{4\mathtt {B}}{(1-{\bar{\alpha }})(1-p/q)}\), we arrive as above at the bound \({\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})\le q ^{k}\mathtt {c}_{2}\).

  3. (iii)

    \(p=q<1\). Choose \({\hat{p}} \in (q,1)\) and \(\mathtt {c}_{3}\triangleq \tfrac{1}{\exp (1)\ln ({\hat{p}}/q)}\), so that Lemma 18 yields \(kq^{k}\le \mathtt {c}_{3}{\hat{p}}^{k}\) for all \(k\ge 1\). Therefore, plugging this estimate in eq. (61), we see

    $$\begin{aligned} {\mathbb {E}}(\left\| X_{k}-{\bar{x}}\right\| ^{2})&\le q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\sum _{i=1}^{k}q^{k}\\&\le {\hat{p}}^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}\right) +\frac{4\mathtt {B}}{1-{\bar{\alpha }}}\mathtt {c}_{3}{\hat{p}}^{k}\\&=\mathtt {c}_{4}{\hat{p}}^{k}, \end{aligned}$$

    after setting \(\mathtt {c}_{4}\triangleq \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{4\mathtt {B}\mathtt {c}_{3}}{1-{\bar{\alpha }}}\). Collecting these three cases together verifies the first part of the proposition.

\(\square \)
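The case analysis above hinges on the elementary convolution estimate \(\sum _{i=1}^{k}q^{k-i}p^{i}\le \max \{p,q\}^{k}/(1-\min \{p/q,q/p\})\) for \(p\ne q\), which drives the constant C(p,q). The following sketch checks this inequality numerically for a few arbitrary pairs (p,q):

# Numerical check of the convolution bound behind C(p, q): for p != q,
#   sum_{i=1}^k q^{k-i} p^i  <=  max(p, q)^k / (1 - min(p/q, q/p)).
# The pairs (p, q) below are arbitrary illustrations.
for p, q in [(0.7, 0.9), (0.9, 0.7), (0.5, 0.95)]:
    for k in (1, 5, 20, 60):
        conv = sum(q ** (k - i) * p ** i for i in range(1, k + 1))
        bound = max(p, q) ** k / (1 - min(p / q, q / p))
        assert conv <= bound + 1e-12
        print(f"p={p}, q={q}, k={k:2d}:  conv={conv:.3e}  bound={bound:.3e}")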

Proposition 10

(Oracle and Iteration Complexity under geometric sampling) Given \(\epsilon > 0\), define the stopping time \(K_{\epsilon }\) as in eq. (55). Define

$$\begin{aligned} \tau _{\epsilon }(p,q)\triangleq \left\{ \begin{array}{ll} \lceil \frac{\ln (C(p,q)\epsilon ^{-1})}{\ln (1/\max \{p,q\})}\rceil &{} \text {if }p\ne q,\\ \lceil \frac{\ln ({\hat{C}}\epsilon ^{-1})}{\ln (1/{\hat{p}})}\rceil &{} \text {if }p=q \end{array}\right. \end{aligned}$$
(62)

and suppose the same hypotheses as in Theorem 8 hold. Then, \(K_{\epsilon }\le \tau _{\epsilon }(p,q)={\mathcal {O}}(\ln (\epsilon ^{-1}))\). The corresponding oracle complexity of RISFBF is upper bounded as \(2\sum _{i=1}^{\tau _{\epsilon }(p,q)} m_i = {\mathcal {O}}\left( (1/\epsilon )^{1+\delta (p,q)}\right) \), where

$$\begin{aligned} \delta (p,q)\triangleq \left\{ \begin{array}{ll} 0 &{} \text {if }p>q,\\ \frac{\ln (p)}{\ln (q)}-1 &{} \text {if }p\in (0,q),\\ \frac{\ln (p)}{\ln ({\hat{p}})}-1 &{} \text {if }p=q. \end{array}\right. \end{aligned}$$

Proof

First, let us recall that the total oracle complexity of the method is assessed by

$$\begin{aligned} 2\sum _{i=1}^{K_{\epsilon }}m_{i}=2\sum _{i=1}^{K_{\epsilon }}\lfloor p^{-i}\rfloor \le 2\sum _{i=1}^{K_{\epsilon }}p^{-i}. \end{aligned}$$

If \(p\ne q\) define \(\tau _{\epsilon }\equiv \tau _{\epsilon }(p,q)=\lceil \frac{\ln (C(p,q)\epsilon ^{-1})}{\ln (1/\max \{p,q\})}\rceil \). Then, \({\mathbb {E}}(\left\| X_{\tau _{\epsilon }+1}-{\bar{x}}\right\| ^{2})\le \epsilon \), and hence \(K_{\epsilon }\le \tau _{\epsilon }\). We now compute

$$\begin{aligned} \sum _{i=1}^{\tau _{\epsilon }}(1/p)^{i}&=\frac{1}{p}\frac{(1/p)^{\lceil \frac{\ln (C(p,q)\epsilon ^{-1})}{\ln (1/\max \{p,q\})}\rceil }-1}{1/p-1}\le \frac{1}{p^{2}} \frac{(1/p)^{\frac{\ln (C(p,q)\epsilon ^{-1})}{\ln (1/\max \{p,q\})}}}{1/p-1}\\&=\frac{\left( \epsilon ^{-1}C(p,q)\right) ^{\ln (1/p)/\ln (1/\max \{p,q\})}}{p(1-p)}. \end{aligned}$$

This gives the oracle complexity bound

$$\begin{aligned} 2\sum _{i=1}^{\tau _{\epsilon }}m_{i}\le 2\frac{\left( \epsilon ^{-1}C(p,q)\right) ^{\ln (1/p)/\ln (1/\max \{p,q\})}}{p(1-p)}. \end{aligned}$$

If \(p=q\), we can replicate this calculation, after setting \(\tau _{\epsilon }=\lceil \frac{\ln (\epsilon ^{-1}{\hat{C}})}{\ln (1/{\hat{p}})}\rceil \). After so many iterations, we can be ensured that \({\mathbb {E}}(\left\| X_{\tau _{\epsilon }+1}-{\bar{x}}\right\| ^{2})\le \epsilon \), with an oracle complexity

$$\begin{aligned} 2\sum _{i=1}^{\tau _{\epsilon }}m_{i}\le \frac{2}{p(1-p)}\left( \frac{{\hat{C}}}{\epsilon }\right) ^{\ln (p)/\ln ({\hat{p}})}. \end{aligned}$$

\(\square \)
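To make these complexity figures tangible, the following helper evaluates \(\tau _{\epsilon }(p,q)\), the resulting sample count \(2\sum _{i\le \tau _{\epsilon }}\lfloor p^{-i}\rfloor \), and the exponent \(\delta (p,q)\) for a few arbitrary configurations (the constant C below stands in for C(p,q) or \({\hat{C}}\) as appropriate and, like \(\epsilon \), is a placeholder rather than an output of the theory):

import math

def tau_eps(C, p, q, eps, p_hat=None):
    # Iteration bound tau_eps(p, q) from Proposition 10.
    rate = max(p, q) if p != q else p_hat
    return math.ceil(math.log(C / eps) / math.log(1 / rate))

def delta(p, q, p_hat=None):
    # Exponent delta(p, q) in the oracle complexity O((1/eps)^{1 + delta}).
    if p > q:
        return 0.0
    if p < q:
        return math.log(p) / math.log(q) - 1
    return math.log(p) / math.log(p_hat) - 1

C, eps = 10.0, 1e-4                 # illustrative constants
for p, q in [(0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]:
    ph = 0.9 if p == q else None
    K = tau_eps(C, p, q, eps, ph)
    calls = 2 * sum(math.floor(p ** (-i)) for i in range(1, K + 1))
    print(f"p={p}, q={q}:  tau_eps={K:3d},  oracle calls={calls},  delta={delta(p, q, ph):.3f}")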

To the best of our knowledge, the provided non-asymptotic linear convergence guarantee appears to be amongst the first in relaxed and inertial splitting algorithms. In particular, by leveraging the increasing nature of mini-batches, this result no longer requires the unbiasedness assumption on the SO, a crucial benefit of the proposed scheme.

There may be settings where geometric growth of \(m_k\) is challenging to adopt. To address this, we provide a result where the sampling rate is polynomial rather than geometric. A polynomial sampling rate arises if \(m_{k}=\lceil a_{k}(k+k_{0})^{\theta }+b_{k}\rceil \) for some parameters \(a_{k},b_{k},\theta >0\). Such a regime has been adopted in related mini-batch approaches [75, 76], and it allows for modulating the growth rate by changing the exponent in the sampling rate. We begin by providing a supporting result. We make the specific choice \(a_{k}=b_{k}=1\) for all \(k\ge 1\), and \(k_{0}=0\), leaving essentially the exponent \(\theta >0\) as a free parameter in the design of the stochastic oracle.

Proposition 11

(Polynomial rate of convergence under polynomially increasing \(m_k\)) Suppose the conditions of Theorem 8 hold. Choose the sampling rate \(m_{k}=\lfloor k^{\theta }\rfloor \) where \(\theta > 0\). Then, for any \(k\ge 1\),

$$\begin{aligned} {\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^2 ) \le q^{k}&\quad \left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{2\mathtt {B}}{1-{\bar{\alpha }}}\frac{q^{-1}\exp (2\theta )-1}{1-q}\right) \nonumber \\&\quad +\frac{4\mathtt {B}}{(1-{\bar{\alpha }})q\ln (1/q)}(k+1)^{-\theta } \end{aligned}$$
(63)

Proof

From the relation (60), we obtain

$$\begin{aligned} h_{k+1}&\le q^{k}h_{1}+\sum _{i=1}^{k}q^{k-i}c_{i}\le q^{k}h_{1}+\mathtt {B}\sum _{i=1}^{k}q^{k-i}i^{-\theta }\\&= q^{k}\left( h_{1}+\mathtt {B}\sum _{i=1}^{k}q^{-i}i^{-\theta }\right) \\&= q^{k}\left( h_{1}+\mathtt {B}\sum _{i=1}^{\lceil 2\theta /\ln (1/q)\rceil }q^{-i}i^{-\theta }+\mathtt {B}\sum _{i=\lceil 2\theta /\ln (1/q)\rceil +1}^{k}q^{-i}i^{-\theta }\right) . \end{aligned}$$

A standard bound based on the integral criterion for series with non-negative summands gives

$$\begin{aligned} \sum _{i=\lceil 2\theta /\ln (1/q)\rceil +1}^{k}q^{-i}i^{-\theta }\le \int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\frac{(1/q)^{t}}{t^{\theta }}\,dt. \end{aligned}$$

The upper bounding integral can be evaluated using integration-by-parts, as follows:

$$\begin{aligned} \int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\frac{(1/q)^{t}}{t^{\theta }}\,dt=t^{-\theta }\frac{e^{t\ln (1/q)}}{\ln (1/q)}\vert _{t=\lceil 2\theta /\ln (1/q)\rceil }^{t=k+1}+\int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\theta t^{-(\theta +1)}\frac{e^{t\ln (1/q)}}{\ln (1/q)}\,dt. \end{aligned}$$

Note that \(\frac{\theta }{t\ln (1/q)}\le \frac{1}{2}\) when \(t\ge \lceil 2\theta /\ln (1/q)\rceil \). Therefore, we can attain a simpler bound from the above by

$$\begin{aligned} \int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\frac{(1/q)^{t}}{t^{\theta }}\,dt\le \frac{(1/q)^{k+1}}{\ln (1/q)(k+1)^{\theta }}+\frac{1}{2}\int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\frac{(1/q)^{t}}{t^{\theta }}\,dt \end{aligned}$$

Consequently,

$$\begin{aligned} \int _{\lceil 2\theta /\ln (1/q)\rceil }^{k+1}\frac{(1/q)^{t}}{t^{\theta }}\,dt\le \frac{2(1/q)^{k+1}(k+1)^{-\theta }}{\ln (1/q)}. \end{aligned}$$

Furthermore,

$$\begin{aligned} \sum _{i=1}^{\lceil 2\theta /\ln (1/q)\rceil }q^{-i}i^{-\theta }\le \sum _{i=1}^{\lceil 2\theta /\ln (1/q)\rceil }q^{-i}=\frac{1}{q}\frac{(1/q)^{\lceil 2\theta /\ln (1/q)\rceil }-1}{1/q-1}\le \frac{1}{q}\frac{(1/q)^{2\theta /\ln (1/q)+1}-1}{1/q-1}. \end{aligned}$$

Note that \((1/q)^{2\theta / \ln (1/q)}=\left( \exp (\ln (1/q))\right) ^{ 2\theta / \ln (1/q)}=\exp (2\theta )\). Hence,

$$\begin{aligned} \sum _{i=1}^{\lceil 2\theta /\ln (1/q)\rceil }q^{-i}i^{-\theta }\le \frac{1}{q}\frac{q^{-1}\exp (2\theta )-1}{1/q-1}=\frac{q^{-1}\exp (2\theta )-1}{1-q}. \end{aligned}$$

Plugging these bounds into the first chain of inequalities shows

$$\begin{aligned} h_{k+1}&\le q^{k}\left( h_{1}+\mathtt {B}\sum _{i=1}^{\lceil 2\theta /\ln (1/q)\rceil }q^{-i}+\frac{2\mathtt {B}(1/q)^{k+1}(k+1)^{-\theta }}{\ln (1/q)}\right) \\&\le q ^{k}\left( h_{1}+\mathtt {B}\frac{q^{-1}\exp (2\theta )-1}{1-q}+\frac{2\mathtt {B}(1/q)^{k+1}(k+1)^{-\theta }}{\ln (1/q)}\right) \\&=q^{k}\left( h_{1}+\mathtt {B}\frac{q^{-1}\exp (2\theta )-1}{1-q}\right) +\frac{2\mathtt {B}/q}{\ln (1/q)}(k+1)^{-\theta }. \end{aligned}$$

Since \(h_{1}=(1-\alpha _{1})\left\| X_{1}-{\bar{x}}\right\| ^{2}\) and \(h_{k+1}\ge \frac{1-{\bar{\alpha }}}{2}{\mathbb {E}}\left( \left\| X_{k+1}-{\bar{x}}\right\| ^{2}\right) \), we finally arrive at the desired expression (63). \(\square \)

Proposition 12

(Oracle and iteration complexity under polynomial sampling) Suppose the conditions of Theorem 8 hold. Given \(\epsilon > 0\), define \(K_{\epsilon }\) as in (55). Then the iteration and oracle complexity to obtain an \(\epsilon \)-solution are \({\mathcal {O}}(\theta \epsilon ^{-1/\theta })\) and \({\mathcal {O}}(\exp (\theta )\theta ^{\theta }(1/\epsilon )^{1+1/\theta })\), respectively.

Proof

We first note that \((k+1)^{-\theta }\le k^{-\theta }\) for all \(k\ge 1\). Hence, the bound established in Proposition 11 yields

$$\begin{aligned} {\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le&\quad q^{k}\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{2\mathtt {B}}{1-{\bar{\alpha }}}\frac{q^{-1}\exp (2\theta )-1}{1-q}\right) \\&\quad +\frac{4\mathtt {B}}{(1-{\bar{\alpha }})q\ln (1/q)}k^{-\theta } \end{aligned}$$

Consider the function \(\psi (t)\triangleq t^{\theta }q^{t}\) for \(t>0\). Then, straightforward calculus shows that \(\psi (t)\) is unimodal on \((0,\infty )\), with unique maximum \(t^{*}=\frac{\theta }{\ln (1/q)}\) and associated value \(\psi (t^{*})=\exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }\). Hence, for all \(t>0\), we have \(t^{\theta }q^{t}\le \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }\), and consequently, \(q^{k}\le \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }k^{-\theta }\) for all \(k\ge 1\). This allows us to conclude

$$\begin{aligned} {\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})&\le \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }k^{-\theta }\\&\quad \left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{2\mathtt {B}}{1-{\bar{\alpha }}}\frac{q^{-1}\exp (2\theta )-1}{1-q}\right) \\&\qquad +\frac{4\mathtt {B}}{(1-{\bar{\alpha }})q\ln (1/q)}k^{-\theta }= \mathtt {c}_{q,\theta }k^{-\theta }, \end{aligned}$$

where

$$\begin{aligned} \mathtt {c}_{q,\theta }\triangleq & {} \quad \exp (-\theta )\left( \frac{\theta }{\ln (1/q)}\right) ^{\theta }\left( \frac{2(1-\alpha _1)}{1-{\bar{\alpha }}}\left\| X_{1}-{\bar{x}}\right\| ^{2}+\frac{2\mathtt {B}}{1-{\bar{\alpha }}}\frac{q^{-1}\exp (2\theta )-1}{1-q}\right) \nonumber \\&+\frac{4\mathtt {B}}{(1-{\bar{\alpha }})q\ln (1/q)} \end{aligned}$$
(64)

Then, for any \(k\ge K_{\epsilon }\triangleq \lceil (\mathtt {c}_{q,\theta }/\epsilon )^{1/\theta }\rceil \), we are ensured that \({\mathbb {E}}(\left\| X_{k+1}-{\bar{x}}\right\| ^{2})\le \epsilon \). Since \((\mathtt {c}_{q,\theta })^{1/\theta }={\mathcal {O}}(\exp (-1)\theta )\), we conclude that \(K_{\epsilon }={\mathcal {O}}(\theta \epsilon ^{-1/\theta })\). The corresponding oracle complexity is bounded as follows:

$$\begin{aligned} 2\sum _{i=1}^{K_{\epsilon }}m_{i}\le 2\sum _{i=1}^{K_{\epsilon }}i^{\theta }\le 2\int _{1}^{K_{\epsilon }+1}t^{\theta }\,dt\le \frac{2}{1+\theta }\left( \left\lceil (\mathtt {c}_{q,\theta }/\epsilon )^{1/\theta }\right\rceil +1\right) ^{1+\theta }={\mathcal {O}}(\exp (\theta )\theta ^{\theta }(1/\epsilon )^{1+1/\theta }). \end{aligned}$$

\(\square \)

Remark 6

It may be observed that if \(\theta = 1\), i.e. \(m_k = k\), there is a worsening of the rate and complexity statements from their counterparts when the sampling rate is geometric; in particular, the iteration complexity worsens from \({\mathcal {O}}(\ln (\tfrac{1}{\epsilon }))\) to \({\mathcal {O}}(\tfrac{1}{\epsilon })\), while the oracle complexity degrades from the optimal level of \({\mathcal {O}}(\tfrac{1}{\epsilon })\) to \({\mathcal {O}}(\tfrac{1}{\epsilon ^2})\). But this deterioration comes with the advantage that the mini-batch size grows far more slowly, which may be of significant consequence in some applications.

4.3 Rates in terms of merit functions

In this subsection we estimate the iteration and oracle complexity of RISFBF with the help of a suitably defined gap function. Generally, a gap function associated with the monotone inclusion problem (MI) is a function \(\mathsf {Gap}:{\mathsf {H}}\rightarrow {\mathbb {R}}\) such that (i) \(\mathsf {Gap}\) is sign restricted on \({\mathsf {H}}\); and (ii) \(\mathsf {Gap}(x) = 0\) if and only if \(x\in {\mathsf {S}}\). The Fitzpatrick function [3, 30, 31, 77] is a useful tool to construct gap functions associated with a set-valued operator \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\). It is defined as the function \(G_{F}:{\mathsf {H}}\times {\mathsf {H}}\rightarrow [-\infty ,\infty ]\) given by

$$\begin{aligned} G_{F}(x,x^{*})=\left\langle x,x^{*}\right\rangle -\inf _{(y,y^{*})\in {{\,\mathrm{gr}\,}}(F)}\left\langle x-y,x^{*}-y^{*}\right\rangle . \end{aligned}$$
(65)

This function allows us to recover the operator F, by means of the following result (cf. [3, Prop. 20.58]): If \(F:{\mathsf {H}}\rightarrow 2^{{\mathsf {H}}}\) is maximally monotone, then \(G_{F}(x,x^{*})\ge \left\langle x,x^{*}\right\rangle \) for all \((x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\), with equality if and only if \((x,x^{*})\in {{\,\mathrm{gr}\,}}(F)\). In particular, \({{\,\mathrm{gr}\,}}(F)=\{(x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\vert \;G_{F}(x,x^{*})\ge \left\langle x,x^{*}\right\rangle \}\). In fact, it can be shown that the Fitzpatrick function is minimal in the family of convex functions \(f:{\mathsf {H}}\times {\mathsf {H}}\rightarrow (-\infty ,\infty ]\) such that \(f(x,x^{*})\ge \left\langle x,x^{*}\right\rangle \) for all \((x,x^{*})\in {\mathsf {H}}\times {\mathsf {H}}\), with equality if \((x,x^{*})\in {{\,\mathrm{gr}\,}}(F)\) [77].

Our gap function for the structured monotone operator \(F=V+T\) is derived from its Fitzpatrick function by setting \(\mathsf {Gap}(x)\triangleq G_{F}(x,0)\) for \(x\in {\mathsf {H}}\). This reads explicitly as

$$\begin{aligned} \mathsf {Gap}(x) \triangleq \sup _{(y,y^{*})\in {{\,\mathrm{gr}\,}}(F)}\left\langle y^{*},x-y\right\rangle =\sup _{p\in {{\,\mathrm{dom}\,}}T}\sup _{p^{*}\in T(p)}\left\langle V(p)+p^{*},x-p\right\rangle \qquad \forall x\in {\mathsf {H}}. \end{aligned}$$
(66)

It immediately follows from the definition that \(\mathsf {Gap}(x)\ge 0\) for all \(x\in {\mathsf {H}}\). It is also clear that \(x\mapsto \mathsf {Gap}(x)\) is convex and lower semi-continuous and that \(\mathsf {Gap}(x)=0\) if and only if \(x\in {\mathsf {S}}=\mathsf {Zer}(F)\). Let us give some concrete formulae for the gap function.

Example 4

(Variational Inequalities) We reconsider the problem described in Example 2. Let \(V:{\mathsf {H}}\rightarrow {\mathsf {H}}\) be a maximally monotone and L-Lipschitz continuous map, and \(T(x)={{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}(x)\) the normal cone of a given closed convex set \({\mathsf {C}}\subset {\mathsf {H}}\). Then, by [77, Prop. 3.3], the gap function (66) reduces to the well-known dual gap function, due to [78],

$$\begin{aligned} \mathsf {Gap}(x)= \sup _{p\in {\mathsf {C}}} \ \left\langle V(p),x-p\right\rangle . \end{aligned}$$
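The following Python sketch (illustrative only; the affine map \(V(p)=Ap+b\), the radius D, and all constants are assumptions made here for the example, not objects analysed in the paper) estimates \(\sup _{p\in {\mathsf {C}}}\left\langle V(p),x-p\right\rangle \) over a Euclidean ball \({\mathsf {C}}\) by projected gradient ascent; the map \(p\mapsto \left\langle V(p),x-p\right\rangle \) is concave whenever \(A+A^{\top }\succeq 0\), i.e. whenever V is monotone.

import numpy as np

def dual_gap(x, A, b, D, steps=500, lr=1.0):
    # phi(p) = <V(p), x - p> with V(p) = A p + b; concave in p when A + A^T >= 0
    phi = lambda p: (A @ p + b) @ (x - p)
    grad = lambda p: A.T @ x - (A + A.T) @ p - b
    p = np.zeros_like(x)
    for _ in range(steps):
        p = p + lr * grad(p)
        nrm = np.linalg.norm(p)
        if nrm > D:
            p *= D / nrm          # projection onto the ball C = {||p|| <= D}
    return phi(p)

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M - M.T + 0.1 * np.eye(5)     # monotone affine operator: skew part plus small symmetric PSD part
b = rng.standard_normal(5)
x = rng.standard_normal(5)
print("estimated Gap(x | C):", dual_gap(x, A, b, D=10.0))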

Example 5

(Convex Optimization) Reconsider the general non-smooth convex optimization problem in Example 1, with primal objective function \({\mathsf {H}}_{1}\ni u\mapsto f(u)+g(Lu)+h(u)\). Let us introduce the convex-concave function

$$\begin{aligned} {\mathcal {L}}(u,v)\triangleq f(u)+h(u)-g^{*}(v)+\left\langle Lu,v\right\rangle \qquad \forall (u,v)\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}. \end{aligned}$$

Define

$$\begin{aligned} \Gamma (x')\triangleq \sup _{u\in {\mathsf {H}}_{1},v\in {\mathsf {H}}_{2}}\left( {\mathcal {L}}(u',v)-{\mathcal {L}}(u,v')\right) \qquad \forall x'=(u',v')\in {\mathsf {H}}={\mathsf {H}}_{1}\times {\mathsf {H}}_{2}. \end{aligned}$$
(67)

It is easy to check that \(\Gamma (x')\ge 0\), and equality holds only for a primal-dual pair (saddle-point) \({\bar{x}}\in {\mathsf {S}}\). Hence, \(\Gamma (\cdot )\) is a gap function for the monotone inclusion derived from the Karush-Kuhn-Tucker conditions (5). In fact, the function (67) is a standard merit function for saddle-point problems (see e.g. [79]). To relate this gap function to the Fitzpatrick function, we exploit the maximally monotone operators V and T introduced in Example 1. In terms of these mappings, first observe that for \(p=({\tilde{u}},{\tilde{v}}),x=(u,v)\) we have

$$\begin{aligned} \left\langle V(p),x-p\right\rangle =\left\langle \nabla h({\tilde{u}}),u-{\tilde{u}}\right\rangle +\left\langle {\tilde{v}},Lu\right\rangle -\left\langle L{\tilde{u}},v\right\rangle \end{aligned}$$

Since h is convex differentiable, the classical gradient inequality reads as \(h(u)-h({\tilde{u}})\ge \left\langle \nabla h({\tilde{u}}),u-{\tilde{u}}\right\rangle \). Using this estimate in the previous display shows

$$\begin{aligned} \left\langle V(p),x-p\right\rangle \le h(u)-h({\tilde{u}})-\left\langle L{\tilde{u}},v\right\rangle +\left\langle {\tilde{v}},Lu\right\rangle . \end{aligned}$$

For \(p^{*}=({\tilde{u}}^{*},{\tilde{v}}^{*})\in T(p)\), we again employ convexity to get

$$\begin{aligned} f(u)&\ge f({\tilde{u}})+\left\langle {\tilde{u}}^{*},u-{\tilde{u}}\right\rangle \qquad \forall u\in {\mathsf {H}}_{1},\\ g^{*}(v)&\ge g^{*}({\tilde{v}})+\left\langle {\tilde{v}}^{*},v-{\tilde{v}}\right\rangle \qquad \forall v\in {\mathsf {H}}_{2}. \end{aligned}$$

Hence,

$$\begin{aligned} \left\langle {\tilde{u}}^{*},u-{\tilde{u}}\right\rangle +\left\langle {\tilde{v}}^{*},v-{\tilde{v}}\right\rangle \le (f(u)-f({\tilde{u}}))+(g^{*}(v)-g^{*}({\tilde{v}})). \end{aligned}$$

Therefore, we see

$$\begin{aligned} \left\langle V(p)+p^{*},x-p\right\rangle&\le \left( f(u)+h(u)-g^{*}({\tilde{v}})+\left\langle {\tilde{v}},Lu\right\rangle \right) -\left( f({\tilde{u}})+h({\tilde{u}})-g^{*}(v)+\left\langle v,L{\tilde{u}}\right\rangle \right) \\&={\mathcal {L}}(u,{\tilde{v}})-{\mathcal {L}}({\tilde{u}},v). \end{aligned}$$

Hence,

$$\begin{aligned} \mathsf {Gap}(x)=\sup _{(p,p^{*})\in {{\,\mathrm{gr}\,}}(T)}\left\langle V(p)+p^{*},x-p\right\rangle&\le \sup _{({\tilde{u}},{\tilde{v}})\in {\mathsf {H}}_{1}\times {\mathsf {H}}_{2}}\left( {\mathcal {L}}(u,{\tilde{v}})-{\mathcal {L}}({\tilde{u}},v)\right) =\Gamma (x). \end{aligned}$$

It is clear from the definition that a convex gap function can be extended-valued and its domain is contingent on the boundedness properties of \({{\,\mathrm{dom}\,}}T\). In the setting where T(x) is bounded for all \(x\in {\mathsf {H}}\), the gap function is clearly globally defined. However, the case where \({{\,\mathrm{dom}\,}}T\) is unbounded has to be handled with more care. There are potentially two approaches to cope with such a situation: One would be to introduce a perturbation-based termination criterion as defined in [80], and recently used in [81] to solve a class of structured stochastic variational inequality problems. The other solution strategy is based on the notion of restricted merit functions, first introduced in [82], and later on adopted in [83]. We follow the latter strategy.

Let \(x^{s}\in {{\,\mathrm{dom}\,}}T\) denote an arbitrary reference point and \(D>0\) a suitable constant. Define the closed set \({\mathsf {C}}\triangleq {{\,\mathrm{dom}\,}}T\cap \{x\in {\mathsf {H}}\vert \; \left\| x-x^{s}\right\| \le D\}\), and the restricted gap function

$$\begin{aligned} \mathsf {Gap}(x\vert {\mathsf {C}})\triangleq \sup \{\left\langle y^{*},x-y\right\rangle \vert y\in {\mathsf {C}},y^{*}\in F(y)\}. \end{aligned}$$
(68)

Clearly, \(\mathsf {Gap}(x\vert {{\,\mathrm{dom}\,}}T)=\mathsf {Gap}(x)\). The following result explains in a precise way the meaning of the restricted gap function. It extends the variational case in [82, Lemma 1] and [83, Lemma 3] to the general monotone inclusion case.

Lemma 13

Let \({\mathsf {C}}\subset {\mathsf {H}}\) be nonempty, closed and convex. The function \({\mathsf {H}}\ni x\mapsto \mathsf {Gap}(x\vert {\mathsf {C}})\) is well-defined and convex on \({\mathsf {H}}\). For any \(x\in {\mathsf {C}}\) we have \(\mathsf {Gap}(x\vert {\mathsf {C}})\ge 0\). Moreover, if \({\bar{x}}\in {\mathsf {C}}\) is a solution to (MI), then \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\). Conversely, if \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\) for some \({\bar{x}}\in {{\,\mathrm{dom}\,}}T\) such that \(\left\| {\bar{x}}-x^{s}\right\| <D\), then \({\bar{x}}\in {\mathsf {S}}\).

Proof

The convexity and non-negativity for \(x\in {\mathsf {C}}\) of the restricted gap function are clear. Since \(\mathsf {Gap}(x\vert {\mathsf {C}})\le \mathsf {Gap}(x)\) for all \(x\in {\mathsf {H}}\), we see

$$\begin{aligned} {\bar{x}}\in {\mathsf {S}}\Leftrightarrow \mathsf {Gap}({\bar{x}})=0\Rightarrow \mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0. \end{aligned}$$

To show the converse implication, suppose \(\mathsf {Gap}({\bar{x}}\vert {\mathsf {C}})=0\) for some \({\bar{x}}\in {\mathsf {C}}\) with \(\left\| {\bar{x}}-x^{s}\right\| <D\). Without loss of generality we can choose \({\bar{x}}\in {\mathsf {C}}\) in this particular way, since we may choose the radius of the ball as large as desired. It follows that \(\left\langle y^{*},{\bar{x}}-y\right\rangle \le 0\) for all \(y\in {\mathsf {C}},y^{*}\in F(y)\). Hence, \({\bar{x}}\in {\mathsf {C}}\) is a Minty solution to the generalized variational inequality with maximally monotone operator \(F(x)+{{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathsf {C}}}(x)\). Since F is upper semi-continuous and monotone, Minty solutions coincide with Stampacchia solutions, implying that there exists \({\bar{x}}^{*}\in F({\bar{x}})\) such that \(\left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \ge 0\) for all \(y\in {\mathsf {C}}\) (see e.g. [84]). Consider now the gap program

$$\begin{aligned} g_{{\mathsf {C}}}({\bar{x}},{\bar{x}}^{*})\triangleq \inf \{\left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \vert y\in {\mathsf {C}}\}. \end{aligned}$$

This program is solved at \(y={\bar{x}}\), which is a point for which \(\left\| {\bar{x}}-x^{s}\right\| <D\). Hence, the constraint can be removed, and we conclude \(\left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \ge 0\) for all \(y\in {{\,\mathrm{dom}\,}}(F)\). By monotonicity of F, it follows

$$\begin{aligned} \left\langle y^{*},y-{\bar{x}}\right\rangle \ge \left\langle {\bar{x}}^{*},y-{\bar{x}}\right\rangle \ge 0 \quad \forall (y,y^{*})\in {{\,\mathrm{gr}\,}}(F). \end{aligned}$$

Hence, \(\mathsf {Gap}({\bar{x}})=0\) and we conclude \({\bar{x}}\in {\mathsf {S}}\). \(\square \)

In order to state and prove our complexity results in terms of the proposed merit function, we begin with a preliminary result.

Lemma 14

Consider the sequence \((X_{k})_{k\in {\mathbb {N}}}\) generated by RISFBF with the initial condition \(X_{0}=X_{1}\). Suppose \(\lambda _k=\lambda \in (0,1/(2L))\) for every \(k \in {\mathbb {N}}\). Moreover, suppose \((\alpha _k)_{k\in {\mathbb {N}}}\) is a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\), \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda )}\) for every \(k \in {\mathbb {N}}\). Define

$$\begin{aligned} \Delta M_{k}\triangleq \frac{3\rho _{k}\lambda _{k}^{2}}{1+L\lambda _{k}} \left\| \mathtt {e}_{k}\right\| ^{2}+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2} \end{aligned}$$
(69)

and for \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we define \(\Delta N_{k}(p,p^{*})\) as in (21). Then, for all \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\), we have

$$\begin{aligned} \sum _{k=1}^{K}2\rho _{k}\lambda \left\langle p^{*},Y_k-p\right\rangle \le (1-\alpha _1)\left\| X_{1}-p\right\| ^2+\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p,0). \end{aligned}$$
(70)

Proof

For \((p,p^{*})\in {{\,\mathrm{gr}\,}}(V+T)\), we know from eq. (23)

$$\begin{aligned} \left\langle Z_{k}-R_{k},Y_{k}-p\right\rangle\ge & {}\,\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle +\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \\\ge & {} \,\lambda _{k}\left\langle p^{*},Y_{k}-p\right\rangle +\lambda _{k}\left\langle W_{k},Y_{k}-p\right\rangle , \end{aligned}$$

where the last inequality uses the monotonicity of V. We first derive a recursion which is similar to the fundamental recursion in Lemma 4. Invoking (25) and (26), we get

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}&\le \left\| Z_{k}-p\right\| ^{2}-\frac{1-\rho _{k}}{\rho _{k}}\left\| X_{k+1}-Z_{k}\right\| ^{2}+2\lambda ^{2}\rho _{k}\left\| \mathtt {e}_{k}\right\| ^{2}\nonumber \\&\quad -2\rho _{k}\lambda _{k}\left\langle W_{k}+p^{*},Y_{k}-p\right\rangle \nonumber \\&\quad -\rho _{k}(1-2L^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\left\| U_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),p-Y_{k}\right\rangle . \end{aligned}$$
(71)

Combining (28) with the estimate \((1-2L\lambda _{k})(1+L\lambda _{k})\le 1-2L^{2}\lambda ^{2}_{k}\), we obtain the following inequality

$$\begin{aligned} -\rho _{k}(1-2L^{2}\lambda ^{2}_{k})\left\| Y_{k}-Z_{k}\right\| ^{2}\le -\frac{1-2L\lambda _{k}}{2\rho _{k}(1+L\lambda _{k})}\left\| X_{k+1}-Z_{k}\right\| ^{2}+\frac{\rho _{k}\lambda ^{2}_{k}(1-2L\lambda _{k})}{1+L\lambda _{k}}\left\| \mathtt {e}_{k}\right\| ^{2}. \end{aligned}$$

Inserting the above inequality into (71) and proceeding as in the derivation of (33), we arrive at

$$\begin{aligned} \left\| X_{k+1}-p\right\| ^{2}\le &(1+\alpha _{k})\left\| X_{k}-p\right\| ^{2}-\alpha _{k}\left\| X_{k-1}-p\right\| ^{2}\nonumber +\Delta M_{k}\\&+\Delta N_{k}(p,p^{*})-2\rho _{k}\lambda _{k}\left\langle V(Y_{k})-V(p),Y_{k}-p\right\rangle \nonumber \\&+\alpha _{k}\left\| X_{k}-X_{k-1}\right\| ^{2}\left( 2\alpha _{k}+\frac{3(1-\alpha _{k})}{2\rho _{k}(1+L\lambda _{k})}\right) \nonumber \\&-(1-\alpha _{k})\left( \frac{3}{2\rho _{k}(1+L\lambda _{k})}-1\right) \left\| X_{k+1}-X_{k}\right\| ^{2}. \end{aligned}$$
(72)

Invoking the monotonicity of V and rearranging (72), it follows that

$$\begin{aligned}&\Vert X_{k+1}-p\Vert ^2+(1-\alpha _{k})\left( \tfrac{3}{2\rho _{k}(1+L\lambda _k)}-1\right) \Vert X_{k+1}-X_k\Vert ^2-\alpha _{k}\Vert X_k-p\Vert ^2 \nonumber \\&\quad \le \Vert X_k-p\Vert ^2+(1-\alpha _k)\left( \tfrac{3}{2\rho _k(1+L\lambda _k)}-1\right) \Vert X_{k}-X_{k-1}\Vert ^2\nonumber \\&\qquad -\alpha _k\Vert X_{k-1}-p\Vert ^2+\Delta M_{k}+\Delta N_{k}(p,p^*) \nonumber \\&\qquad +\underbrace{\left( 2\alpha _k^2+(1-\alpha _k)\left( 1-\tfrac{3(1-\alpha _k)}{2\rho _k(1+L\lambda _k)}\right) \right) }_{\tiny \le 0}\Vert X_k-X_{k-1}\Vert ^2 \nonumber \\&\quad \le \Vert X_k-p\Vert ^2+(1-\alpha _k)\left( \tfrac{3}{2\rho _k(1+L\lambda _k)}-1\right) \Vert X_{k}-X_{k-1}\Vert ^2\nonumber \\&\qquad -\alpha _k\Vert X_{k-1}-p\Vert ^2+\Delta M_{k}+\Delta N_{k}(p,p^*). \end{aligned}$$

We define \(\beta _{k+1}\) as

$$\begin{aligned} \beta _{k+1}\triangleq (1-\alpha _{k})\left( \frac{3}{2\rho _{k}(1+L\lambda _{k})}-1\right) -(1-\alpha _{k+1})\left( \frac{3}{2\rho _{k+1}(1+L\lambda _{k+1})}-1\right) , \end{aligned}$$

and, arguing similarly to (36), we can show that \(\beta _{k+1}\ge 0\) by choosing \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda _k)}\) and \(\lambda _k \equiv \lambda \). Thus, \((1-\alpha _{k+1})\left( \tfrac{3}{2\rho _{k+1}(1+L\lambda _{k+1})}-1\right) \le (1-\alpha _{k})\left( \tfrac{3}{2\rho _{k}(1+L\lambda _k)}-1\right) \). Together with \(\alpha _{k+1}\ge \alpha _{k}\), the last inequality gives

$$\begin{aligned}&\Vert X_{k+1}-p\Vert ^2+(1-\alpha _{k+1})\left( \tfrac{3}{2\rho _{k+1}(1+L\lambda )}-1\right) \Vert X_{k+1}-X_k\Vert ^2-\alpha _{k+1}\Vert X_k-p\Vert ^2 \\&\quad \le \Vert X_k-p\Vert ^2+(1-\alpha _k)\left( \tfrac{3}{2\rho _k(1+L\lambda )}-1\right) \Vert X_{k}-X_{k-1}\Vert ^2\\&\qquad -\alpha _k\Vert X_{k-1}-p\Vert ^2+\Delta M_{k}+\Delta N_{k}(p,p^*). \end{aligned}$$

Recall that \(\Delta N_{k}(p,p^{*})=\Delta N_{k}(p,0)+2\rho _{k}\lambda \left\langle p^{*},p-Y_{k}\right\rangle \). Hence, writing \(\Delta N_{k}(p)\triangleq \Delta N_{k}(p,0)\) and rearranging the expression in the previous display, we obtain

$$\begin{aligned} 2\rho _{k}\lambda \left\langle p^{*},Y_{k}-p\right\rangle&\le \left( \Vert X_{k}-p\Vert ^2+(1-\alpha _{k})\left( \tfrac{3}{2\rho _{k}(1+L\lambda )}-1\right) \Vert X_{k}-X_{k-1}\Vert ^2-\alpha _{k}\Vert X_{k-1}-p\Vert ^2\right) \\&\quad -\left( \Vert X_{k+1}-p\Vert ^2+(1-\alpha _{k+1})\left( \tfrac{3}{2\rho _{k+1}(1+L\lambda )}-1\right) \Vert X_{k+1}-X_{k}\Vert ^2-\alpha _{k+1}\Vert X_{k}-p\Vert ^2\right) \\&\quad +\Delta M_{k}+\Delta N_{k}(p). \end{aligned}$$

Summing over \(k=1,\ldots ,K\), we obtain

$$\begin{aligned}&\sum _{k=1}^{K}2\rho _{k}\lambda \left\langle p^{*},Y_k-p\right\rangle \le \sum _{k=1}^{K}\left[ \left( \Vert X_k-p\Vert ^2+(1-\alpha _k)\left( \tfrac{3}{2\rho _k(1+L\lambda )}-1\right) \Vert X_{k}\right. \right. \\&\qquad \left. -X_{k-1}\Vert ^2-\alpha _k\Vert X_{k-1}-p\Vert ^2\right) \\&\qquad \left. - \left( \Vert X_{k+1}-p\Vert ^2+(1-\alpha _{k+1})\left( \tfrac{3}{2\rho _{k+1}(1+L\lambda )}-1\right) \Vert X_{k+1}-X_k\Vert ^2-\alpha _{k+1}\Vert X_k-p\Vert ^2\right) \right] \\&\qquad +\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p)\\&\quad \le \Vert X_1-p\Vert ^2+(1-\alpha _1)\left( \tfrac{3}{2\rho _1(1+L\lambda )}-1\right) \Vert X_{1}-X_{0}\Vert ^2-\alpha _1\Vert X_{0}-p\Vert ^2 \\&\qquad +\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p) \\&\quad = (1-\alpha _1) \Vert X_1-p\Vert ^2+\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p), \end{aligned}$$

where we use \(X_1=X_0\) in the last equality. \(\square \)

Next, we derive a rate statement in terms of the gap function, using the averaged sequence

$$\begin{aligned} {\bar{X}}_K\triangleq \tfrac{\sum _{k=1}^{K}\rho _kY_{k}}{\sum _{k=1}^{K}\rho _k}. \end{aligned}$$
(73)
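As an implementation aside (our own remark, not part of the analysis), the weighted average (73) can be maintained online without storing the iterates \(Y_{k}\); a minimal Python sketch:

def update_ergodic_average(x_bar, weight_sum, y_k, rho_k):
    # incremental update of bar X_K = (sum_k rho_k * Y_k) / (sum_k rho_k)
    weight_sum = weight_sum + rho_k
    x_bar = x_bar + (rho_k / weight_sum) * (y_k - x_bar)
    return x_bar, weight_sum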

Theorem 15

(Rate and oracle complexity under monotonicity of V) Consider the sequence \((X_{k})_{k\in {\mathbb {N}}}\) generated by RISFBF. Suppose Assumptions 1-5 hold. Suppose \(m_k \triangleq \lfloor k^a\rfloor \) with \(a > 1\) and \(\lambda _k=\lambda \in (0,1/(2L))\) for every \(k \in {\mathbb {N}}\). Suppose \((\alpha _k)_{k\in {\mathbb {N}}}\) is a non-decreasing sequence such that \(0<\alpha _k\le {\bar{\alpha }}<1\), and \(\rho _k=\tfrac{3(1-{\bar{\alpha }})^2}{2(2\alpha _k^2-\alpha _k+1)(1+L\lambda )}\) for every \(k \in {\mathbb {N}}\). Then the following hold for any \(K\in {\mathbb {N}}\):

  1. (i)

    \({\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})] \le {\mathcal {O}}\left( \tfrac{1}{K}\right) .\)

  2. (ii)

    Given \(\varepsilon >0\), define \(K_{\varepsilon }\triangleq \min \{k\in {\mathbb {N}}\vert \;{\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{k}\vert {\mathsf {C}})]\le \varepsilon \}\). Then \(\sum _{k=1}^{K_{\varepsilon }} m_k = {\mathcal {O}}\left( \tfrac{1}{\varepsilon ^{1+a}}\right) .\)

The proof of this Theorem builds on an idea which is frequently used in the analysis of stochastic approximation algorithms, and can at least be traced back to the robust stochastic approximation approach of [49]. In order to bound the expectation of the gap function, we construct an auxiliary process which allows us to majorize the gap via a quantity which is independent of the reference points. Once this is achieved, a simple variance bound completes the result.

Proof of Theorem 15

We define an auxiliary process \((\Psi _{k})_{k\in {\mathbb {N}}}\) such that

$$\begin{aligned} \Psi _{k+1}\triangleq \Psi _{k}+\rho _k\lambda _k W_{k},\quad \Psi _{1}\in {{\,\mathrm{dom}\,}}(T). \end{aligned}$$
(74)

Then,

$$\begin{aligned} \left\| \Psi _{k+1}-p\right\| ^{2}&=\left\| (\Psi _{k}-p)+\rho _{k}\lambda _{k}W_{k}\right\| ^{2}=\left\| \Psi _{k}-p\right\| ^{2}+\rho ^{2}_{k}\lambda _{k}^{2}\left\| W_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle \Psi _{k}-p,W_{k}\right\rangle , \end{aligned}$$

so that

$$\begin{aligned} 2\rho _{k}\lambda _{k}\left\langle W_{k},p-\Psi _{k}\right\rangle =\left\| \Psi _{k}-p\right\| ^{2}-\left\| \Psi _{k+1}-p\right\| ^{2}+\rho ^{2}_{k}\lambda _{k}^{2}\left\| W_{k}\right\| ^{2}. \end{aligned}$$

Introducing the iterate \(Y_{k}\), the above implies

$$\begin{aligned} 2\rho _{k}\lambda _{k}\left\langle W_{k},p-Y_{k}\right\rangle&=2\rho _{k}\lambda _{k}\left\langle W_{k},p-\Psi _{k}\right\rangle +2\rho _{k}\lambda _{k}\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle \\&=\left\| \Psi _{k}-p\right\| ^{2}-\left\| \Psi _{k+1}-p\right\| ^{2}+\rho ^{2}_{k}\lambda _{k}^{2}\left\| W_{k}\right\| ^{2}+2\rho _{k}\lambda _{k}\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle . \end{aligned}$$

As \(\Delta N_{k}(p)=2\rho _{k}\lambda _{k}\left\langle W_{k},p-Y_{k}\right\rangle \), this implies via a telescoping sum argument

$$\begin{aligned} \sum _{k=1}^{K}\Delta N_{k}(p)\le \left\| \Psi _{1}-p\right\| ^{2}+\sum _{k=1}^{K}\rho ^{2}_{k}\lambda ^{2}_{k}\left\| W_{k}\right\| ^{2}+\sum _{k=1}^{K}2\rho _{k}\lambda _{k}\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle . \end{aligned}$$
(75)

Using Lemma 14 and setting \(\lambda _k \equiv \lambda \), for any \((p,p^{*})\in {{\,\mathrm{gr}\,}}(F)\) it holds true that

$$\begin{aligned} \sum _{k=1}^{K}2\rho _{k}\lambda \left\langle p^{*},Y_{k}-p\right\rangle&\le (1-\alpha _{1})\left\| X_{1}-p\right\| ^{2}+\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p). \end{aligned}$$

Defining \(\mathtt {c}_{1}\triangleq (1-\alpha _{1})\left\| X_{1}-p\right\| ^{2}\), dividing both sides by \(\sum _{k=1}^{K}\rho _{k}\) and using the definition of the ergodic average (73), this gives

$$\begin{aligned} 2\lambda \left\langle p^{*},{\bar{X}}_{K}-p\right\rangle \le \frac{1}{\sum _{k=1}^{K}\rho _{k}}\left\{ \mathtt {c}_{1}+\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\Delta N_{k}(p)\right\} . \end{aligned}$$

Using the bound established in eq. (75), it follows

$$\begin{aligned} 2\lambda \left\langle p^{*},{\bar{X}}_{K}-p\right\rangle&\le \frac{1}{\sum _{k=1}^{K}\rho _{k}}\left\{ \mathtt {c}_{1}+\sum _{k=1}^{K}\Delta M_{k}+\left\| \Psi _{1}-p\right\| ^{2}\right. \\&\quad \left. +\sum _{k=1}^{K}\rho ^{2}_{k}\lambda ^{2}\left\| W_{k}\right\| ^{2}+\sum _{k=1}^{K}2\rho _{k}\lambda \left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle \right\} . \end{aligned}$$

Choosing \(\Psi _{1},p\in {\mathsf {C}}\) and introducing \(\mathtt {c}_{2}\triangleq \mathtt {c}_{1}+4D^{2}\), we see that the above can be bounded by a random quantity which is independent of p:

$$\begin{aligned} 2\lambda \left\langle p^{*},{\bar{X}}_{K}-p\right\rangle \le \frac{1}{\sum _{k=1}^{K}\rho _{k}}\left\{ \mathtt {c}_{2}+\sum _{k=1}^{K}\Delta M_{k}+\sum _{k=1}^{K}\rho ^{2}_{k}\lambda ^{2}\left\| W_{k}\right\| ^{2}+\sum _{k=1}^{K}2\rho _{k}\lambda _{k}\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle \right\} . \end{aligned}$$

Taking the supremum over pairs \((p,p^{*})\) such that \(p\in {\mathsf {C}}\) and \(p^{*}\in F(p)\), it follows

$$\begin{aligned} 2\lambda \mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})\le & {} \frac{\mathtt {c}_{2}}{\sum _{k=1}^{K}\rho _{k}}\nonumber \\&+\frac{\sum _{k=1}^{K}\Delta M_{k} +\sum _{k=1}^{K}\rho ^{2}_{k}\lambda ^{2}\left\| W_{k}\right\| ^{2}+\sum _{k=1}^{K}2\rho _{k}\lambda _{k}\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle }{\sum _{k=1}^{K}\rho _{k}} \end{aligned}$$
(76)

In order to proceed, we bound the first moment of the process \(\Delta M_{k}\) in the same way as in (34), obtaining

$$\begin{aligned} {\mathbb {E}}[\Delta M_{k}\vert {\mathcal {F}}_{k}]&\le \frac{6\rho _{k}\lambda _{k}^2}{1+L\lambda _{k}}{\mathbb {E}}[\left\| W_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]+\left( \frac{6\rho _{k}\lambda ^{2}_{k}}{1+L\lambda _{k}}+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\right) {\mathbb {E}}[\left\| U_{k}\right\| ^{2}\vert {\mathcal {F}}_{k}]\\&\le \frac{\left( \frac{12\rho _{k}\lambda ^{2}_{k}}{1+L\lambda _{k}}\sigma ^{2}+\frac{\rho _{k}\lambda ^{2}_{k}}{2}\sigma ^{2}\right) }{m_{k}}\triangleq \frac{\mathtt {a}_{k}\sigma ^{2}}{m_{k}}. \end{aligned}$$

Next, we take expectations on both sides of inequality (76), and use the bound (17), and \({\mathbb {E}}[\left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle ]={\mathbb {E}}\left[ {\mathbb {E}}\left( \left\langle W_{k},\Psi _{k}-Y_{k}\right\rangle \vert {\hat{{\mathcal {F}}}}_{k}\right) \right] =0.\) This yields

$$\begin{aligned} 2\lambda {\mathbb {E}}\left[ \mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})\right] \le \frac{\mathtt {c}_{2}}{\sum _{k=1}^{K}\rho _{k}}+\frac{1}{\sum _{k=1}^{K}\rho _{k}}\left( \sum _{k=1}^{K}\frac{\mathtt {a}_{k}\sigma ^{2}}{m_{k}}+\sum _{k=1}^{K}\rho ^{2}_{k}\lambda ^{2}\frac{\sigma ^{2}}{m_{k}}\right) . \end{aligned}$$

Since \(\alpha _{k}\uparrow {\bar{\alpha }}\in (0,1)\), we know that \(\rho _{k}\ge {\tilde{\rho }}\triangleq \frac{3(1-{\bar{\alpha }})^{2}}{2(1+L\lambda )(2{\bar{\alpha }}^{2}+1)}\). Similarly, since \(2\alpha ^{2}_{k}-\alpha _{k}+1\ge 7/8\) for all k, it follows that \(\rho _{k}\le {\bar{\rho }}\triangleq \frac{12(1-{\bar{\alpha }})^{2}}{7}\). Using these upper and lower bounds on the relaxation sequence, we also see that \(\mathtt {a}_{k}\le \lambda ^{2}\left( \frac{12{\bar{\rho }}}{1+L\lambda }+\frac{{\bar{\rho }}}{2}\right) \equiv {\bar{\mathtt {a}}}\), so that

$$\begin{aligned} 2\lambda {\mathbb {E}}\left[ \mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})\right] \le \frac{\mathtt {c}_{2}}{{\tilde{\rho }}K}+\frac{1}{{\tilde{\rho }}K}\left( {\bar{\mathtt {a}}}\sigma ^{2}+{\bar{\rho }}^2\lambda ^{2}\sigma ^{2}\right) \sum _{k=1}^{K}\frac{1}{m_{k}}\le \frac{\mathtt {c}_{3}}{K} \end{aligned}$$

where \(\mathtt {c}_{3}\triangleq \frac{\mathtt {c}_{2}}{{\tilde{\rho }}}+\frac{1}{{\tilde{\rho }}}\left( {\bar{\mathtt {a}}}\sigma ^{2}+{\bar{\rho }}^{2}\lambda ^{2}\sigma ^{2}\right) \sum _{k=1}^{\infty }\frac{1}{m_{k}}\), which is finite since \(a>1\). Hence, defining the deterministic quantity \(K_{\varepsilon }\triangleq \min \{k\in {\mathbb {N}}\vert \;{\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{k}\vert {\mathsf {C}})]\le \varepsilon \}\), we see \(K_{\varepsilon }\le \lceil \frac{\mathtt {c}_{4}}{\varepsilon }\rceil \), where \(\mathtt {c}_{4}\triangleq \frac{\mathtt {c}_{3}}{2\lambda }\).

(ii). Suppose \(m_k=\lfloor k^a\rfloor \) for \(a>1\). Then the oracle complexity to compute an \({\bar{X}}_K\) such that \({\mathbb {E}}[\mathsf {Gap}({\bar{X}}_{K}\vert {\mathsf {C}})] \le \varepsilon \) is bounded as

$$\begin{aligned} \sum _{k=1}^{K_{\varepsilon }}m_k \le \sum _{k=1}^{\lceil \mathtt {c}_{4}/\varepsilon \rceil }m_k\le \sum _{k=1}^{\lceil \mathtt {c}_{4}/\varepsilon \rceil }k^a\le \int _{1}^{\lceil \mathtt {c}_{4}/\varepsilon \rceil +1}t^a \,dt\le \tfrac{(\lceil \mathtt {c}_{4}/\varepsilon \rceil +1)^{a+1}}{a+1}={\mathcal {O}}\left( \tfrac{1}{\varepsilon ^{a+1}}\right) . \end{aligned}$$

\(\square \)

Remark 7

In the prior result, we employ a sampling rate \(m_k = \lfloor k^a \rfloor \) with \(a > 1\), which achieves the optimal rate of convergence. In contrast, the authors in [32] employ a sampling rate loosely given by \(m_k = \lfloor k^{1+a} (\ln (k))^{1+b} \rfloor \), where \(a > 0, b \ge -1\) or \(a = 0, b > 0\). We observe that when \(a > 0\) and \(b \ge -1\), their mini-batch size grows faster than our proposed \(m_k\), while it is comparable in the other case.

5 Applications

In this section, we compare the proposed scheme with its SA counterparts on a class of monotone two-stage stochastic variational inequality problems (Sec. 5.1) and a supervised learning problem (Sec. 5.2) and discuss the resulting performance.

5.1 Two-stage stochastic variational inequality problems

In this section, we describe some preliminary computational results obtained from Algorithm 1 when applied to a class of two-stage stochastic variational inequality problems, recently introduced by [85].

Consider an imperfectly competitive market with N firms playing a two-stage game. In the first stage, the firms decide upon their capacity level \(x_{i}\in [l_{i},u_{i}]\), anticipating the expected revenues to be obtained in the second stage in which they compete by choosing quantities à la Cournot. The second-stage market is characterized by uncertainty as the per-unit cost \(h_{i}(\xi _{i})\) is realized on the spot and cannot be anticipated. To compute an equilibrium in this game, we assume that each player is able to take stochastic recourse by determining production levels \(y_i(\xi )\), contingent on random convex costs and capacity levels \(x_i\). In order to bring this into the terminology for our problem, let us define the feasible set for capacity decisions of firm i as \({\mathcal {X}}_{i}\triangleq [l_{i},u_{i}]\subset {\mathbb {R}}_{+}\). The joint profile of capacity decisions is denoted by an N-tuple \(x=(x_{1},\ldots ,x_{N})\in {\mathcal {X}}\triangleq \prod _{i=1}^{N}{\mathcal {X}}_{i}\). The capacity choice of player i is then determined as a solution to the parametrized problem (Play\(_i(x_{-i})\))

$$\begin{aligned} \min _{x_i \in {\mathcal {X}}_i} \, c_i(x_i) -\left( p(X)x_i - {\mathbb {E}}_{\xi }[{\mathcal {Q}}_i(x_i,\xi )]\right) ,\qquad \qquad (\text{ Play}_i(x_{-i})) \end{aligned}$$

where \(c_i: {\mathcal {X}}_i \rightarrow {\mathbb {R}}_+\) is a \({\tilde{L}}^c_i\)-smooth and convex cost function and \(p(\cdot )\) denotes the inverse-demand function defined as \(p(X)= d-rX\), \(d, r > 0\). The function \({\mathcal {Q}}_i(\cdot ,\xi )\) denotes the optimal cost function of firm i in scenario \(\xi \in \Xi \), assuming the value \({\mathcal {Q}}_i(x_i,\xi )\) when the capacity level \(x_{i}\) is chosen. The recourse function \({\mathbb {E}}_{\xi }[{\mathcal {Q}}_i(\cdot ,\xi )]\) denotes the expectation of the optimal value of player i's second-stage problem and is defined as

$$\begin{aligned} {\mathcal {Q}}_i(x_i,\xi )&\triangleq \min \{h_{i}(\xi )y_{i}(\xi )\vert y_{i}(\xi )\in [0,x_{i}]\}\\&=\max \{\pi _{i}(\xi )x_{i}\vert \pi _{i}(\xi )\le 0,h_{i}(\xi )-\pi _{i}(\xi )\ge 0\}.\qquad \qquad (\text{ Rec}_i(x_{-i})) \end{aligned}$$

A Nash equilibrium of this game is given by a tuple \((x^{*}_1, \cdots , x^{*}_N)\) where \( x^{*}_i \text{ solves } (\text{ Play}_i(x_{-i}^*))\) for each \(i=1,2,\ldots ,N\). A simple computation shows that \({\mathcal {Q}}_{i}(x_{i},\xi )=\min \{0,h_{i}(\xi )x_{i}\}\), and hence it is nonsmooth. In order to obtain a smoothed variant, we introduce \({\mathcal {Q}}_i^{\epsilon }(\cdot ,\xi _i)\), defined as

$$\begin{aligned} {\mathcal {Q}}^{\epsilon }_i(x_i,\xi )\triangleq \max \{ x_i \pi _i(\xi ) - \tfrac{\epsilon }{2} (\pi _i(\xi ))^2\vert \pi _i(\xi ) \le 0, \pi _i(\xi ) \le h_i(\xi )\},\quad \epsilon >0. \end{aligned}$$

This is the value function of a quadratic program, requiring the maximization of an \(\epsilon \)-strongly concave function. Hence, \({\mathcal {Q}}_{i}^{\epsilon }(x_{i},\xi )\) is single-valued and \(\nabla _{x_i}{\mathcal {Q}}^{\epsilon }_i(\cdot ,\xi )\) is \(\tfrac{1}{\epsilon }\)-Lipschitz and \(\epsilon \)-strongly monotone [86, Prop.12.60] for all \(\xi \in \Xi \). The latter is explicitly given by

$$\begin{aligned} \nabla _{x_i}{\mathcal {Q}}^{\epsilon }_i(x_i,\xi )\triangleq \mathop {\mathrm {argmax}}\limits \{x_i \pi _i(\xi ) - \tfrac{\epsilon }{2} (\pi _i(\xi ))^2 \vert \pi _{i}(\xi )\le 0,\pi _i(\xi ) \le h_i(\xi )\}. \end{aligned}$$

Employing this smoothing strategy in our two-stage noncooperative game yields the individual decision problem

$$\begin{aligned} (\forall i\in \{1,\ldots ,N\}): \min _{x_i \in {\mathcal {X}}_i} \ c_i(x_i) - p(X)x_i + {\mathbb {E}}_{\xi }[{\mathcal {Q}}^{\epsilon }_i(x_i,\xi )].\qquad \qquad (\text{ Play}^{\epsilon }_i(x_{-i})) \end{aligned}$$

The necessary and sufficient equilibrium conditions of this \(\epsilon \)-smoothed game can be compactly represented as

$$\begin{aligned} \begin{aligned}&0 \in F^\epsilon (x) \triangleq V^{\epsilon }(x)+T(x), \text{ where } \\&V^{\epsilon }(x) = C(x) + R(x) + D^{\epsilon }(x),\text { and } T(x) = {{\,\mathrm{{\mathsf {N}}}\,}}_{{\mathcal {X}}}(x),\qquad \qquad (\text {SGE}^\epsilon ) \end{aligned} \end{aligned}$$

and C, R, and \(D^\epsilon \) are single-valued maps given by

$$\begin{aligned} C(x) \triangleq \left( \begin{array}{c}c_1'(x_1) \\ \vdots \\ c_N'(x_N)\end{array}\right) ,\quad R(x) \triangleq r(X{\mathbf {1}}+ x) - d, \text { and } D^{\epsilon } (x) \triangleq \left( \begin{array}{c} {\mathbb {E}}_{\xi }[\nabla _{x_1}{\mathcal {Q}}_1^{\epsilon }(x_1,\xi )] \\ \vdots \\ {\mathbb {E}}_{\xi }[\nabla _{x_N}{\mathcal {Q}}_N^{\epsilon }(x_N,\xi )]\end{array}\right) . \end{aligned}$$

We note that the interchange between the expectation and the gradient operator can be invoked based on smoothness requirements (cf. [87, Th. 7.47]). The problem (SGE\(^{\epsilon }\)) aligns perfectly with the structured inclusion (MI), in which T is a maximally monotone map and V is an expectation-valued maximally monotone map. In addition, we can quantify the Lipschitz constant of V as \(L_V = L_C + L_R + L_D^{\epsilon }\), where \(L_C = \max _{1\le i\le N} {\tilde{L}}^c_i\), \(L_R = r\left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| _2 = r(N+1)\) and \(L_D^{\epsilon } = \tfrac{1}{\epsilon }\). Here, \({{\,\mathrm{Id}\,}}\) is the \(N\times N\) identity matrix, and \({\mathbf {1}}\) is the \(N\times 1\) vector consisting only of ones.
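To make the oracle concrete, the following Python sketch (illustrative only) assembles a mini-batch estimate of \(V^{\epsilon }(x)=C(x)+R(x)+D^{\epsilon }(x)\). We assume here the quadratic costs \(c_{i}\) specified below in the experiments, and we use the closed form of the maximizer defining \(\nabla _{x_i}{\mathcal {Q}}^{\epsilon }_i\), namely \(\pi _{i}^{*}(x_{i},\xi )=\min \{x_{i}/\epsilon ,\min \{0,h_{i}(\xi )\}\}\), which follows from maximizing the strongly concave quadratic over the feasible half-line; the batch size and random seed are arbitrary.

import numpy as np

def oracle_V_eps(x, a, bhat, r, d, eps, batch, rng):
    # mini-batch estimate of V^eps(x) = C(x) + R(x) + D^eps(x)
    C = bhat * x + a                                    # c_i'(x_i) for c_i(x) = 0.5*bhat_i*x^2 + a_i*x
    R = r * (x.sum() + x) - d                           # R(x) = r(X*1 + x) - d
    xi = rng.uniform(-5.0, 0.0, size=(batch, x.size))   # second-stage costs h_i(xi) = xi_i
    pi_star = np.minimum(x / eps, np.minimum(0.0, xi))  # closed-form maximizer, i.e. grad_x Q^eps
    return C + R + pi_star.mean(axis=0)

rng = np.random.default_rng(1)
N = 10
a = rng.uniform(2.0, 3.0, N)
bhat = rng.uniform(0.0, 1.0, N)
x = rng.uniform(0.0, 1.0, N)
print(oracle_V_eps(x, a, bhat, r=0.1, d=1.0, eps=0.1, batch=64, rng=rng))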

Problem parameters for 2-stage SVI. Our numerics are based on specifying \(N=10\), \(r = 0.1\), and \(d = 1\). We consider four problem settings with \(L_V\) ranging from \(10\) to \(10^4\) (see Table 1). For each setting, the problem parameters are defined as follows.

  1. (i)

    Specification of \(h_i(\xi )\). The cost parameters \(h_i(\xi _i) \triangleq \xi _i\) where \(\xi _i \sim {\mathtt {Uniform}}[-5,0]\) and \(i = 1, \cdots , N\).

  2. (ii)

    Specification of \(L_V, L_R,\) \(L_D^{\epsilon }\), \(L_C\), and \({\hat{b}}_1\). Since \(\left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| _2 = 11\) when \(N = 10\), \(L_R = r \left\| {{\,\mathrm{Id}\,}}+ {\mathbf {1}}{\mathbf {1}}^{\top }\right\| = 1.1\). Let \(\epsilon \) be defined as \(\epsilon =\frac{10}{L_{V}}\) and \(L_D^{\epsilon } = \tfrac{1}{\epsilon } = \tfrac{L_V}{10}\). It follows that \(L_C = L_V-L_R-L_D^{\epsilon }\) and \({\hat{b}}_1= L_C\).

  3. (iii)

    Specification of \(c_i(x_i)\). The cost function \(c_i\) is defined as \(c_i(x_i) = \tfrac{1}{2}{\hat{b}}_i x_i^2+a_i{x_i}\) where \(a_1, \ldots , a_N \sim {\mathtt {Uniform}}[2,3]\) and \({\hat{b}}_2, \cdots , {\hat{b}}_N \sim {\mathtt {Uniform}}[0,{\hat{b}}_1].\) Further, \(a \triangleq [a_1,\dots ,a_N]^{\top }\in {\mathbb {R}}^N\) and \(B\triangleq {{\,\mathrm{diag}\,}}({\hat{b}}_1,\dots ,{\hat{b}}_N)\) is a diagonal matrix with nonnegative elements.

Algorithm specifications We compare Algorithm 1 (RISFBF) with a stochastic forward-backward (SFB) scheme and a stochastic forward-backward-forward (SFBF) scheme. Solution quality is compared by estimating the residual function \(\mathsf {res}(x)=\Vert x-\Pi _{\mathcal {X}}(x-\lambda V^\epsilon (x))\Vert \). All of the schemes were implemented in MATLAB on a PC with 16GB RAM and 6-Core Intel Core i7 processor (2.6GHz).

(i) (SFB): The (SFB) scheme is defined as the recursion

$$\begin{aligned} X_{k+1}:= \Pi _{{\mathcal {X}}} \left[ X_k - \lambda _k {\widehat{V}}^{\epsilon }(X_k,\xi _k)\right] , \end{aligned}$$
(SFB)

where \(V^{\epsilon }(X_k) = {\mathbb {E}}_{\xi }[{\widehat{V}}^{\epsilon }(X_k,\xi )]\) and \(\lambda _k = \tfrac{1}{\sqrt{k}}\). The operator \(\Pi _{{\mathcal {X}}}[\cdot ]\) denotes the orthogonal projection onto the set \({\mathcal {X}}\). The initial point \(X_0\) is randomly generated in \([0,1]^N\).

(ii) (SFBF): The variance-reduced stochastic forward-backward-forward scheme we employ is defined by the updates

$$\begin{aligned} \left\{ \begin{array}{l} Y_{k}= \Pi _{\mathcal {X}}[X_k - \lambda _k A_{k}(X_{k})], \\ X_{k+1}=Y_{k}-\lambda _k(B_{k}(Y_k)-A_{k}(X_{k})). \end{array}\right. \end{aligned}$$
(SFBF)

where \(A_{k}(X_{k})=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\widehat{V}}^{\epsilon }(X_k,\xi _k^{(t)})\) and \(B_{k}(Y_k)=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}{\widehat{V}}^{\epsilon }(Y_k,\eta _k^{(t)})\) are mini-batch estimators built from i.i.d. samples \(\xi _k^{(t)},\eta _k^{(t)}\). We choose a constant \(\lambda _k\equiv \lambda =\tfrac{1}{4L_V}\), and set \(m_k=\lfloor k^{1.01}\rfloor \) for merely monotone problems and \(m_k=\lfloor 1.01^{k}\rfloor \) for strongly monotone problems.

(iii) (RISFBF): In the implementation of Algorithm 1 we choose a constant steplength \(\lambda _k\equiv \lambda =\tfrac{1}{4L_V}\). In merely monotone settings, we utilize an increasing sequence \(\alpha _k=\alpha _0(1-\tfrac{1}{k+1})\), where \(\alpha _0=0.1\), the relaxation parameter sequence \(\rho _k\) defined as \(\rho _k=\tfrac{3(1-\alpha _0)^2}{2(2\alpha _k^2-\alpha _k+1)(1+L_V\lambda )}\), and \(m_k=\lfloor k^{1.01}\rfloor \). In strongly monotone regimes, we choose a constant inertial parameter \(\alpha _k\equiv \alpha =0.1\), a constant relaxation parameter \(\rho _k\equiv \rho =1\), and \(m_k=\lfloor 1.01^k\rfloor \).
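For concreteness, one (RISFBF) iteration with the above parameter choices may be sketched in Python as follows. The update form used here (inertial extrapolation, a projected forward-backward step, a forward correction, and a relaxation step) is a schematic reading of Algorithm 1 rather than its verbatim statement; oracle(point, batch_size, rng) stands for any mini-batch estimator of \(V^{\epsilon }\), e.g. a partial application of oracle_V_eps from the earlier sketch.

import numpy as np

def risfbf_step(x_prev, x, k, oracle, lam, alpha0, L_V, lo, hi, rng):
    # one schematic RISFBF iteration in the merely monotone regime
    alpha = alpha0 * (1.0 - 1.0 / (k + 1))                        # inertial parameter
    rho = 3 * (1 - alpha0) ** 2 / (2 * (2 * alpha ** 2 - alpha + 1) * (1 + L_V * lam))
    m_k = max(1, int(np.floor(k ** 1.01)))                        # mini-batch size
    z = x + alpha * (x - x_prev)                                  # inertial extrapolation
    A = oracle(z, m_k, rng)                                       # mini-batch estimate at z
    y = np.clip(z - lam * A, lo, hi)                              # projection onto the box [l, u]
    B = oracle(y, m_k, rng)                                       # mini-batch estimate at y
    x_next = (1 - rho) * z + rho * (y - lam * (B - A))            # forward correction + relaxation
    return x, x_next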

In Fig. 1, we compare the three schemes under maximal monotonicity and strong monotonicity, respectively and examine their sensitivities to inertial and relaxation parameters. Both sets of plots are based on selecting \(L_V=10^2\).

Fig. 1 Trajectories for (SFB), (SFBF), and (RISFBF) (left: monotone, right: strongly monotone)

Table 1 Comparison of (RISFBF) with (SFB) and (SFBF) under various Lipschitz constants

Key insights Several insights may be drawn from Table 1 and Figure 1.

  1. (a)

    First, from Table 1, one may conclude that on this class of problems, (RISFBF) and (SFBF) significantly outperform the (SFB) scheme, which is unsurprising given that both employ increasing mini-batch sizes, leading to performance akin to that seen in deterministic schemes. We should note that when \({\mathcal {X}}\) is somewhat more complicated, the difference in run-times between SA schemes and mini-batch variants becomes more pronounced; in this instance, the set \({\mathcal {X}}\) is relatively simple to project onto and there is little difference in run-time across the three schemes.

  2. (b)

    Second, we observe that both the (SFBF) and (RISFBF) schemes can contend with poorly conditioned problems: as \(L_V\) grows, their performance does not degrade significantly in terms of empirical error. However, in both monotone and strongly monotone regimes, (RISFBF) provides consistently better solutions in terms of empirical error than (SFBF). Figure 1 displays the range of trajectories obtained for differing relaxation and inertial parameters and, in the instances considered, (RISFBF) shows consistent benefits over (SFBF).

  3. (c)

    Third, since such schemes display geometric rates of convergence for strongly monotone inclusion problems, this improvement is reflected in terms of the empirical errors for strongly monotone vs monotone regimes.

5.2 Supervised learning with group variable selection

Our second numerical example considers the following population risk formulation of a composite absolute penalty (CAP) problem arising in supervised statistical learning [7]

$$\begin{aligned} \min _{w\in {\mathcal {W}}} \tfrac{1}{2}{\mathbb {E}}_{(a,b)}[( a^{\top }w-b)^2]+\eta \sum _{\textit{g}\in {\mathcal {S}}} \Vert w_\textit{g}\Vert _2, \end{aligned}$$
(CAP)

where the feasible set \({\mathcal {W}}\subseteq {\mathbb {R}}^{d}\) is the Euclidean ball \({\mathcal {W}}\triangleq \{w \in {\mathbb {R}}^{d}\mid \Vert w\Vert _2 \le D\}\), and \(\xi =(a,b)\in {\mathbb {R}}^{d}\times {\mathbb {R}}\) denotes the random variable consisting of a vector of predictors a and output b. The parameter vector w is the sparse linear hypothesis to be learned. The sparsity structure of w is represented by a collection of groups \({\mathcal {S}}=\{g_{1},\dots ,g_{l}\}\), each group being a subset of \(\{1,\dots ,d\}\). When the groups in \({\mathcal {S}}\) do not overlap, \(\sum _{\textit{g}\in {\mathcal {S}}} \Vert w_\textit{g}\Vert _2\) is referred to as the group lasso penalty [6, 88]. When the groups in \({\mathcal {S}}\) form a partition of the set of predictors, \(\sum _{\textit{g}\in {\mathcal {S}}} \Vert w_\textit{g}\Vert _2\) is a norm with singularities at points where some subvectors \(w_\textit{g}\) vanish. For any group \(\textit{g} \in {\mathcal {S}}\), \(w_{\textit{g}}\) denotes the subvector of w whose indices lie in \(\textit{g}\), i.e., \(w_\textit{g} := (w_i)_{i\in \textit{g}}\), and it is expected to have few non-zero components. Here, we assume that each group \(\textit{g} \in {\mathcal {S}}\) consists of k elements. Introduce the linear operator \(L:{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{k}\underbrace{\times \cdots \times }_{l-\text {times}}{\mathbb {R}}^{k}\), given by \(Lw=[\eta w_{g_{1}},\ldots ,\eta w_{g_{l}}]\). Let us also define

$$\begin{aligned}&Q={\mathbb {E}}_{\xi }[aa^{\top }],q={\mathbb {E}}_{\xi }[ab],c=\frac{1}{2}{\mathbb {E}}_{\xi }[b^{2}], \\&h(w)\triangleq \frac{1}{2}w^{\top }Qw-w^{\top }q+c, \text { and } f(w)\triangleq \delta _{{\mathcal {W}}}(w), \end{aligned}$$

where \(\delta _{{\mathcal {W}}}(\cdot )\) denotes the indicator function with respect to the set \({\mathcal {W}}\). Then (CAP) becomes

$$\begin{aligned} \min _{w\in {\mathbb {R}}^{d}}\ \{h(w)+g(Lw)+f(w)\},\quad \text {where }g(y_{1},\ldots ,y_{l})\triangleq \sum _{i=1}^{l}\left\| y_{i}\right\| . \end{aligned}$$

This is clearly seen to be a special instance of the convex programming problem (2). Specifically, we let \({\mathsf {H}}_{1}={\mathbb {R}}^{d}\) with the standard Euclidean norm, and \({\mathsf {H}}_{2}={\mathbb {R}}^{k}\underbrace{\times \cdots \times }_{l-\text {times}}{\mathbb {R}}^{k}\) with the product norm

$$\begin{aligned} \left\| (y_{1},\ldots ,y_{l})\right\| _{{\mathsf {H}}_{2}}\triangleq \sum _{i=1}^{l}\left\| y_{i}\right\| _{2}. \end{aligned}$$

Since

$$\begin{aligned} g^{*}(v_{1},\ldots ,v_{l})=\sum _{i=1}^{l}\delta _{{\mathbb {B}}(0,1)}(v_{i})\qquad \forall v=(v_{1},\ldots ,v_{l})\in {\mathbb {R}}^{k}\underbrace{\times \cdots \times }_{l-\text {times}}{\mathbb {R}}^{k}, \end{aligned}$$

the Fenchel-dual takes the form (3). Accordingly, a primal-dual pair for (CAP) is a root of the monotone inclusion (MI) with

$$\begin{aligned} V(w,v)= ( \nabla h(w)+L^{*}v, -Lw ) \text { and } T(w,v)\triangleq \partial f(w)\times \partial g^{*}(v) \end{aligned}$$

involving \(d+kl\) variables.
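The following Python sketch (an illustration under our own conventions, not the authors' code) assembles the primal-dual operator \(V(w,v)\) and the resolvent of \(T=\partial f\times \partial g^{*}\) for (CAP): since \(f=\delta _{{\mathcal {W}}}\) and \(g^{*}\) is a sum of indicators of unit balls, the resolvent of \(\lambda T\) reduces to componentwise projections, independently of \(\lambda \). Here groups[i] plays the role of the index set \(g_{i}\) and eta is the penalty weight \(\eta \).

import numpy as np

def apply_L(w, groups, eta):
    # Lw = [eta*w_{g_1}, ..., eta*w_{g_l}]
    return [eta * w[g] for g in groups]

def apply_Lt(v_blocks, groups, eta, d):
    # adjoint L^T v, accumulating overlapping groups
    out = np.zeros(d)
    for g, v in zip(groups, v_blocks):
        out[g] += eta * v
    return out

def V_op(w, v_blocks, Q, q, groups, eta):
    # V(w, v) = (grad h(w) + L^T v, -Lw) with grad h(w) = Qw - q
    primal = Q @ w - q + apply_Lt(v_blocks, groups, eta, w.size)
    dual = [-blk for blk in apply_L(w, groups, eta)]
    return primal, dual

def resolvent_T(w, v_blocks, D):
    # J_{lambda T}: project w onto the ball W and each v_block onto the unit ball
    w_proj = w * min(1.0, D / max(np.linalg.norm(w), 1e-12))
    v_proj = [blk / max(np.linalg.norm(blk), 1.0) for blk in v_blocks]
    return w_proj, v_proj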

Problem parameters for (CAP) We simulated data with \(d = 82\), covered by 10 groups of 10 variables with 2 variables of overlap between two successive groups: \(\{1,\dots ,10\},\{9,\dots ,18\},\dots ,\{73,\dots ,82\}\). We assume the nonzeros of \(w_{\mathrm{true}}\) lie in the union of groups 4 and 5 and are sampled from i.i.d. Gaussian variables. The operator \(V(w,v)\) is estimated by the mini-batch estimator using \(m_{k}\) i.i.d. copies of the random input-output pair \(\xi =(a,b)\in {\mathbb {R}}^{d}\times {\mathbb {R}}\). Specifically, we draw each coordinate of the random vector a from the standard Gaussian distribution \({\mathtt {N}}(0,1)\) and generate \(b=a^{\top }w_{\mathrm{true}}+\varepsilon \), for \(\varepsilon \sim {\mathtt {N}}(0,\sigma ^{2}_{\varepsilon })\). In the concrete experiment reported here, the noise level is taken as \(\sigma _{\varepsilon }=0.1\). In all instances, the regularization parameter is chosen as \(\eta = 10^{-4}\). The accuracy of feature extraction of the algorithm output w is evaluated by the relative error to the ground truth, defined as

$$\begin{aligned} \frac{\Vert w-w_{\mathrm{true}}\Vert _2}{\Vert w_{\mathrm{true}}\Vert _2}. \end{aligned}$$

Algorithm specifications We compare (RISFBF) with stochastic extragradient (SEG) and stochastic forward-backward-forward (SFBF) schemes and specify their algorithm parameters. Again, all the schemes are run in MATLAB 2018b on a PC with 16GB RAM and a 6-Core Intel Core i7 processor (2.6 GHz).

  1. (i)

    (SEG): Set \({\mathcal {X}}\triangleq {\mathcal {W}}\times {{\,\mathrm{dom}\,}}(g^{*})\). The (SEG) scheme [32] utilizes the updates

    $$\begin{aligned} \begin{aligned} Y_{k}&:= \Pi _{{\mathcal {X}}} \left[ X_k - \lambda _k A_{k}(X_k)\right] ,\\ X_{k+1}&:= \Pi _{{\mathcal {X}}} \left[ X_k - \lambda _k B_{k}(Y_{k})\right] , \end{aligned} \end{aligned}$$
    (SEG)

    where \(A_{k}(X_{k})=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}V(X_k,\xi _k^{(t)})\) and \(B_{k}(Y_k)=\frac{1}{m_{k}}\sum _{t=1}^{m_{k}}V(Y_k,\eta _k^{(t)})\) are mini-batch estimators built from i.i.d. samples \(\xi _k^{(t)},\eta _k^{(t)}\) of \(\xi =(a,b)\). In this scheme, \(\lambda _k\equiv \lambda \) is chosen to be \(\tfrac{1}{4L_{V}}\) (\(L_{V}\) is the Lipschitz constant of V). We set \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).

  2. (ii)

    (SFBF): We employ the same algorithm parameters as in (i). Specifically, we choose a constant \(\lambda _k\equiv \lambda =\tfrac{1}{4L_{V}}\) and \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).

  3. (iii)

    (RISFBF): Here, we employ a constant step-length \(\lambda _k\equiv \lambda =\tfrac{1}{4 L_{V}}\), an increasing sequence \(\alpha _k=\alpha _0(1-\tfrac{1}{k+1})\), where \(\alpha _0=0.85\), a relaxation parameter sequence \(\rho _k=\tfrac{3(1-\alpha _0)^2}{2(2\alpha _k^2-\alpha _k+1)(1+L_{V}\lambda )}\), and assume \(m_k=\left\lfloor \tfrac{k^{1.1}}{n}\right\rfloor \).

Table 2 Comparison of the RISFBF, SFBF and SEG algorithms in solving (CAP)

Insights We compare the performance of the schemes in Table 2 and observe that (RISFBF) outperforms its competitors in extracting the underlying features of the data. In Fig. 2, trajectories for (RISFBF), (SFBF) and (SEG) are presented, where a consistent benefit of employing (RISFBF) can be seen for a range of choices of \(\alpha _0\).

Fig. 2 Trajectories for (SEG), (SFBF), and (RISFBF) for problem (CAP)

6 Conclusion

In a general structured monotone inclusion setting in Hilbert spaces, we introduce a relaxed inertial stochastic algorithm based on Tseng's forward-backward-forward splitting method. Motivated by gaps in convergence claims and rate statements in both deterministic and stochastic regimes, we develop a variance-reduced framework and make the following contributions: (i) Asymptotic convergence guarantees are provided under both increasing and constant mini-batch sizes, the latter requiring somewhat stronger assumptions on V; (ii) When V is monotone, rate statements in terms of a restricted gap function, inspired by the Fitzpatrick function for inclusions, show that the expected gap of an averaged sequence diminishes at the rate \({\mathcal {O}}(1/k)\), and the oracle complexity of computing an \(\epsilon \)-solution is \({\mathcal {O}}(1/\epsilon ^{1+a})\) for \(a> 1\); (iii) When V is strongly monotone, a non-asymptotic linear rate statement can be proven, with an iteration complexity of \({\mathcal {O}}(\log (1/\epsilon ))\) and an oracle complexity of \({\mathcal {O}}(1/\epsilon )\) for computing an \(\epsilon \)-solution. In addition, a perturbed linear rate is also developed. It is worth emphasizing that the rate statements in the strongly monotone regime accommodate the possibility of a biased stochastic oracle. Unfortunately, the geometric growth in batch-size may be onerous in some situations, motivating the analysis of a polynomial, and easily modulated, growth rate in the sample-size. This leads to an associated polynomial rate of convergence.

Various open questions arise from our analysis. First, we focused exclusively on a variance reduction technique based on increasing mini-batches. From the point of view of computation and oracle complexity, this approach can become quite costly. Exploiting different variance reduction techniques, perhaps taking the special structure of the single-valued operator V into account (as in [57]), has the potential to improve the computational complexity of our proposed method. At the same time, this would complicate the analysis of the variance of the stochastic estimators considerably; consequently, we leave this as an important question for future research.

Second, our analysis requires knowledge of the Lipschitz constant L. While line-search techniques have obviated such a need in deterministic regimes, such avenues are far more challenging to adopt in stochastic regimes. Efforts to address this in variational settings have centered around leveraging empirical process theory [33]; this remains a goal of future research. Another avenue emerges in applications where a reasonably good estimate of this quantity can be obtained via some pre-processing of the data (see e.g. Section 6 in [62]). Developing such an adaptive framework that is robust to noise is an important topic for future research.