Stochastic Relaxed Inertial Forward-Backward-Forward splitting for Monotone Inclusions in Hilbert spaces

We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator $T$ and a single-valued monotone, Lipschitz continuous, and expectation-valued operator $V$. We draw motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions and present a stochastic extension, the relaxed inertial stochastic forward-backward-forward (RISFBF) method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that (RISFBF) produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence, and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate while the oracle complexity of computing an $\epsilon$-solution improves to $O(1/\epsilon)$. Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement that broadens the applicability of the scheme considerably. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of $O(1/k)$ while the oracle complexity of computing a suitably defined $\epsilon$-solution is $O(1/\epsilon^{1+a})$ where $a>1$. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to stochastic forward-backward-forward (SFBF) and SA schemes.


Problem formulation and motivation
A wide range of problems in areas such as optimization, variational inequalities, game theory, signal processing, or traffic theory can be reduced to solving inclusions involving set-valued operators in a Hilbert space H, i.e. to finding a point x ∈ H such that 0 ∈ F(x), where F : H → 2^H is a set-valued operator. Many such inclusion problems display specific structure revealing that the operator F can be additively decomposed. This leads us to the main problem we consider in this paper.

Problem 1.
Let H be a real separable Hilbert space with inner product ⟨·, ·⟩ and associated norm ‖·‖ = √⟨·, ·⟩. Let T : H → 2^H and V : H → H be maximally monotone operators, such that V is L-Lipschitz continuous. The problem is to find x̄ ∈ H such that

0 ∈ F(x) ≔ V(x) + T(x). (MI)

We assume throughout that Problem 1 is well-posed, i.e. that its solution set S ≔ Zer(F) is non-empty. We are interested in the case where (MI) is solved by an iterative algorithm based on a stochastic oracle (SO) representation of the operator V. Specifically, when solving the problem, the algorithm calls the SO. At each call, the SO receives as input a search point x ∈ H, generated by the algorithm on the basis of the past information, and returns the output V̂(x, ξ), where ξ is a random variable defined on a given probability space (Ω, F, P), taking values in a measurable set Ξ with law P ≔ P ∘ ξ⁻¹. In most parts of this paper, as in the vast majority of contributions on stochastic variational problems in general, it is assumed that the output of the SO is unbiased. Such stochastic inclusion problems arise in numerous problems of fundamental importance in mathematical optimization and equilibrium problems, either directly or through an appropriate reformulation. An excellent survey on the existing techniques for solving problem (MI) can be found in [9] (in general Hilbert spaces) and [29] (in the finite-dimensional case).
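To make the oracle model concrete, the following minimal Python sketch builds an unbiased SO for a toy finite-dimensional monotone operator; the operator, noise level, and all names are illustrative assumptions, not objects from the paper.

```python
import numpy as np

# Toy stochastic oracle (SO) sketch: V(x) = A x with A = P + S, where P is
# symmetric positive semidefinite and S is skew-symmetric, so V is monotone
# and Lipschitz.  Each oracle call returns V(x) corrupted by zero-mean noise.
rng = np.random.default_rng(0)

P = np.array([[2.0, 0.0], [0.0, 1.0]])   # symmetric PSD part
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric part
A = P + S

def V(x):
    return A @ x

def so_call(x, sigma=0.1):
    """One SO call: V(x) plus zero-mean Gaussian noise (unbiased oracle)."""
    return V(x) + sigma * rng.standard_normal(x.shape)

x = np.array([1.0, -1.0])
mean_est = np.stack([so_call(x) for _ in range(20000)]).mean(axis=0)
print(np.allclose(mean_est, V(x), atol=0.01))  # empirical unbiasedness check
```

Averaging many oracle calls recovers V(x), which is exactly the unbiasedness property assumed of the SO.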

Motivating examples
In what follows, we provide some motivating examples. Problem (1.2) gains particular relevance in machine learning, where usually h(u) is a convex data fidelity term (e.g. a population risk functional), while g(Lu) and f(u) embody penalty or regularization terms; see e.g. total variation [74], hierarchical variable selection [46,88], and graph regularization [83,84]. Applications in control and engineering are given in [5,53]. We refer to (1.2) as the primal problem.
Using Fenchel-Rockafellar duality [9, Ch. 19], one obtains the dual problem of (1.2) and the associated primal-dual saddle-point formulation (1.4). Following classical Karush-Kuhn-Tucker theory [69], the primal-dual optimality conditions associated with (1.4) are concisely represented by the following monotone inclusion: find x̄ = (ū, v̄) ∈ H₁ × H₂ ≡ H such that

−L*v̄ ∈ ∂f(ū) + ∇h(ū), and Lū ∈ ∂g*(v̄). (1.5)

We may compactly summarize these conditions in terms of the zero-finding problem (MI) using the operators V and T, defined as

V(u, v) ≔ (∇h(u) + L*v, −Lu), T(u, v) ≔ ∂f(u) × ∂g*(v).

Note that the operator V : H → H is the sum of a maximally monotone and a skew-symmetric operator. Hence, in general, it is not cocoercive 1 . Conditions on the data guaranteeing Assumption 1 are stated in [21].
Since h(u) is represented as an expected value, we need to appeal to simulation-based methods to evaluate its gradient. Also, significant computational speedups can be obtained if we are able to sample the skew-symmetric linear operator (u, v) ↦ (L*v, −Lu) in an efficient way. Hence, we assume that there exists a SO that can provide unbiased estimators of the gradient operator ∇h(u) and of the pair (L*v, −Lu). More specifically, given the current position x = (u, v) ∈ H₁ × H₂, the oracle outputs the random estimators Ĥ(u, ξ), L̂_u(u, ξ), L̂_v(v, ξ), unbiased for ∇h(u), Lu, and L*v, respectively. This oracle feedback generates the random operator V̂(x, ξ) = (Ĥ(u, ξ) + L̂_v(v, ξ), −L̂_u(u, ξ)), which allows us to approach the saddle-point problem (1.4) via simulation-based techniques.

A second motivating example is the stochastic variational inequality (SVI) problem. Due to their huge number of applications, SVIs have received enormous interest over the last several years from various communities [48,58,75,80]. This problem emerges when V(x) is represented as an expected value as in (1.1) and T(x) = ∂g(x) for some proper lower semi-continuous convex function g : H → (−∞, ∞]. In this case, the resulting structured monotone inclusion problem can be equivalently stated as: find x̄ ∈ H such that

⟨V(x̄), x − x̄⟩ + g(x) − g(x̄) ≥ 0 for all x ∈ H. (1.6)

An important and frequently studied special case of (1.6) arises when g is the indicator function of a given closed and convex subset C ⊂ H. In this case the set-valued operator T becomes the normal cone map N_C, and the equilibrium condition (1.6) reduces to: find x̄ ∈ C such that ⟨V(x̄), x − x̄⟩ ≥ 0 for all x ∈ C. This formulation includes many fundamental problems, such as fixed point problems, Nash equilibrium problems, and complementarity problems [29].
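As a toy numerical illustration of the sampled primal-dual operator (the concrete h, the matrix L, and the estimator names below are hypothetical choices made only for this sketch), one can check empirically that the assembled estimator is unbiased:

```python
import numpy as np

# Hypothetical finite-dimensional sketch: take h(u) = E_xi[0.5*||u - xi||^2]
# with E[xi] = 0, so grad h(u) = u, and a fixed matrix L.  The sampled
# operator mimics V_hat(x, xi) = (H_hat + Lv_hat, -Lu_hat).
rng = np.random.default_rng(1)
L = np.array([[1.0, 2.0], [0.0, 1.0]])

def so_call(u, v):
    xi = rng.standard_normal(u.shape)
    H_hat = u - xi                # unbiased estimator of grad h(u) = u
    Lv_hat = L.T @ v              # sampled exactly here, for simplicity
    Lu_hat = L @ u
    return np.concatenate([H_hat + Lv_hat, -Lu_hat])

u, v = np.array([1.0, 0.5]), np.array([0.2, -0.3])
V_exact = np.concatenate([u + L.T @ v, -L @ u])
mean_est = np.stack([so_call(u, v) for _ in range(20000)]).mean(axis=0)
print(np.allclose(mean_est, V_exact, atol=0.04))
```

The empirical mean of the oracle output matches the deterministic operator (∇h(u) + L*v, −Lu), as required.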

Contributions
Despite the advances in stochastic optimization and variational inequalities, the algorithmic treatment of general monotone inclusion problems under stochastic uncertainty is a largely unexplored field. This is rather surprising given the vast amount of applications of maximally monotone inclusions in control and engineering, encompassing distributed computation of generalized Nash equilibria [17,31,86], traffic systems [34,35,43], and PDE-constrained optimization [10]. The first major aim of this manuscript is to introduce and investigate a relaxed inertial stochastic forward-backward-forward (RISFBF) method, building on an operator splitting scheme originally due to Paul Tseng [85]. RISFBF produces three sequences {(X_k, Y_k, Z_k); k ∈ N}, defined as

Z_k = X_k + α_k(X_k − X_{k−1}),
Y_k = J_{λ_k T}(Z_k − λ_k A_k(Z_k)),
X_{k+1} = (1 − ρ_k)Z_k + ρ_k[Y_k + λ_k(A_k(Z_k) − B_k(Y_k))]. (RISFBF)

The data involved in this scheme are explained as follows: • A_k(Z_k) and B_k(Y_k) are random estimators of V obtained by consulting the SO at the search points Z_k and Y_k, respectively; • (α_k)_{k∈N} is a sequence of non-negative numbers regulating the memory, or inertia, of the method; • (λ_k)_{k∈N} is a positive sequence of step-sizes; • (ρ_k)_{k∈N} is a non-negative relaxation sequence.
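On a toy strongly monotone instance, the three sequences can be sketched as follows; the operator, the choice T = normal cone of the nonnegative orthant, and all parameter and batch-size values are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

# Minimal RISFBF sketch: V(x) = A x (monotone, Lipschitz; symmetric part
# positive definite) and T = N_{R^n_+}, whose resolvent J_{lam T} is the
# projection onto the nonnegative orthant.  The unique zero here is x* = 0.
rng = np.random.default_rng(2)
A = np.array([[2.0, 1.0], [-1.0, 1.0]])
sigma, lam, alpha, rho = 0.5, 0.1, 0.2, 0.7

def oracle(x, m):
    """Mini-batch estimator: average of m noisy evaluations of V(x)."""
    return A @ x + sigma * rng.standard_normal((m, x.size)).mean(axis=0)

def resolvent(x):
    return np.maximum(x, 0.0)     # J_{lam T} for T = N_{R^n_+}

x_prev = x = np.array([5.0, 3.0])
for k in range(1, 151):
    m_k = k ** 2                                  # increasing batch size
    z = x + alpha * (x - x_prev)                  # inertial extrapolation
    a_z = oracle(z, m_k)
    y = resolvent(z - lam * a_z)                  # forward-backward step
    x_prev, x = x, (1 - rho) * z + rho * (y + lam * (a_z - oracle(y, m_k)))

print(np.linalg.norm(x) < 0.1)                    # iterates approach x* = 0
```

Note the two independent oracle calls per iteration (at Z_k and at Y_k), mirroring the structure of the scheme.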
If α k = 0 and ρ k = 1 the above scheme reduces to the stochastic forward-backward-forward method developed in [15,24], with important applications in Gaussian communication networks [80] and dynamic user equilibrium problems [82]. However, even more connections to existing methods can be made.
(i) Wide Applicability. A key argument in favor of Tseng's operator splitting method is that it is provably convergent when solving structured monotone inclusions of the type (MI), without imposing cocoercivity of the single-valued part V. This is a remarkable advantage relative to the perhaps more familiar and direct forward-backward splitting methods (aka projected (stochastic) gradient descent in the potential case). In particular, our scheme is applicable to the primal-dual splitting described in Example 1.1.
(ii) Asymptotic guarantees. We show that under suitable assumptions on the relaxation sequence (ρ_k)_{k∈N}, the non-decreasing inertial sequence (α_k)_{k∈N}, and the step-length sequence (λ_k)_{k∈N}, the generated stochastic process (X_k)_{k∈N} converges weakly almost surely to a random variable with values in S. Assuming demiregularity of the operators yields strong convergence in the real (possibly infinite-dimensional) Hilbert space.
(iii) Non-asymptotic linear rate under strong monotonicity of V. When V is strongly monotone, strong convergence of the last iterate is shown and the sequence admits a non-asymptotic linear rate of convergence without requiring conditional unbiasedness of the SO. In particular, we show that the iteration and oracle complexity of computing an ε-solution is no worse than O(log(1/ε)) and O(1/ε), respectively.
(iv) Non-asymptotic sublinear rate under monotonicity of V. When V is monotone, by leveraging the Fitzpatrick function [9,30,77] associated with the structured operator F = T + V, we propose a restricted gap function. We then prove that the expected gap of an averaged sequence diminishes at the rate of O(1/k). This allows us to derive an O(1/ε) upper bound on the iteration complexity, and an O(1/ε^{2+δ}) upper bound (for δ > 0) on the oracle complexity for computing an ε-solution.
The above listed contributions shed new light on a set of urgent open questions, which we summarize below: (i) Absence of rigorous asymptotics. So far no asymptotic convergence guarantees have been available for relaxed inertial FBF schemes when T is maximally monotone and V is a single-valued monotone expectation-valued map.
(ii) Unavailability of rate statements. We are not aware of any known non-asymptotic rate guarantees for algorithms solving (MI) under stochastic uncertainty. A key barrier to developing such statements in monotone and stochastic regimes has been the absence of a suitable residual function. Some recent progress in the special stochastic variational inequality case has been made by [15,44,45], but the general Hilbert-space setting involving set-valued operators seems to be largely unexplored (we will say more in Section 1.4).
(iii) Bias requirements. A standard assumption in stochastic optimization is that the SO generates signals which are unbiased estimators of the deterministic operator V(x). Of course, the requirement that the noise process is unbiased may often fail to hold in practice. In the present Hilbert space setting this is in some sense even expected to be the rule rather than the exception, since most operators are derived from complicated dynamical systems or the optimization method is applied to discretized formulations of the original problem. See the recent work [37,38] for an interesting illustration in the context of PDE-constrained optimization. Some of our results go beyond the standard unbiasedness assumption.

Related research
Understanding the role of inertial and relaxation effects in numerical schemes is a line of research which has received enormous interest over the last two decades. Below, we give a brief overview of related algorithms.

Inertial, Relaxation, and Proximal schemes.
In the context of convex optimization, Polyak [67] introduced the Heavy-ball method. This is a two-step method for minimizing a smooth convex function f. The algorithm reads as

Z_k = X_k + α_k(X_k − X_{k−1}), X_{k+1} = Z_k − λ_k ∇f(X_k). (HB)

The difference from the gradient method is that the base point of the gradient descent step is taken to be the extrapolated point Z_k, instead of X_k. This small difference has the surprising consequence that (HB) attains optimal complexity guarantees for strongly convex functions with Lipschitz continuous gradients. Hence, (HB) resembles an optimal method [64]. The acceleration effects can be explained by writing the process entirely in terms of a single updating equation as

X_{k+1} = X_k + α_k(X_k − X_{k−1}) − λ_k ∇f(X_k).

Choosing α_k = 1 − a_k δ_k and λ_k = γ_k δ_k², for δ_k a small parameter, we arrive at

(X_{k+1} − 2X_k + X_{k−1})/δ_k² + a_k (X_k − X_{k−1})/δ_k + γ_k ∇f(X_k) = 0.

This can be seen as a discrete-time approximation of the second-order dynamical system ẍ(t) + a(t)ẋ(t) + γ(t)∇f(x(t)) = 0. [68] introduced this dynamical system in an optimization context. Since then, it has received significant attention in the potential, as well as in the non-potential case (see e.g. [3,4,13] for an appetizer). As pointed out in [81], if γ(t) = 1, the above system reduces to a continuous version of Nesterov's fast gradient method [63]. Recently, [36] defined a stochastic version of the Heavy-ball method.
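For a toy quadratic objective (the matrix and the constant momentum and step-size values are illustrative choices, not tuned), the two-step form of (HB) can be sketched as:

```python
import numpy as np

# Heavy-ball sketch for f(x) = 0.5 * x^T Q x: the gradient is evaluated at
# X_k, but the step departs from the extrapolated point Z_k.
Q = np.diag([1.0, 10.0])                  # toy quadratic, condition number 10
alpha, lam = 0.5, 0.1                     # illustrative momentum / step size

def grad_f(x):
    return Q @ x

x_prev = x = np.array([3.0, -2.0])
for _ in range(200):
    z = x + alpha * (x - x_prev)          # extrapolation (memory) step
    x_prev, x = x, z - lam * grad_f(x)    # gradient step taken from z, not x

print(np.linalg.norm(x) < 1e-8)           # converges to the minimizer 0
```

Eliminating z recovers the single updating equation X_{k+1} = X_k + α(X_k − X_{k−1}) − λ∇f(X_k) discussed above.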
Motivated by the development of such fast methods for convex optimization, Attouch and Cabot [1] studied a relaxed inertial forward-backward algorithm, reading as

Z_k = X_k + α_k(X_k − X_{k−1}), X_{k+1} = (1 − ρ_k)Z_k + ρ_k J_{λ_k T}(Z_k − λ_k V(Z_k)).

If V = 0, this reduces to a relaxed inertial proximal point method analyzed by Attouch and Cabot [2]. If ρ_k = 1, an inertial forward-backward splitting method is recovered, first studied by Lorenz and Pock [56].
Convergence guarantees for forward-backward splitting rely on the cocoercivity (inverse strong monotonicity) of the single-valued operator V. Example 1.1, in which V is given by a monotone plus a skew-symmetric linear operator, illustrates an important instance for which this assumption is not satisfied (see [16] for further examples). A general-purpose operator splitting framework, relaxing the cocoercivity property, is the forward-backward-forward (FBF) method due to Tseng [85]. Inertial [12] and relaxed-inertial [14] versions of FBF have been developed. An all-encompassing numerical scheme can be compactly described as

Z_k = X_k + α_k(X_k − X_{k−1}),
Y_k = J_{λ_k T}(Z_k − λ_k V(Z_k)),
X_{k+1} = (1 − ρ_k)Z_k + ρ_k[Y_k + λ_k(V(Z_k) − V(Y_k))]. (RIFBF)

Weak and strong convergence under appropriate conditions on the involved operators and parameter sequences are established in [14], but no rate statements are given.
Related work on stochastic approximation. Efforts to extend stochastic approximation methods to variational inequality problems have considered standard projection schemes [48] for Lipschitz and strongly monotone operators. Extragradient and (more generally) mirror-prox algorithms [50,61] can contend with merely monotone operators, while iterative smoothing schemes [87] can cope with the lack of Lipschitz continuity. It is worth noting that extragradient schemes have recently assumed relevance in the training of generative adversarial networks (GANs) [40,59]. Rate analyses for stochastic extragradient (SEG) have led to optimal rates for Lipschitz and monotone operators [50], as well as extensions to non-Lipschitzian [87] and pseudomonotone settings [45,51]. To alleviate the computational complexity, single-projection schemes, such as the stochastic forward-backward-forward (SFBF) method [15,24], as well as subgradient-extragradient and projected reflected algorithms [25], have been studied as well.
SFBF has been shown to be nearly optimal in terms of iteration and oracle complexity, displaying significant empirical improvements compared to SEG. While the role of inertia in optimization is well documented, in stochastic splitting problems, the only contribution we are aware of is the work by Rosasco et al. [72]. In that paper asymptotic guarantees for an inertial stochastic forward-backward (SFB) algorithm are presented under the hypothesis that the operators V and T are maximally monotone and the single-valued operator V is cocoercive.
Variance reduction approaches. Variance-reduction schemes address the deterioration in convergence rate and the resulting poorer practical behavior via two commonly adopted avenues: (i) if the single-valued part V appears as a finite sum (see e.g. [40,66]), variance-reduction ideas from machine learning [41] can be used; (ii) otherwise, mini-batch approaches, in which a growing number of oracle samples is averaged at each iteration, can be employed.
In terms of run-time, improvements in iteration complexities achieved by mini-batch approaches are significant; e.g. in strongly monotone regimes, the iteration complexity improves from O(1/ε) to O(ln(1/ε)) [24,25]. Beyond run-time advantages, such avenues provide asymptotic and rate guarantees under possibly weaker assumptions on the problem as well as the oracle; in particular, mini-batch schemes allow for possibly biased oracles and state-dependency of the noise [25]. Concerns about the sampling burdens are, in our opinion, often overstated, since such schemes are meant to provide ε-solutions; e.g. if ε = 10⁻³ and the obtained rate is O(1/k), then the batch-size m_k = k^a with a > 1 implies batch-sizes of O(10^{3a}), a relatively modest requirement given the advances in computing.
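A quick back-of-the-envelope check of this sampling-burden remark (the exponent a = 1.2 is an arbitrary illustrative choice):

```python
import math

# With a rate of O(1/k), an eps-solution with eps = 1e-3 needs on the order
# of k = 1000 iterations; the policy m_k = ceil(k**a) then peaks near 10**(3a).
eps = 1e-3
k_needed = math.ceil(1.0 / eps)            # iterations to reach eps
a = 1.2
m_max = math.ceil(k_needed ** a)           # largest batch size along the run
print(k_needed, m_max)                     # 1000 and about 10**3.6
```

Even the largest batch in the run stays in the low thousands, which is modest for modern hardware.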
Outline. The remainder of the paper is organized in five sections. After dispensing with the preliminaries in Section 2, we present the (RISFBF) scheme in Section 3. Asymptotic and rate statements are developed in Section 4 and preliminary numerics are presented in Section 5. We conclude with some brief remarks in Section 6.

Notation
Throughout, H is a real separable Hilbert space with scalar product ⟨·, ·⟩, norm ‖·‖, and Borel σ-algebra B. The symbols → and ⇀ denote strong and weak convergence, respectively. Id : H → H denotes the identity operator on H. Stochastic uncertainty is modeled on a complete probability space (Ω, F, P), endowed with a filtration F = (F_k)_{k≥0}. By means of the Kolmogorov extension theorem, we assume that (Ω, F, P) is large enough so that all random variables we work with are defined on this space. An H-valued random variable is a measurable function X : (Ω, F) → (H, B). We denote by ℓ⁰(F) the set of sequences of real-valued random variables (ξ_k)_{k∈N} such that ξ_k is F_k-measurable for every k ∈ N. We denote the set of summable non-negative sequences by ℓ¹₊(N). We now collect some concepts from monotone operator theory. For more details, we refer the reader to [9]. Let F : H → 2^H be a set-valued operator. Its domain and graph are defined as dom F ≔ {x ∈ H | F(x) ≠ ∅} and gr(F) ≔ {(x, u) ∈ H × H | u ∈ F(x)}, respectively. Recall that an operator F : H → 2^H is monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ gr(F), and maximally monotone if, in addition, its graph is not properly contained in the graph of any other monotone operator. The set of zeros of F, denoted by Zer(F), is defined as Zer(F) ≔ {x ∈ H | 0 ∈ F(x)}. The inverse of F is F⁻¹ : u ↦ {x ∈ H | u ∈ F(x)}, and the resolvent of F is J_F ≔ (Id + F)⁻¹. If F is maximally monotone, then J_F is a single-valued map. We also need the classical notion of demiregularity of an operator.
Definition 2.1. An operator F : H → 2^H is demiregular at x ∈ dom(F) if for every sequence {(y_n, u_n)}_{n∈N} ⊂ gr(F) and every u ∈ F(x), we have

[y_n ⇀ x and u_n → u] ⟹ y_n → x.

The notion of demiregularity captures various properties typically used to establish strong convergence of dynamical systems. [5] exhibits a large class of possibly set-valued operators F which are demiregular. In particular, demiregularity holds if F is uniformly or strongly monotone, or when F is the subdifferential of a uniformly convex lower semi-continuous function f. Assumption 2 guarantees that the operator F = T + V is maximally monotone [9, Corollary 24.4].
To access new information about the values of the operator V(x), we adopt a stochastic approximation (SA) approach where samples are accessed iteratively and online: at each iteration, we assume access to a stochastic oracle (SO) which generates an estimate of the value of the deterministic operator V(x) when the current position is x. This information is obtained by drawing an iid sample from the law P. These fresh samples are used in the numerical algorithm after an initial extrapolation step delivering the point Z_k = X_k + α_k(X_k − X_{k−1}), for some extrapolation coefficient α_k ∈ [0, 1]. Departing from Z_k, we call the SO to retrieve the mini-batch estimator with sample rate m_k ∈ N:

A_k(Z_k) ≔ (1/m_k) ∑_{i=1}^{m_k} V̂(Z_k, ξ_k^(i)), (3.1)

where ξ_k = (ξ_k^(1), …, ξ_k^(m_k)) is the data sample employed by the SO to return the estimator A_k(Z_k). Subsequently, we perform a forward-backward update with step size λ_k > 0:

Y_k = J_{λ_k T}(Z_k − λ_k A_k(Z_k)). (3.2)

In the final updates, a second independent call of the SO is made, using the data set η_k = (η_k^(1), …, η_k^(m_k)) to form

B_k(Y_k) ≔ (1/m_k) ∑_{i=1}^{m_k} V̂(Y_k, η_k^(i)), (3.3)

and the new state

X_{k+1} = (1 − ρ_k)Z_k + ρ_k[Y_k + λ_k(A_k(Z_k) − B_k(Y_k))]. (3.4)

This iterative procedure generates a stochastic process {(Z_k, Y_k, X_k)}_{k∈N}, defining the relaxed inertial stochastic forward-backward-forward (RISFBF) scheme. A pseudocode is given as Algorithm 1 below.

Algorithm 1: RISFBF
Input: X_0 = X_1 ∈ H; parameter sequences (α_k)_{k∈N}, (λ_k)_{k∈N}, (ρ_k)_{k∈N}, and batch sizes (m_k)_{k∈N}.
for k = 1, 2, … do
  Set Z_k = X_k + α_k(X_k − X_{k−1});
  Draw the sample ξ_k = (ξ_k^(1), …, ξ_k^(m_k)) from the law P;
  Compute A_k(Z_k) and Y_k as in (3.1) and (3.2);
  Draw the sample η_k = (η_k^(1), …, η_k^(m_k)) from the law P;
  Compute B_k(Y_k) and X_{k+1} as in (3.3) and (3.4).
end for
Report the output according to (3.5).
Note that RISFBF is still conceptual, since we have not explained how the sequences (α_k)_{k∈N}, (λ_k)_{k∈N} and (ρ_k)_{k∈N} should be chosen. We will make this precise in our complexity analysis, starting in Section 4.

Equivalent form of RISFBF
We can collect the sequential updates of RISFBF into the fixed-point iteration (3.6), where Φ_{k,λ} : H × Ω → H is a time-varying map. Formulating the algorithm in this specific way establishes the connection between RISFBF and the Heavy-ball system: combining the iterations in (3.6) into one, we obtain a second-order difference equation closely resembling the structure present in (HB). It also reveals the Markovian nature of the process (X_k)_{k∈N}; it is clear from the formulation (3.6) that X_k is Markov with respect to the sigma-algebra σ({X_0, …, X_{k−1}}).

Assumptions on the stochastic oracle
In order to tame the stochastic uncertainty in RISFBF, we need to impose some assumptions on the distributional properties of the random fields (A_k(x))_{k∈N} and (B_k(x))_{k∈N}. One crucial statistic we need to control is the SO variance. Define the oracle error at a point x ∈ H as ε(x, ξ) ≔ V̂(x, ξ) − V(x).

Assumption 4 (Oracle Noise). We say that the SO (i) is conditionally unbiased: E_ξ[ε(x, ξ)|x] = 0 for all x ∈ H; and (ii) enjoys a uniform variance bound: E_ξ[‖ε(x, ξ)‖²|x] ≤ σ² for some σ > 0 and all x ∈ H.

Define U_k ≔ A_k(Z_k) − V(Z_k) and W_k ≔ B_k(Y_k) − V(Y_k). The introduction of these two processes allows us to decompose the random estimators into a mean component and a residual, so that A_k(Z_k) = V(Z_k) + U_k and B_k(Y_k) = V(Y_k) + W_k. Hence, under conditional unbiasedness, the processes {(U_k, F_k); k ∈ N} and {(W_k, F̃_k); k ∈ N} are martingale difference sequences, where the filtrations are defined as F_0 ≔ F̃_0 ≔ σ(X_0), F_1 ≔ σ(X_0, X_1), and iteratively, for k ≥ 1, F̃_k ≔ σ(F_k ∪ σ(ξ_k)) and F_{k+1} ≔ σ(F̃_k ∪ σ(η_k)). Observe that F_k ⊆ F̃_k ⊆ F_{k+1} for all k ≥ 1. The uniform variance bound, Assumption 4(ii), ensures that the processes (U_k)_{k∈N} and (W_k)_{k∈N} are square-integrable. Remark 3.1. For deriving the stochastic estimates in the analysis to come, it is important to emphasize that X_k is F_k-measurable for all k ≥ 0, and Y_k is F̃_k-measurable.
The mini-batch sampling technology implies an online variance reduction effect, summarized in the next lemma, whose simple proof we omit.

Lemma (Variance reduction). Suppose Assumption 4 holds. Then, for all k ∈ N, E[‖U_k‖² | F_k] ≤ σ²/m_k and E[‖W_k‖² | F̃_k] ≤ σ²/m_k, P-a.s.
We see that larger sampling rates lead to more precise point estimates of the single-valued operator. This comes at the cost of more evaluations of the stochastic operator. Hence, any minibatch approach faces a trade-off between the oracle complexity and the iteration complexity. We want to use mini-batch estimators to achieve an online variance reduction scheme, motivating the next assumption.
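The 1/m decay of the mean squared oracle error can be verified empirically; the dimension, noise level, and trial counts below are arbitrary illustrative choices.

```python
import numpy as np

# Empirical mini-batch variance reduction: averaging m i.i.d. zero-mean
# oracle errors shrinks E||error||^2 by a factor of 1/m.
rng = np.random.default_rng(3)
sigma, d, trials = 1.0, 4, 5000

def mse(m):
    batch_means = sigma * rng.standard_normal((trials, m, d)).mean(axis=1)
    return float((np.linalg.norm(batch_means, axis=1) ** 2).mean())

ratio = mse(1) / mse(25)
print(abs(ratio - 25.0) < 2.0)   # ratio of mean squared errors is about m
```

The measured ratio matches the batch size, which is exactly the trade-off discussed above: more samples per call buy a proportionally smaller error.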

Analysis
This section is organized into three subsections. The first subsection derives asymptotic convergence guarantees, while the second and third subsections provide linear and sublinear rate statements in strongly monotone and monotone regimes, respectively.

Asymptotic Convergence
Given λ > 0, we define the residual function for the monotone inclusion (MI) as

res_λ(x) ≔ ‖x − J_{λT}(x − λV(x))‖.

Clearly, for every λ > 0, x ∈ S ⇔ res_λ(x) = 0. Hence, res_λ(·) is a merit function for the monotone inclusion problem. To put this merit function into context, let us consider the special case where T is the subdifferential of a lower semi-continuous convex function g : H → [0, ∞], i.e. T = ∂g. In this case, the resolvent J_{λT} reduces to the well-known proximal operator

prox_{λg}(x) ≔ argmin_{u∈H} { g(u) + (1/2λ)‖u − x‖² }.

In the potential case, where V(x) = ∇f(x) for some smooth convex function f : H → R, the residual function is thus seen to be a constant multiple of the norm of the so-called gradient mapping, which is a standard merit function in convex [62] and stochastic [26,52] optimization. We use this function to quantify the per-iteration progress of RISFBF. The main result of this subsection is the following.
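As a toy illustration (the linear operator and the normal-cone choice for T are assumptions made only for this sketch), the residual is easy to evaluate via a projection:

```python
import numpy as np

# res_lam(x) = || x - J_{lam T}(x - lam V(x)) ||, with J_{lam T} the
# projection onto R^n_+ when T is the normal cone of the nonnegative orthant.
A = np.array([[2.0, 1.0], [-1.0, 1.0]])

def res(x, lam=0.1):
    forward = x - lam * (A @ x)              # forward step x - lam V(x)
    y = np.maximum(forward, 0.0)             # resolvent = projection here
    return float(np.linalg.norm(x - y))

x_star = np.zeros(2)                         # the unique zero of V + T here
print(res(x_star) == 0.0, res(np.array([1.0, 1.0])) > 0.0)
```

The residual vanishes exactly at the solution and is positive elsewhere, which is the merit-function property used in the analysis.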
(ii) the stochastic process (X_k)_{k∈N} generated by algorithm RISFBF converges weakly to an S-valued limiting random variable X. We prove this theorem via a sequence of technical lemmas. Our proof strategy for Theorem 4.1 follows [15], where a stochastic quasi-Fejér principle involving the residual function is established. We start with a basic relation.
Proof. By definition, where the last inequality uses the non-expansivity property of the resolvent operator. Rearranging terms gives the claimed result.
Next, for a given pair (p, p*) ∈ gr(F), we define the stochastic processes (∆M_k)_{k∈N}, (∆N_k(p, p*))_{k∈N}, and (e_k)_{k∈N} as in (4.3)-(4.5). Key to our analysis is the following energy bound on the evolution of the anchor sequence (‖X_k − p‖²)_{k∈N}.

Lemma 4.3 (Fundamental Recursion).
Let (X_k)_{k∈N} be the stochastic process generated by RISFBF with α_k ∈ (0, 1), 0 ≤ ρ_k < 5/(4(1 + Lλ_k)), and λ_k ∈ (0, 1/(4L)). For all k ≥ 1 and (p, p*) ∈ gr(F), we have Proof. To simplify the notation, let us call Introducing the process (e_k)_{k∈N} from eq. (4.5), the aforementioned set of inequalities reduces to Hence, Pick (p, p*) ∈ gr(F), so that p* − V(p) ∈ T(p). Then, the monotonicity of T yields the estimate This is equivalent to This implies that Hence, we obtain the following, Rearranging terms, we arrive at the following bound on ‖R_k − p‖²: Next, we observe that ‖X_{k+1} − p‖² may be bounded as follows.
We may then derive a bound on the expression in (4.8). By invoking (4.2), we arrive at the estimate which implies (4.12). Rearranging terms, and noting that (1/2 − 2Lλ_k)(1 + Lλ_k) ≤ 1/2 − 2L²λ_k², the above estimate becomes Substituting this bound into the first majorization of the anchor process ‖X_{k+1} − p‖², we see Observe that and Lemma A.1 gives By hypothesis, α_k, ρ_k, λ_k are defined such that this quantity is positive. Then, using both of these relations in the last estimate for ‖X_{k+1} − p‖², we arrive at Using the respective definitions of the stochastic increments ∆M_k, ∆N_k(p, p*) in (4.3) and (4.4), we arrive at Recall that Y_k is F̃_k-measurable. By the law of iterated expectations, we therefore see To prove the a.s. convergence of the stochastic process (X_k)_{k∈N}, we rely on the following preparations. Motivated by the analysis of deterministic inertial schemes, we are interested in a regime under which α_k is non-decreasing.
For a fixed reference point p ∈ H, define the anchor sequence φ_k(p) ≔ ½‖X_k − p‖² and the energy sequence ∆_k. In terms of these sequences, we can rearrange the fundamental recursion from Lemma 4.3 to obtain For a given pair (p, p*) ∈ gr(F), define Then, in terms of this sequence, and using the monotonicity of V, we arrive at Our aim is to use Q_k(p) as a suitable energy function for RISFBF. For that to work, we need to identify a specific parameter sequence pair (ρ_k, α_k) so that β_k ≥ 0 and θ_k ≥ 0, taking the following design criteria into account: Incorporating these two restrictions on the inertia parameter α_k, we are left with the following constraints: To identify a constellation of parameters (α_k, ρ_k) satisfying these two conditions, define Then, solving this condition for ρ_k reveals an upper bound. Using the design condition α_k ≤ ᾱ < 1, we need to choose the relaxation parameter ρ_k below this bound. It remains to verify that with this choice we can guarantee β_k ≥ 0. This can be deduced as follows: recalling (4.23), we get In particular, we note the behavior of f(α). We consider two cases. Case 1: ᾱ ≤ 1/2. In this case, using these relations, we see that (4.20) reduces to an estimate in which θ_k ≥ 0. This is the basis for our proof of Theorem 4.1.
Proof of Theorem 4.1. We start with (i). Consider (4.25) with the special choice p* = 0, so that p ∈ S. Taking conditional expectations on both sides of this inequality, we arrive at the stated recursion, where ψ_k ≔ a_k σ²/(2m_k), and the claim then follows by the design of the relaxation sequence ρ_k.
Remark 4.1. The above result gives some indication of the balance between the inertial effect and the relaxation effect. Our analysis revealed an explicit upper bound on the admissible relaxation parameter ρ. This is closely aligned with the maximal relaxation value exhibited in Remark 2.13 of [2], given there by a function ρ_m(α, ε) which is decreasing in α. For this choice of parameters, one observes that for α → 0 we get ρ → 5(1 − ε)/(4(1 + Lλ)), while for α → 1 we get ρ → 0.
As an immediate corollary of Theorem 4.1, we obtain a convergence result when all parameter sequences are constant.
Then (X_k)_{k∈N} converges weakly P-a.s. to a limiting random variable with values in S.
In fact, almost sure convergence is retained for larger step sizes λ_k, as shown in the following corollary.
Then (X_k)_{k∈N} converges weakly P-a.s. to a limiting random variable with values in S.
Proof. First we slightly modify (4.10), so that the following relation holds for 0 < ν < 1. Then, similarly to (4.12), we multiply both sides of (4.11) by a factor which is positive since λ_k ∈ (0, (1 − ν)/(2L)). The convergence follows in a similar fashion to Theorem 4.1.
Another corollary of Theorem 4.1 is a strong convergence result, assuming that F is demiregular (cf. Definition 2.1).

Corollary (Convergence under demiregularity). Let the same assumptions as in Theorem 4.1 hold. If F = T + V is demiregular, then (X_k)_{k∈N} converges strongly P-a.s. to an S-valued random variable.

Linear Convergence
In this section, we derive a linear convergence rate and prove strong convergence of the last iterate in the case where the single-valued operator V is strongly monotone. Various linear convergence results in the context of stochastic approximation algorithms for solving fixed-point problems are reported in [23] in the context of random sweeping processes. In a general structured monotone inclusion setting, [73] derive rate statements for cocoercive mean operators in the context of forward-backward splitting methods. More recently, Cui and Shanbhag [24] provide linear and sublinear rates of convergence for a variance-reduced inexact proximal-point scheme for both strongly monotone and monotone inclusion problems. However, to the best of our knowledge, our results are the first published for a stochastic operator splitting algorithm featuring relaxation and inertial effects. Notably, this result does not require imposing Assumption 4(i) (i.e., that the noise process is conditionally unbiased). Instead, our derivations hold under a weaker notion of an asymptotically unbiased SO.
Assumption 6 (Asymptotically unbiased SO). There exists a constant s > 0 such that

max{E[‖U_k‖² | F_k], E[‖W_k‖² | F̃_k]} ≤ s²/m_k for all k ∈ N. (4.26)

This condition is rather mild and is imposed in many simulation-based optimization schemes in finite dimensions. Among the more important ones is the simultaneous perturbation stochastic approximation (SPSA) method pioneered by Spall [78,79]. In this scheme, it is required that the gradient estimator satisfies an asymptotic unbiasedness requirement; in particular, the bias in the gradient estimator needs to diminish at a suitable rate to ensure asymptotic convergence. In fact, this setting has been investigated in detail in the context of stochastic Nash games [28]. Further examples of stochastic approximation schemes in a Hilbert-space setting obeying Assumption 6 are given in [7,8] and [38]. We now discuss an example that further clarifies the requirements on the estimator.
where σ̂ > 0 and b̂ > 0 are such that (B_k)_{k∈N} is an H-valued sequence satisfying ‖B_k‖² ≤ b̂² in an a.s. sense. Setting s² ≔ σ̂² + b̂², we see that condition (4.26) holds. A similar estimate holds for the random noise ‖W_k‖².
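A minimal numerical sketch of such an asymptotically unbiased oracle; the error decomposition below (zero-mean noise plus a bounded deterministic bias that is averaged down by the batch) is one illustrative way to realize this kind of condition, and all names and constants are hypothetical.

```python
import numpy as np

# Batch oracle error: a zero-mean part with per-sample variance sigma_hat**2
# plus a bounded bias term B/m, so the total mean squared error decays like
# (sigma_hat**2 * d + b_hat**2) / m.
rng = np.random.default_rng(4)
sigma_hat, d = 1.0, 3
B = np.array([0.5, -0.5, 0.5])            # deterministic bias, ||B|| <= b_hat
b_hat = float(np.linalg.norm(B))

def batch_error_mse(m, trials=4000):
    noise = sigma_hat * rng.standard_normal((trials, m, d)).mean(axis=1)
    err = noise + B / m                   # bias shrinks with the batch size
    return float((np.linalg.norm(err, axis=1) ** 2).mean())

m = 10
bound = (sigma_hat ** 2 * d + b_hat ** 2) / m
print(batch_error_mse(m) <= bound)        # MSE obeys an s^2/m-type bound
```

The oracle is biased at every batch size, yet its mean squared error still decays at the 1/m rate exploited in the linear-rate analysis.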
Combined with Assumption 1, strong monotonicity implies that S = {x̄} for some x̄ ∈ H.
Remark 4.2. In the context of a structured operator F = T + V, the assumption that the single-valued part V is strongly monotone can be made without loss of generality. Indeed, if instead T is assumed to be µ-strongly monotone, then the decomposition F = (V + µ Id) + (T − µ Id) features a maximally monotone operator T − µ Id, while Ṽ ≔ V + µ Id is Lipschitz continuous and may be seen to be a µ-strongly monotone operator.
Our first result establishes a "perturbed linear convergence" rate on the anchor sequence. Let $(\alpha_k)_{k\in\mathbb{N}}$ be a non-decreasing sequence such that $0 < \alpha_k \le \bar\alpha < 1$, where $q = 1 - \rho\eta \in (0,1)$ and $\rho = \frac{16(3-a)(1-\bar\alpha)^2}{31(1+L\lambda)}$. Then the following hold. In particular, this implies a perturbed linear rate of the sequence $(\|X_k - \bar x\|^2)_{k\in\mathbb{N}}$. Proof. Our point of departure for the analysis under the stronger Assumption 7 is eq. (4.6). Repeating the analysis of the previous section with reference point $p = \bar x$ and $p^* = 0$, where $\bar x$ is the unique solution of (MI), yields a bound to which we apply Young's inequality for all $c > 0$. Observe that this estimate is crucial in weakening the requirement of conditional unbiasedness.
Assume that $\lambda_k\mu \le \frac{a}{2} < 1$. Using these bounds, we readily deduce, for $0 < \lambda_k \le \min\{\frac{a}{2\mu}, \frac{b}{\mu}\}$, a recursion analogous to eq. (4.13), and therefore (4.33). Define $\eta_k = (1-b)\lambda_k\mu$. Using the equality (4.8) and the bounds (4.15), (4.14) with the stochastic error term $\Delta M_k := 2\rho_k\|W_k\|^2 + \frac{(3-a)\rho_k\lambda_k^2}{1+L\lambda_k}\|e_k\|^2$, and since $\lambda_k = \lambda$, it follows in particular that $\eta\rho_k \in (0,1)$ for all $k \in \mathbb{N}$. Next, we show that $H_k \ge \frac{1-\bar\alpha}{2}\|X_k - \bar x\|^2$, for $H_k$ defined in (4.28); this follows from a string of inequalities based on Young's inequality. By recalling (4.34) and invoking (4.35), we are left with a stochastic recursion in which, for every $k$, $q_k \le q = 1 - \eta\rho$. Furthermore, $1 > \eta\rho_k \ge \eta\rho$, so that $q \in (0,1)$. Taking conditional expectations on both sides of (4.36), we get $\mathbb{E}[H_{k+1}\,|\,\mathcal{F}_k] + \bar b_k \le q H_k + c_k$ $P$-a.s.
using the notation $c_k := \mathbb{E}[\Delta M_k\,|\,\mathcal{F}_k]$. Applying the operator $\mathbb{E}[\cdot\,|\,\mathcal{F}_{k-1}]$, using the tower property of conditional expectations, and proceeding inductively establishes eq. (4.29). To validate eq. (4.30), recall that we assume $X_1 = X_0$. We now show that $(c_k)_{k\in\mathbb{N}} \in \ell^1_+(\mathbb{N})$. Simple algebra, combined with Assumption 6 and the boundedness of $(\rho_k)_{k\in\mathbb{N}}$, shows via Assumption 5 that $\lim_{k\to\infty} c_k = 0$ a.s. Using the tower property once more, the discrete convolution term $\bar b_k$ satisfies $\lim_{k\to\infty}\mathbb{E}[\bar b_k] = 0$, and consequently the two subsequently stated implications follow as well. Basic calculus reveals that the function $\alpha \mapsto \frac{(1-\alpha)^2}{2\alpha^2 - \frac{1}{2}\alpha + 1}$ is decreasing in $\alpha$. In the extreme case when $\alpha \uparrow 1$, it is necessary to let $\rho \downarrow 0$, and vice versa. When $\alpha \to 0$, the limiting value of our specific relaxation policy is $\frac{3-a}{1+L\lambda}$. In practical applications, it is advisable to choose $b$ small in order to make $q$ large. The value $a$ must be calibrated in a disciplined way in order to allow for a sufficiently large step size $\lambda$. This requires some knowledge of the condition number $\mu/L$ of the problem. As a heuristic, anticipating that $b$ should be close to $0$, a good strategy is to set $\frac{a}{2\mu} = \frac{1-a}{2L}$, which gives $a = \frac{\mu}{L+\mu}$.
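The heuristic at the end of the paragraph can be checked mechanically. The helper below (an illustrative sketch, not code from the paper) equates the two step-size caps $\frac{a}{2\mu}$ and $\frac{1-a}{2L}$ and returns the resulting $a$ together with the common cap on $\lambda$:

```python
def stepsize_heuristic(mu, L):
    """Equating the caps a/(2*mu) = (1 - a)/(2*L) gives
    a*L = mu*(1 - a), i.e. a = mu / (L + mu); the common cap on the
    step size lambda is then a/(2*mu) = 1 / (2*(L + mu))."""
    a = mu / (L + mu)
    return a, a / (2.0 * mu)

a, lam_cap = stepsize_heuristic(mu=1.0, L=9.0)  # condition number mu/L = 1/9
```

For a well-conditioned problem ($\mu/L$ close to 1) the resulting $a$, and hence the admissible step size, is larger, consistent with the calibration advice above.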
We obtain a full linear rate of convergence when a more aggressive sampling rate is employed in the SO. We achieve such global linear rates, together with tunable iteration and oracle complexity estimates, in two settings: First, we consider an aggressive simulation strategy in which the sample size grows geometrically over time. Such a sampling frequency can be quite demanding in some applications. As an alternative, we then consider a more modest simulation strategy under which only polynomial growth of the batch size is required. Whatever simulation strategy is adopted, key to the assessment of the iteration and oracle complexity is bounding the stopping time $K_\epsilon := \inf\{k \in \mathbb{N} \mid \mathbb{E}(\|X_{k+1} - \bar x\|^2) \le \epsilon\}$. In order to understand this definition, recall that RISFBF computes the iterate $X_{k+1}$ by extrapolating between the current base point $Z_k$ and the correction step involving $Y_k + \lambda_k(A_k - B_k)$, which requires $2m_k$ iid realizations from the law $\mathsf{P}$. In total, when executing the algorithm until the terminal time $K$, we therefore need to simulate $2\sum_{k=1}^{K} m_k$ random variables. We now estimate the integer $K_\epsilon$ under a geometric sampling strategy: let $p \in (0,1)$ and choose the sampling rate $m_k = \lceil p^{-k}\rceil$.
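For orientation, here is one possible reading of the update just described, as a runnable sketch. The resolvent $J_{\lambda T}$ is specialized to a projection (i.e. $T$ is the normal cone of a box), and the toy operator, noise level, and parameter values are our own illustrative choices, not the paper's:

```python
import numpy as np

def risfbf(V_oracle, proj, x0, lam, alpha, rho, m_schedule, n_iter, rng):
    """Sketch of a relaxed inertial stochastic FBF iteration:

        Z_k = X_k + alpha * (X_k - X_{k-1})          # inertial extrapolation
        Y_k = proj(Z_k - lam * A_k(Z_k))             # forward-backward step
        X_{k+1} = (1 - rho) * Z_k                    # relaxed combination with
                  + rho * (Y_k + lam * (A_k - B_k))  # the FBF correction step

    A_k, B_k are mini-batch averages of the stochastic oracle V_oracle,
    with batch size m_schedule(k)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        m = m_schedule(k)
        z = x + alpha * (x - x_prev)
        A = np.mean([V_oracle(z, rng) for _ in range(m)], axis=0)
        y = proj(z - lam * A)
        B = np.mean([V_oracle(y, rng) for _ in range(m)], axis=0)
        x_prev, x = x, (1 - rho) * z + rho * (y + lam * (A - B))
    return x

# toy strongly monotone problem: V(x) = x - 1 on the box [0, 2]^3
rng = np.random.default_rng(1)
oracle = lambda x, rng: x - 1.0 + 0.1 * rng.standard_normal(x.shape)
proj = lambda x: np.clip(x, 0.0, 2.0)
sol = risfbf(oracle, proj, np.zeros(3), lam=0.2, alpha=0.3, rho=0.8,
             m_schedule=lambda k: k ** 2, n_iter=60, rng=rng)
```

With the polynomially growing batch $m_k = k^2$, the iterates settle near the unique solution $x^* = (1,1,1)$ of this toy problem.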
Let $\bar p \in (p, 1)$, and define $C(\bar p, q)$ as in (4.40). Then, whenever $p \neq q$, we have $\mathbb{E}\|X_{k+1} - \bar x\|^2 \le C(\bar p, q)\max\{p, q\}^k$, and an analogous bound holds when $p = q$. In particular, the stochastic process $(X_k)_{k\in\mathbb{N}}$ converges strongly and $P$-a.s. to the unique solution $\bar x$ at a linear rate.
Proof. Departing from (4.36), ignoring the positive term $\bar b_k$ on the right-hand side, and taking expectations on both sides leads to a recursion in which the equality follows from $c_k$ being deterministic. The sequence $(c_k)_{k\in\mathbb{N}}$ is further upper bounded by the following considerations: First, the relaxation sequence is bounded by $\rho_k \le \bar\rho = \frac{3-a}{2(1+L\lambda)}$; second, the sample rate is bounded below as $m_k = \lceil p^{-k}\rceil \ge \frac{1}{2}p^{-k}$. Using these facts, eq. (4.37) yields a bound with constant $B = 2\bar\rho s^2\big(1 + \frac{2(3-a)\lambda^2}{1+L\lambda}\big)$. Iterating the recursion, and recalling that $h_1 = (1-\alpha_1)\|X_1 - \bar x\|^2$ and $h_k \ge \frac{1-\bar\alpha}{2}\mathbb{E}(\|X_k - \bar x\|^2)$, the bound (4.42) allows us to derive the desired recursion. We consider three cases.
and the same hypotheses as in Theorem 4.7 apply. Then $K_\epsilon \le \tau(p, q) = O(\ln(\epsilon^{-1}))$, and the corresponding oracle complexity of RISFBF is upper bounded accordingly. Proof. First, recall that the total oracle complexity of the method is assessed by $2\sum_{k=1}^{\tau} m_k$. Set $\tau = \big\lceil\frac{\ln(\hat C/\epsilon)}{\ln(1/\max\{p, q\})}\big\rceil$. Then $\mathbb{E}(\|X_{\tau+1} - \bar x\|^2) \le \epsilon$, and hence $K_\epsilon \le \tau$. Computing $2\sum_{k=1}^{\tau} m_k$ gives the oracle complexity bound. If $p = q$, we can replicate this calculation after setting $\tau = \big\lceil\frac{\ln(\epsilon^{-1}\hat C)}{\ln(1/p)}\big\rceil$; after that many iterations, we are ensured that $\mathbb{E}(\|X_{\tau+1} - \bar x\|^2) \le \epsilon$, with a corresponding oracle complexity.
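The bookkeeping in this proof is easy to reproduce numerically. The sketch below (the constants $C$, $p$, $q$ are illustrative placeholders of our own choosing) computes the iteration count from the linear rate and then tallies the $2\sum_k m_k$ samples drawn under $m_k = \lceil p^{-k}\rceil$:

```python
import math

def oracle_cost_geometric(eps, p=0.6, q=0.5, C=1.0):
    """Iterations K such that C * max(p, q)**K <= eps under the linear
    rate, and the total sample count 2 * sum_k ceil(p**-k) up to K."""
    r = max(p, q)
    K = math.ceil(math.log(C / eps) / math.log(1.0 / r))
    samples = sum(2 * math.ceil(p ** (-k)) for k in range(1, K + 1))
    return K, samples

K1, n1 = oracle_cost_geometric(1e-2)   # epsilon = 10^-2
K2, n2 = oracle_cost_geometric(1e-3)   # epsilon = 10^-3
```

Shrinking $\epsilon$ by a factor of 10 adds only a constant number of iterations, while the sample count grows by roughly the same factor of 10, consistent with an $O(\ln(1/\epsilon))$ iteration complexity and an $O(1/\epsilon)$ oracle complexity when $p \ge q$.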
To the best of our knowledge, the provided non-asymptotic linear convergence guarantee appears to be among the first for relaxed and inertial splitting algorithms. In particular, by leveraging the increasing nature of the mini-batches, this result no longer requires the unbiasedness assumption on the SO, a crucial benefit of the proposed scheme.
There may be settings where geometric growth of $m_k$ is challenging to adopt. To this end, we provide a result where the sampling rate is polynomial rather than geometric. A polynomial sampling rate arises if $m_k = a_k(k + k_0)^\theta + b_k$ for some parameters $a_k, b_k, \theta > 0$. Such a regime has been adopted in related mini-batch approaches [54,55] and allows one to modulate the growth rate by changing the exponent in the sampling rate. We begin with a supporting result, making the specific choice $a_k = b_k = 1$ for all $k \ge 1$ and $k_0 = 0$, which leaves essentially the exponent $\theta > 0$ as the free parameter in the design of the stochastic oracle.
Proof. From the relation (4.43), a standard bound based on the integral criterion for series with non-negative summands applies, and the upper bounding integral can be evaluated using integration by parts. Note that $\frac{\theta}{t\ln(1/q)} \le \frac{1}{2}$ when $t \ge 2\theta/\ln(1/q)$, which yields a simpler bound. Furthermore, note that $(1/q)^{2\theta/\ln(1/q)} = \exp(\ln(1/q))^{2\theta/\ln(1/q)} = \exp(2\theta)$. Plugging this into the opening string of inequalities, and recalling that $h_1 = (1-\alpha_1)\|X_1 - \bar x\|^2$ and $h_{k+1} \ge \frac{1-\bar\alpha}{2}\mathbb{E}\|X_{k+1} - \bar x\|^2$, we finally arrive at the desired expression (4.46). Proof. We first note that $(k+1)^{-\theta} \le k^{-\theta}$ for all $k \ge 1$. Hence, the bound established in Proposition 4.10 shows that, for any $k \ge K := \lceil(c_{q,\theta}/\epsilon)^{1/\theta}\rceil$, we are ensured that $\mathbb{E}(\|X_{k+1} - \bar x\|^2) \le \epsilon$. Since $(c_{q,\theta})^{1/\theta} = O(\theta)$, we conclude that $K_\epsilon = O(\theta\,\epsilon^{-1/\theta})$, and the corresponding oracle complexity is bounded accordingly. Remark 4.5. It may be observed that if $\theta = 1$, i.e. $m_k = k$, there is a worsening of the rate and complexity statements relative to their counterparts under the geometric sampling rate; in particular, the iteration complexity worsens from $O(\ln(\frac{1}{\epsilon}))$ to $O(\frac{1}{\epsilon})$, while the oracle complexity degenerates from the optimal level of $O(\frac{1}{\epsilon})$ to $O(\frac{1}{\epsilon^2})$. But this deterioration comes with the advantage that the sampling rate is far slower, which may be of significant consequence in some applications.
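The trade-off described in Remark 4.5 can be tabulated directly. Under a bound of the form $\mathbb{E}\|X_k - \bar x\|^2 \le c/k^\theta$ with $m_k = k^\theta$, the following sketch (the constant $c$ is an illustrative placeholder) counts iterations and samples:

```python
import math

def oracle_cost_polynomial(eps, theta=1.0, c=1.0):
    """Run until c / K**theta <= eps, i.e. K = ceil((c/eps)**(1/theta)),
    drawing 2 * m_k = 2 * ceil(k**theta) samples at iteration k."""
    K = math.ceil((c / eps) ** (1.0 / theta))
    samples = sum(2 * math.ceil(k ** theta) for k in range(1, K + 1))
    return K, samples

K1, n1 = oracle_cost_polynomial(1e-2)   # theta = 1: K ~ 1/eps iterations
K2, n2 = oracle_cost_polynomial(1e-3)
```

For $\theta = 1$ this reproduces the complexities of the remark: shrinking $\epsilon$ tenfold multiplies the iteration count by roughly 10 and the sample count by roughly 100, i.e. $O(1/\epsilon)$ iterations and $O(1/\epsilon^2)$ oracle calls.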

Rates in terms of merit functions
In this subsection we estimate the iteration and oracle complexity of RISFBF with the help of a suitably defined gap function. Generally, a gap function associated with the monotone inclusion problem (MI) is a function $\mathrm{Gap} : H \to \mathbb{R}$ such that (i) $\mathrm{Gap}$ is sign restricted on $H$; and (ii) $\mathrm{Gap}(x) = 0$ if and only if $x \in S$. The Fitzpatrick function [9,11,30,77] is a useful tool for constructing gap functions associated with a set-valued operator $F : H \to 2^H$. It is defined as the extended-valued function $G_F(x, x^*) := \sup_{(y, y^*)\in\mathrm{gr}(F)}\big(\langle x, y^*\rangle + \langle y, x^*\rangle - \langle y, y^*\rangle\big)$. This function allows us to recover the operator $F$ by means of the following result (cf. [9, Prop. 20.58]): If $F : H \to 2^H$ is maximally monotone, then $G_F(x, x^*) \ge \langle x, x^*\rangle$ for all $(x, x^*) \in H \times H$, with equality if and only if $(x, x^*) \in \mathrm{gr}(F)$. In particular, $\mathrm{gr}(F) = \{(x, x^*) \in H \times H \mid G_F(x, x^*) = \langle x, x^*\rangle\}$. In fact, it can be shown that the Fitzpatrick function is minimal in the family of convex functions $f : H \times H \to (-\infty, \infty]$ such that $f(x, x^*) \ge \langle x, x^*\rangle$ for all $(x, x^*) \in H \times H$, with equality if $(x, x^*) \in \mathrm{gr}(F)$ [11]. Our gap function for the structured monotone operator $F = V + T$ is derived from its Fitzpatrick function by setting $\mathrm{Gap}(x) := G_F(x, 0)$ for $x \in H$; explicitly, $\mathrm{Gap}(x) = \sup\{\langle y^*, x - y\rangle \mid (y, y^*) \in \mathrm{gr}(F)\}$. This is closely related to the well-known dual gap function $\Gamma(\cdot)$ due to [6]. It is easy to check that $\Gamma(\bar x) \ge 0$, and equality holds only for a primal-dual pair (saddle point) $\bar x \in S$. Hence, $\Gamma(\cdot)$ is a gap function for the monotone inclusion derived from the Karush-Kuhn-Tucker conditions (1.5). In fact, the function (4.50) is a standard merit function for saddle-point problems (see e.g. [19]). To relate this gap function to the Fitzpatrick function, we exploit the maximally monotone operators V and T introduced in Example 1.1. In terms of these mappings, first observe that for $p = (\tilde u, \tilde v)$, since $h$ is convex and differentiable, the classical gradient inequality reads $h(u) - h(\tilde u) \ge \langle\nabla h(\tilde u), u - \tilde u\rangle$.
Using this estimate in the previous display, and employing convexity once more for $p^* = (\tilde u^*, \tilde v^*) \in T(p)$, we arrive at the desired comparison between $\Gamma$ and the Fitzpatrick-based gap. It is clear from the definition that a convex gap function can be extended-valued, and its domain is contingent on the boundedness properties of $\mathrm{dom}\,T$. In the setting where $T(x)$ is bounded for all $x \in H$, the gap function is globally defined. However, the case where $\mathrm{dom}\,T$ is unbounded has to be handled with more care. There are two potential approaches to cope with such a situation: One is to introduce a perturbation-based termination criterion as defined in [60], recently used in [20] to solve a class of structured stochastic variational inequality problems. The other is based on the notion of restricted merit functions, first introduced in [65] and later adopted in [57]. We follow the latter strategy.
Let $x_s \in \mathrm{dom}\,T$ denote an arbitrary reference point and $D > 0$ a suitable constant. Define the closed set $C := \mathrm{dom}\,T \cap \{x \in H \mid \|x - x_s\| \le D\}$ and the restricted gap function $\mathrm{Gap}(x|C) := \sup\{\langle y^*, x - y\rangle \mid y \in C,\ y^* \in F(y)\}$. Proof. The convexity, and the non-negativity for $x \in C$, of the restricted function are clear. Since $\mathrm{Gap}(x|C) \le \mathrm{Gap}(x)$ for all $x \in H$, one implication follows. To show the converse, suppose $\mathrm{Gap}(\bar x|C) = 0$ for some $\bar x \in C$ with $\|\bar x - x_s\| < D$. Without loss of generality we can choose $\bar x \in C$ in this particular way, since we may choose the radius of the ball as large as desired. It follows that $\langle y^*, \bar x - y\rangle \le 0$ for all $y \in C$, $y^* \in F(y)$. Hence, $\bar x \in C$ is a Minty solution to the generalized variational inequality with the maximally monotone operator $F(x) + N_C(x)$. Since $F$ is upper semi-continuous and monotone, Minty solutions coincide with Stampacchia solutions, implying that there exists $\bar x^* \in F(\bar x)$ such that $\langle\bar x^*, y - \bar x\rangle \ge 0$ for all $y \in C$ (see e.g. [18]). Consider now the associated gap program. This program is solved at $y = \bar x$, a point for which $\|\bar x - x_s\| < D$. Hence, the constraint can be removed, and we conclude $\langle\bar x^*, y - \bar x\rangle \ge 0$ for all $y \in \mathrm{dom}(F)$. By monotonicity of $F$, it follows that $\langle y^*, y - \bar x\rangle \ge \langle\bar x^*, y - \bar x\rangle \ge 0$ for all $(y, y^*) \in \mathrm{gr}(F)$.
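For intuition, the restricted gap can be evaluated by brute force in one dimension. The sketch below (the toy operator $F(y) = y$ and the grid evaluation are our own illustrative choices) approximates $\mathrm{Gap}(x|C) = \sup\{\langle y^*, x - y\rangle \mid y \in C,\ y^* \in F(y)\}$ for a single-valued $F$ on a ball of radius $D$:

```python
import numpy as np

def restricted_gap(x, F, D, n_grid=100_001):
    """Grid approximation of sup_{|y| <= D} F(y) * (x - y) for a
    scalar single-valued monotone F; it vanishes at the solution."""
    ys = np.linspace(-D, D, n_grid)
    return float(np.max(F(ys) * (x - ys)))

F = lambda y: y                       # monotone, unique zero at x* = 0
gap_at_solution = restricted_gap(0.0, F, D=2.0)
gap_off_solution = restricted_gap(1.0, F, D=2.0)
```

The gap vanishes at the solution $x^* = 0$ and is strictly positive elsewhere (here $\sup_y y(1-y) = 1/4$, attained at $y = 1/2$), matching property (ii) of a gap function.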
We start with the first preliminary result. Lemma 4.13. Consider the sequence $(X_k)_{k\in\mathbb{N}}$ generated by RISFBF with the initial condition $X_0 = X_1$. Suppose $\lambda_k = \lambda \in (0, 1/(2L))$ for every $k \in \mathbb{N}$. Moreover, suppose $(\alpha_k)_{k\in\mathbb{N}}$ is a non-decreasing sequence, and for $(p, p^*) \in \mathrm{gr}(F)$ define $\Delta N_k(p, p^*)$ as in (4.4). Then the stated bound holds for all $(p, p^*) \in \mathrm{gr}(F)$. Proof. For $(p, p^*) \in \mathrm{gr}(V + T)$, we know eq. (4.6), where the last inequality uses the monotonicity of $V$. We first derive a recursion similar to the fundamental recursion in Lemma 4.3. Invoking (4.8) and (4.9), and multiplying both sides of (4.11) while noting that $(1 - 2L\lambda_k)(1 + L\lambda_k) \le 1 - 2L^2\lambda_k^2$, we obtain an inequality which, inserted into (4.54) and treated in the same fashion as in the derivation of (4.16), yields (4.55). Invoking the monotonicity of $V$ and rearranging (4.55), we define $\beta_{k+1}$ and, as with (4.19), show that $(\beta_k)$ is non-increasing for our choice of $\rho_k$. Together with $\alpha_{k+1} \ge \alpha_k$, the last inequality gives the desired estimate. Recall that $\Delta N_k(p, p^*) = \Delta N_k(p, 0) + 2\rho_k\lambda\langle p^*, p - Y_k\rangle$. Hence, after setting $\Delta N_k(p) := \Delta N_k(p, 0)$ and rearranging the expression given in the previous display, summing over $k = 1, \dots, K$ completes the argument, where we use $X_1 = X_0$ in the last inequality.
Next, we derive a rate statement in terms of the gap function, using the averaged sequence $\bar X_K$ defined in (4.56). Theorem 4.14 (Rate and oracle complexity under monotonicity of V). Consider the sequence $(X_k)_{k\in\mathbb{N}}$ generated by RISFBF. Suppose Assumptions 1-5 hold, $m_k = \lceil k^a\rceil$ with $a > 1$, and $\lambda_k = \lambda \in (0, 1/(2L))$ for every $k \in \mathbb{N}$. Suppose $(\alpha_k)_{k\in\mathbb{N}}$ is a non-decreasing sequence such that $0 < \alpha_k \le \bar\alpha < 1$, and $\rho_k = \frac{3(1-\bar\alpha)^2}{2(2\alpha_k^2 - \alpha_k + 1)(1+L\lambda)}$ for every $k \in \mathbb{N}$. Then the stated bounds hold for any $K \in \mathbb{N}$. The proof of this theorem builds on an idea frequently used in the analysis of stochastic approximation algorithms, which can be traced back at least to the robust stochastic approximation approach of [61]. In order to bound the expectation of the gap function, we construct an auxiliary process which allows us to majorize the gap via a quantity that is independent of the reference points. Once this is achieved, a simple variance bound completes the result.
Proof of Theorem 4.14. We define an auxiliary process $(\Psi_k)_{k\in\mathbb{N}}$ as in (4.57). Introducing the iterate $Y_k$, and noting that $\Delta N_k(p) = 2\rho_k\lambda_k\langle W_k, p - Y_k\rangle$, a telescoping sum argument applies. Using Lemma 4.13 and setting $\lambda_k \equiv \lambda$, a bound holds for any $(p, p^*) \in \mathrm{gr}(F)$. Define $c_1 := (1-\alpha_1)\|X_1 - p\|^2$, divide both sides by $\sum_{k=1}^{K}\rho_k$, and use our definition of the ergodic average (4.56). Using the bound established in eq. (4.58), choosing $\Psi_1, p \in C$, and introducing $c_2 := c_1 + 4D^2$, we see that the above can be bounded by a random quantity which is independent of $p$. Taking the supremum over pairs $(p, p^*)$ such that $p \in C$ and $p^* \in F(p)$ then yields the claim in terms of the restricted gap. In order to proceed, we bound the first moment of the process $\Delta M_k$ in the same way as in (4.17). Next, we take expectations on both sides of inequality (4.59) and use the bound (3.8). Since $\alpha_k \uparrow \bar\alpha \in (0,1)$, we know that $\rho_k \ge \underline\rho := \frac{3(1-\bar\alpha)^2}{2(1+L\lambda)(2\bar\alpha^2+1)}$. Similarly, since $2\alpha_k^2 - \alpha_k + 1 \ge 7/8$ for all $k$, it follows that $\rho_k \le \bar\rho := \frac{12(1-\bar\alpha)^2}{7(1+L\lambda)}$. Using these upper and lower bounds on the relaxation sequence, we also see that $a_k \le \bar a$ for a constant $\bar a$ depending only on $\lambda$, $L$, and $\bar\rho$. Suppose $m_k = k^a$ for $a > 1$. Then the oracle complexity of computing an $\bar X_K$ with expected restricted gap at most $\epsilon$ is $O(1/\epsilon^{1+a})$. Remark 4.6. In the prior result, we employ a sampling rate $m_k = k^a$ with $a > 1$, which achieves the optimal rate of convergence. In contrast, the authors of [45] employ a sampling rate loosely given by $m_k = k^{1+a}(\ln k)^{1+b}$ with $a > 0$, $b \ge -1$, or $a = 0$, $b > 0$. We observe that when $a > 0$ and $b \ge -1$, their mini-batch size grows faster than our proposed $m_k$, while it is comparable in the other case.

Applications
In this section, we compare the proposed scheme with its SA counterparts on a class of monotone two-stage stochastic variational inequality problems (Sec. 5.1) and a supervised learning problem (Sec. 5.2) and discuss the resulting performance.

Two-stage stochastic variational inequality problems
In this section, we describe some preliminary computational results obtained from the (RISFBF) method when applied to a class of two-stage stochastic variational inequality problems, recently introduced by Rockafellar and Wets [70].
Consider an imperfectly competitive market with $N$ firms playing a two-stage game. In the first stage, the firms decide upon their capacity levels $x_i \in [l_i, u_i]$, anticipating the expected revenues to be obtained in the second stage, in which they compete by choosing quantities à la Cournot. The second-stage market is characterized by uncertainty, as the per-unit cost $h_i(\xi_i)$ is realized on the spot and cannot be anticipated. To compute an equilibrium of this game, we assume that each player is able to take stochastic recourse by determining production levels $y_i(\xi)$, contingent on random convex costs and the capacity level $x_i$. In order to bring this into the terminology of our problem, let us define the feasible set for capacity decisions of firm $i$ as $X_i := [l_i, u_i] \subset \mathbb{R}_+$. The joint profile of capacity decisions is denoted by an $N$-tuple $x = (x_1, \dots, x_N) \in \prod_{i=1}^{N} X_i =: X$. The capacity choice of player $i$ is then determined as a solution to the parametrized problem $(\mathrm{Play}_i(x_{-i}))$, where $c_i : X_i \to \mathbb{R}_+$ is an $L_{c_i}$-smooth convex cost function and $p(\cdot)$ denotes the inverse-demand function defined as $p(X) = d - rX$ with $d, r > 0$. The function $Q_i(\cdot, \xi)$ denotes the optimal cost of firm $i$ in scenario $\xi \in \Xi$, assuming the value $Q_i(x_i, \xi)$ when the capacity level $x_i$ is chosen. The recourse term $\mathbb{E}_\xi[Q_i(\cdot, \xi)]$ denotes the expectation of the optimal value of player $i$'s second-stage problem. A Nash equilibrium of this game is given by a tuple $(x_1^*, \dots, x_N^*)$ where $x_i^*$ solves $(\mathrm{Play}_i(x_{-i}^*))$ for each $i = 1, 2, \dots, N$. A simple computation shows that $Q_i(x_i, \xi) = \min\{0, h_i(\xi)x_i\}$, which is nonsmooth. In order to obtain a smoothed variant, we introduce $Q_i^\epsilon(\cdot, \xi)$, the value function of a quadratic program requiring the maximization of an $\epsilon$-strongly concave function. Hence, $Q_i^\epsilon(x_i, \xi)$ is single-valued and $\nabla_{x_i} Q_i^\epsilon(\cdot, \xi)$ is $\frac{1}{\epsilon}$-Lipschitz and $\epsilon$-strongly monotone [71, Prop. 12.60] for all $\xi \in \Xi$. Employing this smoothing strategy in our two-stage noncooperative game yields the individual decision problems for all $i \in \{1, \dots, N\}$. The necessary and sufficient equilibrium conditions of this $\epsilon$-smoothed game can be compactly represented by the inclusion problem (SGE$_\epsilon$), where $C$, $R$, and $D$ are single-valued maps. We note that the interchange between the expectation and the gradient operator can be invoked based on smoothness requirements (cf. [76, Th. 7.47]). The problem (SGE$_\epsilon$) aligns perfectly with the structured inclusion (MI), in which $T$ is a maximal monotone map and $V$ is an expectation-valued maximally monotone map. In addition, we can quantify the Lipschitz constant $L_V$ of $V$, where $\mathrm{Id}$ is the $N \times N$ identity matrix and $\mathbf{1}$ is the $N \times 1$ vector consisting only of ones.
Problem parameters for the two-stage SVI. Our numerics are based on specifying $N = 10$, $r = 0.1$, and $d = 1$. We consider four problem settings with $L_V$ ranging from $10$ to $10^4$ (see Table 1). For each setting, the problem parameters are defined as follows. Algorithm specifications. We compare (RISFBF) with a standard stochastic approximation (SA) scheme and a stochastic forward-backward-forward (SFBF) scheme. Solution quality is compared by estimating the residual function $\mathrm{res}(x) = \|x - \Pi_X(x - \lambda V(x))\|$. All of the schemes were implemented in MATLAB on a PC with 16GB RAM and a 6-core Intel Core i7 processor (2.6GHz).
Here $\Pi_X[\cdot]$ denotes the orthogonal projection onto the set $X$. Note that $x_0$ is randomly generated in $[0,1]^N$.
We choose a constant $\lambda_k \equiv \lambda = \frac{1}{4L_V}$. We set $m_k = \lceil k^{1.01}\rceil$ for merely monotone problems and $m_k = \lceil 1.01^k\rceil$ for strongly monotone problems.
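The residual used for these comparisons is straightforward to implement. The sketch below (the toy map and box are our own choices for illustration) mirrors $\mathrm{res}(x) = \|x - \Pi_X(x - \lambda V(x))\|$ for a box constraint set:

```python
import numpy as np

def residual(x, V, lo, hi, lam):
    """res(x) = || x - Pi_X(x - lam*V(x)) || for the box X = [lo, hi]^N;
    this natural-map residual vanishes exactly at solutions of the VI."""
    return float(np.linalg.norm(x - np.clip(x - lam * V(x), lo, hi)))

V = lambda x: x - 0.5                   # toy strongly monotone map
x_star = np.full(3, 0.5)                # its zero, interior to [0, 1]^3
r_at_solution = residual(x_star, V, 0.0, 1.0, lam=0.25)
r_elsewhere = residual(np.zeros(3), V, 0.0, 1.0, lam=0.25)
```

Because the residual is zero exactly at solutions and cheap to evaluate, it serves as a common yardstick across the (SA), (SFBF), and (RISFBF) runs.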
In Fig. 1, we compare the three schemes under maximal monotonicity and strong monotonicity, respectively, and examine their sensitivities to the inertial and relaxation parameters. Both sets of plots are based on selecting $L_V = 10^2$. Key insights. Several insights may be drawn from Table 1 and Figure 1.
(a) First, from Table 1, one may conclude that on this class of problems, (RISFBF) and (SFBF) significantly outperform (SA) schemes. This is unsurprising, given that both schemes employ increasing mini-batch sizes, leading to performance akin to that seen in deterministic schemes. We should note that when $X$ is more complicated, the difference in run-times between SA schemes and mini-batch variants becomes far more pronounced; in this instance, the set $X$ is relatively simple to project onto, and there is little difference in run-time across the three schemes.
(b) Second, we observe that both (SFBF) and (RISFBF) can contend with poorly conditioned problems: as $L_V$ grows, their performance does not degrade significantly in terms of empirical error. However, in both the monotone and strongly monotone regimes, (RISFBF) provides consistently better solutions in terms of empirical error than (SFBF). Figure 1 displays the range of trajectories obtained for differing relaxation and inertial parameters, and in the instances considered, (RISFBF) shows consistent benefits over (SFBF).
(c) Third, since both schemes display geometric rates of convergence on strongly monotone inclusion problems, this improvement is reflected in the empirical errors for the strongly monotone versus the merely monotone regime.

Supervised learning with group variable selection
Our second numerical example considers a population risk formulation of a composite absolute penalty (CAP) problem arising in supervised statistical learning [88], where the feasible set $W \subseteq \mathbb{R}^d$ is the Euclidean ball $W := \{w \in \mathbb{R}^d \mid \|w\|_2 \le D\}$ and $\xi = (a, b) \in \mathbb{R}^d \times \mathbb{R}$ denotes the random variable consisting of a vector of predictors $a$ and an output $b$. The parameter vector $w$ is the sparse linear hypothesis to be learned. The sparsity structure of $w$ is represented by a set of groups $S \subseteq 2^{\{1,\dots,l\}}$. When the groups in $S$ do not overlap, $\sum_{g\in S}\|w_g\|_2$ is referred to as the group lasso penalty [33,46]. When the groups in $S$ form a partition of the set of predictors, $\sum_{g\in S}\|w_g\|_2$ is a norm afflicted by singularities when some components $w_g$ are equal to zero. For any group $g$, $w_g$ is the subvector of $w$ whose indices are in $g$, i.e., $w_g := (w_i)_{i\in g}$, with few non-zero components. Here, we assume that each group $g \in S$ consists of $k$ elements. We introduce the linear operator $L : \mathbb{R}^d \to \mathbb{R}^k \times \cdots \times \mathbb{R}^k$ ($l$ times). This is clearly seen to be a special instance of the convex programming problem (1.2). Specifically, we let $H_1 = \mathbb{R}^d$ with the standard Euclidean norm, and $H_2 = \mathbb{R}^k \times \cdots \times \mathbb{R}^k$ ($l$ times). We assume the nonzeros of $w_{\mathrm{true}}$ lie in the union of groups 4 and 5 and are sampled from i.i.d. Gaussian variables. The operator $V(w, v)$ is estimated by the mini-batch estimator using $m_k$ iid copies of the random input-output pair $\xi = (a, b) \in \mathbb{R}^d \times \mathbb{R}$. Specifically, we draw each coordinate of the random vector $a$ from the standard Gaussian distribution $N(0,1)$ and generate $b = a^\top w_{\mathrm{true}} + \varepsilon$ with $\varepsilon \sim N(0, \sigma_\varepsilon^2)$. In the concrete experiment reported here, the error standard deviation is taken as $\sigma_\varepsilon = 0.1$. In all instances, the regularization parameter is chosen as $\eta = 10^{-4}$. The accuracy of feature extraction of the algorithm output $w$ is evaluated by the relative error to the ground truth, defined as $\|w - w_{\mathrm{true}}\|_2 / \|w_{\mathrm{true}}\|_2$.
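For the non-overlapping case mentioned above, the proximal operator of the group lasso penalty $\eta\sum_{g\in S}\|w_g\|_2$ has the familiar block soft-thresholding form. The sketch below (the toy vector and groups are illustrative; overlapping groups as in CAP would instead require the product-space reformulation via $L$) shows the mechanics:

```python
import numpy as np

def prox_group_lasso(w, groups, eta):
    """Prox of eta * sum_g ||w_g||_2 for NON-overlapping groups g:
    each block is shrunk toward zero and is zeroed out entirely
    when its norm falls below the threshold eta."""
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        out[g] = 0.0 if norm_g <= eta else (1.0 - eta / norm_g) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, 0.0])
shrunk = prox_group_lasso(w, [[0, 1], [2, 3]], eta=1.0)
```

The first block (norm 5) is scaled by $1 - 1/5 = 0.8$, while the second block (norm 0.1) is eliminated entirely; this is the group-level sparsity the penalty is designed to induce.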
Algorithm specifications. We compare (RISFBF) with stochastic extragradient (SEG) and stochastic forward-backward-forward (SFBF) schemes and specify their algorithm parameters. Again, all schemes are run in MATLAB 2018b on a PC with 16GB RAM and a 6-core Intel Core i7 processor (2.6GHz).
where $A_k(X_k) = \frac{1}{m_k}\sum_{t=1}^{m_k} V(X_k, \xi_t^k)$ and $B_k(Y_k) = \frac{1}{m_k}\sum_{t=1}^{m_k} V(Y_k, \eta_t^k)$. In this scheme, $\lambda_k \equiv \lambda$ is chosen to be $\frac{1}{4L_V}$ ($L_V$ is the Lipschitz constant of $V$), and we set $m_k = \lceil k^{1.1}\rceil$. (ii) (SFBF): Variance-reduced stochastic modified forward-backward scheme. We employ the algorithm parameters from (i); specifically, we choose a constant $\lambda_k \equiv \lambda = \frac{1}{4L_V}$ and $m_k = \lceil k^{1.1}\rceil$.

Conclusion
In a general structured monotone inclusion setting in Hilbert spaces, we introduce a relaxed inertial stochastic algorithm based on Tseng's forward-backward-forward splitting method. Motivated by gaps in convergence claims and rate statements in both the deterministic and stochastic regimes, we develop a variance-reduced framework and make the following contributions: (i) Asymptotic convergence guarantees are provided under both increasing and constant mini-batch sizes, the latter requiring somewhat stronger assumptions on $V$; (ii) When $V$ is monotone, rate statements in terms of a restricted gap function, inspired by the Fitzpatrick function for inclusions, show that the expected gap of an averaged sequence diminishes at the rate $O(1/k)$, and the oracle complexity of computing an $\epsilon$-solution is $O(1/\epsilon^{1+a})$ where $a > 1$; (iii) When $V$ is strongly monotone, a non-asymptotic linear rate statement can be proven, with an iteration complexity of $O(\log(1/\epsilon))$ and an oracle complexity of $O(1/\epsilon)$ for computing an $\epsilon$-solution. In addition, a perturbed linear rate is also developed. It is worth emphasizing that the rate statements in the strongly monotone regime accommodate the possibility of a biased SO. Unfortunately, the growth rates in batch size may be onerous in some situations, motivating the analysis of an easily modulated polynomial growth rate in the sample size, which leads to an associated polynomial rate of convergence. Various open questions arise from our analysis. First, we focused exclusively on a variance reduction technique based on increasing mini-batches. From the point of view of computation and oracle complexity, this approach can become quite costly. Exploiting different variance reduction techniques, perhaps taking special structure of the single-valued operator $V$ into account (as in [66]), has the potential to improve the computational complexity of our proposed method.
At the same time, this will complicate the analysis of the variance of the stochastic estimators considerably and consequently, we leave this as an important question for future research.
Second, our analysis needs knowledge about the Lipschitz constant L. While in deterministic regimes, line search techniques have obviated such a need, such avenues are far more challenging to adopt in stochastic regimes. Efforts to address this in variational regimes have centered around leveraging empirical process theory [44]. This remains a goal of future research. Another avenue emerges in applications where we can gain a reasonably good estimate about this quantity via some pre-processing of the data (see e.g. Section 6 in [39]). Developing such an adaptive framework robust to noise is an important topic for future research.