Forward-reflected-backward method with variance reduction

We propose a variance-reduced algorithm for solving monotone variational inequalities. Without assuming strong monotonicity, cocoercivity, or boundedness of the domain, we prove almost sure convergence of the iterates generated by the algorithm to a solution. In the monotone case, the ergodic average converges with the optimal O(1/k) rate. When strong monotonicity is assumed, the algorithm converges linearly, without requiring knowledge of the strong monotonicity constant. We conclude with extensions and applications of our results to monotone inclusions, a class of non-monotone variational inequalities, and Bregman projections.


Introduction
We are interested in solving the variational inequality (VI)

(1) Find z⋆ ∈ Z : ⟨F(z⋆), z − z⋆⟩ + g(z) − g(z⋆) ≥ 0, ∀z ∈ Z,

where g is a proper lower semicontinuous convex function and F is a monotone operator given in the finite sum form F(z) = (1/n) ∑_{i=1}^n F_i(z). A special case of monotone VIs is the structured saddle point problem

(2) min_x max_y f(x) + Ψ(x, y) − h(y),

where f, h are proper lower semicontinuous convex functions and Ψ is a smooth convex-concave function. Indeed, problem (2) can be formulated as (1) by setting z = (x, y), g(z) = f(x) + h(y), and F(z) = (∇_x Ψ(x, y), −∇_y Ψ(x, y)) (see [2, Section 2], [5,7] for examples). Another related problem is the monotone inclusion, where the aim is to find z such that 0 ∈ (A + F)(z), where A : Z ⇉ Z and F : Z → Z are maximally monotone operators and F is Lipschitz continuous with the finite sum form above. Monotone inclusions generalize (1), and our results also extend to this setting, as will be shown in Sect. 4.1. Due to its convenient abstraction, it is problem (1) that will be our main concern.
The case when Ψ in (2) is convex-concave and, in particular when it is bilinear, has found numerous applications in machine learning, image processing and operations research, resulting in efficient methods being developed in the respective areas [6,14,15,33]. As VI methods solve the formulation (1), they seamlessly apply to solve instances of (2) with nonbilinear Ψ.
In addition to the potentially complex structure of Ψ, the size of the data in modern learning tasks leads to the development of stochastic variants of VI methods [4,17,28]. An important technique on this front is stochastic variance reduction [18], which exploits the finite sum structure of problems to match the convergence rates of deterministic algorithms.
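To illustrate the variance reduction principle on a toy finite sum (our own example, not from the cited works): the SVRG-style estimator F(w) + F_i(z) − F_i(w) is unbiased for F(z), and its variance vanishes as the snapshot point w approaches z:

```python
import numpy as np

# Toy finite-sum operator (our own example): F_i(z) = M_i z, F = average of F_i.
rng = np.random.default_rng(0)
n, d = 50, 10
Ms = rng.standard_normal((n, d, d))
F_i = lambda i, z: Ms[i] @ z
F = lambda z: np.mean([F_i(i, z) for i in range(n)], axis=0)

z = rng.standard_normal(d)
variances = {}
for dist in (1.0, 0.1):
    w = z + dist * rng.standard_normal(d)        # snapshot at varying distance
    ests = np.array([F(w) + F_i(i, z) - F_i(i, w) for i in range(n)])
    assert np.allclose(ests.mean(axis=0), F(z))  # unbiased for every z, w
    variances[dist] = np.mean(np.sum((ests - F(z)) ** 2, axis=1))

# The estimator's variance shrinks as the snapshot approaches the iterate.
assert variances[0.1] < variances[1.0]
```

The variance scales with ‖z − w‖², which is why refreshing the snapshot often enough recovers deterministic-like rates.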
In the specific case of convex minimization, variance reduction has been transformative over the last decade [13,16,18,21]. As a result, there have been several works on developing variance-reduced versions of the standard VI methods, including forward-backward [2], extragradient [7,20], and mirror-prox [5,27]. Despite recent remarkable advances in this field, these methods rely on strong assumptions such as strong monotonicity [2,7] or boundedness of the domain [5], and have complicated structures for handling the cases with nonbilinear Ψ [5].
Contributions In this work, we introduce a variance-reduced method with a simple single-loop structure for monotone VIs. We prove its almost sure convergence under mere monotonicity, without any of the aforementioned assumptions. The new method achieves the O(1/k) convergence rate in the general monotone case and a linear rate of convergence when strong monotonicity is assumed, without using the strong monotonicity constant as a parameter. We also consider natural extensions of our algorithm to monotone inclusions, a class of non-monotone problems, and monotone VIs with general Bregman distances.

Related works
Most of the research in variance reduction has focused on convex minimization [13,16,18,21], leading to efficient methods in both theory and practice. On the other hand, variance reduction for solving VIs has only recently begun to be investigated. One common technique for reducing the variance in stochastic VIs is to use increasing mini-batch sizes, which leads to high per-iteration costs and slower convergence rates in practice [4,9,17].
A different approach, used in [25], is to use the same sample in both steps of the stochastic extragradient method [19] to reduce the variance, which results in a slower O(1/√k) rate. The results of [25] for bilinear problems, on the other hand, are limited to the case when the matrix is full rank. The most closely related to our work, in the sense of how variance reduction is used, are [2,5,7] (see Table 1).
For the specific case of strongly monotone operators, [2] proposed algorithms based on SVRG and SAGA with linear convergence rates. Two major questions for future work are posed in [2]: (i) obtaining convergence without the strong monotonicity assumption and (ii) proving linear convergence without using the strong monotonicity constant in the algorithm as a parameter.
The work [7] proposed an algorithm based on the extragradient method [20] and, under the strong monotonicity assumption, proved linear convergence of the method. The step size in this work depends on the cocoercivity constant, which might in turn depend on the strong monotonicity constant, as discussed in [7, Table 1]. Thus, the result of [7] gave a partial answer to the second question of [2] while leaving the first one unanswered.
An elegant recent work [5] focused on matrix games and proposed a method based on mirror-prox [27]. The extension of the method of [5] to general min-max problems is also considered there. Unfortunately, this extension not only features a three-loop structure, but also actively uses the bounded domain assumption and requires the domain diameter as a parameter of the algorithm [5, Corollary 2]. This result has been an important step towards an answer to the first question of [2]. As highlighted in Table 1, our complexity bounds have a worse dependence on n compared to [5], and do not improve the complexity of deterministic VI methods for bilinear games, which was the case in [5]. On the other hand, to our knowledge, our result is the first to show the existence of a variance-reduced method that converges under the same set of assumptions as the deterministic methods and also matches the complexity of these deterministic methods. Moreover, ours is also the first variance-reduced method to solve monotone inclusions in finite sum form without strong monotonicity, increasing mini-batch sizes, or decreasing step sizes [2].

Finally, our work answers an open problem posed in [23] regarding stochastic extensions of the forward-reflected-backward method. Our result improves the preliminary result in [23, Section 6], which still requires evaluating the full operator at every iteration.

Preliminaries and notation
We work in the Euclidean space Z = ℝ^d with scalar product ⟨⋅, ⋅⟩ and induced norm ‖⋅‖. The domain of a function g : Z → ℝ ∪ {+∞} is defined as dom g = {z ∈ Z : g(z) < +∞}. The proximal operator of g is defined as prox_g(z) = argmin_{x ∈ Z} { g(x) + (1/2)‖x − z‖² }. We call an operator F : K → Z, where K ⊆ Z, monotone if ⟨F(u) − F(v), u − v⟩ ≥ 0, μ-strongly monotone if ⟨F(u) − F(v), u − v⟩ ≥ μ‖u − v‖², 1/L-cocoercive if ⟨F(u) − F(v), u − v⟩ ≥ (1/L)‖F(u) − F(v)‖², and L-Lipschitz if ‖F(u) − F(v)‖ ≤ L‖u − v‖, for all u, v ∈ K. For example, in the context of (2) and (1), F is (strongly) monotone when Ψ is (strongly) convex-(strongly) concave. However, it is worth noting that both cocoercivity and strong monotonicity fail even in the simple bilinear case Ψ(x, y) = ⟨Ax, y⟩ in (2).
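As a quick illustration of the proximal operator (our own example): for g = λ‖·‖₁ the prox is componentwise soft-thresholding, a standard fact, which we can verify against the defining minimization by brute force:

```python
import numpy as np

def prox_l1(z, lam):
    # prox_{lam * ||.||_1}(z): componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Brute-force check of the defining minimization for a scalar input:
#   prox_g(z) = argmin_x { g(x) + 0.5 * (x - z)^2 }.
z, lam = 1.7, 0.5
xs = np.linspace(-3, 3, 200001)
obj = lam * np.abs(xs) + 0.5 * (xs - z) ** 2
assert abs(xs[np.argmin(obj)] - prox_l1(np.array([z]), lam)[0]) < 1e-4
```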
Finally, we state our common assumptions for (1).

Assumption 1 The operator F is monotone, each F_i is L-Lipschitz, and the solution set of (1), denoted by Z⋆, is nonempty.

Forward-reflected-backward method with variance reduction

Algorithm
Our algorithm is a careful mixture of a recent deterministic algorithm for VIs, proposed by [23], with a special technique of using variance reduction in finite sum minimization given in [16] and [21]. It is clear that for n = 1 any stochastic variance-reduced algorithm for VIs reduces to some deterministic one. As a consequence, this immediately rules out the most obvious choice, the well-known forward-backward method (FB), since its convergence requires either strong monotonicity or cocoercivity of F. The classical algorithms that work under mere monotonicity [20,30,34] have a more complicated structure, and thus it is not clear how to meld them with a variance reduction technique for finite sum problems. Instead, we chose the recent forward-reflected-backward method (FoRB) [23], which converges under Assumption 1 with n = 1.
When g = 0, this method takes its origin in Popov's algorithm [30]. In this specific case, FoRB is also equivalent to the optimistic gradient ascent algorithm [12,31], which has recently become increasingly popular in the machine learning literature [11,12,24,26].
Among the many variance-reduced methods for solving finite sum problems min_z f(z) := (1/n) ∑_{i=1}^n f_i(z), one of the simplest is the Loopless-SVRG method [21] (see also [16]), which can be seen as a randomized version of the gradient method and hence of the forward-backward method. The latter is the exact reason why we cannot extend this method directly to the variational inequality setting without cocoercivity or strong monotonicity. An accurate blending of [23] and [21], described above, results in Algorithm 1. Compared to Loopless-SVRG, the last evaluation of the operator in step 4 of Algorithm 1 is done at w_{k−1} instead of w_k. In the deterministic case, when n = 1 or p = 1, this modification reduces the method to FoRB (4) and not FB (3). The other change is that we use the most recent iterate z_{k+1} in the update of w_{k+1}, instead of z_k as in Loopless-SVRG. Surprisingly, these two small distinctions result in a method that converges for general VIs without the restrictive assumptions of the previous works.
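To make the description concrete, the following is a minimal sketch of the resulting update in Python (our own illustration, not the paper's reference implementation), assuming g = 0 so that the proximal step is the identity; the toy operator and all names are ours:

```python
import numpy as np

def vr_forb(op_i, n, z0, tau, p, iters, rng):
    """Sketch of the variance-reduced forward-reflected-backward update:
        z_{k+1} = prox_{tau g}( z_k - tau*(F(w_k) + F_{i_k}(z_k) - F_{i_k}(w_{k-1})) ),
    with the snapshot w refreshed to z_{k+1} with probability p.
    Here g = 0, so the prox is the identity (an assumption for this demo)."""
    full = lambda z: sum(op_i(i, z) for i in range(n)) / n
    z, w_prev, w = z0.copy(), z0.copy(), z0.copy()
    Fw = full(w)                                  # full operator at the snapshot
    for _ in range(iters):
        i = rng.integers(n)
        z_new = z - tau * (Fw + op_i(i, z) - op_i(i, w_prev))
        if rng.random() < p:                      # loopless snapshot refresh
            w_prev, w = w, z_new
            Fw = full(w)
        else:
            w_prev = w
        z = z_new
    return z

# Toy strongly monotone finite sum: F_i(z) = (i + 1) z, so F(z) = 2.5 z for n = 4.
rng = np.random.default_rng(1)
n, L, p = 4, 4.0, 0.25
z = vr_forb(lambda i, v: (i + 1) * v, n, np.ones(2),
            tau=p / (6 * L), p=p, iters=5000, rng=rng)
assert np.linalg.norm(z) < 1e-3   # iterates approach the solution z* = 0
```

Note that the full operator is recomputed only when the snapshot is refreshed, i.e. on roughly a p-fraction of iterations.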
We note that we use uniform sampling for choosing i_k in Algorithm 1 for simplicity. Our arguments directly extend to arbitrary sampling as in [2,5], which is used for obtaining tighter Lipschitz constants.

Convergence analysis
We start with a key lemma that appeared in [23] for analyzing a general class of VI methods. The proof of this lemma is given in the appendix for completeness. The only change from [23] is that we consider the proximal operator instead of a more general resolvent. The benefit of Lemma 3.1, which holds for all x ∈ Z and v_2 ∈ Z, is that it gives a candidate for a Lyapunov function that can be used to prove convergence. We will need a slight modification of this function due to the randomization in Algorithm 1.

Convergence of the iterates
We start by proving the almost sure convergence of the iterates. Such a result states that the trajectories of the iterates generated by our algorithm converge to a point in the solution set. This type of result is the analogue of sequential convergence results for deterministic methods [23].
For the iterates {z_k}, {w_k} of Algorithm 1, any z ∈ dom g, and step size τ > 0, we define the quantities Φ_k(z) and Θ_k(z). The first plays the role of a Lyapunov function and the second is essential for the rate.
This lemma is essential in establishing the convergence of the iterates and the sublinear convergence rates that we will derive in the next section. We now continue with the proof.

Proof Apply Lemma 3.1 with x_1 = z_k and i_k = i. Then, by (5) and step 4 of Algorithm 1, x_2 = z_{k+1}, and thus (6) applies. First, note that by the Lipschitzness of F_i and the Cauchy-Schwarz and Young inequalities, we can bound the inner product terms. Taking expectation conditioned on the knowledge of z_k, w_{k−1}, and using the identity that follows from the definition of w_k, applied to (11), we obtain the claimed bound. The proof will be complete if we can show that the expressions in the second and third lines are nonpositive. Due to our choice of τ, this is a matter of simple algebra. ◻

Theorem 3.1 Let Assumption 1 hold. Then, for the iterates {z_k} of Algorithm 1, almost surely there exists z⋆ ∈ Z⋆ such that z_k → z⋆.

Remark 3.1
It is interesting to observe that for p = 1, i.e., when the algorithm becomes deterministic, the bound for the step size is τ < 1/(2L), which coincides with the one in [23] and is known to be tight. In this case, the analysis remains valid if, for convenience, we adopt the convention ∞ ⋅ 0 = 0.
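As a sanity check of this remark, the following sketch (our own toy example) runs the deterministic FoRB update z_{k+1} = z_k − τ(2F(z_k) − F(z_{k−1})) with τ < 1/(2L) on a skew-symmetric operator, for which the plain forward step provably diverges:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: monotone with L = 1,
F = lambda v: M @ v                       # but neither strongly monotone nor cocoercive

tau = 0.4                                 # within the FoRB bound tau < 1/(2L) = 0.5
z_fb = np.array([1.0, 1.0])
z, z_old = np.array([1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(300):
    z_fb = z_fb - tau * F(z_fb)                       # plain forward step
    z, z_old = z - tau * (2 * F(z) - F(z_old)), z     # forward-reflected step

assert np.linalg.norm(z) < 1e-6      # FoRB converges to the solution 0
assert np.linalg.norm(z_fb) > 1e3    # the forward iteration spirals outward
```

For this rotation-like operator the forward map has spectral radius √(1 + τ²) > 1, while the reflected term damps the rotation, which is exactly why FoRB (and not FB) is the right deterministic limit for Algorithm 1.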
For small p, we might use a simpler bound for the step size, as the following corollary suggests.

First, we show that Φ_{k+1}(z) is nonnegative for all z ∈ dom g. This is straightforward but tedious. Recall that 1 − √(1 − p) = p/(1 + √(1 − p)) and hence 2τL ≤ p/(1 + √(1 − p)). Then, by the Cauchy-Schwarz and Young inequalities, we deduce the claimed nonnegativity. Now let z = z̄ ∈ Z⋆. Then, by the monotonicity of F and (1), Θ_k(z̄) ≥ 0. Unfortunately, this is still not sufficient for us, so we are going to strengthen this inequality by reexamining the proof of Lemma 3.2. In estimating the second line of inequality (13) we used that 2τL ≤ 1 − √(1 − p); however, in the statements of both Lemma 3.2 and Theorem 3.1 we assumed a strict inequality. Hence (14) can be improved to an equality, and this change results in a slightly stronger version of (7).
We now pick a realization ω ∈ Ω and note that z_k(ω) − z_{k−1}(ω) → 0 and z_{k−1}(ω) − w_{k−1}(ω) → 0. Let us denote by z̃ a cluster point of the bounded sequence z_k(ω). Using the definition of z_k and the convexity of g, as in the proof of Lemma 3.1, we have the corresponding inequality for any z ∈ Z. Taking the limit as k → ∞, and using that g is lower semicontinuous and each F_i is continuous, we conclude that z̃ ∈ Z⋆. Then, since ‖z_k(ω) − z̃‖² converges and we have shown that it converges to 0 along at least one subsequence, we conclude that the sequence (z_k(ω)) converges to a point z̃ ∈ Z⋆. ◻

Convergence rate for the general case
In this section, we prove that the average of the iterates of the algorithm exhibits the O(1/k) convergence rate, which is optimal for solving monotone VIs [27]. The standard quantity used to show sublinear rates for VIs is the gap function, defined as Gap(z) = sup_{u ∈ Z} ⟨F(u), z − u⟩ + g(z) − g(u). As this quantity requires taking a supremum over the whole space Z, which is potentially unbounded, restricted versions of gap functions are used, for example in [22,29]:

(20) G_C(z) = max_{u ∈ C} ⟨F(u), z − u⟩ + g(z) − g(u),

where C ⊂ dom g is an arbitrary bounded set. It is known that G_C(z) is a valid merit function, as proven in [29, Lemma 1]. As we are concerned with randomized algorithms, we derive the rate of convergence for the expected gap function E[G_C(z_k)].
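For intuition, in the bilinear (skew-symmetric) case with g = 0 the restricted gap over a Euclidean ball admits a closed form, which the following snippet (a hypothetical instance of ours) verifies numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
d, R = 4, 2.0
A = rng.standard_normal((d, d))
M = A - A.T                    # skew-symmetric, so F(u) = M u is monotone
F = lambda u: M @ u
z = rng.standard_normal(d)

# Restricted gap over the ball C = {u : ||u|| <= R} with g = 0:
#   G_C(z) = max_{||u||<=R} <F(u), z - u> = max_{||u||<=R} <u, M^T z> = R*||M^T z||,
# since <M u, u> = 0 for skew-symmetric M.
gap_closed = R * np.linalg.norm(M.T @ z)
u_star = R * (M.T @ z) / np.linalg.norm(M.T @ z)   # maximizer on the sphere
assert np.isclose(F(u_star) @ (z - u_star), gap_closed)

# No point of the ball exceeds the closed-form value.
for _ in range(1000):
    u = rng.standard_normal(d)
    u *= rng.random() * R / np.linalg.norm(u)      # random point inside C
    assert F(u) @ (z - u) <= gap_closed + 1e-9
```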
The high-level idea of the proof is that, on top of Lemma 3.2, we sum the resulting inequality and accumulate the terms Θ_k(z). Then we use Jensen's inequality to obtain the result.
There are two intricate points that need attention in this kind of result. First, the convergence measure is the expected duality gap E[G_C(z_K^{av})], which includes the expectation of a supremum. In a standard analysis it is easy to obtain a bound for the supremum of the expectation; however, obtaining the former requires a technique which is common in the literature on saddle point problems [1,28]. Roughly, the idea is to use an auxiliary iterate to characterize the difference between the two quantities and show that the error term does not degrade the rate.
Second, as the duality gap requires taking a supremum over the domain, the rate might contain a diameter term, as in [5]. The standard way to adjust this result for unbounded domains is to utilize a restricted merit function, as in (22), on which the rate is obtained [29]. We note that the result in [5] not only involves the domain diameter in the final bound, but also requires the domain diameter as a parameter of the algorithm in the general monotone case [5, Corollary 2].
Proof of Theorem 3.2 First, we collect some useful bounds. Consider (20) with the specific choice z = P_{Z⋆}(z_0). Taking a full expectation and then summing that inequality, we get a bound which, by Young's inequality, implies a second estimate. Next, we rewrite (10) and, with the definition of ε_k, rewrite (25) accordingly. We use (12), the definition of Φ_k, and the arguments in Lemma 3.2 to show that the last line of (13) is nonpositive. Summing the resulting inequality over k = 0, …, K − 1 and using bound (28) yields the key estimate.

We now take the supremum of this inequality over z ∈ C and then take a full expectation. Using this and the fact that Φ_K(z) ≥ 0 by (16), we arrive at the main estimate. It remains to bound the last term, ∑_{k=0}^{K−1} E‖ε_k‖². For this, we use the standard inequality E‖X − E X‖² ≤ E‖X‖² and the Lipschitzness of F_{i_k}. Plugging this bound into (31), we obtain the rate. Finally, using the monotonicity of F followed by Jensen's inequality, we deduce the bound on the expected gap, which combined with (33) finishes the proof. ◻

It is worth mentioning that even though our method is simple and the convergence rate is O(1/k) as in [5], our complexity result has a worse dependence on n compared to [5]. In particular, our complexity is O(n/ε) instead of the O(√n/ε) of [5]. This is because our step size has the factor p, which is of the order 1/n in general, and this appears to be tight based on numerical experiments. This seems to be the cost of handling a more general problem without the bounded domain assumption. We leave it as an open question to derive a method that works under our general assumptions and features favorable complexity guarantees as in [5].

Convergence rate for strongly monotone case
We show that linear convergence is attained when strong monotonicity is assumed.

Theorem 3.3 Let Assumption 1 hold and let F be μ-strongly monotone. Let z⋆ be the unique solution of (1). Then, for the iterates {z_k} generated by Algorithm 1, a linear convergence rate holds.

Remark 3.3 We analyzed the case when F is strongly monotone; however, the same analysis goes through when F is monotone and g is strongly convex. One can transfer the strong convexity of g to make F strongly monotone.

Proof of Theorem 3.3 We start from (8) with i_k = i. Setting z = z⋆ and using the strong monotonicity of F, we obtain a strengthened inequality. Then we continue as in the proof of Theorem 3.1 until we obtain a stronger version of (20), due to the strong monotonicity term. Using the definitions of a_k and b_k in (35), a contraction inequality follows. Next, we derive a further bound, where the last inequality follows from (15). Taking a full expectation and using that b_0 = 0, we obtain the recursion. Now it only remains to compute the contraction factor. By our choice of the step size, this factor is strictly less than one, which finally implies the claimed linear rate.

◻
A key characteristic of our result is that the strong monotonicity constant is not required as a parameter of the algorithm to obtain the rate. This was raised as an open question by [2], and a partial answer was given by [7] (see Table 1). Our result gives a full answer to this question, without using the strong monotonicity constant, in all cases.
We next discuss the dependence on μ in the convergence rate. Our rate has a dependence of 1/μ, compared to the 1/μ² of the non-accelerated methods of [2] and the method of [7]. This difference is especially important when μ is small. On the other hand, in terms of n, our complexity has a worse dependence compared to [5] and the accelerated method of [2], as discussed before (see the discussions in Sect. 1.1 and Sect. 3.2).

Beyond monotonicity
Lastly, we illustrate that our method has convergence guarantees for a class of non-monotone problems. There exist several relaxations of monotonicity in the literature [10,17,22,24]. Among these, we assume the existence of solutions to the Minty variational inequality:

(43) Find ẑ ∈ Z : ⟨F(z), z − ẑ⟩ + g(z) − g(ẑ) ≥ 0, ∀z ∈ Z.

Under (43), we can drop the monotonicity assumption and show almost sure subsequential convergence of the iterates of our method. Naturally, in this case one can no longer show sequential convergence as under monotonicity (see Theorem 3.1).

Proof We proceed as in Theorem 3.1 and [22, Theorem 6]. We note that Lemma 3.2 does not use the monotonicity of F, so its result still holds in this case. In the resulting inequality we plug in z = ẑ for a point satisfying (43); then, by (43), the corresponding term is nonnegative. We then argue the same way as in Theorem 3.1 to conclude that, almost surely, {z_k} is bounded and the cluster points of {z_k} are in Z⋆. Note that the steps in Theorem 3.1 for showing sequential convergence rely on the choice of z as an arbitrary point in Z⋆, which is not the case here; therefore, we can only use the arguments from Theorem 3.1 for showing subsequential convergence. ◻

Extensions
We illustrate extensions of our results to monotone inclusions and Bregman projections. The proofs for this section are given in the appendix in Section 7.

Monotone inclusions
We have chosen to focus on monotone VIs in the main part of the paper in order to derive sublinear rates for the gap function. In this section, we show that our analysis extends directly to solving monotone inclusions. In this case, we are interested in finding z such that 0 ∈ (A + F)(z), where A, F are monotone operators and F = (1/n) ∑_{i=1}^n F_i with each F_i Lipschitz. One then replaces the prox operator in the algorithm by the resolvent of A, defined as J_A(z) = (I + A)^{−1}(z). Lemma 3.1, as given directly in [23, Proposition 2.3], can then be used to prove an analogue of Theorem 3.1 for solving monotone inclusions. Moreover, when A + F is strongly monotone, one can prove an analogue of Theorem 3.3. We prove the former result and note that the latter can be shown by applying the steps of Theorem 3.3 on top of Theorem 4.1, which we do not repeat for brevity.
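As a concrete instance (our own example): when A = N_C is the normal cone of a closed convex set C, the resolvent J_A reduces to the Euclidean projection onto C, so the resolvent step is as cheap as a projection:

```python
import numpy as np

def resolvent_box(z, lo, hi):
    # For A = N_C, the normal cone of the box C = [lo, hi]^d,
    # the resolvent J_A = (I + A)^{-1} is the Euclidean projection onto C.
    return np.clip(z, lo, hi)

z = np.array([-2.0, 0.3, 5.0])
proj = resolvent_box(z, -1.0, 1.0)
assert np.allclose(proj, [-1.0, 0.3, 1.0])

# Resolvent characterization z - J_A(z) ∈ A(J_A(z)) = N_C(J_A(z)):
# the residual vanishes on inactive coordinates and points outward on active ones.
res = z - proj
assert res[1] == 0.0 and res[0] < 0 <= res[2]
```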
Theorem 4.1 Suppose that the solution set of the inclusion is nonempty, and let the iterates {z_k} be generated by Algorithm 1 with the resolvent-based update for z_{k+1}. Then, almost surely, z_k converges to a solution of the inclusion.

Bregman distances
We developed our analysis in the Euclidean setting, relying on the ℓ₂-norm for simplicity. However, we can also generalize it to proximal operators involving Bregman distances. In this setting, we have a distance generating function h : Z → ℝ, which is 1-strongly convex and continuous. We follow the standard convention of assuming that the subdifferential of h admits a continuous selection, meaning that there exists a continuous function ∇h such that ∇h(x) ∈ ∂h(x) for all x ∈ dom h. We define the Bregman distance as D_h(z, z̄) = h(z) − h(z̄) − ⟨∇h(z̄), z − z̄⟩. We then replace the proximal step 4 of Algorithm 1 by its Bregman analogue. We prove an analogue of Lemma 3.2 with Bregman distances, from which the convergence rate results follow.
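A standard example (ours, for illustration): taking h as the negative entropy on the probability simplex makes D_h the KL divergence, and the resulting Bregman proximal step has a multiplicative closed form, which we verify against brute force in 2D:

```python
import numpy as np

def bregman_prox_entropy(g, z):
    # argmin_{x in simplex} <g, x> + D_h(x, z) with h(x) = sum_i x_i log x_i,
    # i.e. D_h = KL divergence; the minimizer is the multiplicative "mirror" step.
    x = z * np.exp(-g)
    return x / x.sum()

g = np.array([0.3, -0.7])
z = np.array([0.6, 0.4])
x = bregman_prox_entropy(g, z)

# Brute-force check on the 2-point simplex (x1, 1 - x1):
t = np.linspace(1e-6, 1 - 1e-6, 200001)
X = np.stack([t, 1 - t], axis=1)
kl = np.sum(X * (np.log(X) - np.log(z)), axis=1)
obj = X @ g + kl
assert abs(t[np.argmin(obj)] - x[0]) < 1e-4
```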


Numerical verification
In this section, we include preliminary experimental results for our algorithm. We note that these results are mainly for verifying our theory and are not intended as complete benchmarks. We suspect that for an extensive practical comparison, practical enhancements of our method, similar to the proximal-point acceleration from [2] or the restarting from [7], may be useful. We leave such investigations for future work.

First, we apply our method to the unconstrained bilinear problem. It was shown in [7] that this simple problem is particularly challenging for stochastic methods due to the unboundedness of the domain, where standard methods, such as the stochastic extragradient method [19], diverge. Our assumptions are general enough to cover this case, and we now verify in practice that our method indeed converges for this problem, setting d = n = 100 and generating A_i ∈ ℝ^{d×d} randomly with entries from N(0, 1). For this experiment, we test the tightness of our step size rule by progressively increasing the step size. Recall that our step size is τ = p/(cL), where c = 4 is suggested by our analysis; see Corollary 3.1. We try the values c ∈ {0.5, 1, 2, 4} and observe that for c = 0.5 the algorithm diverges; see Fig. 1 (left). The message of this experiment is that, even though slightly larger step sizes than our theory allows might work, it is not possible to increase them significantly.
The second problem we consider is constrained minimization, an instance where the dual domain is not necessarily bounded. We want to solve min_{x ∈ C} (1/2)‖x − u‖² subject to h_i(x) ≤ 0, i = 1, …, m, where C is the unit ball. In other words, we want to find the projection of u onto the intersection of C with the constraint set {x : h_i(x) ≤ 0}.
Introducing a Lagrange multiplier y_i for each constraint, we obtain a saddle point formulation (see Section 7.3 for further details). As the Lipschitz constant of this problem does not admit a closed-form expression, we first estimate it by finding an L such that the deterministic method converges. Next, we note that even though we analyzed the algorithm with a single step size for both primal and dual variables x, y, one can use different step sizes for the primal and dual variables (see [22, Section 4.1]). Therefore, we tuned the scaling of the primal and dual step sizes for both methods on one random instance and used the same scaling in all tests of both methods.
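For the affine special case h_i(x) = a_iᵀx − b_i (a toy instance of ours, not the paper's test problem), the resulting saddle operator F(x, y) = (x − u + Aᵀy, −(Ax − b)) is monotone, since the bilinear coupling terms cancel; the following snippet checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
m, d = 6, 4
A, b, u = rng.standard_normal((m, d)), rng.standard_normal(m), rng.standard_normal(d)

# Saddle operator of min_x 0.5*||x - u||^2 s.t. Ax <= b (affine constraints):
#   F(x, y) = ( x - u + A^T y , -(A x - b) )
def F(x, y):
    return np.concatenate([x - u + A.T @ y, -(A @ x - b)])

# Monotonicity check: <F(z1) - F(z2), z1 - z2> = ||x1 - x2||^2 >= 0,
# because the A^T y and -A x coupling terms cancel exactly.
for _ in range(100):
    x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
    y1, y2 = rng.standard_normal(m), rng.standard_normal(m)
    diff = np.concatenate([x1 - x2, y1 - y2])
    gap = (F(x1, y1) - F(x2, y2)) @ diff
    assert gap >= -1e-12 and np.isclose(gap, np.sum((x1 - x2) ** 2))
```

This also shows the operator is only monotone (not strongly monotone) in the joint variable, which is exactly the regime our assumptions cover.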
We set p = 1/m. At every iteration, the deterministic method needs to go through all m constraints to compute ∑_{i=1}^m y_i ∇h_i(x), whereas our method computes ∇h_i(x) for only one i. The first setup has m = 400, d = 100, with data generated from the normal distribution N(0, 1). The second setup has m = 400, d = 50, with data generated from the uniform distribution U(−1, 1). We ran both setups with 10 different instances of randomly generated data and plotted all results; see Fig. 1. We observe that in one instance the tuned scaling diverges for the deterministic method, whereas our method with the same tuning converged in all cases.

Conclusion
In this work, we proposed a variance-reduced algorithm for solving monotone VIs without assuming bilinearity, strong monotonicity, cocoercivity, or boundedness. Even though our method is the first to converge under the same set of assumptions as deterministic methods, a drawback of our approach is the lack of complexity improvements.
In particular, the previous approach of [5] showed complexity improvements for bilinear games while needing more assumptions than deterministic methods to converge. Thus, an important open problem is to obtain a method that (i) converges under the same minimal set of assumptions as our algorithm and (ii) features improved complexity guarantees compared to deterministic methods when solving structured problems such as bilinear games, as in [5], to obtain the best of both worlds.
We use monotonicity for the last term, and the rest of Lemma 3.2 follows in this case in the same way, with the absence of the terms involving Θ_{k+1}(z). Then, arguments similar to those in Theorem 3.1, with the changes of (i) not having Θ_{k+1}(z) and (ii) using the definition of the resolvent instead of the proximal operator to show that cluster points are solutions, give the result (see also [23, Theorem 2.5]). Next, we set x_1 = z_k and x_2 = z_{k+1} with i_k = i, and plug these into (53) to obtain (50) and (52). First, note that by the Lipschitzness of F_i and the Cauchy-Schwarz and Young inequalities, the corresponding bound holds.

Thus, taking expectation conditioned on the knowledge of z_k, w_{k−1}, and adding the identity which follows from the definition of w_k to (56), we obtain the claimed bound. To show that the last line is nonpositive, we use (14), Young's inequality as in Lemma 3.2, and (1/2)‖z_k − z_{k−1}‖² ≤ D_h(z_k, z_{k−1}). Nonnegativity of Φ_k follows as in Theorem 3.1 after using (54).

Experiment details
Only in this section, we use superscripts for iterates rather than the subscripts used up to now. Recall our problem and its equivalent variational inequality formulation. The notation reads: h(x) = (h_1(x), …, h_m(x)) and ∇h(x) = (∇h_1(x), …, ∇h_m(x)), so that h : ℝ^d → ℝ^m and ∇h(x) ∈ ℝ^{m×d}. We note that the residual on the y-axes of Fig. 1 is computed as ‖x^t − prox_g(x^t − F(x^t))‖.
We split F as follows, where (e_i)_{i=1}^m is the standard basis of ℝ^m. Hence, Algorithm 1, with different step sizes for the primal and dual variables, reads:

Declarations
Conflict of interest The author declares that there is no conflict of interest.
Data availability Not applicable.
x^{k+1} = P_C( x^k − τ[ F^{(1)}(u^k, v^k) + F^{(1)}_i(x^k, y^k) − F^{(1)}_i(u^{k−1}, v^{k−1}) ] )
y^{k+1} = P_{ℝ^m_+}( y^k − σ[ F^{(2)}(u^k, v^k) + F^{(2)}_i(x^k, y^k) − F^{(2)}_i(u^{k−1}, v^{k−1}) ] )
(u^{k+1}, v^{k+1}) = (x^{k+1}, y^{k+1}) with probability p, and (u^k, v^k) with probability 1 − p,

where τ and σ denote the primal and dual step sizes.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.