Stochastic Mirror Descent Dynamics and Their Convergence in Monotone Variational Inequalities

We examine a class of stochastic mirror descent dynamics in the context of monotone variational inequalities (including Nash equilibrium and saddle-point problems). The dynamics under study are formulated as a stochastic differential equation, driven by a (single-valued) monotone operator and perturbed by a Brownian motion. The system’s controllable parameters are two variable weight sequences, that, respectively, pre- and post-multiply the driver of the process. By carefully tuning these parameters, we obtain global convergence in the ergodic sense, and we estimate the average rate of convergence of the process. We also establish a large deviations principle, showing that individual trajectories exhibit exponential concentration around this average.


Introduction
Dynamical systems governed by monotone operators play an important role in optimization (convex programming), game theory (Nash equilibrium and generalized Nash equilibrium problems), fixed point theory, partial differential equations, and many other fields of applied mathematics. In particular, the study of the relationship between continuous- and discrete-time models has given rise to a vigorous literature at the interface of these fields; see, e.g., [1] for a recent overview and [2] for connections to accelerated methods.
The starting point of much of this literature is that an iterative algorithm can be seen as a discretization of a continuous dynamical system. Doing so sheds new light on the properties of the algorithm, provides Lyapunov functions that are useful for its asymptotic analysis, and often leads to new classes of algorithms altogether. A classical example of this arises in the study of (projected) gradient descent dynamics and its connection with Cauchy's steepest descent algorithm, or, more generally, in the relation between the mirror descent (MD) class of algorithms [3] and dynamical systems derived from Bregman projections and Hessian Riemannian metrics [4][5][6].

Problem Formulation and Related Literature
Throughout this paper, X denotes a compact and convex subset of an n-dimensional real space V ≅ Rⁿ with norm ‖·‖. We will also write Y ≡ V* for the dual of V, ⟨y, x⟩ for the canonical pairing between y ∈ V* and x ∈ V, and ‖y‖* := sup{⟨y, x⟩ : ‖x‖ ≤ 1} for the dual norm of y in V*. We denote the relative interior of X by ri(X), and its boundary by bd(X).
In this paper, we are interested in deriving dynamical system approaches to solve monotone variational inequalities (VIs). To define them, let v : X → Y be a Lipschitz continuous monotone map, i.e., for some L > 0 and all x, x′ ∈ X,

(H1)  ⟨v(x′) − v(x), x′ − x⟩ ≥ 0  and  ‖v(x′) − v(x)‖* ≤ L ‖x′ − x‖.

Throughout this paper, we will be interested in solving the Minty VI:

find x* ∈ X such that ⟨v(x), x − x*⟩ ≥ 0 for all x ∈ X. (MVI)

Since v is assumed continuous and monotone, this VI problem is equivalent to the Stampacchia VI:1

find x* ∈ X such that ⟨v(x*), x − x*⟩ ≥ 0 for all x ∈ X. (SVI)

When we need to keep track of X and v explicitly, we will refer to (MVI) and/or (SVI) as VI(X, v). The solution set of VI(X, v) will be denoted by X*; by standard results, X* is convex, compact, and nonempty [9]. Below, we present a selected sample of examples and applications of VI problems; for a more extensive discussion, see [9][10][11].
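As a quick numerical sanity check (our own toy illustration, not part of the paper's analysis), the following sketch verifies both the Minty and Stampacchia inequalities on a simple monotone map over the simplex; the map v(x) = x − p and the target point p are assumptions made for the example, whose unique VI solution is x* = p.

```python
import numpy as np

# Toy check: for the monotone map v(x) = x - p on the simplex, the unique
# solution of VI(X, v) is x* = p, and it satisfies both formulations:
#   Minty:       <v(x),  x - x*> >= 0 for all feasible x,
#   Stampacchia: <v(x*), x - x*> >= 0 for all feasible x.
rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])            # assumed solution x* = p
v = lambda x: x - p

minty_vals, stampacchia_vals = [], []
for _ in range(200):
    x = rng.dirichlet(np.ones(3))        # random feasible point in X
    minty_vals.append(v(x) @ (x - p))          # <v(x), x - x*>
    stampacchia_vals.append(v(p) @ (x - p))    # <v(x*), x - x*>

# Both families of inner products are nonnegative, as required.
assert min(minty_vals) > -1e-12 and min(stampacchia_vals) > -1e-12
```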
Example 2.1 (Convex programs) Consider the convex minimization problem

minimize f(x) subject to x ∈ X, (Opt)

where f : X → R is convex and continuously differentiable on X. If x* is a solution of (Opt), first-order optimality gives ⟨∇f(x*), x − x*⟩ ≥ 0 for all x ∈ X. Since f is convex, v = ∇f is monotone, so (Opt) is equivalent to VI(X, ∇f) [12].
Example 2.2 (Saddle-point problems) Let X1 ⊆ R^{n1} and X2 ⊆ R^{n2} be compact and convex, and let U : X1 × X2 → R be a smooth convex-concave function (i.e., U(x1, x2) is convex in x1 and concave in x2). Then, the associated saddle-point (or min-max) problem is to determine the value of U, defined here as

val = min_{x1∈X1} max_{x2∈X2} U(x1, x2) = max_{x2∈X2} min_{x1∈X1} U(x1, x2). (Val)

Existence of val follows directly from von Neumann's minimax theorem. Moreover, writing

v(x1, x2) := (∇_{x1} U(x1, x2), −∇_{x2} U(x1, x2)) (1)

for the associated saddle-point operator, it is easy to check that v is monotone as a map from X := X1 × X2 to R^{n1+n2} (because U is convex in its first argument and concave in the second). Then, as in the case of (Opt), first-order optimality implies that the saddle points of (Val) are precisely the solutions of VI(X, v) [13].
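The monotonicity of the saddle-point operator can be verified numerically. The sketch below (our own illustration, not the paper's code) does this for a bilinear convex-concave function U(x1, x2) = x1ᵀAx2, for which the inner product ⟨v(x) − v(x′), x − x′⟩ in fact vanishes identically.

```python
import numpy as np

# For bilinear U(x1, x2) = x1 @ A @ x2, the saddle-point operator is
#   v(x1, x2) = (grad_{x1} U, -grad_{x2} U) = (A @ x2, -A.T @ x1),
# and monotonicity <v(x) - v(x'), x - x'> >= 0 holds with equality.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

def v(x1, x2):
    return np.concatenate([A @ x2, -A.T @ x1])

inner_products = []
for _ in range(100):
    x1, x2 = rng.standard_normal(3), rng.standard_normal(4)
    z1, z2 = rng.standard_normal(3), rng.standard_normal(4)
    diff_v = v(x1, x2) - v(z1, z2)
    diff_x = np.concatenate([x1 - z1, x2 - z2])
    inner_products.append(diff_v @ diff_x)

monotone_defect = min(inner_products)    # should be >= 0 (here: = 0)
```

The cancellation reflects the skew-symmetric structure of bilinear games: (x1 − z1)ᵀA(x2 − z2) appears once with each sign.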

Example 2.3 (Convex games)
One of the main motivations for this paper comes from determining Nash equilibria of games with convex cost functions. To state the problem, let N := {1, . . . , N} be a finite set of players and, for each i ∈ N, let X_i ⊆ R^{n_i} be a compact convex set of actions that can be taken by player i. Given an action profile x = (x_1, . . . , x_N) ∈ X := ∏_i X_i, the cost for each player is determined by an associated cost function c_i : X → R. The unilateral minimization of this cost leads to the notion of Nash equilibrium, defined here as an action profile x* = (x*_i)_{i∈N} such that

c_i(x*_i; x*_{−i}) ≤ c_i(x_i; x*_{−i}) for all x_i ∈ X_i and all i ∈ N. (NE)

Of particular interest to us is the case where each c_i is smooth and individually convex in x_i, and the associated pseudo-gradient map v(x) := (∇_{x_i} c_i(x))_{i∈N} forms a monotone map; by first-order optimality, the Nash equilibrium problem (NE) then boils down to solving VI(X, v) [9,14].
In the rest of this paper, we will consider two important special cases of operators v : X → V*, namely:
1. Strictly monotone problems, i.e., when ⟨v(x′) − v(x), x′ − x⟩ ≥ 0 for all x, x′ ∈ X, with equality if and only if x = x′.
2. Strongly monotone problems, i.e., when there exists some γ > 0 such that ⟨v(x′) − v(x), x′ − x⟩ ≥ γ ‖x′ − x‖² for all x, x′ ∈ X.
Clearly, strong monotonicity implies strict monotonicity, which, in turn, implies ordinary monotonicity. In the case of convex optimization, strict (respectively, strong) monotonicity corresponds to strict (respectively, strong) convexity of the problem's objective function. Under either refinement, (MVI) admits a unique solution, which will be referred to as "the" solution of (MVI).
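The strong monotonicity inequality can be checked directly for gradients of strongly convex functions. The following sketch (our own example; the quadratic f and its modulus are assumptions) verifies it for f(x) = ½ xᵀQx, whose gradient map is γ-strongly monotone with γ equal to the smallest eigenvalue of Q.

```python
import numpy as np

# For f(x) = 0.5 * x @ Q @ x with Q symmetric positive definite, the map
# v = grad f = Q x satisfies strong monotonicity:
#   <v(x) - v(z), x - z> = (x - z) @ Q @ (x - z) >= gamma * ||x - z||**2,
# with gamma = lambda_min(Q).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
Q = B @ B.T + 0.5 * np.eye(4)            # symmetric positive definite
gamma = np.linalg.eigvalsh(Q).min()      # strong monotonicity modulus

grad = lambda x: Q @ x
x, z = rng.standard_normal(4), rng.standard_normal(4)
lhs = (grad(x) - grad(z)) @ (x - z)
rhs = gamma * np.sum((x - z) ** 2)
assert lhs >= rhs - 1e-9
```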

Contributions
Building on the above, this paper is concerned with a stochastic dynamical system resulting from Nesterov's well-known "dual-averaging" mirror descent algorithm [13], perturbed by noise and/or random disturbances. Heuristically, this algorithm aggregates descent steps in the problem's (unconstrained) dual space, and then "mirrors" the result back to the problem's feasible region to obtain a candidate solution at each iteration. This "mirror step" is performed as in the classical setting of [3,4], but the dual aggregate is further post-multiplied by a variable parameter (thus turning "dual aggregates" into "dual averages"). Thanks to this averaging, the resulting algorithm is particularly suited to problems where only noisy information is available to the optimizer, rendering it particularly useful for machine learning and engineering applications [15], even when the stochastic environment is not stationary [16].2

In more detail, the dynamics under study are formulated as a stochastic differential equation (SDE) driven by a (single-valued) monotone operator and perturbed by an Itô martingale noise process. As in Nesterov's original method [13], the dynamics' controllable parameters are two variable weight sequences that, respectively, pre- and post-multiply the drift of the process: the first acts as a "step-size" of sorts, whereas the second can be seen as an "inverse temperature" parameter (as in simulated annealing). By carefully tuning these parameters, we are then able to establish the following results: First, if the intensity of the noise process decays with time, the dynamics converge to the (deterministic) solution of the underlying VI (cf. Sect. 4.2). Second, in the spirit of the ergodic convergence analysis of [13], we establish that this convergence can be achieved at an O(1/√t) rate on average (Sect. 4.3).3 Finally, in Sect. 4.4, we establish a large deviations principle showing that, as far as ergodic convergence is concerned, the above convergence rate holds with (exponentially) high probability, not only in the mean.
Conceptually, our work has close ties to the literature on dynamical systems that arise in the solution of VIs, see, e.g., [2,4,[19][20][21][22], and references therein. More specifically, a preliminary version of the dynamics considered in this paper was recently studied in the context of convex programming and gradient-like flows in [23,24]. The ergodic part of our analysis here extends the results of [24] to saddle-point problems and monotone variational inequalities, while the use of two variable weight sequences allows us to obtain almost sure convergence results without needing to rely on a parallel-sampling mechanism for variance reduction as in [23].

Stochastic Mirror Descent Dynamics
Mirror descent is an iterative optimization algorithm combining first-order oracle steps with a "mirror step" generated by a projection-type mapping. For the origins of the method, see [3]. The key ingredient defining this mirror step is a generalization of the Euclidean distance known as a "distance-generating" function: a continuous function h : X → R is called a distance-generating function with convexity modulus α > 0 if

h(λx + (1 − λ)x′) ≤ λ h(x) + (1 − λ) h(x′) − (α/2) λ(1 − λ) ‖x′ − x‖²

for all x, x′ ∈ X and all λ ∈ [0, 1].
Given a distance-generating function h on X, its convex conjugate is given by

h*(y) := max_{x∈X} {⟨y, x⟩ − h(x)}, y ∈ Y,

and the induced mirror map is defined as

Q(y) := argmax_{x∈X} {⟨y, x⟩ − h(x)}.

Thanks to the strong convexity of h, Q(y) is well-defined and single-valued for all y ∈ Y. In particular, as illustrated in the examples below, it plays a role similar to that of a projection mapping. For future reference, some basic properties of mirror maps are collected below:

Proposition 2.1 Let h be a distance-generating function on X. Then, the induced mirror map Q : Y → X satisfies the following properties: (a) x = Q(y) if and only if y ∈ ∂h(x); (b) h* is differentiable on Y and ∇h* = Q; (c) Q is (1/α)-Lipschitz continuous.

The properties reported above are fairly standard in convex analysis; for a proof, see, e.g., [12, Theorem 12.60(b)]. Of particular importance is the identity ∇h* = Q, which provides a quick way of calculating Q in "prox-friendly" geometries (such as the examples discussed above). Now, as mentioned above, mirror descent exploits the flexibility provided by a (not necessarily Euclidean) mirror map by using it to generate first-order steps along v. For concreteness, we will focus on the so-called "dual averaging" variant of mirror descent [13], defined here via the recursion

x_t = Q(η_t y_t), y_{t+1} = y_t − λ_t v(x_t), (2)

where: (1) t = 0, 1, . . . denotes the stage of the process.
(2) y_t is an auxiliary dual variable, aggregating first-order steps along v.4
(3) λ_t is a variable step-size parameter, pre-multiplying the input at each stage.
(4) η_t is a variable weight parameter, post-multiplying the dual aggregate y_t.5 Passing to continuous time, we obtain the mirror descent dynamics

ẏ(t) = −λ(t) v(x(t)), x(t) = Q(η(t) y(t)), (MD)

with η(t) and λ(t) serving the same role as before (but now defined over all t ≥ 0). In particular, our standing assumption for the parameters λ and η of (MD) will be that

(H2) η(t) and λ(t) are positive, C¹-smooth, and nonincreasing.
Heuristically, the assumptions above guarantee that the dual process y(t) does not grow too large too fast, so blow-ups in finite time are not possible. Together with the basic convergence properties of the dynamics (MD), this is discussed in more detail in Sect. 3 below.
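To make the dual-averaging template concrete, the sketch below (our own illustration; the entropic regularizer, the toy operator v(x) = x − p*, and the step-size schedule are all assumptions made for the example) runs the recursion x_t = Q(η_t y_t), y_{t+1} = y_t − λ_t v(x_t) on the simplex, where the mirror map induced by the entropic distance-generating function h(x) = Σᵢ xᵢ log xᵢ is the softmax map.

```python
import numpy as np

# Dual averaging with the entropic mirror map Q(y) = softmax(y) on the
# simplex; the VI operator v(x) = x - p_star has unique solution p_star.
def softmax(y):
    z = np.exp(y - y.max())              # stabilized exponentiation
    return z / z.sum()

p_star = np.array([0.5, 0.3, 0.2])       # assumed target point
v = lambda x: x - p_star

eta = 1.0                                # constant "inverse temperature"
y = np.zeros(3)
lam_sum, x_avg = 0.0, np.zeros(3)
for t in range(20000):
    lam = 1.0 / np.sqrt(t + 1)           # decreasing step-size lam_t
    x = softmax(eta * y)                 # mirror step x_t = Q(eta * y_t)
    y -= lam * v(x)                      # dual aggregation step
    x_avg += lam * x                     # lam-weighted ergodic average
    lam_sum += lam
x_avg /= lam_sum                         # averaged iterate
```

The λ-weighted average is the discrete analogue of the averaged trajectory studied in Sect. 3; it approaches the solution p* as the horizon grows.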
The primary case of interest in our paper is when the oracle information for v(x) in (MD) is subject to noise, measurement errors, and/or other stochastic disturbances. To account for such perturbations, we will instead focus on the stochastic mirror descent dynamics

dY(t) = −λ(t) v(X(t)) dt + dM(t), X(t) = Q(η(t) Y(t)), (SMD)

where M(t) is a continuous martingale with respect to some underlying stochastic basis (Ω, F, (F_t)_{t≥0}, P).6 In more detail, we assume for concreteness that the stochastic disturbance term M(t) is an Itô process of the form

M(t) = ∫₀ᵗ σ(X(s), s) · dW(s),

where W(t) is a d-dimensional Wiener process adapted to F_t, and σ(x, t) is an n × d matrix capturing the volatility of the noise process. Heuristically, the volatility matrix of M(t) captures the intensity of the noise process and the possible correlations between its components. In terms of regularity, we will be assuming throughout that σ(x, t) is measurable in t, as well as bounded and Lipschitz continuous in x. Formally, we posit that

(H3) there exists a constant ℓ > 0 such that ‖σ(x, t)‖_F ≤ ℓ and ‖σ(x′, t) − σ(x, t)‖_F ≤ ℓ ‖x′ − x‖ for all x, x′ ∈ X and all t ≥ 0,

where ‖·‖_F denotes the Frobenius (matrix) norm of σ. In particular, (H3) implies that there exists a constant σ* ≥ 0 such that sup_{x,t} ‖σ(x, t)‖_F ≤ σ*. In what follows, it will be convenient to measure the intensity of the noise affecting (SMD) via σ*; of course, when σ* = 0, we recover the noiseless, deterministic dynamics (MD).
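A simple way to get intuition for (SMD) is to simulate it with an Euler-Maruyama scheme. The sketch below is our own illustration (the toy operator, schedules, horizon, and vanishing noise intensity are all assumptions), reusing the entropic mirror map on the simplex; it tracks the λ-weighted ergodic average S(t)⁻¹ ∫ λ(s) X(s) ds.

```python
import numpy as np

# Euler-Maruyama discretization of (SMD):
#   dY = -lam(t) v(X) dt + sigma(t) dW,   X(t) = Q(eta * Y(t)).
def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

rng = np.random.default_rng(42)
p_star = np.array([0.5, 0.3, 0.2])       # VI solution of the toy problem
v = lambda x: x - p_star

dt, eta, Y = 0.01, 1.0, np.zeros(3)
lam = lambda t: 1.0 / np.sqrt(1.0 + t)   # step-size weight
sigma = lambda t: 0.2 / (1.0 + t)        # vanishing noise intensity
S, X_avg, t = 0.0, np.zeros(3), 0.0
for _ in range(20000):                   # horizon T = 200
    X = softmax(eta * Y)
    dW = np.sqrt(dt) * rng.standard_normal(3)
    Y += -lam(t) * v(X) * dt + sigma(t) * dW
    X_avg += lam(t) * X * dt             # integrand of the ergodic average
    S += lam(t) * dt                     # S(t) = int lam(s) ds
    t += dt
X_avg /= S
```

With vanishing noise, the ergodic average lands close to the solution, in line with the convergence results of Sect. 4.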

Deterministic Analysis
To establish a reference standard, we first focus on the deterministic regime of (MD), i.e., when M(t) ≡ 0 in (SMD).

Global Existence
We begin with a basic well-posedness result for (MD).

Proposition 3.1 Under Hypotheses (H1) and (H2), the dynamical system (MD) admits a unique global solution from every initial condition (s, y) ∈ R+ × Y.

Proof Writing the dynamics (MD) in the form ẏ = A(t, y) with A(t, y) := −λ(t) v(Q(η(t) y)), we see that A(t, y) is jointly continuous in t and y. Moreover, by (H2), λ(t) has bounded first derivative and η(t) is nonincreasing, so both λ(t) and η(t) are Lipschitz continuous. Finally, by (H1), v is L-Lipschitz continuous, implying in turn that

‖A(t, y′) − A(t, y)‖ ≤ λ(t) η(t) (L/α) ‖y′ − y‖,

where α is the strong convexity constant of h, and we used Proposition 2.1 to estimate the Lipschitz constant of Q. This shows that A(t, y) is Lipschitz in y for all t, so existence and uniqueness of local solutions follows from the Picard-Lindelöf theorem. Hypothesis (H2) further guarantees that the Lipschitz constant of A(t, ·) can be chosen uniformly in t, so these solutions can be extended for all t ≥ 0.
Based on the above, we may define a nonautonomous semiflow Y : {(t, s) : t ≥ s ≥ 0} × Y → Y by letting Y(t, s, y) denote the value at time t of the unique solution of (MD) starting at y at time s. Since the dynamics will usually be started from an initial condition (0, y) ∈ R+ × Y, we will simplify the notation by writing φ(t, y) = Y(t, 0, y) for all (t, y) ∈ R+ × Y. The resulting trajectory in the primal space is denoted by ξ(t, y) = Q(η(t)φ(t, y)). Note that, if λ(t) and η(t) are constant functions, then the mapping φ(t, y) is the (autonomous) semiflow of the dynamics (MD).

Convergence Properties and Performance
Now, to analyze the convergence of (MD), we will consider two "gap functions" quantifying the distance between the primal trajectory and the solution set of (MVI):
- In the general case, we will focus on the dual gap function [25]:

g(x) := max_{p∈X} ⟨v(p), x − p⟩.

By (H1) and the compactness of X, it follows that g(x) is continuous, nonnegative, and convex; moreover, we have g(x) = 0 if and only if x is a solution of VI(X, v) [7, Proposition 3.1].
- For the saddle-point problem of Ex. 2.2, we instead look at the Nikaido-Isoda gap function [26]:

G(p1, p2) := max_{x2∈X2} U(p1, x2) − min_{x1∈X1} U(x1, p2), (3)

where the operator involved in the definition of the dual gap function is given by the saddle-point operator (1). However, it is still true that G(p1, p2) = 0 if and only if the pair (p1, p2) is a saddle point. Since both gap functions vanish only at solutions of (MVI), we will prove trajectory convergence by monitoring the decrease of the relevant gap over time. This is achieved by introducing the so-called Fenchel coupling [14], an auxiliary energy function defined as

F(x, y) := h(x) + h*(y) − ⟨y, x⟩ for all x ∈ X, y ∈ Y.

Some key properties of F are summarized in the following proposition:

Proposition 3.2 Let h be a distance-generating function on X. Then:
(a) F(x, y) ≥ (α/2) ‖Q(y) − x‖² for all x ∈ X and all y ∈ Y;
(b) as a function of y, F(x, y) is convex, differentiable, and its gradient is given by ∇_y F(x, y) = Q(y) − x;
(c) for all x ∈ X and all y, y′ ∈ Y, we have F(x, y′) ≤ F(x, y) + ⟨y′ − y, Q(y) − x⟩ + (1/(2α)) ‖y′ − y‖*².

In the sequel, if there is no danger of confusion, we will use the more concise notation x(t) = ξ(t, y) and y(t) = φ(t, y) for the unique solution to (MD) with initial condition (0, y) ∈ R+ × Y. Consider the averaged trajectory

x̄(t) := S(t)⁻¹ ∫₀ᵗ λ(s) x(s) ds, (4)

where S(t) := ∫₀ᵗ λ(s) ds. We then have the following convergence guarantee:

Proposition 3.3 Under Hypotheses (H1) and (H2), the dual gap of the averaged trajectory x̄(t) constructed in (4) satisfies the bound (5). In particular, if (MVI) is associated with a convex-concave saddle-point problem as in Example 2.2, the Nikaido-Isoda gap of x̄(t) satisfies the corresponding bound (6). In both cases, whenever lim_{t→∞} η(t)S(t) = ∞, x̄(t) converges to the solution set of VI(X, v).
Proof Fix a reference point p ∈ X and let H_p(t) := η(t)⁻¹ F(p, η(t) y(t)). Then, with Proposition 3.2, the fundamental theorem of calculus yields an integral expression for H_p(t) and, after rearranging, we obtain (7). Now, let x_c := argmin{h(x) : x ∈ X} denote the "prox-center" of X. Since η(0) > 0 and y(0) = 0 by assumption, we readily get (8). From the monotonicity of v, we further deduce (9). Thus, substituting (8) in (7), maximizing over p ∈ X, and plugging the result into (9) gives (5). Suppose now that (MVI) is associated with a convex-concave saddle-point problem, as in Ex. 2.2. In this case, we can replicate the above analysis for each component of the problem. Using the fact that U is convex-concave, this leads to value-based bounds for each component; summing these inequalities, dividing by S(t), and using Jensen's inequality gives the averaged estimate. The bound (6) then follows by taking the supremum over p1 and p2 and using the definition of the Nikaido-Isoda gap function.
The gap-based analysis of Proposition 3.3 can be refined further in the case of strongly monotone VIs.

Proposition 3.4 Let x* denote the (necessarily unique) solution of a γ-strongly monotone VI(X, v). Then, with the same assumptions as in Proposition 3.3, the bound (10) holds; in particular, x̄(t) converges to x* whenever lim_{t→∞} η(t)S(t) = ∞.
Proof By Jensen's inequality, the strong monotonicity of v, and the assumption that x* solves VI(X, v), we have

γ ‖x̄(t) − x*‖² ≤ γ S(t)⁻¹ ∫₀ᵗ λ(s) ‖x(s) − x*‖² ds ≤ S(t)⁻¹ ∫₀ᵗ λ(s) ⟨v(x(s)), x(s) − x*⟩ ds,

where the last integral is bounded as in the proof of Proposition 3.3. The bound (10) is then obtained by dividing both sides by γ.
The two results above are in the spirit of classical ergodic convergence results for monotone VIs [13,27,28]. In particular, taking η(t) = √(L/(2α)) and λ(t) = 1/(2√t) gives the upper bound g(x̄(t)) ≤ D(h; X) √(L/(αt)), which is of the same order as the O(1/√t) guarantees obtained in the references above. However, the bound (9) does not have a term which is antagonistic to η(t) or λ(t), so, if (MD) is run with constant λ and η, we get an O(1/t) bound for g(x̄(t)) (and/or ‖x̄(t) − x*‖ in the case of strongly monotone VIs).7 This suggests an important gap between continuous and discrete time; for a similar phenomenon in the context of online convex optimization, see the regret minimization analysis of [29].
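As a concrete instance of the gap functions monitored in this analysis, consider a bilinear zero-sum game on simplices (our own example, using rock-paper-scissors payoffs). The Nikaido-Isoda gap can then be computed exactly, because the inner maximum and minimum over a simplex are attained at vertices.

```python
import numpy as np

# Bilinear zero-sum game U(x1, x2) = x1 @ A @ x2 on simplices, with the
# minimizer controlling x1 and the maximizer controlling x2.
# Nikaido-Isoda gap: G(x1, x2) = max_{p2} U(x1, p2) - min_{p1} U(p1, x2),
# computed exactly via vertex enumeration (max/min of linear forms).
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)

def ni_gap(x1, x2):
    return (A.T @ x1).max() - (A @ x2).min()

u = np.ones(3) / 3                       # uniform play: the saddle point
gap_at_eq = ni_gap(u, u)                 # vanishes at equilibrium
x_off = np.array([0.6, 0.3, 0.1])        # a non-equilibrium profile
gap_off = ni_gap(x_off, x_off)           # strictly positive away from it
```

The gap vanishes exactly at the uniform equilibrium and is strictly positive elsewhere, which is what makes it a usable convergence monitor.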
We close this section with a (nonergodic) trajectory convergence result for strictly monotone problems. For any path X(·) : R+ → X, call

L{X(·)} := ⋂_{t≥0} cl{X(s) : s ≥ t}

the limit set of X(·), i.e., the set of all accumulation points of X(t) as t → ∞.

Proposition 3.5 Suppose that v is strictly monotone, and let x* denote the unique solution of VI(X, v). If inf_t λ(t) > 0 and inf_t η(t) > 0, then every solution trajectory x(t) of (MD) converges to x*.

Proof Assume, for the sake of contradiction, that x(t) admits an accumulation point x̂ ≠ x*. By strict monotonicity, there exist a neighborhood O of x̂ in X and a constant a > 0 such that ⟨v(x), x − x*⟩ ≥ a for all x ∈ O. Furthermore, since x̂ is an accumulation point of x(t), there exists an increasing sequence (t_k)_{k∈N} such that t_k ↑ ∞ and x(t_k) → x̂ as k → ∞. Thus, relabeling indices if necessary, we may assume without loss of generality that x(t_k) ∈ O for all k ∈ N. Now, for all ε > 0, the displacement ‖x(t_k + s) − x(t_k)‖, s ∈ [0, ε], is bounded by a constant multiple of λ̄ε, where λ̄ := λ(0) denotes the maximum value of λ(t). As this bound does not depend on k, we can choose ε > 0 small enough so that x(t_k + s) ∈ O for all s ∈ [0, ε] and all k ∈ N. Thus, letting H(t) := η(t)⁻¹ F(x*, η(t)y(t)) and using (7), we obtain a bound on H(t_k) that decreases linearly in k at a rate proportional to aλε, where we have set λ := inf_t λ(t) > 0. Given that inf_t η(t) > 0, the above implies that lim_{k→∞} H(t_k) = −∞, contradicting the fact that F(x*, y) ≥ 0 for all y ∈ Y. This shows that x̂ = x*; since, by compactness, L{x(·)} ≠ ∅, our claim follows.

Global Existence
In this section, we turn to the stochastic system (SMD). As in the noise-free analysis of the previous section, we begin with a well-posedness result, stated for simplicity for deterministic initial conditions. We denote by Y(t, s, y) the unique strong solution of the Itô stochastic differential equation (11) with initial condition (s, y) ∈ R+ × Y. As in the deterministic case, we are mostly interested in the process starting from the initial condition (0, y), in which case we abuse notation by writing Y(t, y) = Y(t, 0, y). The corresponding primal trajectories are generated by applying the mirror map Q to the dual trajectories, so X(t, y) = Q(η(t)Y(t, y)) for all (t, y) ∈ R+ × Y. If there is no danger of confusion, we will consistently suppress the dependence on the initial position y ∈ Y in both random processes. Clearly, if λ(t) and η(t) are constant functions, the solutions of (SMD) are time-autonomous.
We now give a brief overview of the results we obtain in this section. First, in Sect. 4.2, we use the theory of asymptotic pseudo-trajectories (APTs), developed by Benaïm and Hirsch [31], to establish almost sure trajectory convergence of (SMD) to the solution of VI(X , v), provided that v is strictly monotone and the oracle noise in (SMD) is vanishing at a rather slow, logarithmic rate. This strong convergence result relies heavily on the shadowing property of the dual trajectory, and its deterministic counterpart φ(t, y) (see Sect. 4.2). On the other hand, if the driving noise process is persistent, we cannot expect the primal trajectory X (t) to converge-some averaging has to be done in this case. Thus, following a long tradition on ergodic convergence for mirror descent, we investigate in Sect. 4.3 the asymptotics of a weighted time-average of X (t). Finally, we complement our ergodic convergence results with a large deviation principle showing that the ergodic average of X (t) is exponentially concentrated around its mean (Sect. 4.4).

The Small Noise Limit
We begin with the case where the oracle noise in (SMD) satisfies the asymptotic decay condition

(H4) ‖σ(x, t)‖ ≤ β(t) for some nonincreasing function β : R+ → R+ decaying sufficiently fast.

For instance, this condition is trivially satisfied if σ(x, t) vanishes at a logarithmic rate, i.e., if β(t) = o(1/log t). For technical reasons, we will also need the additional "Fenchel reciprocity" condition (H5), which links convergence under the Fenchel coupling to convergence in the ordinary, norm-topology sense. In words, Definition 4.1 states that an asymptotic pseudo-trajectory (APT) of (MD) tracks the solutions of (MD) to arbitrary accuracy over arbitrarily long time windows. Thanks to this property, we are able to establish the following global convergence theorem for (SMD) with vanishing oracle noise:

Theorem 4.1 Assume that v is strictly monotone, and let x* denote the (necessarily unique) solution of VI(X, v). If Hypotheses (H1)-(H5) hold, then the primal process X(t) of (SMD) converges to x* with probability 1.
The proof of Theorem 4.1 requires some auxiliary results, which we provide below. We begin with a strong recurrence result for neighborhoods of the (unique) solution x* of VI(X, v) under (MD):

Lemma 4.1 Fix a neighborhood O of x* in X and let ξ(t, y) = Q(η(t)φ(t, y)). Define the stopping time

t_O(y) := inf{t ≥ 0 : ξ(t, y) ∈ O}.

Then, t_O(y) < ∞ for all y ∈ Y.
Proof Fix the initialization y ∈ Y of (MD), let y(t) := φ(t, y) and x(t) := ξ(t, y) denote the induced solutions of (MD), and let H(t) := F(x*, y(t)). Then, by Proposition 3.2 and the chain rule applied to (MD), H(t) is nonincreasing. Since v is strictly monotone and x* solves VI(X, v), the derivative of H(t) is bounded away from zero for as long as x(t) stays outside O; hence, assuming t_O(y) = ∞ would imply in turn that lim_{t→∞} H(t) = −∞. This contradicts the fact that H(t) ≥ 0, so we conclude that t_O(y) < ∞.

Lemma 4.2 Fix a neighborhood O of x* in X and define the stopping time τ_O(y) := inf{t ≥ 0 : X(t, y) ∈ O}. If Hypotheses (H1)-(H3) hold and v is strictly monotone, then τ_O(y) < ∞ with probability 1.

Proof Note first that M(t) = ∫₀ᵗ σ(X(s), s) · dW(s) is a continuous local martingale. Since v is strictly monotone, the same reasoning as in the proof of Lemma 4.1 yields a decay estimate for H(t) at a rate a ≡ a_O > 0, valid for all t ∈ [0, τ_O(y)). Furthermore, by an argument based on the law of the iterated logarithm and the Dambis-Dubins-Schwarz time-change theorem for martingales (as in the proof of Theorem 4.2), the martingale term grows sublinearly in time. Combining this with the estimate for H(t) above, we get lim_{t→∞} H(t) = −∞ on the event {τ_O(y) = ∞}, for P-almost all ω ∈ Ω₀. This contradicts the fact that H(t) ≥ 0, and our claim follows.
The above result shows that the primal process X(t) hits any neighborhood of x* in finite time (a.s.). Thanks to this important recurrence property, we are finally in a position to prove Theorem 4.1:

Proof of Theorem 4.1 Fix some ε > 0, and let N_ε := {x = Q(y) : F(x*, y) < ε}. Let y ∈ Y be arbitrary. We first claim that there exists a deterministic time T ≡ T(ε) such that F(x*, φ(T, y)) ≤ max{ε, F(x*, y) − ε}. Indeed, consider the hitting time t_ε(y) := inf{t ≥ 0 : ξ(t, y) ∈ N_ε}. By Hypothesis (H5), N_ε contains a neighborhood of x*; hence, by Lemma 4.1, we have t_ε(y) < ∞. Moreover, observe that F(x*, φ(t, y)) is nonincreasing in t by the descent property (12). The strict monotonicity of v and the fact that x* solves (MVI) imply that there exists a positive constant κ ≡ κ_ε > 0 such that ⟨v(x), x − x*⟩ ≥ κ for all x ∈ X \ N_ε. Hence, combining this with (12), we readily see that the claim holds with T = ε/κ whenever T < t_ε(y). Otherwise, if T ≥ t_ε(y), we again use the descent property (12) to get F(x*, φ(T, y)) ≤ F(x*, φ(t_ε(y), y)) ≤ ε.
To proceed, pick δ ≡ δ_ε > 0 such that (13) holds, where diam(X) := max{‖x′ − x‖₂ : x, x′ ∈ X} denotes the Euclidean diameter of X. By Proposition 4.1 of [31], the strong solution Y of (11) (viewed as a stochastic flow) is an APT of the deterministic semiflow φ with probability 1. Hence, we can choose an (a.s.) finite random time θ_ε such that the trajectory Y tracks φ to within δ over windows of length T after θ_ε. Combining this with item (c) of Proposition 3.2, we then get a matching estimate for the Fenchel coupling, where the last inequality follows from the estimate (13). Now, choose a random time T₀ ≥ max{θ_ε(y), t_ε(y)} and T = ε/κ as above. Then, by definition, we have F(x*, Y(T₀, y)) ≤ 2ε with probability 1, and the same holds for F(x*, Y(T₀ + s, y)) for all s ∈ [0, T]. Using this as the basis for an induction argument, we readily get lim sup_{t→∞} F(x*, Y(t, y)) ≤ 2ε with probability 1. Since ε was arbitrary, we obtain F(x*, Y(t, y)) → 0, implying in turn that X(t) → x* (a.s.) by Proposition 3.2.

Ergodic Convergence
We now proceed with an ergodic convergence result, in the spirit of Proposition 3.3.
The results presented in this section are derived under the assumption that (SMD) is started with the initial conditions (s, y) = (0, 0). This is only done to make the presentation clearer; see Remark 4. Before discussing the proof of Theorem 4.2, it is worth noting the interplay between the two variable weight parameters, λ(t) and η(t). In particular, if (SMD) is run with weight sequences of the form λ(t) ∝ 1/tᵃ and η(t) ∝ 1/tᵇ for some a, b > 0, we obtain the rate estimate (15). In the above, the Õ(·) notation signifies "O(·) up to logarithmic factors".8 Up to such factors, (15) is optimized when a + b = 1/2; if these factors are to be considered, any choice with a + b = 1/2 and b > 0 gives the same rate of convergence, indicating that the role of the post-multiplication factor η(t) is crucial to fine-tune the convergence rate of (SMD). We find this observation particularly appealing, as it is reminiscent of Nesterov's remark that "running the discrete-time algorithm (2) with the best step-size strategy λ_t and fixed η […] gives the same (infinite) constant as the corresponding strategy for fixed λ and variable η_t" [13, p. 224].
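The trade-off a + b = 1/2 can be explored numerically. The sketch below is our own construction (the toy operator, noise level, horizon, and the specific choice a = b = 1/4 are assumptions): it runs a discretized (SMD) with λ(t) = (1 + t)⁻ᵃ and η(t) = (1 + t)⁻ᵇ under persistent noise, for which η(t)S(t) ~ t^{1/2} → ∞ and λ(t)η(t) → 0, so ergodic convergence is expected.

```python
import numpy as np

# (SMD) with polynomially decaying weights lam(t) = (1+t)**-a,
# eta(t) = (1+t)**-b, a + b = 1/2, under persistent noise.
def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

rng = np.random.default_rng(7)
p_star = np.array([0.5, 0.3, 0.2])
v = lambda x: x - p_star
a, b = 0.25, 0.25                        # a + b = 1/2, b > 0
sigma, dt = 0.05, 0.01                   # constant (persistent) noise

Y, S, X_avg, t = np.zeros(3), 0.0, np.zeros(3), 0.0
for _ in range(20000):                   # horizon T = 200
    lam, eta = (1 + t) ** -a, (1 + t) ** -b
    X = softmax(eta * Y)                 # "inverse temperature" eta(t)
    Y += -lam * v(X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(3)
    X_avg += lam * X * dt
    S += lam * dt
    t += dt
X_avg /= S                               # ergodic average of X(t)
```

Even though the noise never vanishes, the ergodic average still settles near the solution, illustrating the role of the decaying η(t) in damping the accumulated noise.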
The proof of Theorem 4.2 relies crucially on the following lemma, which provides an explicit estimate for the decay rate of the employed gap functions.

Lemma 4.3 If (SMD) is initialized at (0, 0) and Hypotheses (H1)-(H3) hold, then the bound (16) holds, where I(t) := sup_{p∈X} I_p(t) and I_p(t) is the martingale term defined in (17). If (MVI) is associated with a convex-concave saddle-point problem as in Example 2.2, the corresponding bound holds for the Nikaido-Isoda gap, where we have set D_sp := D(h1; X1) + D(h2; X2) and 1/α_sp := 1/α1 + 1/α2.

Proof Let R_p(t) and I_p(t) be defined as in (19) and (17), respectively. Then, rearranging and bounding the second term of (18) as in the proof of Proposition 3.3, we obtain (20). The bound (16) then follows by noting that g(X̄(t)) = max_{p∈X} ⟨v(p), X̄(t) − p⟩. Now, assume that (MVI) is associated with a convex-concave saddle-point problem as in Ex. 2.2. As in the proof of Proposition 3.3, we first replicate the analysis above for each component of the problem, and then sum the two components to get an overall bound for the Nikaido-Isoda gap function G. Specifically, applying (20) to (1), we readily get the component bounds (21) for i ∈ {1, 2}. Moreover, Jensen's inequality yields the corresponding value-based estimate, with the last inequality following from (21). Our claim then follows by maximizing over (p1, p2) and recalling the definition (3) of the Nikaido-Isoda gap function.
Clearly, the crucial unknown in the bound (16) is the stochastic term I(t). To obtain convergence of X̄(t) to the solution set of VI(X, v), the term I(t) must grow slower than S(t). As we show now, this is indeed the case:

Proof of Theorem 4.2 By Lemma 4.3 and Remark 4.1, it suffices to show that the term I(t) grows as O(L(t) log log L(t)) with probability 1. To do so, let κ_p := [I_p, I_p] denote the quadratic variation of I_p.9 Then, the rules of stochastic calculus yield an explicit expression for κ_p(t), in which the integrand is controlled by the noise intensity and diam(X)², where diam(X) := max{‖x′ − x‖₂ : x, x′ ∈ X} denotes the Euclidean diameter of X. Hence, for all t ≥ 0, we get the quadratic variation bound (22). Now, let κ_p(∞) := lim_{t→∞} κ_p(t) ∈ [0, ∞] and set

τ_p(s) := inf{t ≥ 0 : κ_p(t) > s} if s < κ_p(∞), and τ_p(s) := ∞ otherwise.
The process τ_p(s) is finite, nonnegative, nondecreasing, and right-continuous on [0, κ_p(∞)); moreover, it is easy to check that κ_p(τ_p(s)) = s ∧ κ_p(∞) and τ_p(κ_p(t)) = t [32, Problem 3.4.5]. Therefore, by the Dambis-Dubins-Schwarz time-change theorem for martingales [32, Theorem 3.4.6 and Problem 3.4.7], there exists a standard, one-dimensional Wiener process (B_p(t))_{t≥0} adapted to a modified filtration F̃_s = F_{τ_p(s)} (possibly defined on an extended probability space) such that B_p(κ_p(t)) = I_p(t) for all t ≥ 0 (except possibly on a P-null set). Hence, for all t > 0, we can factor I_p(t) accordingly. By the law of the iterated logarithm [32], the first factor above is bounded almost surely; as for the second, (22) controls its growth. Thus, combining all of the above, we get the asserted growth estimate for I(t), so our claim follows from (16).
To complete our proof, note first that the condition lim_{t→∞} η(t)S(t) = ∞ implies that lim_{t→∞} S(t) = ∞ (given that η(t) is nonincreasing). Thus, by de l'Hôpital's rule and the assumption lim_{t→∞} λ(t)η(t) = 0, we also get lim_{t→∞} S(t)⁻¹ ∫₀ᵗ λ²(s)η(s) ds = 0. Finally, for the last term of (14), consider the following two cases:
1. If lim_{t→∞} L(t) < ∞, the term vanishes trivially because S(t) → ∞.
2. Otherwise, if lim_{t→∞} L(t) = ∞, de l'Hôpital's rule readily yields lim_{t→∞} L(t)/S(t) = 0 by the boundedness of λ(t), and another application of de l'Hôpital's rule gives lim_{t→∞} L(t) log log L(t)/S(t) = 0.
The above shows that, under the stated assumptions, the RHS of (14) converges to 0 almost surely, implying in turn that X̄(t) converges to the solution set of VI(X, v) with probability 1.

Large Deviations
In this section, we study the concentration properties of (SMD) in terms of the dual gap function. As in the previous section, we will assume that (SMD) is issued from the initial condition (s, y) = (0, 0). First, recall that for every p ∈ X we have an upper bound on S(t) g(X̄(t)) in terms of R_p(t) and I_p(t), defined as in (19) and (17), respectively. Since I_p(t) is a continuous martingale starting at 0, we have E[I_p(t)] = 0, implying in turn that E[g(X̄(t))] ≤ K(t)/S(t), where K(t) is defined in (23). Markov's inequality therefore implies the tail estimate (24). The bound (24) provides a first estimate of the probability of observing a large gap from the solution of (MVI), but because it relies only on Markov's inequality, it is rather crude. To refine it, we provide below a "large deviations" bound showing that the ergodic gap process g(X̄(t)) is exponentially concentrated around its mean value:

Theorem 4.3 Suppose (H1)-(H3) hold, and that (SMD) is started from the initial condition (s, y) = (0, 0). Then, for all δ > 0 and all t > 0, the concentration bound (25) holds, with κ > 0 a positive constant depending only on X and ‖·‖.
The concentration bound (25) can also be formulated as follows: Corollary 4.2 With notation and assumptions as in Theorem 4.3, we have with probability at least 1 − δ. In particular, if (SMD) is run with parameters λ(t) = (1 + t) −a and η(t) = (1 + t) −b for some a, b ∈ [0, 1], we have with arbitrarily high probability.
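The concentration phenomenon can be illustrated empirically. The experiment below is our own construction (toy operator, schedules, noise model, and run count are all assumptions): it repeats the discretized (SMD) over independent noise realizations and checks that the ergodic errors cluster tightly, with no heavy-tailed outliers, as the exponential bound (25) suggests.

```python
import numpy as np

# Repeated noisy runs of a discretized (SMD) on the simplex; the spread
# of the ergodic errors across seeds illustrates exponential concentration.
def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

p_star = np.array([0.5, 0.3, 0.2])
v = lambda x: x - p_star
dt, n_steps = 0.02, 10000                # horizon T = 200

def ergodic_error(seed):
    rng = np.random.default_rng(seed)
    Y, S, X_avg, t = np.zeros(3), 0.0, np.zeros(3), 0.0
    for _ in range(n_steps):
        lam = 1.0 / np.sqrt(1.0 + t)
        sig = 0.2 / (1.0 + t)            # vanishing noise, as in Sect. 4.2
        X = softmax(Y)
        Y += -lam * v(X) * dt + sig * np.sqrt(dt) * rng.standard_normal(3)
        X_avg += lam * X * dt
        S += lam * dt
        t += dt
    return np.linalg.norm(X_avg / S - p_star)

errors = [ergodic_error(seed) for seed in range(30)]
```

Across all seeds, the ergodic error stays small and tightly clustered, consistent with concentration of g(X̄(t)) around its mean.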
To prove Theorem 4.3, we first define the auxiliary processes needed below. We then have:

Proof The proof follows the same lines as Lemma 4.3. Specifically, given a reference point p ∈ X, define the process H̃_p(t) := η(t)⁻¹ F(p, η(t)Z(t)). Then, by the weak Itô formula (33) in Sect. 5, we obtain the differential of H̃_p(t); rearranging and integrating yields the asserted bound.
We are now ready to establish our large deviations principle for (SMD):

Proof of Theorem 4.3
For p ∈ X and t > 0 fixed, we have an upper bound on S(t) g(X̄(t)), where we used (27) to obtain the last inequality. To proceed, let Δ(t) denote the martingale part of this bound. The process Δ(t) is a continuous martingale starting at 0 which is almost surely bounded in L², thus providing an upper bound for R_p(t) which is independent of the reference point p ∈ X. Indeed, recalling the definition (23) of K(t), we see that S(t) g(X̄(t)) ≤ K(t) + Δ(t). In turn, this implies that

P(g(X̄(t)) ≥ ε) ≤ P(Δ(t) ≥ ε S(t) − K(t)) for all ε, t > 0.
To prove the theorem, we are left to bound the right-hand side of the above expression. To that end, letting ρ(t) := [Δ(t), Δ(t)] denote the quadratic variation of Δ(t), the Cauchy-Schwarz inequality readily gives

Conclusions
This paper examined a continuous-time dynamical system for solving monotone variational inequality problems with random inputs. The key element of our analysis is the identification of an energy-type function, which allows us to prove ergodic convergence of the generated trajectories in the deterministic as well as in the stochastic case. Future research should extend the present work in the following directions. First, it is not clear yet how the continuous-time method will help us in the derivation of a consistent numerical scheme. A naive Euler discretization might lead to a loss in speed of convergence (see [2]). Second, it is of great interest to relax the monotonicity assumption we made on the involved operator. We are currently investigating these extensions. Third, it is of interest to consider different noise models as well. In particular, it would be interesting to know how the results derived in this paper change when the stochastic perturbation comes from a jump Markov process or, more generally, a Lévy process. This extension would likely need new techniques, and we regard this as an important contribution for future work.