Error estimates of the backward Euler-Maruyama method for multi-valued stochastic differential equations

In this paper, we derive error estimates of the backward Euler-Maruyama method applied to multi-valued stochastic differential equations. An important example of such an equation is a stochastic gradient flow whose associated potential is not continuously differentiable, but assumed to be convex. We show that the backward Euler-Maruyama method is well-defined and convergent of order at least $1/4$ with respect to the root-mean-square norm. Our error analysis relies on techniques for deterministic problems developed in [Nochetto, Savar\'e, and Verdi, Comm.\ Pure Appl.\ Math., 2000]. We verify that our setting applies to an overdamped Langevin equation with a discontinuous gradient and to a spatially semi-discrete approximation of the stochastic $p$-Laplace equation.


Introduction
In this paper, we investigate the numerical approximation of multi-valued stochastic differential equations (MSDE). An important example of such equations is provided by stochastic gradient flows with a convex potential. More precisely, let T ∈ (0, ∞) and (Ω, F, (F_t)_{t∈[0,T]}, P) be a filtered probability space satisfying the usual conditions. By W : [0, T] × Ω → R^m, m ∈ N, we denote a standard (F_t)_{t∈[0,T]}-adapted Wiener process. For instance, let us consider the numerical treatment of nonlinear, overdamped Langevin-type equations of the form
dX(t) = −∇Φ(X(t)) dt + g_0 dW(t), t ∈ (0, T], X(0) = X_0, (1.1)
where X_0 ∈ L^p(Ω, F_0, P; R^d), p ∈ [2, ∞), g_0 ∈ R^{d,m}, and Φ : R^d → R are given. These equations have many important applications, for example, in Bayesian statistics and molecular dynamics. We refer to [10, 22, 23, 44, 49] and the references therein.
We recall that, if the gradient ∇Φ is of superlinear growth, then the classical forward Euler-Maruyama method is known to be divergent in the strong and weak sense, see [18]. This problem can be circumvented by using modified versions of the explicit Euler-Maruyama method based on techniques such as taming, truncating, stopping, projecting, or adaptive strategies, cf. [4,6,17,19,29,48].
In this paper, we take an alternative approach by considering the backward Euler-Maruyama method. Our main motivation for considering this method lies in its good stability properties, which allow its application to stiff problems arising, for instance, from the spatial semi-discretization of stochastic partial differential equations. Implicit methods have also been studied extensively in the context of stochastic differential equations with superlinearly growing coefficients. For example, see [1,15,16,30,31].
The error analysis in the above-mentioned papers on explicit and implicit methods typically requires a certain degree of smoothness of ∇Φ, such as local Lipschitz continuity. The purpose of this paper is to derive error estimates of the backward Euler-Maruyama method for equations of the form (1.1), where the associated potential Φ : R^d → R is not necessarily continuously differentiable, but assumed to be convex.
An example of a non-smooth potential is found by setting d = m = 1 and Φ(x) = |x|^p, x ∈ R, for p ∈ [1, 2). Evidently, the gradient of Φ is not locally Lipschitz continuous at 0 ∈ R for p ∈ (1, 2). Moreover, if p = 1, then the gradient ∇Φ has a jump discontinuity of the form
∇Φ(x) = −1 for x < 0, ∇Φ(0) = c, ∇Φ(x) = 1 for x > 0. (1.4)
Here, the value c ∈ R at x = 0 is not canonically determined. We have to solve a nonlinear equation of the form x + k∇Φ(x) = y in each step of the backward Euler method (1.3). However, if y ∈ (−k, k), then the sole candidate for a solution is x = 0, since otherwise |x + k∇Φ(x)| ≥ k. But x = 0 is only a solution if kc = y. Therefore, the mapping R ∋ x ↦ x + k∇Φ(x) ∈ R is not surjective for any single-valued choice of c. This problem can be bypassed by considering the multi-valued subdifferential ∂Φ : R^d → 2^{R^d} of a convex potential Φ : R^d → R, which is given by
∂Φ(x) = {v ∈ R^d : Φ(y) ≥ Φ(x) + ⟨v, y − x⟩ for all y ∈ R^d}.
Recall that ∂Φ(x) = {∇Φ(x)} if the gradient exists at x ∈ R^d in the classical sense. See [45, Section 23] for further details.
In the above example, one easily verifies that
∂Φ(x) = {−1} for x < 0, ∂Φ(0) = [−1, 1], ∂Φ(x) = {1} for x > 0.
This allows us to solve the nonlinear inclusion, where we want to find x ∈ R with x + k∂Φ(x) ∋ y for any y ∈ R.
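In fact, for this Φ the resulting resolvent can be written down explicitly: for every step size k > 0 and every y ∈ R, the unique x ∈ R with x + k∂Φ(x) ∋ y is given by the soft-thresholding formula
x = y − k if y > k, x = 0 if |y| ≤ k, x = y + k if y < −k.
In particular, the multi-valued formulation restores the surjectivity that was lost in (1.4).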
For this reason, we study the more general problem of the numerical approximation of multi-valued stochastic differential equations of the form
dX(t) + f(X(t)) dt ∋ b(X(t)) dt + g(X(t)) dW(t), t ∈ (0, T], X(0) = X_0. (1.5)
Here, we assume that the mappings b : R^d → R^d and g : R^d → R^{d,m} are globally Lipschitz continuous. Moreover, the multi-valued drift coefficient function f : R^d → 2^{R^d} is assumed to be a maximal monotone operator, cf. Definition 2.1 below. See also Section 4 for a complete list of all imposed assumptions on the MSDE (1.5). Let us emphasize that the subdifferential of a proper, lower semi-continuous and convex potential is an important example of a possibly multi-valued and maximal monotone mapping f, cf. [45, Corollary 31.5.2].
The backward Euler-Maruyama method for the approximation of the MSDE (1.5) on the equidistant partition π = {0 = t_0 < t_1 < · · · < t_N = T} with step size k = T/N is then given by the recursion
X_n ∈ X_{n−1} − kf(X_n) + kb(X_n) + g(X_{n−1})ΔW_n, n ∈ {1, . . . , N}, (1.6)
where ΔW_n = W(t_n) − W(t_{n−1}). We discuss the well-posedness of this method (1.6) under our assumptions on f, b, and g in Section 5. In particular, it will turn out that both problems, (1.5) and (1.6), admit single-valued solutions (X(t))_{t∈[0,T]} and (X_n)_{n=0}^N, respectively. The main result of this paper, Theorem 6.4, then states that the backward Euler-Maruyama method is convergent of order at least 1/4 with respect to the norm in L²(Ω; R^d). For the error analysis we rely on techniques for deterministic problems developed in [37]. An important ingredient is the additional condition on f that there exists γ ∈ (0, ∞) such that f satisfies the three-point inequality stated in Assumption 6.3 below. This assumption is easily verified for a subdifferential of a convex potential, cf. Lemma 3.2. As already noted in [37] for deterministic problems, this inequality allows us to avoid Gronwall-type arguments in the error analysis for terms involving the multi-valued mapping f.
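Let us already note that, for step sizes satisfying kL_b < 1, where L_b denotes the Lipschitz constant of b, each step of (1.6) amounts to a single evaluation of a resolvent,
X_n = (id + kf − kb)^{−1}(X_{n−1} + g(X_{n−1})ΔW_n),
which is a well-defined, single-valued mapping by the results of Section 5 below; for the introductory example with Φ(x) = |x| and b ≡ 0 it reduces to the soft-thresholding formula above.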
Before we give a more detailed outline of the content of this paper let us mention that multi-valued stochastic differential equations have been studied in the literature before. The existence of a uniquely determined solution to the MSDE (1.5) has been investigated, e.g., in [7,21,41]. We also refer to the more recent monograph [40] and the references therein. In [14,51] related results have been derived for multi-valued stochastic evolution equations in infinite dimension. The numerical analysis for MSDEs has also been considered in [3,26,42,53,55]. However, these papers differ from the present paper in terms of the considered numerical methods, the imposed conditions, or the obtained order of convergence.
Further, we also mention that several authors have developed explicit numerical methods for SDEs with discontinuous drifts in recent years. For instance, we refer to [9,24,25,33,34,35,36]. While these results often apply to more irregular drift coefficients, which are beyond the framework of maximal monotone operators, the authors have to employ more restrictive conditions such as the global boundedness of the drift, which is not required in our framework.

This paper is organized as follows: In Section 2 we fix some notation and recall important terminology for multi-valued mappings. In Section 3 we demonstrate how to apply the techniques from [37] to the simplified setting of the Langevin equation (1.1). In addition, we also show that if the gradient ∇Φ is more regular, say Hölder continuous with exponent α ∈ (0, 1], then the order of convergence increases to (1 + α)/4. Moreover, it turns out that the error constant does not grow exponentially with the final time T. This is an important insight if the backward Euler method is used within an unadjusted Langevin algorithm [44], which typically requires large time intervals. See Theorem 3.7 and Remark 3.8 below. In Section 4 we turn to the more general multi-valued stochastic differential equation (1.5). We state all assumptions imposed on the appearing drift and diffusion coefficients and collect some properties of the exact solution. In Section 5 we show that the backward Euler-Maruyama method (1.6) is well-posed under the assumptions of Section 4. In Section 6 we prove the already mentioned convergence result with respect to the root-mean-square norm. Finally, in Section 7 we verify that the setting of Section 4 applies to a Langevin equation with the discontinuous gradient (1.4). Further, we also show how to apply our results to the spatial discretization of the stochastic p-Laplace equation, which indicates their usability for the numerical analysis of stochastic partial differential equations. However, a complete analysis of the latter problem will be deferred to future work.

Preliminaries
In this section, we collect some notation and introduce some background material. First, we recall some terminology for set-valued mappings and (maximal) monotone operators. For a more detailed introduction we refer, for instance, to [47, Section 3.3] or [39, Chapter 6].
By R^d, d ∈ N, we denote the Euclidean space with the standard norm |·| and inner product ⟨·, ·⟩. Let M ⊂ R^d be a set. A set-valued mapping f : M → 2^{R^d} maps each x ∈ M to an element f(x) of the power set 2^{R^d}, that is, to a (possibly empty) subset of R^d. Its effective domain is the set D(f) := {x ∈ M : f(x) ≠ ∅}.

Definition 2.1. A set-valued mapping f : M → 2^{R^d} is called monotone if
⟨f_x − f_y, x − y⟩ ≥ 0 for all x, y ∈ D(f), f_x ∈ f(x), f_y ∈ f(y).
Moreover, a set-valued mapping f : M → 2^{R^d} is called maximal monotone if f is monotone and for all x ∈ M and y ∈ R^d satisfying
⟨y − f_z, x − z⟩ ≥ 0 for all z ∈ D(f), f_z ∈ f(z),
it follows that x ∈ D(f) and y ∈ f(x).
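For instance, in the case d = 1 the subdifferential ∂Φ of Φ(x) = |x| from the introduction, with ∂Φ(0) = [−1, 1], is maximal monotone, whereas every single-valued selection with f(0) = {c} for some c ∈ (−1, 1) is monotone but not maximal: the pair x = 0, y ∈ [−1, 1] with y ≠ c satisfies ⟨y − f_z, x − z⟩ ≥ 0 for all z ∈ D(f) and f_z ∈ f(z), yet y ∉ f(0).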
Next, we recall a Burkholder-Davis-Gundy-type inequality. For a proof we refer to [28, Chapter 1, Theorem 7.1]. For its formulation we take note that the Frobenius or Hilbert-Schmidt norm of a matrix g ∈ R^{d,m} is also denoted by |g|.

Lemma 2.2. Let p ∈ [2, ∞) and let g ∈ L^p(Ω; L^p(0, T; R^{d,m})) be stochastically integrable. Then there exists C_p ∈ (0, ∞), only depending on p, such that for every s, t ∈ [0, T] with s < t, the inequality
E[ | ∫_s^t g(u) dW(u) |^p ] ≤ C_p (t − s)^{(p−2)/2} ∫_s^t E[ |g(u)|^p ] du
holds.
Let us also recall a stochastic variant of the Gronwall inequality. A proof that can be modified to this setting can be found in [54]. Compare also with [50].
Lemma 2.3. Let Z, M, and ξ be (F_t)_{t∈[0,T]}-adapted and almost surely continuous stochastic processes such that M is a local (F_t)_{t∈[0,T]}-martingale with M(0) = 0. Moreover, suppose that Z and ξ are nonnegative. In addition, let ϕ : [0, T] → R be integrable and nonnegative. If, for all t ∈ [0, T], we have
Z(t) ≤ ξ(t) + ∫_0^t ϕ(s) Z(s) ds + M(t) P-almost surely,
then E[Z(t)] ≤ exp( ∫_0^t ϕ(s) ds ) E[ sup_{s∈[0,t]} ξ(s) ] for all t ∈ [0, T].

Moreover, we often make use of generic constants. More precisely, by C we denote a finite and positive quantity that may vary from occurrence to occurrence but is always independent of numerical parameters such as the step size k = T/N and the number of steps N ∈ N.

Application to the Langevin equation with a convex potential
In order to illustrate our approach, we first consider a more regular stochastic differential equation with a single-valued (Hölder) continuous drift term. More precisely, we consider the overdamped Langevin equation [23, Section 2.2]
dX(t) = −∇Φ(X(t)) dt + g_0 dW(t), t ∈ [0, T], X(0) = X_0, (3.1)
where X_0 ∈ L²(Ω, F_0, P; R^d), g_0 ∈ R^{d,m}, and W : [0, T] × Ω → R^m is a standard R^m-valued Wiener process. In addition, we impose the following assumption on the potential Φ : R^d → R.

Assumption 3.1. Let Φ : R^d → R be a convex, nonnegative, and continuously differentiable function.
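For instance, for d = m = 1 the potential Φ(x) = |x|^{3/2} satisfies Assumption 3.1: it is nonnegative, convex, and continuously differentiable with ∇Φ(x) = (3/2) sgn(x) |x|^{1/2}, and this gradient is Hölder continuous with exponent α = 1/2 but not locally Lipschitz continuous at the origin. This example is therefore covered by Theorem 3.7 below, which yields the convergence order (1 + α)/4 = 3/8.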
In the following, we denote by f : R^d → R^d the gradient of Φ, that is, f(x) = ∇Φ(x). It is well-known that the convexity of Φ implies the variational inequality
⟨f(x), x − y⟩ ≥ Φ(x) − Φ(y) for all x, y ∈ R^d; (3.2)
see, for example, [45, § 23].
In the following lemma, we collect some properties of f, which are direct consequences of Assumption 3.1. Both inequalities are well-known. The proof of (3.4) is taken from [37].

Lemma 3.2. Let Assumption 3.1 be satisfied. Then f = ∇Φ is monotone, that is,
⟨f(v) − f(w), v − w⟩ ≥ 0 for all v, w ∈ R^d. (3.3)
Moreover, for all v, w, z ∈ R^d it holds
⟨f(v) − f(w), v − z⟩ ≥ −⟨f(w) − f(z), w − z⟩. (3.4)
Proof. The first inequality follows directly from (3.2) since
⟨f(v) − f(w), v − w⟩ = ⟨f(v), v − w⟩ + ⟨f(w), w − v⟩ ≥ (Φ(v) − Φ(w)) + (Φ(w) − Φ(v)) = 0.
For the proof of the second inequality we start by rewriting its left-hand side. For arbitrary v, w, z ∈ R^d we rearrange the terms to obtain
⟨f(v) − f(w), v − z⟩ + ⟨f(w) − f(z), w − z⟩ = σ(v, z) + σ(w, v) + σ(z, w),
where σ(v, w) := ⟨f(v), v − w⟩ − Φ(v) + Φ(w). But (3.2) says that σ(v, w) ≥ 0 for all v, w ∈ R^d, which completes the proof.
It follows from Assumption 3.1 and Lemma 3.2 that the drift f = ∇Φ of the stochastic differential equation (3.1) is continuous and monotone. Therefore, by [43, Thm. 3.1.1] the stochastic differential equation (3.1) has a solution in the strong (probabilistic) sense satisfying P-a.s. for all t ∈ [0, ∞)
X(t) = X_0 − ∫_0^t f(X(s)) ds + g_0 W(t). (3.5)
Moreover, the solution is unique up to P-indistinguishability and it is square-integrable with sup_{t∈[0,T]} E[|X(t)|²] < ∞.

Next, we turn to the numerical approximation of the solution of (3.1). Recall that for a single-valued drift the backward Euler-Maruyama method is given by the recursion
X_n = X_{n−1} − kf(X_n) + g_0 ΔW_n, n ∈ {1, . . . , N}, (3.6)
where ΔW_n = W(t_n) − W(t_{n−1}), t_n = nk, and k = T/N. The next lemma contains some a priori estimates for the backward Euler-Maruyama method (3.6).

Lemma 3.3. Let g_0 ∈ R^{d,m} be given and let Assumption 3.1 be satisfied. For an arbitrary step size k = T/N, N ∈ N, let (X_n)_{n∈{0,...,N}} be a family of (F_{t_n})_{n∈{0,...,N}}-adapted random variables satisfying (3.6).
Proof. First, we recall the identity
2⟨x, x − y⟩ = |x|² − |y|² + |x − y|², x, y ∈ R^d.
Using also (3.6), we then get
|X_n|² − |X_{n−1}|² + |X_n − X_{n−1}|² = 2⟨X_n, X_n − X_{n−1}⟩ = −2k⟨f(X_n), X_n⟩ + 2⟨g_0 ΔW_n, X_n⟩
for every n ∈ {1, . . . , N}. Hence, an application of (3.2) with y = 0 and the nonnegativity of Φ yields
|X_n|² − |X_{n−1}|² + |X_n − X_{n−1}|² ≤ 2kΦ(0) + 2⟨g_0 ΔW_n, X_n⟩
for every n ∈ {1, . . . , N}. From applications of the Cauchy-Schwarz inequality and the weighted Young inequality we then obtain
|X_n|² − |X_{n−1}|² + |X_n − X_{n−1}|² ≤ 2kΦ(0) + 2⟨g_0 ΔW_n, X_n − X_{n−1}⟩ + 2⟨g_0 ΔW_n, X_{n−1}⟩ ≤ 2kΦ(0) + 2|g_0 ΔW_n|² + (1/2)|X_n − X_{n−1}|² + 2⟨g_0 ΔW_n, X_{n−1}⟩,
for every n ∈ {1, . . . , N}. The third term on the right-hand side is absorbed in the third term on the left-hand side. Summation then yields
|X_n|² + (1/2) ∑_{j=1}^n |X_j − X_{j−1}|² ≤ |X_0|² + 2t_n Φ(0) + 2 ∑_{j=1}^n |g_0 ΔW_j|² + 2 ∑_{j=1}^n ⟨g_0 ΔW_j, X_{j−1}⟩.
An inductive argument over n ∈ {1, . . . , N} then yields that X_n is square-integrable due to the assumption X_0 ∈ L²(Ω, F_0, P; R^d). Therefore, after taking expectation the last sum vanishes. Moreover, an application of the Itō isometry then gives
E[|X_n|²] + (1/2) ∑_{j=1}^n E[|X_j − X_{j−1}|²] ≤ E[|X_0|²] + 2t_n Φ(0) + 2t_n |g_0|².
Since this is true for any n ∈ {1, . . . , N} the assertion follows.
As the next theorem shows, Assumption 3.1 is also sufficient to ensure the well-posedness of the backward Euler-Maruyama method. The result follows directly from the fact that f is continuous and monotone due to (3.3). For a proof we refer, for instance, to [4, Sect. 4], [38, Chap. 6.4], and [52, Theorem C.2]. The assertion also follows from the more general result in Theorem 5.3 below.
We now turn to an error estimate with respect to the L 2 (Ω; R d )-norm. Since we do not impose any (local) Lipschitz condition on the drift f , classical approaches based on discrete Gronwall-type inequalities are not applicable. Instead we rely on an error representation formula, which was introduced for deterministic problems in [37].
for all t ∈ (t n−1 , t n ], n ∈ {1, . . . , N }. We are now prepared to state Lemma 3.5. The underlying idea of this lemma was introduced in [37], where it is used to derive a posteriori error estimates for the backward Euler method. In fact, in the absence of noise, only the first term on the right-hand side of (3.12) is non-zero. In [37] this term is used as an a posteriori error estimator, since it is explicitly computable by quantities generated by the numerical method.
Proof. From (3.6) we directly deduce that for every n ∈ {1, . . . , N } Then, one easily verifies for all t ∈ (t n−1 , t n ], n ∈ {1, . . . , N }, that Hence, due to (3.5) the error process E := X − X fulfills To estimate the norm of E 1 (t n ), we first note that E 1 has absolutely continuous sample paths with E 1 (0) = 0. Hence, is fulfilled for almost all t ∈ [0, T ]. Therefore, by integration with respect to t, we get Next, we write and use (3.3) and (3.4) to obtain, for almost every t ∈ (t n−1 , t n ], that Furthermore, the expectation of the second integral on the right-hand side of (3.15) is equal to Therefore, Since ti ti−1 (t i − t) dt = 1 2 k 2 the assertion follows. The next lemma contains an estimate of the difference between the Wiener process W and its piecewise linear interpolant W.
Lemma 3.6. For every g_0 ∈ R^{d,m} and every step size k = T/N, N ∈ N, it holds
E[ |g_0 (W(t) − W̄(t))|² ] ≤ (k/4) |g_0|² for all t ∈ [0, T],
where W̄ denotes the piecewise linear interpolant of W with respect to the partition π.

Proof. For every t ∈ (t_{n−1}, t_n], n ∈ {1, . . . , N}, we write
W(t) − W̄(t) = ((t_n − t)/k) (W(t) − W(t_{n−1})) − ((t − t_{n−1})/k) (W(t_n) − W(t)),
so that
E[ |g_0 (W(t) − W̄(t))|² ] = ((t_n − t)/k)² (t − t_{n−1}) |g_0|² + ((t − t_{n−1})/k)² (t_n − t) |g_0|²,
where we used that the two increments of the Wiener process are independent for every t ∈ (t_{n−1}, t_n], n ∈ {1, . . . , N}, and we also applied Itō's isometry. By symmetry of the two terms it then follows that
E[ |g_0 (W(t) − W̄(t))|² ] = ((t_n − t)(t − t_{n−1})/k) |g_0|² ≤ (k/4) |g_0|²,
and the proof is complete.
The error estimates in Lemma 3.5 and Lemma 3.6 allow us to determine the order of convergence of the backward Euler-Maruyama method without relying on discrete Gronwall-type inequalities. The following theorem imposes the additional assumption that the drift f is Hölder continuous. We include the parameter value α = 0, which simply means that f is continuous and globally bounded. The case of less regular f is treated in Section 6.
Observe that we recover the standard rate 1/2 if α = 1, that is, if the drift f is assumed to be globally Lipschitz continuous. Compare also with the standard literature, for example, [20, Chap. 12]. For processes X : [0, T] × Ω → R^d and exponents α ∈ [0, 1], we define the family of Hölder semi-norms by
[X]_α := sup_{0≤s<t≤T} (t − s)^{−α} ‖X(t) − X(s)‖_{L²(Ω;R^d)}.

Theorem 3.7. Let X_0 ∈ L²(Ω, F_0, P; R^d) as well as g_0 ∈ R^{d,m} be given, let Assumption 3.1 be fulfilled and let f = ∇Φ be Hölder continuous with exponent α ∈ [0, 1]. Then there exists C ∈ (0, ∞) such that for every step size k = T/N, N ∈ N,
max_{n∈{1,...,N}} ‖X(t_n) − X_n‖_{L²(Ω;R^d)} ≤ C k^{(1+α)/4}.

Proof. Since f is assumed to be α-Hölder continuous it follows that
|f(x)| ≤ |f(0)| + |f(x) − f(0)| ≤ C(1 + |x|^α) for all x ∈ R^d.
In particular, f grows at most linearly. Therefore, as stated in [28, Chap. 2], the exact solution satisfies sup_{t∈[0,T]} E[|X(t)|²] < ∞. We will use Lemma 3.5 to prove the error bound. To this end, we first show the auxiliary estimate (3.17). Indeed, we make use of the Hölder continuity of f directly and obtain the required bound, where we also used Hölder's inequality with p = 2/(1 + α) ∈ [1, 2] and 1/p + 1/q = 1 as well as Jensen's inequality. Due to the a priori estimate (3.8), the remaining term is bounded independently of the step size k. Hence, we arrive at (3.17).
Therefore, it remains to estimate the second error term in Lemma 3.5: tn where we inserted the definition of X from (3.10). Moreover, from (3.11) we get Hence, the random variable in the second slot of the inner product on the right-hand side (3.18) is centered and is independent of any F tj−1 -measurable random variable. Thus, we may write To estimate T 1 we first recall the definitions of X and X from (3.10). Then we apply the Cauchy-Schwarz inequality and obtain From the Hölder continuity of f we then deduce that where the last inequality is in fact an equality if α = 1, 1 q = 0 or if α = 0, 1 q = 1. Otherwise the inequality follows from Hölder's inequality with p = 1 α ∈ (1, ∞) and 1 p + 1 q = 1, followed by an application of Jensen's inequality. Furthermore, Lemma 3.6 states that Therefore, together with (3.8) we arrive at the estimate The estimate of T 2 works similarly by additionally making use of the Hölder continuity of the exact solution. To be more precise, we have that Together with the Cauchy-Schwarz inequality and (3.19), we therefore obtain Inserting the estimates for T 1 , T 2 and (3.17) into Lemma 3.5 completes the proof.
Remark 3.8. The proof of Theorem 3.7 also yields the precise form of the constant C. Observe that, since we avoid the use of Gronwall-type inequalities, the error constant does not grow exponentially with the time T. This indicates that the backward Euler-Maruyama method is particularly suited for long-time simulations, as is often required in Markov chain Monte Carlo methods, for example, in the unadjusted Langevin algorithm [44].

Properties of the exact solution
In this section, we turn our attention to the multi-valued stochastic differential equation (MSDE) in (1.5). We give a complete account of the assumptions imposed on the coefficient functions. In addition, we collect some results on the existence and uniqueness of a strong solution to the MSDE. We also include useful results on higher moment bounds of the exact solution.
Assumption 4.4. The initial value X_0 is an F_0-measurable and D(f)-valued random variable. Furthermore, X_0 is assumed to satisfy a suitable moment bound, where the value of p is the same as in Assumption 4.1.
Observe that Assumptions 4.2 and 4.3 directly imply that b and g grow at most linearly. More precisely, after possibly increasing the values of L_b and L_g, we obtain the bounds
|b(x)| ≤ L_b(1 + |x|) and |g(x)| ≤ L_g(1 + |x|) for all x ∈ R^d. (4.1)
Next, we introduce the notion of a solution of (1.5), which we use for the remainder of this paper. In particular, it is required that the integral equation
X(t) + ∫_0^t η(s) ds = X_0 + ∫_0^t b(X(s)) ds + ∫_0^t g(X(s)) dW(s)
holds for all t ∈ [0, T] and P-almost surely. (iv) For almost all ω ∈ Ω and t ∈ [0, T], it follows that η(t, ω) ∈ f(X(t, ω)); in other words, for every y ∈ D(f) and f_y ∈ f(y) the inequality
⟨η(t) − f_y, X(t) − y⟩ ≥ 0
is satisfied for almost every t ∈ [0, T] and P-almost surely, cf. Definition 2.1.
This notion of a solution has been considered in, for example, [7], [21], [41], and [51], where also the existence of a unique solution is shown. Due to their importance for the error analysis, we next prove certain moment estimates. Furthermore, if p ∈ (1, ∞) and 1/p + 1/q = 1, then
Proof. Existence and uniqueness are shown, for instance, in [21]. Inserting the definition of Z then proves the first estimate.
Furthermore, if Assumption 4.1 holds with p ∈ (1, ∞), then we have, for every In particular, the process H is a continuous, progressively measurable process with bounded total variation and H(0) = 0 almost surely. The stronger condition of absolute continuity of the process H, which is required in Definition 4.6, is essential in the proof of Theorem 6.4 below. This explains why we work with the stronger notion of a solution in Definition 4.6.

Well-posedness of the backward Euler method
In this section, we show that the backward Euler-Maruyama method (1.6) for the MSDE (1.5) is well-posed under the same assumptions as in the previous section.
Lemma 5.1. Let Assumptions 4.1 and 4.2 be satisfied. Furthermore, let w ∈ R^d and k ∈ (0, T] be given with L_b k ∈ [0, 1). Then there exist uniquely determined x_0 ∈ D(f) and η_{x_0} ∈ f(x_0), which satisfy the nonlinear equation
x_0 + k η_{x_0} − k b(x_0) = w. (5.1)

Proof. We first show that there exists a unique x_0 ∈ D(f) such that
x_0 + k f(x_0) − k b(x_0) ∋ w. (5.2)
To this end, notice that for all x, y ∈ R^d, the inequalities
⟨(x − kb(x)) − (y − kb(y)), x − y⟩ ≥ (1 − kL_b)|x − y|² ≥ 0 and |(x − kb(x)) − (y − kb(y))| ≤ (1 + kL_b)|x − y|
hold due to the step-size bound. In addition, it follows from (4.1) that
⟨x − kb(x), x⟩ ≥ (1 − kL_b)|x|² − kL_b|x|
for all x ∈ R^d. Hence, (id + kf − kb) is the sum of the maximal monotone operator kf and the mapping (id − kb), which is single-valued, Lipschitz continuous, monotone, and coercive. Thus, we can apply [2, Theorem 2.1] and obtain the existence of x_0 ∈ D(f) such that (5.2) holds. Furthermore, there necessarily exists a corresponding unique element η_{x_0} ∈ f(x_0) with
η_{x_0} = k^{−1}(w + k b(x_0) − x_0).
It remains to prove the uniqueness of x_0, which directly implies the uniqueness of η_{x_0}. Assume that there exist x_1 ∈ D(f) and η_{x_1} ∈ f(x_1) as well as x_2 ∈ D(f) and η_{x_2} ∈ f(x_2) such that
x_i + k η_{x_i} − k b(x_i) = w, i ∈ {1, 2}.
By considering the difference of these equations tested with x_1 − x_2, we obtain
0 = |x_1 − x_2|² + k⟨η_{x_1} − η_{x_2}, x_1 − x_2⟩ − k⟨b(x_1) − b(x_2), x_1 − x_2⟩ ≥ (1 − kL_b)|x_1 − x_2|².
Since 1 − kL_b > 0 we must have x_1 = x_2 and the proof is complete.
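The structure of (5.1) also suggests a simple way to evaluate the implicit step in practice: since the resolvent (id + kf)^{−1} of a maximal monotone operator is nonexpansive, the mapping x ↦ (id + kf)^{−1}(w + kb(x)) is a contraction with constant kL_b < 1, and its unique fixed point solves (5.1). The following Python sketch illustrates this iteration; the concrete resolvent soft_threshold corresponds to the scalar example f = ∂|·| from the introduction and serves only as an illustration.

    import numpy as np

    def soft_threshold(y, k):
        # resolvent (id + k * d|.|)^{-1} for the scalar example f = subdifferential of |x|
        return np.sign(y) * np.maximum(np.abs(y) - k, 0.0)

    def implicit_step(w, k, b, resolvent, tol=1e-12, max_iter=200):
        # Solve x + k*f(x) - k*b(x) = w, cf. (5.1), by the fixed-point iteration
        # x <- (id + k f)^{-1}(w + k b(x)), a contraction as long as k * L_b < 1.
        x = w
        for _ in range(max_iter):
            x_new = resolvent(w + k * b(x), k)
            if abs(x_new - x) < tol:
                break
            x = x_new
        return x

    # Illustrative usage with a hypothetical Lipschitz drift b(x) = -x/2:
    # x_n = implicit_step(w=0.3, k=0.01, b=lambda x: -0.5 * x, resolvent=soft_threshold)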
For later use, we note that the solution operator for (5.1) is Lipschitz continuous.
Lemma 5.2. Let Assumptions 4.1 and 4.2 be satisfied and let k ∈ (0, T] with L_b k ∈ [0, 1). For w ∈ R^d denote by S_k(w) := x_0 ∈ D(f) the unique solution of (5.1) from Lemma 5.1. Then
|S_k(w_1) − S_k(w_2)| ≤ (1 − kL_b)^{−1} |w_1 − w_2| for all w_1, w_2 ∈ R^d.

Proof. Let w_1, w_2 ∈ R^d and let x_i = S_k(w_i) together with η_{x_i} ∈ f(x_i), i ∈ {1, 2}, denote the unique solutions of the equations
x_i + k η_{x_i} − k b(x_i) = w_i, i ∈ {1, 2}.
By considering the difference of these equations, tested with x_1 − x_2, we obtain
|x_1 − x_2|² + k⟨η_{x_1} − η_{x_2}, x_1 − x_2⟩ − k⟨b(x_1) − b(x_2), x_1 − x_2⟩ = ⟨w_1 − w_2, x_1 − x_2⟩.
By using the Cauchy-Schwarz inequality for the right-hand side as well as the monotonicity and the Lipschitz continuity for the left-hand side, we get
(1 − kL_b)|x_1 − x_2|² ≤ |w_1 − w_2| |x_1 − x_2|.
Reinserting x_i = S_k(w_i) then shows that
|S_k(w_1) − S_k(w_2)| ≤ (1 − kL_b)^{−1} |w_1 − w_2|.

Theorem 5.3. Let Assumptions 4.1 to 4.4 be satisfied. Then for every step size k = T/N, N ∈ N, with L_b k ∈ [0, 1) there exist uniquely determined families of square-integrable, R^d-valued and (F_{t_n})_{n∈{0,...,N}}-adapted random variables (X_n)_{n∈{0,...,N}} and (η_n)_{n∈{0,...,N}} such that X_n ∈ D(f), η_n ∈ f(X_n) for every n ∈ {0, . . . , N} and
X_n + kη_n = X_{n−1} + kb(X_n) + g(X_{n−1})ΔW_n (5.3)
for every n ∈ {1, . . . , N}, P-almost surely.
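In terms of the solution operator S_k from Lemma 5.2, the recursion (5.3) can be written compactly as X_n = S_k(X_{n−1} + g(X_{n−1})ΔW_n), so that the measurability and adaptedness of (X_n)_{n∈{0,...,N}} and (η_n)_{n∈{0,...,N}} follow by induction over n from the Lipschitz continuity of S_k.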
Next we state an a priori estimate for the sequence of random variables satisfying recursion (1.6).

Error estimates in the general case
In this section, we derive an error estimate for the backward Euler method given by (1.6) for the MSDE (1.5).
To prove the convergence of the scheme (1.6), let us fix some notation. Throughout this section, we assume that the equidistant step size k = T N is small enough so that the a priori estimates in Lemma 5.4 hold. Further, as in (3.9) and (3.10), we denote the piecewise linear interpolants of the discrete values by X (0) = X 0 , H(0) = η 0 for η 0 ∈ f (X 0 ) and g(X (s)) dW (s).

(6.2)
In view of (1.6) and the definition of G for t ∈ (t n−1 , t n ], n ∈ {1, . . . , N }, we obtain the representation We begin the derivation of our error estimate by considering the difference between the stochastic integral G and its approximation G.
In addition, for every ρ ∈ [2, ∞), there exists K ρ ∈ (0, ∞) such that, for every n ∈ {1, . . . , N } and t ∈ (t n−1 , t n ], the following estimates hold: Proof. Recall the definitions of G and G from (6.1) and (6.2). First, we add and subtract a term and then apply the triangle inequality. Then, for every n ∈ {1, . . . , N } and t ∈ (t n−1 , t n ] we arrive at by an application of Itō's isometry. Furthermore, due to the Lipschitz continuity of g we obtain t 0 E |g(X(s)) − g(X (s))| 2 ds where the last step follows from the identity which holds for every s ∈ (t i−1 , t i ], i ∈ {1, . . . , N }. Finally, it follows from the same arguments as in the proof of Lemma 3.6 and by (4.1) for every t ∈ (t n−1 , t n ] that Together with the a priori bounds from Lemma 5.4 this shows (6.4). It remains to prove the estimates (6.5) and (6.6). For (6.5) we first apply the Burkholder-Davis-Gundy-type inequality from Lemma 2.2 with constant C ρ and obtain for every n ∈ {1, . . . , N } and t ∈ (t n−1 , t n ] that where we also made use of the linear growth bound (4.1) in the last step. This proves (6.5). The bound in (6.6) can be shown by analogous arguments.
The next lemma generalizes an important estimate from the proof of Theorem 3.7 to the multi-valued setting. In particular, we refer to Lemma 3.5 and (3.17).
Lemma 6.2. Let Assumptions 4.1 to 4.4 be satisfied. For every step size k = T N , N ∈ N, with 5L b k ∈ [0, 1), let the families (X n ) n∈{0,...,N } and (η n ) n∈{0,...,N } of random variables be as stated in Theorem 5.3. Then there exists K δη ∈ (0, ∞) independent of the step size k such that Proof. The nonnegativity follows immediately from the monotonicity of f . To prove the second inequality, we insert the scheme (5.3) and obtain because of the telescopic structure. Furthermore, it follows from Assumptions 4.1 and 4.4 that For (6.8) we apply Hölder's inequality with ρ = max(2, p) and 1 ρ + 1 ρ ′ = 1 to obtain Then, from applications of the triangle inequality and Lemma 5.4, we get We apply the polynomial growth bound satisfied by f and see that, for p ∈ [2, ∞), In both cases the appearing terms are finite because of Assumption 4.4. Moreover, a further application of the triangle inequality yields Due to the linear growth bound (4.1) on b and the a priori bound (5.4), it then follows that By application of Lemma 2.2 with constant C ρ , we obtain Together with the linear growth bound (4.1) on g, this shows that Putting the estimates together proves the desired bound.
We are now prepared to state and prove the main result of this section. While the main ingredients of the proof still consist of techniques introduced in [37, Sect. 4] for deterministic problems, the proof is somewhat more technical than the proof of Theorem 3.7. In particular, due to the presence of Lipschitz perturbations in the general problem (1.5) it is no longer possible to avoid an application of a Gronwall lemma. Moreover, as in [37,Sect. 4] we impose the following additional assumption on the multi-valued mapping f . Assumption 6.3. There exists γ ∈ (0, ∞) such that, for every v, w, z ∈ D(f ), In Lemma 3.2, we already proved that, if f is the subdifferential of a convex potential, then Assumption 6.3 is satisfied with γ = 1. For a further example, we refer to Section 7.  Proof. Let us first introduce some additional notation. We will denote the error between the exact solution X to (1.5) and the numerical approximation X defined in (6.3) by E(t) := X(t) − X (t), t ∈ [0, T ]. Furthermore, it will be convenient to split the error into two parts where we define P-almost surely for every t ∈ (0, T ]. We expand the square of the norm of E as In order to estimate the terms on the right-hand side of (6.11), we first observe in (6.9) that E 1 has absolutely continuous sample paths with E 1 (0) = 0. Hence we have 1 (6.12) Furthermore, we also have Thus, after combining (6.12) and (6.13) we obtain (6.14) For the first integral on the right-hand side of (6.14) we insert the derivative of E 1 and the definition of the error process E. This yields, for almost every s ∈ (0, T ], After recalling the definition of X we use Assumptions 4.1 and 6.3. Then, for almost every s ∈ (t n−1 , t n ] and all n ∈ {1, . . . , N }, we get where the second term in the last step is non-positive due to the monotonicity of f (cf. Definition 2.1). Moreover, because of the Lipschitz continuity of b, we have for almost every s ∈ (0, T ] that b(X(s)) − b(X (s)), X(s) − X (s) where we also made use of Young's inequality. In addition, for every n ∈ {1, . . . , N } and s ∈ (t n−1 , t n ], we have that Altogether, for every t ∈ (t n−1 , t n ] and n ∈ {1, . . . , N }, we have shown that t tn−1 where we also inserted that Hence, together with Lemma 5.4 and Lemma 6.2 this shows that Next, we give an estimate for the second integral on the right-hand side of (6.14). For every n ∈ {1, . . . , N } and t ∈ (t n−1 , t n ] we decompose the integral as follows For every i ∈ {1, . . . , n − 1} we then add and subtract E 2 (t i ) in the second slot of the inner product in the first term on the right-hand side of (6.16). This gives ti ti−1 After inserting the definition of E 2 from (6.10) the first integral is then equal to ti ti−1 for all n ∈ {1, . . . , N }, t ∈ (t n−1 , t n ] and t i < t. Hence, after taking expectations in (6.16) we arrive at Inserting the definitions (6.9) and (6.10) of E 1 and E 2 and applying Hölder's inequality with ρ = max(2, p) and 1 ρ + 1 ρ ′ = 1, we get In the following, we will estimate Γ 1 , Γ 2 , Γ 3 , and Γ 4 separately. For Γ 1 we obtain after an application of Hölder's inequality for sums that If p ∈ [2, ∞) then ρ = p and ρ ′ = q. In this case all integrals appearing are finite due to the bounds in Theorem 4.7 and Lemma 5.4. Moreover, if p ∈ (1, 2) then ρ = ρ ′ = 2 < q. Then it follows from further applications of Hölder's inequality and Jensen's inequality that Hence, we arrive at the same conclusion. 
If p = 1 then the processes (η(t)) t∈[0,T ] and (η n ) n∈{1,...,N } are globally bounded due to the bound on f in Assumption 4.1. Using Lemma 6.1 we see that Altogether, this yields for a suitable constant C Γ ∈ (0, ∞), which is independent of k. To estimate Γ 2 we argue analogously as in the case for Γ 1 to obtain that The first factor is bounded as we saw in the case for Γ 1 . Furthermore, using Lemma 6.1, we have that Due to the a priori bound (5.4), it follows that there exists a constant C Γ2 ∈ (0, ∞), which does not depend on k such that The estimates Γ 3 and Γ 4 follow analogously with the only new term that appears is of the form which is bounded due to Theorem 4.7 and the a priori bound (5.4). Therefore, there exist constants C Γ3 , C Γ4 ∈ (0, ∞) such that Hence, we obtain After taking expectations in (6.11) and inserting (6.14), (6.15), (6.17) as well as (6.4) from Lemma 6.1, we obtain for every t ∈ (0, T ] that The assertion then follows from an application of Gronwall's lemma, see for example, [11, Appendix B]. Remark 6.5. Up to this point, we only proved convergence for X but not for η. However, from the existence of X n we also obtain that kη n = −(X n − X n−1 ) + kb(X n ) + g(X n−1 )∆W n a.s. in Ω.
Therefore, from the convergence of X to X and the Lipschitz continuity of b and g we also obtain the estimate  Furthermore, we notice that f x x = sgn(x)x = |x| as well as |f x | ≤ 1 for every x ∈ R and f x ∈ f (x). This shows that f fulfills all the conditions of Assumption 4.1. It remains to verify Assumption 6.3. Since f is the subdifferential of Φ the variational inequality (3.2) is still satisfied in the sense that for all x, y ∈ R and f x ∈ f (x). Following the same steps as in the proof of Lemma 3.2 but replacing f (v), f (w), and f (z) by arbitrary elements f v ∈ f (v), f w ∈ f (w), and f z ∈ f (z), respectively, shows that Assumption 6.3 is fulfilled. Therefore, the backward Euler-Maruyama method (1.6) is well-defined and yields an approximation of the exact solution X of dX(t) + f (X(t)) dt ∋ b(X(t)) dt + g(X(t)) dW (t), t ∈ (0, T ], where b : R → R and g : R → R 1,m are Lipschitz continuous and X 0 ∈ L 2 (Ω).
To be more precise, the piecewise linear interpolant 𝒳 of the values (X_n)_{n∈{0,...,N}} defined in (6.3) fulfills
max_{t∈[0,T]} ‖X(t) − 𝒳(t)‖_{L²(Ω;R^d)} ≤ C k^{1/4}
for C ∈ (0, ∞) that does not depend on the step size k = T/N. However, let us mention that the strong order of convergence of 1/4 is not necessarily optimal in this particular example. We refer the reader to [9] for a corresponding result on the forward Euler-Maruyama method.

7.2. Stochastic p-Laplace equation. As a second example, we consider the discretization of the stochastic p-Laplace equation. A similar setting is studied in [5]. For a more detailed introduction to this class of problems, we refer the reader to this work and the references therein.
For p ∈ [2, ∞) and T ∈ (0, ∞) the stochastic p-Laplace equation is given by for all (t, ξ) ∈ (0, T ) × D, where D ⊂ R n , n ∈ N, is a bounded Lipschitz domain. By W : [0, T ] × Ω → R m , m ∈ N, we denote a standard (F t ) t≥0 -adapted Wiener process. We also assume that the initial value u 0 : D × Ω → R fulfills Furthermore, let Ψ : R → L 2 (R m ; R) be a Lipschitz continuous mapping, where L 2 (R m ; R) denotes the space of Hilbert-Schmidt operators from R m to R. Note that the Nemytskii operatorΨ : L 2 (D) → L 2 (R m ; L 2 (D)), given by [Ψ(u)](x) = Ψ(u(x)) for u ∈ L 2 (D), is also Lipschitz continuous and will be of importance in the weak formulation below.
Further, let W 1,p 0 (D) be the Sobolev space of weakly differentiable and p-fold integrable functions on D with vanishing trace on the boundary ∂D, see [46,Section 1.2.3] or [39,Section 4.5] for a precise definition. The dual space of W 1,p 0 (D) is denoted by W −1,p (D) in the following. Then, the stochastic p-Laplace equation (7.1) has a solution (u(t)) t∈[0,T ] which is progressively measurable and an element of L 2 (Ω; C([0, T ]; L 2 (D))) ∩ L p (Ω; L p (0, T ; W 1,p 0 (D))). For further details we refer to [ For a spatial discretization of (7.1), we use a family of finite element spaces (V h ) h>0 such that V h ⊂ W 1,p 0 (D) for every h > 0. Hereby, we interpret h as a spatial refinement parameter. In the following, we consider a fixed parameter value h > 0. By d ∈ N we then denote the dimension of the space V h .
The spatially semi-discrete problem is to find a progressively measurable stochastic process (u h (t)) t∈[0,T ] in the space L 2 (Ω; C([0, T ]; L 2 (D))) ∩ L p (Ω; L p (0, T ; V h )) such that In order to apply our results from the previous sections, we rewrite (7.3) as a problem in R d . To this end, we consider a one-to-one relation between V h and R d given by for every x ∈ R d . Observe that the norm · 0 is also induced by the inner product x, y 0 := v x , v y L 2 (D) = M h x, y , with M h = ( ϕ i , ϕ j L 2 (D) ) i,j∈{1,...,d} , where the mass matrix M h is symmetric and positive definite. Since all norms on R d are equivalent, for each i ∈ {−1, 0, 1} there exists c i , C i ∈ (0, ∞) such that The p-Laplace operator in the spatially semi-discrete problem (7.3) can be written as A h : V h → V h which is implicitly defined by A h (v h ), w h L 2 (D) = D |∇v h | p−2 ∇v h · ∇w h dξ for all v h , w h ∈ V h . By the same arguments as in [27,Example 4.1.9] one can easily verify that A h fulfills for all v h , w h ∈ V h . Then, for x, y ∈ R d and associated v x , v y ∈ V h , we introduce mappingsf : |x − y| 2 for x, y ∈ R d and v x , v y ∈ V h fulfilling (7.4) and an orthonormal basis {e j } j∈{1,...,m} of R m . Thus, g fulfills Assumption 4.3. Due the integrability condition to (7.2) for u 0 , it follows that X 0 fulfills Assumption 4.4. Moreover, we see that f is monotone, coercive, and bounded as we can write as well as for all x, y ∈ R d and v x , v y ∈ V h fulfilling (7.4). Here, · L(R m ) denotes the matrix norm in R m which is induced by | · |. Therefore, Assumption 4.1 is satisfied. To prove that f fulfills Assumption 6.3 we note that the mapping Φ : V h → [0, ∞) given by is a potential of A h , compare [46,Example 4.23]. Since Φ is convex it follows that where we use [13, Kapitel III, Lemma 4.10]. In the same way as in Lemma 3.2 we obtain that for all v z , v x , v y ∈ V h . Applying the definition of f , we then get x, y, z ∈ R d and v x , v y , v z ∈ V h fulfilling (7.4). This shows that f also fulfills Assumption 6.3. Consequently, the results of the previous sections are applicable. More precisely, the backward Euler scheme (1.6) has a unique solution (X n ) n∈{0,...,N } (cf. Theorem 5.3). Theorem 6.4 then states that the piecewise linear interpolant X of the values (X n ) n∈{1,...,N } defined in (6.3) fulfills max t∈[0,T ] X(t) − X (t) L 2 (Ω;R d ) ≤ Ck 1 4 for C ∈ (0, ∞) that does not depend on the step size k where X is the solution to the single-valued stochastic differential equation dX(t) + f (X(t)) dt = g(X(t)) dW (t), t ∈ (0, T ], Observe that our proof does not yet rule out that the constant C above depends on the dimension d of the finite element space V h . Hence, this is not a complete analysis of a full discretization of the stochastic partial differential equation (7.1) and a more detailed analysis is subject to future work. We refer to [5] for a related result in this direction.
Let us emphasize that, unlike the results in [5], we do not have to impose any temporal regularity assumption on the exact solution of (7.1) or on the solution of the semi-discrete problem (7.3). Since such regularity conditions are often not easily verified for quasi-linear stochastic partial differential equations we are confident that our approach could lead to interesting new insights in the numerical analysis of such infinite dimensional problems.