On the properties of the exceptional set for the randomized Euler and Runge-Kutta schemes

We show that the probability of the exceptional set decays exponentially for a broad class of randomized algorithms approximating solutions of ODEs, admitting a certain error decomposition. This class includes randomized explicit and implicit Euler schemes, and the randomized two-stage Runge-Kutta scheme (under inexact information). We design a confidence interval for the exact solution of an IVP and perform numerical experiments to illustrate the theoretical results.

The structure of this paper is as follows. In section 1 we introduce the notation, the class of initial value problems, and the model of computation. We also recall the definitions of the randomized Euler and Runge-Kutta schemes under inexact information. In section 2 we provide an upper bound for the probability of the exceptional set for each algorithm admitting a certain error decomposition. The general setting considered in this paper covers all algorithms analysed in [2,3,6]. In section 3 we construct a confidence region for the exact solution of the IVP, based on the inequality established in the previous section. We also carry out numerical experiments which illustrate the theoretical findings. A summary of this paper is provided in section 4, together with directions for further research.

We consider IVPs of the following form: where As in [2,3], by F R = F R (a, b, d, , K, L) we denote the class of pairs (η, f ) satisfying the following conditions: Under the above assumptions, the solution of (1) exists and is unique, cf. [2].

1.2. Model of computation: general description. Let (Ω, Σ, P) be a complete probability space and let N = {A ∈ Σ : P(A) = 0}. By F we denote a certain class of IVPs of the form (1), or equivalently of pairs (η, f ). For example, we can choose F = F R with specified parameters a, b, d, , K, L, R.
For a given n ∈ N, we consider the class Φ n of algorithms A which aim to compute the approximate solution z of (1), using information N (η, f ) based on at most n noisy evaluations of f . That is, the class Φ n contains algorithms of the following form:

1.3. The algorithms. We recall the algorithms which will be investigated in this paper.
For each of them, we specify the class of IVPs and the class of noise functions for which error bounds have been established in [2,3]. The notation introduced in this subsection is generally consistent with those articles; however, some changes have been made in order to facilitate the reading of this paper.
The randomized explicit Euler method under inexact information is defined as follows: The approximate solution of ( The randomized explicit Euler scheme has been analyzed in [3] assuming that (η, f , where the class of noise functions is defined as and, similarly to (2)-(3), we define The randomized implicit Euler scheme under inexact information is given by the following relation: The function l IE : [a, b] → R d is defined in the same fashion as l EE (i.e. through linear interpolation), but this time we use the knots (t j , Ū j ) instead of (t j , W j ) for j ∈ {0, 1, ..., n}.
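The defining recursion of the scheme did not survive this extraction, so the following sketch assumes the formulation commonly used in the literature: on an equidistant mesh, U_{j+1} = U_j + h f(t_j + θ_j h, U_j) with θ_j drawn uniformly from (0, 1), each evaluation of f perturbed by noise bounded by δ(1 + |x|) as in the paper's experiments, and l EE obtained by linear interpolation through the knots. The function name and signature are illustrative, not taken from [3].

```python
import numpy as np

def randomized_explicit_euler(f, eta, a, b, n, delta=0.0, rng=None):
    """Sketch of the randomized explicit Euler scheme under inexact information.

    The evaluation point in each subinterval is drawn uniformly at random;
    every evaluation of f is perturbed by noise bounded by delta * (1 + |x|),
    mimicking the noise model used in the paper's experiments for S = EE.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = (b - a) / n
    t = a + h * np.arange(n + 1)            # equidistant mesh t_0, ..., t_n
    u = np.empty(n + 1)
    u[0] = eta
    for j in range(n):
        theta = rng.uniform()               # theta_j ~ U(0, 1)
        e = rng.uniform(-delta, delta)      # noise term, |e| <= delta
        f_noisy = f(t[j] + theta * h, u[j]) + (1.0 + abs(u[j])) * e
        u[j + 1] = u[j] + h * f_noisy
    # l^EE: piecewise-linear interpolation through the knots (t_j, U_j)
    return t, u, (lambda s: np.interp(s, t, u))

# Test problem (A): z'(t) = 2 t z(t), z(0) = 1, exact solution exp(t^2)
t, u, l_ee = randomized_explicit_euler(lambda t, x: 2.0 * t * x, 1.0, 0.0, 1.0,
                                       2000, rng=np.random.default_rng(0))
```

For n = 2000 the interpolated approximation l_ee is close to z A (t) = exp(t 2 ) on the whole interval, consistent with the O(n −1 ) error rate of the scheme.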
The error bound for this scheme in [3] has been obtained assuming that (η, f ) ∈ F ∞ and η, f ∈ V IE (η,f ) (δ), where and the definitions of V IE f (δ) and V IE (η,f ) (δ) are analogous to the definitions formulated above for the explicit scheme.
The randomized two-stage Runge-Kutta scheme under inexact information is given by where j ∈ {1, ..., n}. In [2] it was assumed that (η, f , where and are analogous to those for the Euler schemes.
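Since the displayed recursion is missing here as well, the sketch below uses one common formulation of the randomized two-stage Runge-Kutta scheme: an explicit Euler predictor to the randomly drawn point t_j + θ_j h, followed by an update with the slope evaluated there. Noise is purely additive (f + e), matching the paper's experiments for S = RK; whether this is exactly the recursion of [2] is an assumption.

```python
import numpy as np

def randomized_rk2(f, eta, a, b, n, delta=0.0, rng=None):
    """Sketch of a randomized two-stage Runge-Kutta scheme.

    Predictor step to the random intermediate point t_j + theta_j * h,
    then a full step using the slope at that point.  Each evaluation of f
    is perturbed by additive noise bounded by delta (S = RK noise model).
    """
    rng = np.random.default_rng() if rng is None else rng
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    u = np.empty(n + 1)
    u[0] = eta
    for j in range(n):
        theta = rng.uniform()                          # theta_j ~ U(0, 1)
        k1 = f(t[j], u[j]) + rng.uniform(-delta, delta)
        v = u[j] + theta * h * k1                      # predictor at t_j + theta*h
        k2 = f(t[j] + theta * h, v) + rng.uniform(-delta, delta)
        u[j + 1] = u[j] + h * k2
    return t, u, (lambda s: np.interp(s, t, u))

# Test problem (A) again; the RK scheme attains the better rate O(n^{-3/2})
t, u, l_rk = randomized_rk2(lambda t, x: 2.0 * t * x, 1.0, 0.0, 1.0,
                            500, rng=np.random.default_rng(1))
```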

Main result
The following theorem is a generalization of Proposition 2 in [6]. As we will see, it can be applied to any randomized algorithm admitting a certain error decomposition.
Theorem 1. Let (A n ) ∞ n=1 be a sequence of algorithms approximating the solution of the IVP (1) such that A n ∈ Φ n and, for each (η, f ) in a certain class F and η, f ∈ V (η,f ) (δ), the following bound for the approximation error holds with probability 1 for each n ∈ N, n ≥ n 0 : where with probability 1 for all j ∈ {1, ..., n}. Then there exist constants c 1 , c 2 > 0, not dependent on n, such that for all (η, f ) ∈ F , η, f ∈ V (η,f ) (δ), n ∈ N, n ≥ n 0 , and for all ξ ≥ c 1 , the algorithm A n satisfies

Proof. Let us define It is easy to see that Let us consider any i ∈ {1, ..., d} and let E i j (h) denote the i-th coordinate of E j (h). By Remark 1 in [1], the sequence (E i j (h)/K j (h)) n j=1 satisfies the assumptions of Lemma 2 in [1]. Hence, for any sequence (b j ) n j=1 of real numbers and for any real number t, the following inequality holds: Let us consider an arbitrary β > 0. Then, by the exponential Chebyshev inequality (cf. [7], p. 96) and (16) with b j = K j (h) for j ∈ {1, ..., n} and t = βh γ v 2 (h), we obtain By (15) we get for any β > 0 and for any i ∈ {1, ..., d}. Let us note that with probability 1. Hence, by (17), By (13) and (18), the following inequality holds for ξ > C 2 + C 3 : For sufficiently large ξ we have As a result, for sufficiently large ξ, which completes the proof.
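The key probabilistic tool in the proof is the exponential Chebyshev (Chernoff) inequality, P(X ≥ s) ≤ e^{−βs} E[e^{βX}] for any β > 0. A quick numerical sanity check of this bound, with a standard normal variable standing in for a centred error term (the choice of distribution is purely illustrative):

```python
import numpy as np

# Exponential Chebyshev (Chernoff) inequality:
#     P(X >= s) <= exp(-beta * s) * E[exp(beta * X)]   for any beta > 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # stand-in for a centred error term
beta, s = 1.0, 2.0

empirical = np.mean(x >= s)                             # Monte Carlo estimate of P(X >= s)
bound = np.exp(-beta * s) * np.mean(np.exp(beta * x))   # Chernoff upper bound

assert empirical <= bound   # the bound dominates the empirical tail probability
```

For a standard normal X the bound equals e^{−βs + β²/2}, which is minimised over β at β = s; optimising β in this way is exactly what yields the exponential decay in ξ in the theorem.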
The property pointed out in this remark implies that A n (t) is a strongly consistent estimator of z(t) for each t ∈ [a, b].
In the next three corollaries we apply Theorem 1 to randomized explicit and implicit Euler schemes, and to the randomized two-stage Runge-Kutta scheme.
Corollary 2. There exist constants c 1 , c 2 > 0, dependent only on a, b, d, K, L, such that for all (η, f , and for all ξ ≥ c 1 , the randomized implicit Euler scheme satisfies

Proof. Let E j (h) for j ∈ {1, ..., n} be defined as in the proof of Corollary 1. Of course 1}+1/2 with probability 1 for all j ∈ {1, ..., n}, where C 0 = C 0 (a, b, K, L) > 0. From the proof of Theorem 2 in [3] we conclude that with probability 1 for some α 1 = α 1 (a, b, d, K, L) > 0 and α 2 = α 2 (a, b, K, L) > 0. The first step above follows from the first two lines of the proof of Theorem 2 in [3]; compare also with (20) in this paper. The second step can be justified using the two lines before (28), (28) itself, the inequality after (28) and before (29), and Fact 3 in [3]. It remains to apply Theorem 1 to conclude the proof.
Remark 2. We note that the randomized two-stage Runge-Kutta scheme considered in [2] is a special case of the randomized Taylor schemes considered in [6]. However, in [2] and in Corollary 3 we assume only local Lipschitz and Hölder conditions, see Assumptions (A3) and (A4), and we allow noisy evaluations of f . Thus, Corollary 3 is not a special case of Proposition 2 in [6], where global Lipschitz and Hölder conditions as well as exact information were assumed.

Confidence regions and numerical experiments
3.1. Construction of the confidence region. The following corollary is an immediate consequence of Theorem 1. By a suitable choice of ξ in (14) one may construct a confidence region for the exact solution of (1).
Corollary 4. Let c 1 , c 2 be the constants which appeared in Theorem 1 and let for ε ∈ (0, 1). Under the assumptions of Theorem 1 it holds that

As max{h, δ} approaches 0, the confidence region for z constructed in Corollary 4 tightens, whereas the confidence level 1 − ε remains unchanged. Hence, this confidence region is uniform with respect to max{h, δ}.
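The defining inequality of the confidence region is missing from this extraction; a plausible reading, consistent with the bullets in the experimental section, is a band of radius proportional to ξ · max{n −γ , δ} around the approximate solution. The following helper sketches that construction; the names `xi` and `gamma` follow the paper, but the exact radius formula is an assumption.

```python
import numpy as np

def confidence_band(l_approx, t_grid, xi, gamma, n, delta):
    """Confidence band around an approximate solution (illustrative sketch).

    Assumed radius: xi * max(n**(-gamma), delta), a hedged reading of
    Corollary 4; the band is centred at the interpolated approximation.
    """
    radius = xi * max(n ** (-gamma), delta)
    vals = np.array([l_approx(s) for s in t_grid])
    return vals - radius, vals + radius

# Band for test problem (A) with the values used in the paper's figures:
# xi = 3, gamma = 1 (explicit Euler), n = 25, delta = n**(-1) = 0.04.
grid = np.linspace(0.0, 1.0, 11)
lo, hi = confidence_band(lambda s: np.exp(s ** 2), grid, 3.0, 1.0, 25, 0.04)
```

Shrinking max{h, δ} tightens the band while the confidence level 1 − ε is held fixed, which is the uniformity property noted above.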

3.2. Numerical experiments. Let us consider the following test problems:
The exact solution of (A) is z A (t) = exp(t 2 ) for t ∈ [0, 1]. By z B we denote the exact solution of (B). Let l S,X n be the approximate solution of test problem X ∈ {A, B} generated by scheme S ∈ {EE, RK} (the randomized explicit Euler scheme or the randomized two-stage Runge-Kutta scheme, respectively) with n steps. We note that = 3/2 for both test problems, cf. assumption (A3). Thus, γ RK = 3/2 and γ EE = 1, cf. (14), Corollary 1 and Corollary 3. We consider the setting where the class of IVPs is restricted to the test problem (A or B), and the class of algorithms is restricted to the randomized explicit Euler scheme or the randomized two-stage Runge-Kutta scheme. By Corollary 4, are some positive constants, dependent on S and X.
Since the constants c 1 , c 2 are not known, we propose the following approach to illustrate property (24). Let ξ S,X ε,n,δ be given by the following equation: In practice, we perform N Monte Carlo simulations of sup a≤t≤b z X (t) − l S,X n (t) , sort the obtained values in increasing order, r 1:N ≤ ... ≤ r N:N , and consider the following estimator: In general, ξ S,X ε,n,δ is only a lower bound for ξ S,X (ε). However, as we will see, some patterns can be noticed in the behaviour of ξ S,X ε,n,δ for different choices of n and δ (at least for the considered test problems). Thus, with a certain degree of caution, we may provide estimates of ξ S,X (ε) for S ∈ {EE, RK} and X ∈ {A, B}. In the performed tests, each evaluation of f has been disrupted by random noise (in all cases other than δ = 0). Specifically, f (t, x) has been simulated as f (t, x) + (1 + |x|) · e (for S = EE) or f (t, x) + e (for S = RK), where e is drawn from the uniform distribution on [−δ, δ], independently for all noisy evaluations of f . Thus, in the numerical experiments we have further restricted the class of noise functions specified by (7) and (12). Moreover, we have assumed that the exact solution z B of (B) can be replaced by l RK,B 10 8 (obtained under exact information). We note that some deviation in the estimates of ξ S,X ε,n,δ may be attributed to errors inherited from the MC simulations.
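The estimator's defining equation is elided above, so the sketch below implements only the order-statistic idea that the text describes: sort the N simulated sup-errors and take the empirical (1 − ε)-quantile r ⌈(1−ε)N⌉:N. Dividing that quantile by max{n −γ , δ} to obtain ξ is an assumption about the missing formula.

```python
import numpy as np

def estimate_xi(sup_errors, eps, gamma, n, delta):
    """Order-statistic Monte Carlo estimate of xi (sketch).

    sup_errors: N simulated values of sup_t |z(t) - l_n(t)|.
    Takes the empirical (1 - eps)-quantile r_{ceil((1-eps)N):N}; the
    normalisation by max(n**(-gamma), delta) is an assumption about the
    elided defining equation for xi.
    """
    r = np.sort(np.asarray(sup_errors))      # r_{1:N} <= ... <= r_{N:N}
    N = r.size
    k = int(np.ceil((1.0 - eps) * N))        # order-statistic index
    return r[k - 1] / max(n ** (-gamma), delta)

# Synthetic illustration: 100 "sup-errors" 0.01, 0.02, ..., 1.00,
# eps = 0.05, gamma = 1, n = 25, delta = 0 -> quantile 0.95, denominator 0.04.
xi_hat = estimate_xi([0.01 * i for i in range(1, 101)], 0.05, 1.0, 25, 0.0)
```

As the text notes, this yields only a lower bound for ξ S,X (ε), since the exceptional-set probability is estimated from finitely many Monte Carlo runs.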
Based on the results displayed in Tables 1-4, we make the following observations.
• In the case 0 ≤ δ ≤ n −γ S , the estimates ξ S,X ε,n,δ appear to stabilise as n increases. This means that the probability of hitting the confidence region (24) is stable for sufficiently large values of n, provided that δ is bounded by n −γ S .
• Generally ξ S,X ε,n,δ < ξ S,X ε,n,0 when δ > n −γ S , which indicates that the assumptions imposed on the noise functions, cf. (7) and (12), can be relaxed. Some exceptions are observed in Tables 1 and 2 for small values of n in the case δ(n) = n −0.9 . When δ > n −γ S , the expression ξ S,X ε,n,δ appears to underestimate ξ S,X (ε).
• When n is fixed, ξ S,X ε,n,δ seems to achieve its maximum for δ = n −γ S . This is in line with intuition, as δ = n −γ S is the maximal value of δ with no impact on max{n −γ S , δ}, cf. (25).
• Based on the above, we suppose that ξ EE,A ≈ 3, ξ EE,B ≈ 0.7, ξ RK,A ≈ 5.9, and ξ RK,B ≈ 0.6.However, since it is impossible to test numerically all possible choices of n and δ, these approximations may be valid only under additional conditions on n and δ.
In Figures 1 and 2 we plot sample confidence regions for test problems (A) and (B), respectively, based on the randomized explicit Euler and two-stage Runge-Kutta schemes. We used n = 25 steps with noise level δ = n −γ S , S ∈ {EE, RK}. We took ξ EE,A = 3, ξ EE,B = 0.7, ξ RK,A = 5.9, and ξ RK,B = 0.6 in order to achieve the confidence level 1 − ε = 0.95 (cf. the last bullet above). The confidence regions are shaded in navy blue. In Figure 1 the white curve represents the exact solution z A of (A), whereas in Figure 2 it represents the approximate solution l RK,B 10 8 of (B), obtained through the randomized RK scheme with a large number of steps. As we can see, for both test problems the randomized two-stage Runge-Kutta scheme generates more accurate confidence regions than the randomized explicit Euler scheme. This is due to the fact that γ EE = 1 < 3/2 = γ RK .

Conclusions and future work
In this paper we have shown that the probability of the exceptional set for a class of randomized algorithms admitting a particular error decomposition decays exponentially (see Theorem 1). Uniform almost sure convergence of such algorithms to the exact solution has been established as the step size and the noise parameter tend to 0 (see Remark 1). Furthermore, we have used Theorem 1 to design a confidence region for the exact solution of the IVP, uniform with respect to max{h, δ} (see Corollary 4 and Remark 3).
The general setting considered here comprises the randomized explicit and implicit Euler schemes, and the randomized two-stage Runge-Kutta scheme under inexact information (see Corollaries 1-3). Theorem 1 also covers the family of Taylor schemes under exact information, which has been investigated in [6].
Our future plans include further research on the probability distribution of the error of randomized algorithms for ODEs, e.g. an investigation of its asymptotic behaviour.

1. Preliminaries

1.1. Notation and the class of IVPs. Let ∥ · ∥ be the ℓ 1 norm in R d , i.e. ∥x∥ = |x 1 | + · · · + |x d | for x ∈ R d . For x ∈ R d and r ∈ [0, ∞), we denote by B(x, r) = {y ∈ R d : ∥y − x∥ ≤ r} the closed ball in R d with center x and radius r. Moreover, B(x, ∞) = R d for all x ∈ R d .

Corollary 3. There exist constants c 1 , c 2 > 0, dependent only on a, b, d, K, L, such that for all

Confidence region based on the randomized explicit Euler method.
Confidence region based on the randomized two-stage Runge-Kutta method.

Figure 1. Confidence regions for the solution of the test problem (A) with confidence level 1 − ε = 0.95.
Confidence region based on the randomized explicit Euler method.
Confidence region based on the randomized two-stage Runge-Kutta method.

Figure 2. Confidence regions for the solution of the test problem (B) with confidence level 1 − ε = 0.95.