Derivative of the expected supremum of fractional Brownian motion at $H=1$

The $H$-derivative of the expected supremum of fractional Brownian motion $\{B_H(t),\,t\in\mathbb{R}_+\}$ with drift $a\in\mathbb{R}$ over the time interval $[0,T]$,
$$\frac{\partial}{\partial H}\,\mathbb{E}\Big(\sup_{t\in[0,T]} B_H(t)-at\Big),$$
at $H=1$ is found. This formula depends on the quantity $\mathscr{I}$, which has a probabilistic form. The numerical value of $\mathscr{I}$ is unknown; however, Monte Carlo experiments suggest $\mathscr{I}\approx 0.95$. As a by-product, we establish a weak limit theorem in $C[0,1]$ for the fractional Brownian bridge as $H\uparrow 1$.


Introduction
Extremes of Gaussian stochastic processes play an important role in many areas of stochastic modelling, including, e.g., queueing theory, risk theory and financial mathematics. Despite a substantial research effort devoted to the analysis of distributional properties of suprema of Gaussian processes, most of the available results are of an asymptotic nature (concerning, for example, the tail distribution); see, e.g., [9,14,16,19,22].
In this contribution we consider the expected supremum of fractional Brownian motion with drift $a\in\mathbb{R}$ over time horizon $T>0$, that is,
$$M_H(T,a) := \mathbb{E}\Big(\sup_{t\in[0,T]} B_H(t)-at\Big),$$
where $\{B_H(t),\,t\in\mathbb{R}_+\}$, with $\mathbb{R}_+:=[0,\infty)$, is a fractional Brownian motion with Hurst index $H\in(0,1]$ (or $H$-fBm), that is, a centred Gaussian process with the covariance function
$$\mathrm{Cov}\big(B_H(s),B_H(t)\big)=\tfrac{1}{2}\big(s^{2H}+t^{2H}-|t-s|^{2H}\big)$$
for all $s,t\in\mathbb{R}_+$. It is noted that, due to self-similarity and the long-range dependence property (for $H>1/2$), the class of fractional Brownian motions occupies a notable place in the modelling of many phenomena in applied probability, e.g. traffic in modern telecommunication networks (e.g. [17,21]), oceanography (e.g. [24]), geophysics (e.g. [18,20]) and finance (e.g. [23]). We also refer to [13,14] for an overview of applications and simulation techniques for $H$-fBm.
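The covariance function above makes a direct numerical experiment with $M_H(T,a)$ straightforward: one can factor the covariance matrix on a time grid and average the discrete suprema. The sketch below is our own illustration (function names, grid sizes and the Cholesky approach are our choices, not the paper's method), and the discrete supremum underestimates the continuous one.

```python
import numpy as np

def fbm_cov(times, H):
    """Covariance matrix of fBm: Cov(B_H(s), B_H(t)) = (s^2H + t^2H - |t-s|^2H)/2."""
    s, t = times[:, None], times[None, :]
    return 0.5 * (s**(2 * H) + t**(2 * H) - np.abs(t - s)**(2 * H))

def expected_sup_mc(H, T, a, n_grid=200, n_paths=2000, seed=0):
    """Crude Monte Carlo estimate of M_H(T, a) = E sup_{t in [0,T]} (B_H(t) - a t),
    with the supremum taken over a uniform grid (a hypothetical illustration)."""
    rng = np.random.default_rng(seed)
    times = np.linspace(0.0, T, n_grid + 1)[1:]               # drop t = 0, B_H(0) = 0
    L = np.linalg.cholesky(fbm_cov(times, H) + 1e-12 * np.eye(n_grid))
    paths = rng.standard_normal((n_paths, n_grid)) @ L.T      # rows are fBm sample paths
    sups = np.maximum((paths - a * times).max(axis=1), 0.0)   # t = 0 contributes value 0
    return float(sups.mean())
```

As a sanity check, for $H=1/2$, $a=0$, $T=1$ the exact value is $\mathbb{E}|N(0,1)|=\sqrt{2/\pi}\approx 0.798$, which the grid estimator approaches from below as the grid is refined.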
The functional $M_H(T,a)$ plays an important role in the theory of Gaussian-driven queueing models [10-12,15,19,21,25]. More precisely, consider a single-node queue with infinite buffering capacity. Let $c$ be the service rate and $\{B_H(t)+dt,\,t\in\mathbb{R}_+\}$ be the input process, that is, the traffic that enters the buffer in the time interval $(s,t]$ equals $B_H(t)+dt-(B_H(s)+ds)$. We refer to [11,17,25] for the formal justification that an appropriately normalized input process modelled by a superposition of $N$ i.i.d. sources built on alternating 0-1 processes $\eta_i(t)$,
with regularly varying tail distributions with indices in $(1,2)$ of the alternating epoch times during which the traffic is received with intensity 0 or 1, respectively, weakly converges, as $N\to\infty$ and $T\to\infty$, to fractional Brownian motion with $H\in(1/2,1)$. Interestingly, for the same model, as $N\to\infty$ but $T\to 0$, the limiting process is always a fractional Brownian motion with $H=1$; see [11]. For any $T>0$ and $a:=c-d$, the buffer content process $\{Q(t),\,t\in\mathbb{R}_+\}$ satisfies the standard storage equation. Suppose now that $Q(0)=0$. Then, by time-reversibility of fractional Brownian motion,
$$Q(T)\stackrel{d}{=}\sup_{t\in[0,T]}\big(B_H(t)-at\big),\quad\text{so that}\quad \mathbb{E}\,Q(T)=M_H(T,a).$$
In this paper we continue our studies of the $H$-derivative of the expected supremum from [6], that is, we consider the derivative $\frac{\partial}{\partial H} M_H(T,a)$, focusing on the case $H=1$. More specifically, in Theorem 1, which presents the main result of this contribution, we derive a formula for this derivative at $H=1$. One of the motivations for our studies, which arose from the analysis of simulations of $M_H(T,a)$, is its behaviour for $H$ close to 1. There is some indication that, for sufficiently large $T$, $M_H(T,a)$ as a function of $H$ has a U-shape on some sub-interval of $(0,1)$. This gives one of the motivations to study the derivative at $H=1$.
This paper complements our previous work [6], where we focused on the case $H=\tfrac12$ and derived an explicit formula for the derivative at $H=\tfrac12$ in terms of $\dot\gamma(s,x) := \frac{\partial}{\partial s}\gamma(s,x)$, where $\gamma(s,x) := \int_0^x t^{s-1}e^{-t}\,dt$ is the lower incomplete gamma function; see [6, Theorem 1] and also [4, Corollary 4]. The values of the derivative at $H=\tfrac12$ in the cases $a=0$ and $T=\infty$ can be found by passing to the limit in that formula as $a\to 0$ and $T\to\infty$, respectively. See also [6, Corollary 1(i-ii)].
where $\mathrm{erf}(\cdot)$ and $\mathrm{erfc}(\cdot)$ denote the error and complementary error functions, respectively. This is a non-increasing function; however, note that this need not be true for $T>1$. Second, due to Borovkov et al. [7,8], with a recent improvement by Bisewski [4], a bound is available for sufficiently small $H$, which supports the conjecture that there exists a constant $C\in(0,\infty)$ for which the corresponding limit holds. We also refer to [5] for the analysis of $M_H(\infty,a)$ as a function of $H$, with $a>0$.
Organization of the paper: in Sect. 2 we derive some useful properties of fractional Brownian bridges that play an important role in the proof of the main result, which is given in Sect. 3. In Proposition 1 we establish a weak limit theorem in $C[0,1]$ for the fractional Brownian bridge as $H\uparrow 1$. The main result is given in Theorem 1. All proofs are postponed to Sect. 4.

Fractional Brownian bridge and its limit
In this section we derive some properties of fractional Brownian bridges that play a crucial role in the proof of the main result of this contribution. The main result of this section is a limit in distribution as $H\uparrow 1$.
It is noted that, applying the standard formula for the distribution of a multivariate Gaussian vector conditioned on the value of a given coordinate (see, e.g., the Introduction in [22]), one obtains the covariance of the fractional Brownian bridge (fBB), together with the corresponding equality in distribution. Analogously, the fBm pinned at $B_H(1)=x$, defined by conditioning, admits an analogous representation. When $H=1$, the fBB becomes a deterministic straight line from $(0,0)$ to $(1,0)$. However, it turns out that if we blow this process up by the factor $(1-H)^{-1/2}$, its distribution converges to a non-trivial limit as $H\uparrow 1$. More precisely, for every $H\in(0,1)$ let $X_H$ denote the fractional Brownian bridge scaled by $(1-H)^{-1/2}$. In the following, let $\{X(t),\,t\in[0,1]\}$ be a centred Gaussian bridge with $X(0)=X(1)=0$ and the covariance function (7), where we follow the convention that $0^2\log(0) := \lim_{t\to 0^+} t^2\log(t)=0$. It is also noted that (7) constitutes a covariance function, since the limit of positive definite functions is a positive definite function.
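The blow-up by $(1-H)^{-1/2}$ can be observed numerically: Gaussian conditioning on $B_H(1)=0$ gives the bridge covariance $r(t,s)-r(t,1)r(s,1)/r(1,1)$, with $r$ the fBm covariance, and dividing by $1-H$ stabilizes as $H\uparrow 1$. The sketch below uses only these standard formulas (function names are ours).

```python
import numpy as np

def fbm_r(u, v, H):
    """fBm covariance r(u, v) = (u^2H + v^2H - |u-v|^2H)/2."""
    return 0.5 * (u**(2 * H) + v**(2 * H) - abs(u - v)**(2 * H))

def bridge_cov(t, s, H):
    """Fractional Brownian bridge covariance, via Gaussian conditioning on B_H(1) = 0."""
    return fbm_r(t, s, H) - fbm_r(t, 1.0, H) * fbm_r(s, 1.0, H) / fbm_r(1.0, 1.0, H)

def scaled_bridge_cov(t, s, H):
    """Covariance of the bridge blown up by (1 - H)^(-1/2), i.e. bridge_cov / (1 - H)."""
    return bridge_cov(t, s, H) / (1.0 - H)
```

For fixed $(t,s)$, the scaled covariance changes very little between $H=1-10^{-3}$ and $H=1-10^{-5}$, in line with the non-trivial limit asserted in Proposition 1, while the unscaled bridge covariance collapses to 0.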

Proposition 1 The scaled fractional Brownian bridge $\{X_H(t),\,t\in[0,1]\}$ converges weakly in $C[0,1]$, as $H\uparrow 1$, to the process $\{X(t),\,t\in[0,1]\}$.
We postpone the proof of Proposition 1 to Sect. 4.

The main theorem
Before proceeding to the main result of this paper, we note that a simple time-reversal argument yields $M_H(T,-a)=aT+M_H(T,a)$. In the following, the standard normal probability density function is denoted by $\phi(\cdot)$.
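The time-reversal identity $M_H(T,-a)=aT+M_H(T,a)$ has an exact pathwise counterpart on any uniform grid: $\sup_t(B_H(t)+at)=aT+B_H(T)+\sup_s(\widetilde B(s)-as)$, where $\widetilde B(s)=B_H(T-s)-B_H(T)$ is the reversed path, which is again an fBm in distribution; taking expectations (with $\mathbb{E}B_H(T)=0$) gives the identity. A small numerical check of this algebra (grid size and parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, a, H = 100, 2.0, 0.7, 0.7
t = np.linspace(0.0, T, n + 1)
# sample one fBm path on the grid via Cholesky (t = 0 handled separately, B_H(0) = 0)
tt = t[1:]
C = 0.5 * (tt[:, None]**(2 * H) + tt[None, :]**(2 * H)
           - np.abs(tt[:, None] - tt[None, :])**(2 * H))
B = np.concatenate([[0.0],
                    np.linalg.cholesky(C + 1e-12 * np.eye(n)) @ rng.standard_normal(n)])

lhs = np.max(B + a * t)                       # sup_t (B_H(t) + a t)
B_rev = B[::-1] - B[-1]                       # reversed path: B_H(T - s) - B_H(T)
rhs = a * T + B[-1] + np.max(B_rev - a * t)   # pathwise time-reversal identity
assert abs(lhs - rhs) < 1e-10
```

The equality holds pathwise and exactly (up to floating point), for every realization, not just on average.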

Theorem 1 For any T > 0 and a ∈ R it holds that
and $\mathscr{I}\in(0,\infty)$.
The proof of Theorem 1 is postponed to Sect. 4. Note that $M_1(\infty,a)=\infty$ for all $a\in\mathbb{R}$ (see, e.g., [5]), and therefore the derivative of $M_H(\infty,a)$ at $H=1$ does not exist. Hence, intuitively, it is clear that one should expect the derivative at $H=1$ to be positive for sufficiently large $T$. Indeed, it follows straightforwardly from Theorem 1 that the criterion for the sign of the derivative at $H=1$ is whether $T$ is smaller or larger than $\exp(\mathscr{I})$.

Remark 1 It is noted that
Remark 2 Using the fact that the process $X(t)$ is time-reversible, it is easy to see that $\mathscr{I}$ admits an equivalent probabilistic representation. One can recognize that the function $z\mapsto \sup_{t\in[0,1]}\{X(t)-tz\}$ is the convex conjugate (or Legendre-Fenchel transform) of the random trajectory $t\mapsto X(t)$. While we were not able to calculate the theoretical value of $\mathscr{I}$, our numerical experiments strongly suggest that $\mathscr{I}\approx 0.95$.
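The conjugate map appearing in Remark 2 is straightforward to evaluate on a discretized trajectory; the sketch below (names and discretization are ours, and this is not the paper's estimator for $\mathscr{I}$) computes $z\mapsto\sup_{t\in[0,1]}\{x(t)-tz\}$ on a grid.

```python
import numpy as np

def conjugate_on_grid(t, x, z):
    """Evaluate z -> sup_t (x(t) - t z) for a trajectory x given on the grid t,
    i.e. the Legendre-Fenchel-type transform appearing in Remark 2."""
    return np.max(x[None, :] - np.outer(z, t), axis=1)
```

On deterministic test trajectories the transform is explicit: for $x\equiv 0$ on $[0,1]$ it equals $\max(0,-z)$, and for $x(t)=t$ it equals $\max(0,1-z)$; in a Monte Carlo study one would feed in simulated trajectories of $X$ instead.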
In this contribution we have analysed the first derivative of $M_H(T,a)$ at $H=1$. We suspect that the use of similar techniques can give some insight into derivatives of higher moments of $\sup_{t\in[0,T]} B_H(t)-at$ at $H=1$, which, however, looks to be more technically challenging. There is also some hope that the line of argumentation used in this contribution can be useful for the analysis of higher derivatives of $M_H(T,a)$ at $H=1$. Complementary results for $H=1/2$ have been obtained in [6], with the use of tools that work for processes close to Brownian motion (that is, around $H=1/2$) and hence are different from those applied in this paper. In our simulations, the half-widths of the 95% confidence intervals are at most 0.0020 and 0.013 in the cases $T=1$ and $T=5$, respectively. We observe that, on this scale, it is hard to distinguish between the numerical and the theoretical results.

Lemma 1 There exists C > 0 such that
for all $H\in(\tfrac12,1)$.

Proof of Lemma 1 Until the end of the proof, without loss of generality, we assume that $s<t$. Utilizing that, for any $t,s\in[0,1]$, by subtracting and adding $(t-s)^{2H}$ in (10), we obtain the bound (11). After applying the mean value theorem (with respect to $H$) to the first term, we obtain (12). Similarly, noting that $f_1(t,s)=2(t-s)$, we apply the mean value theorem to the second term in (11), with $f_H(t,s)$ as defined there. We may then apply the mean value theorem again (but now with respect to $s$), which yields (13). Finally, we have $\big|\frac{\partial}{\partial\theta}w_H(\theta)\big|\le 4$ for all $H\in[1/2,1)$ and, similarly, a bound of the form (14) on $\sup_{\theta\in[s,t]}|w_H(\theta)|$. Now, it is clear that there exists some $C>0$ such that $\sup_{x\in[0,1]} x^{2H-1}|\log(x)|\le C$ for all $H<1$ large enough. Therefore, using (14) and going back to (13), we obtain the required estimate; combined with (12) and (11), this yields the claim of the lemma.
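The elementary bound used at the end of the proof can be made explicit by calculus: on $(0,1]$ the function $x^{2H-1}|\log(x)|$ is maximized at $x^*=e^{-1/(2H-1)}$, giving the value $1/(e(2H-1))$, which is bounded uniformly for $H$ bounded away from $1/2$. A quick numerical confirmation (our own sketch):

```python
import numpy as np

def grid_max(H, n=200001):
    """Maximize x^(2H-1) |log x| over a dense grid in (0, 1]."""
    x = np.linspace(1e-9, 1.0, n)
    return float(np.max(x**(2 * H - 1) * np.abs(np.log(x))))

def analytic_max(H):
    """Setting the derivative to zero gives x* = exp(-1/(2H-1)),
    hence the maximum value 1/(e (2H-1))."""
    return 1.0 / (np.e * (2 * H - 1))
```

Note that the analytic maximum blows up as $H\downarrow 1/2$, which is why the uniform constant $C$ is only claimed for $H<1$ large enough.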

Proof of Proposition 1
First we show that the finite-dimensional distributions (fdds) of $X_H$ converge to those of $X$, which, in the case of centred Gaussian processes, is equivalent to the convergence of the covariance functions, i.e.

$$\mathrm{Cov}(X_H(t), X_H(s)) \to \mathrm{Cov}(X(t), X(s)), \quad H\uparrow 1, \qquad (15)$$

for all $t,s\in[0,1]$. Moreover, due to Lemma 1, the conditions of Theorem 12.3 from Billingsley [2] are satisfied (see also Eq. (12.51) immediately below Theorem 12.3), and therefore the family $X_H$ is tight in $C[0,1]$. This, together with the convergence of fdds demonstrated below, completes the proof of the weak convergence. To this end, we now show (15). Using a Taylor expansion (at $H=1$), for every fixed $t\in(0,1)$ we obtain the corresponding expansion as $H\uparrow 1$. Without loss of generality, assume that $0<s<t<1$; the proof in the case $s=t$ is analogous and slightly simpler. Using the notation for the function $g(t,s)$ introduced in (8), we obtain
$$\mathrm{Cov}(X_H(t), X_H(s)) = g(t,s) - t\,g(1,s) - s\,g(1,t) + o(1), \quad H\uparrow 1,$$
which concludes the proof.

Lemma 2
For every $\varepsilon\in(0,1)$ there exists a random variable $\kappa_{H,\varepsilon}$ which satisfies the stated bound for all $H\in(1-\tfrac{\varepsilon}{2},1)$. Moreover, for every $\varepsilon,p>0$ there exists a finite constant $K$ for which the corresponding moment bound holds.

Proof Using Lemma 1, we find that for $t,s\in[0,1]$ the stated estimate holds, where in the last line we used $H>1-\tfrac{\varepsilon}{2}$. Since $\sup_{t\in[0,1]} t^{\varepsilon}|\log(t)|<\infty$ for every $\varepsilon>0$, it follows that for every $\varepsilon>0$ there exists $C_\varepsilon$ such that the bound holds for all $H\in(1-\tfrac{\varepsilon}{2},1)$. The first part of Lemma 2 now follows from [1, Theorem 1]. The fact that $K$ can be chosen uniformly for all $H\in(1-\tfrac{\varepsilon}{2},1)$ is implicit in the proof of [1, Theorem 1]. In particular, it follows from the fact that the constant $C_\varepsilon$ above is chosen uniformly for all $H\in(1-\tfrac{\varepsilon}{2},1)$ and that the constant in the Garsia-Rodemich-Rumsey inequality used in the proof of [1, Theorem 1] depends only on $\varepsilon$.
Before presenting the proof of Theorem 1, we need one more technical result. In the following, let $r_H(z)$ be defined for all $H<1$ sufficiently large.
The supremum on the right-hand side above can be computed explicitly, which finally yields the upper bound (23) (in distribution) for all $H<1$ large enough. Now, the bounds in (22) and (23), as functions of the variable $z$, are integrable over $[-2,2]$ and $\mathbb{R}\setminus[-2,2]$, respectively, for all $p\ge 1$. Therefore, by combining them we obtain a dominating, integrable function. Using the convergence in (20) and the integrable majorant, by the Corollary from page 348 of [ ], we may pass to the limit. Finally, since $0\le r_H(z)\le (\phi(a)\sqrt{2\pi})^{-1}$ for $z\in\mathbb{R}$, we may apply the Lebesgue dominated convergence theorem to conclude the proof.