Abstract
We consider a Brownian motion (BM) \(x(\tau )\) and its maximal value \(x_{\max } = \max _{0 \le \tau \le t} x(\tau )\) on a fixed time interval [0, t]. We study functionals of the maximum of the BM of the form \(\mathcal{O}_{\max }(t)=\int _0^t\, V(x_{\max } - x(\tau )) {\mathrm {d}}\tau \), where V(x) is an arbitrary function, and develop various analytical tools to compute their statistical properties. These tools rely in particular on (i) a “counting paths” method and (ii) a path-integral approach. In particular, we focus on the case \(V(x) = \delta (x-r)\), with r a real parameter, which is relevant to the study of the density of near-extreme values of the BM (the so-called density of states), \(\rho (r,t)\), i.e., the local time that the BM spends at a given distance r from the maximum. We also provide a thorough analysis of the family of functionals \({T}_{\alpha }(t)=\int _0^t (x_{\max } - x(\tau ))^\alpha \, {{\mathrm {d}}}\tau \), corresponding to \(V(x) = x^\alpha \), with \(\alpha \) real. As \(\alpha \) is varied, \(T_\alpha (t)\) interpolates between several interesting observables. For instance, for \(\alpha =1\), \(T_{\alpha = 1}(t)\) is a random variable of the “area”, or “Airy”, type, while for \(\alpha =-1/2\) it corresponds to the maximum time spent by a ballistic particle through a Brownian random potential. For \(\alpha = -1\), it corresponds to the cost of the optimal algorithm, proposed by Odlyzko, to find the maximum of a discrete random walk. We revisit here, using tools of theoretical physics, the statistical properties of this algorithm, which had previously been studied by probabilistic methods. Finally, we extend our methods to constrained BM, including in particular the Brownian bridge, i.e., the Brownian motion starting and ending at the origin.
References
Chandrasekhar, S.: Stochastic problems in physics and astronomy. Rev. Mod. Phys. 15, 1 (1943)
Feller, W.: An Introduction to Probability Theory and its Applications. Wiley, New York (1968)
Hughes, B.: Random Walks and Random Environments. Clarendon Press, Oxford (1995)
Koshland, D.E.: Bacterial Chemotaxis as a Model Behavioral System. Raven, New York (1980)
Asmussen, S.: Applied Probability and Queues. Springer, New York (2003)
Kearney, M.J.: On a random area variable arising in discrete-time queues and compact directed percolation. J. Phys. A 37, 8421 (2004)
Kearney, M.J., Majumdar, S.N.: On the area under a continuous time Brownian motion till its first-passage time. J. Phys. A: Math. Gen. 38, 4097 (2005)
Majumdar, S.N.: Brownian functionals in physics and computer science. Curr. Sci. 89, 2076 (2005)
Majumdar, S.N.: Universal first-passage properties of discrete-time random walks and Lévy flights on a line: statistics of the global maximum and records. Physica A 389, 4299 (2010)
Williams, R.J.: Introduction to the Mathematics of Finance. AMS, Providence (2006)
Majumdar, S.N., Bouchaud, J.P.: Optimal time to sell a stock in the Black-Scholes model: comment on ‘Thou shalt buy and hold’, by A. Shiryaev, Z. Xu and X.Y. Zhou. Quant. Fin. 8, 753 (2008)
Comtet, A., Desbois, J., Texier, C.: Functionals of Brownian motion, localization and metric graphs. J. Phys. A 38, R341 (2005)
Yor, M.: Exponential Functionals of Brownian Motion and Related Topics. Springer, Berlin (2000)
Pitman, J.: The Distribution of Local Times of Brownian Bridge. Lecture Notes in Mathematics, vol. 1709, pp. 388–394. Springer, Berlin (1999)
Darling, D.A.: On the supremum of certain Gaussian processes. Ann. Probab. 11, 803 (1983)
Louchard, G.: Kac’s formula, Levy’s local time and Brownian excursion. J. Appl. Prob. 21, 479 (1984)
Flajolet, P., Poblete, P., Viola, A.: On the analysis of linear probing hashing. Algorithmica 22, 490 (1998)
Janson, S., Louchard, G.: Tail estimates for the Brownian excursion area and other Brownian areas. Electronic J. Probab. 12, 1600 (2007)
Majumdar, S.N., Comtet, A.: Exact maximal height distribution of fluctuating interfaces. Phys. Rev. Lett. 92, 225501 (2004)
Majumdar, S.N., Comtet, A.: Airy distribution function: from the area under a Brownian excursion to the maximal height of fluctuating interfaces. J. Stat. Phys. 119, 777 (2005)
Kessler, D.A., Medalion, S., Barkai, E.: The distribution of the area under a bessel excursion and its moments. J. Stat. Phys. 156, 686 (2014)
Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Pol. Econ. 81, 637 (1973)
Kesten, H., Kozlov, M.V., Spitzer, F.: A limit law for random walk in a random environment. Compos. Math. 30, 145 (1975)
Oshanin, G., Mogutov, A.: Steady flux in a continuous-space Sinai chain. J. Stat. Phys. 73, 379 (1993)
Monthus, C., Comtet, A.: On the flux distribution in a one dimensional disordered system. J. Phys. I (France) 4, 635 (1994)
Oshanin, G., Rosso, A., Schehr, G.: Anomalous fluctuations of currents in Sinai-type random chains with strongly correlated disorder. Phys. Rev. Lett. 110, 100602 (2013)
Kac, M.: On distributions of certain Wiener functionals. Trans. Am. Math. Soc. 65, 1 (1949)
Sabhapandit, S., Majumdar, S.N.: Density of near-extreme events. Phys. Rev. Lett. 98, 140201 (2007)
Perret, A., Comtet, A., Majumdar, S.N., Schehr, G.: Near-extreme statistics of Brownian motion. Phys. Rev. Lett. 111, 240601 (2013)
Odlyzko, A.M.: Search for the maximum of a random walk. Random Struct. Algor. 6, 275 (1995)
Hwang, H.K.: A constant arising from the analysis of algorithms for determining the maximum of a random walk. Random Struct. Algor. 10, 333 (1997)
Chassaing, P.: How many probes are needed to compute the maximum of a random walk? Stoch. Proc. Appl. 81, 129 (1999)
Chassaing, P., Marckert, J.F., Yor, M.: A stochastically quasi-optimal search algorithm for the maximum of the simple random walk. Ann. Appl. Probab. 13, 1264 (2003)
Vervaat, W.: A relation between Brownian bridge and Brownian excursion. Ann. Probab. 7, 143 (1979)
Biane, P., Yor, M.: Valeurs principales associées aux temps locaux browniens. Bull. Sci. Maths 111, 23 (1987)
Chassaing, P., Marckert, J.F., Yor, M.: The height and width of simple trees. In: Mathematics and Computer Science, pp. 17–30. Birkhäuser, Basel (2000)
Takács, L.: A Bernoulli excursion and its various applications. Adv. Appl. Prob. 23, 557 (1991)
Takács, L.: Limit distributions for the Bernoulli meander. J. Appl. Prob. 32, 375 (1995)
Takács, L.: Brownian local times. J. Appl. Math. Stoch. Anal. 8, 209 (1995)
Burkhardt, T.W., Györgyi, G., Moloney, N.R., Racz, Z.: Extreme statistics for time series: distribution of the maximum relative to the initial value. Phys. Rev. E 76(4), 041119 (2007)
Lévy, P.: Sur certains processus stochastiques homogènes. Compos. Math. 7, 283 (1940)
Krivine, H.: Exercices de mathématiques pour physiciens, corrigés et commentés. Cassini, Paris (2003)
Feller, W.: The asymptotic distribution of the range of sums of independent random variables. Ann. Math. Stat. 22, 427 (1951)
Kundu, A., Majumdar, S.N., Schehr, G.: Exact distributions of the number of distinct and common sites visited by N independent random walkers. Phys. Rev. Lett. 110, 220602 (2013)
Chung, K.L.: Excursions in Brownian motion. Ark. Mat. 14(2), 155 (1976)
Takács, L.: Limit theorems for random trees. Proc. Natl. Acad. Sci. USA 89(11), 5011 (1992)
Schehr, G., Majumdar, S.N., Comtet, A., Randon-Furling, J.: Exact distribution of the maximal height of p vicious walkers. Phys. Rev. Lett. 101, 150601 (2008)
Chassaing, P., Louchard, G.: Reflected Brownian bridge area conditioned on its local time at the origin. J. Algorithm 44(1), 29 (2002)
Bollobás, B.: Random Graphs. Academic Press, Boston (1985)
Gradshteyn, I.S., Ryzhik, I.M.: Tables of Integrals, Series, and Products, 6th edn. Academic Press, San Diego, CA (2000)
Landau, L.D., Lifshitz, E.M.: Quantum Mechanics: Non-Relativistic Theory. Pergamon, London (1981)
Devroye, L.: On exact simulation algorithms for some distributions related to Brownian motion and Brownian meanders. In: Recent Developments in Applied Probability and Statistics, vol. 1. Springer, Berlin (2010)
Williams, D.: Decomposing the Brownian path. B. Am. Math. Soc. 76, 871 (1970)
Imhof, J.P.: Density factorizations for Brownian motion, meander and the three-dimensional Bessel process, and applications. J. Appl. Probab. 21, 500 (1984)
Acknowledgments
We acknowledge support by the Indo-French Centre for the Promotion of Advanced Research under Project 4604-3. We acknowledge a useful correspondence with Philippe Chassaing.
Appendices
Appendix 1: Some Useful Functions
We introduce the family of functions \(\Phi ^{(j)}\), \(j \in {\mathbb N}\), which satisfy
These functions can be obtained explicitly by induction, using [48]
The first few of these functions are easily computed as
More generally, one can show [48] that they can be written in the form
where \(p_j(x)\) and \(q_j(x)\) are rational polynomials of degree \(j-2\) and \(j-1\), respectively, for \(j\ge 2\) [48]. We refer the interested reader to Ref. [48] for efficient algorithms, which can be implemented numerically, to compute these polynomials in a systematic way.
Appendix 2: Average DOS for Reflected Brownian Motion
Using the method based on propagators explained in Sect. 3.1 [see Eq. (27)], we can also compute the average DOS for the reflected Brownian motion \(x_{R}(\tau )\), which is the absolute value of the Brownian motion, \(x_R(\tau ) = |x(\tau )|\). The expression in (27), see also Fig. 4, indicates that we need the propagator of the reflected Brownian motion such that \(x_R(\tau ) \le M\), or equivalently \(-M \le x(\tau ) \le M\). We therefore compute the propagator of a Brownian particle confined in the interval \([-M,M]\) with absorbing boundary conditions at both \(x=-M\) and \(x=M\). Denoting by \(G_M^R(\alpha |\beta ,t)\) the propagator of such a particle starting at \(\alpha \) and ending, at time t, at \(\beta \), its Laplace transform with respect to t is given by
In order to compute the average DOS \(\langle \rho _R(r,t) \rangle \) for the reflected BM, we evaluate the “number” of Brownian trajectories satisfying the following constraints: the process reaches its maximum M or its minimum \(-M\) at time \(t_{{\mathrm {ext}}}\), passes through \(M-r\) or \(-M+r\) at time \(\tau \), and ends at \(x_F \in [-M,M]\) at time t. The total number of such trajectories is then obtained by integrating over \(x_F, M\) and \(t_{{\mathrm {ext}}}\). When dividing the time interval [0, t] into three parts delimited by \(\tau \) and \(t_{{\mathrm {ext}}}\), eight different cases may arise: \(\tau < t_{{\mathrm {ext}}}\) or \(\tau > t_{{\mathrm {ext}}}\), \(x( t_{{\mathrm {ext}}})=\pm M\) and \(x(\tau )=\pm (M-r)\). Using the invariance of the process under the reflection symmetry \(x \rightarrow - x\), we only have to consider four different cases (each with multiplicity 2):
where we have used the Markov property of BM and where \(Z_R(\epsilon )\) is the normalization constant (such that \(\int _0^\infty {\mathrm {d}}r \, \langle \rho _R(r,t)\rangle = t\))
The normalization is easily computed as \(Z_R(\varepsilon ) \sim 2 \varepsilon ^2\) as \(\varepsilon \rightarrow 0\). Using the same kind of calculation as in Sect. 3.1—exploiting the convolution structure of the integrals in Eq. (149)—we find, after some manipulations,
where \(\Phi ^{(2)}(x)\) is given in Eq. (146).
Similarly, we can study the DOS of the reflected Brownian bridge \(x_{RB}(\tau )\), which is the absolute value of a Brownian bridge, \(x_{RB}(\tau ) = |x_{BB}(\tau )|\). The calculation of the DOS in this case is very similar to that of the free reflected BM in (149), except that there is no integral over \(x_F\), which is fixed to \(x_F = 0\). Using time-reversal symmetry, we can show that the average DOS \(\langle \rho _{RB}(r,t) \rangle \) is given by
where \(Z_{RB}(\varepsilon )\) is the normalization constant, given by
The normalization is easily computed as \(Z_{RB}(\varepsilon ) \sim {2 \varepsilon ^2}/{(\sqrt{2 \pi t})}\) as \(\varepsilon \rightarrow 0\), and eventually the average DOS \(\langle \rho _{RB}(r,t) \rangle \) is obtained as:
where \(\Phi ^{(1)}(x)\) is given in Eq. (145).
Appendix 3: Odlyzko’s Algorithm
1.1 Main Ideas Behind Odlyzko’s Algorithm
To get familiar with this algorithm, it is useful to consider a simpler search algorithm, denoted by u, belonging to \(A_n\) (the ensemble of algorithms that find the maximum \(M_n\) of a random walk of n steps), which proceeds as follows: u always probes the random walk at the step where the upper envelope of the (still) possible trajectories reaches its maximum. This algorithm is based on the observation, illustrated in Fig. 11, that if \(X_m\) and \(X_{m+k}\) have been probed, then the searcher knows for sure that, between step m and step \(m+k\), the position of the random walker cannot exceed \((X_{m}+X_{m+k}+k)/2\). This can be shown as follows. Let us denote by \(n_+\) the number of up-steps (\(+1\)) and by \(n_-\) the number of down-steps (\(-1\)) between step m and step \(m+k\). Then \(n_+\) and \(n_-\) satisfy the equations
Hence one has
Therefore the position of the random walker cannot exceed \(X_{m} + n_+ = (X_{m}+X_{m+k}+k)/2\), as shown in Fig. 11. This simple algorithm is illustrated in Fig. 12 on a realization of the RW with \(n=14\) steps. This basic idea is at the heart of the algorithm proposed by Odlyzko.
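As a minimal check of this counting argument (the function name and the example values are ours, purely for illustration), solving \(n_+ + n_- = k\) and \(n_+ - n_- = X_{m+k} - X_m\) gives \(n_+ = (k + X_{m+k} - X_m)/2\), hence the envelope bound:

```python
def envelope_max(x_m, x_mk, k):
    """Upper bound on the walk between two probed steps m and m+k.

    From n_+ + n_- = k and n_+ - n_- = x_mk - x_m one gets
    n_+ = (k + x_mk - x_m) / 2, so the walker cannot exceed
    x_m + n_+ = (x_m + x_mk + k) / 2.
    Integer division is exact: x_mk - x_m and k have the same parity.
    """
    return (x_m + x_mk + k) // 2

# Example: probes X_m = 2 and X_{m+k} = 0 with k = 4 steps in between
# give the bound (2 + 0 + 4)/2 = 3.
print(envelope_max(2, 0, 4))  # 3
```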
Here we also briefly explain how this particular functional of the maximum, I in (4), arises in the analysis of this optimal algorithm, following the line of reasoning of [32, 33]. To understand this, consider a traveler moving on a line, whose position is denoted by y. Suppose that his velocity v(y) at position y is bounded by some function z(y), such that \(0<v(y) \le z(y)\). Then the time t needed to reach the point x, starting from the origin, satisfies the bound
Now let us consider an algorithm a of cost C(a), and denote by \({m_1, \ldots , m_{C(a)}}\) the steps at which the RW has been probed by the searcher, who has eventually found the maximum \(M_n\) after C(a) probes. To be sure that the maximum is not in the interval \([m_i, m_{i+1}]\), the potential maximum of the RW between these two steps, which is \((X_{m_i} + X_{m_{i+1}} + m_{i+1} - m_{i})/2\) (see Fig. 11), must be smaller than \(M_n\) (by definition of the maximum). Hence this yields the following inequality
Notice that \(m_{i+1} - m_i\) can be seen as the velocity \(v(m_i)\) of the algorithm at point \(m_i\). One can further argue [30], using the fact that most RWs are “slowly varying” [see Eq. (163) below], that \(2 M_n - X_{m_i} - X_{m_{i+1}} \sim 2(M_n - X_{m_i})\) when n is large. Hence
can be viewed as the speed limit at step k of the random walk. Finally, by analogy with (159), C(a) satisfies
which in the continuum limit yields the functional of the maximum I in Eq. (4). It is rather clear that these heuristic arguments leading to Eq. (162) can be straightforwardly extended to the case of the random walk bridge, \(X_{i,B}\), which is a RW conditioned to start and end at the origin, \(X_{0,B} = X_{n,B} = 0\). Of course, in this case the maximum \(M_n\) in (162) is replaced by the maximum of the bridge, \(M_{n,B} = \max _{1\le i \le n} X_{i,B}\).
1.2 Description of Odlyzko’s Algorithm
Here we describe in more detail Odlyzko’s algorithm, which finds the maximum of a random walk \(X_{i+1} = X_i \pm 1\) with equal probability 1/2 (starting from \(X_0=0\)). Let c be a sufficiently large positive real number. The algorithm is essentially based on the fact that most RWs have “slow variations” (SV), i.e., they satisfy the inequality [30]:
Indeed, if c is large enough, the probability that a realization of the RW does not satisfy the SV property (163) decays as \(n^{-1}\). This statement can easily be shown, as in [30, 33], by using the fact that, for fixed j, \(\mathrm{Pr} (|X_j| > x) \le 2 \exp (-x^2/(2j))\) [the so-called Chernoff bound, see [49], p. 12]. Although the realizations of the RW that do not satisfy (163) require a large number of probes, \(\sim n\), their contribution to the average cost of the algorithm turns out to be negligible, as they occur with a very small probability \(\propto 1/n\). On the other hand, as we shall see below, it is relatively easy to find the maximum of a RW which satisfies the SV property.
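The SV property (163) can be tested directly on a sampled path. The following brute-force check (an \(\mathcal{O}(n^2)\) sketch; the function name, the value of n, and the two test walks are our illustrative choices) contrasts a bounded zigzag path, which trivially satisfies SV, with a ballistic path, which violates it:

```python
import math

def has_slow_variations(X, c):
    """Brute-force check of the SV property (163):
    |X_{i+j} - X_i| <= c * sqrt(j * log n) for all 1 <= i+j <= n."""
    n = len(X) - 1
    for i in range(n):
        for j in range(1, n - i + 1):
            if abs(X[i + j] - X[i]) > c * math.sqrt(j * math.log(n)):
                return False
    return True

n = 200
zigzag = [i % 2 for i in range(n + 1)]    # bounded walk: SV holds
ballistic = list(range(n + 1))            # straight-up walk: SV fails
print(has_slow_variations(zigzag, c=1.0))     # True
print(has_slow_variations(ballistic, c=1.0))  # False
```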
The algorithm proposed by Odlyzko consists in two steps:
-
In a first stage, one searches for a good estimate \(M^*\) of \(M_n\). This is done by probing \(X_N\), \(X_{2N}\), \(X_{3N},\ldots \), where \(N=\lfloor \sqrt{n} \log n \rfloor \) and \(\lfloor x \rfloor \) denotes the largest integer not larger than x. If the algorithm finds, here or later, a violation of the SV inequality (163), one probes all the positions of the RW (but this happens very rarely). We denote \(M'=\max \{X_0,X_N,X_{2N},X_{3N},\ldots \}\le M_n\). If the RW satisfies SV (163), then
$$\begin{aligned} M_n-M' \le c \sqrt{N \log n} = c n^{1/4} \log n. \end{aligned}$$(164)Indeed, if the maximum is reached at a step lying between \(k_{\max } N\) and \((k_{\max }+1) N\), then \(M_n - \max (X_{k_{\max } N},X_{(k_{\max }+1) N}) \le c \sqrt{N \log n}\), which follows from (163) and implies (164) since \(M' \ge \max (X_{k_{\max } N},X_{(k_{\max }+1) N})\). As we discuss below, this estimate \(M'\) of \(M_n\) (164) is however not precise enough for the subsequent steps of the algorithm. It is indeed necessary to scan the neighborhood of the large \(X_{rN}\)’s on a finer window. If for some integer r one finds
$$\begin{aligned} X_{rN} \ge M' - c n^{1/4} \log n, \end{aligned}$$(165)we probe \(X_{rN \pm j K}\), \(j=1,2,\ldots, \lfloor N/K \rfloor \), with \(K=\lfloor n^{1/4}\rfloor \). If the RW has SV, then any k with \(X_k=M_n\) must be close to rN for some r for which (165) holds. We now denote by \(M^*\) the maximum of all probes made so far. Because we scan with intervals \(\le n^{1/4} \log n\) around the maximum, the SV inequality (163) gives
$$\begin{aligned} 0 \le M_n-M^* \le c \sqrt{n^{1/4} \log ^2 n} \le n^{1/6}. \end{aligned}$$(166)One can prove [30] that the average cost of this first phase of the algorithm is of order \(\mathcal{O}(\sqrt{n}/\log n)\), negligible compared to the cost of the second phase, which we now describe and which is of order \(\mathcal{O}(\sqrt{n})\).
-
With this estimate \(M^*\) of the actual maximum \(M_n\), the second phase will eventually find \(M_n\) in a number of probes of order \(\mathcal{O}(\sqrt{n})\), which is the leading contribution to the cost of this algorithm. To do this, we scan the sample path from left to right as follows. We introduce m, the index of the RW position \(X_m\) currently probed by the algorithm. We start with \(m=0\) and denote by \(M^\#\) the greatest position probed so far by the algorithm, including \(M^*\). At each step of this phase, two cases may occur:
-
(i)
If \(M^{\#}-X_m \le n^{1/6}\), the algorithm will probe the right neighbor of \(X_m\) and m is incremented by 1, \(m \rightarrow m+1\).
-
(ii)
If \(M^{\#}-X_m > n^{1/6}\), the algorithm is still far from the maximum, because we know that \(M_n-M^{\#}\le n^{1/6}\). In this case, the immediate vicinity of \(X_m\) does not need to be explored, and the strategy is to jump from \(X_{m}\) to \(X_{m+k}\), where k is still to be determined. In order to be sure that the RW does not exceed \(M^\#\) between m and \(m+k\), we must take into account the upper envelope of the RW on the interval \([m, m+k]\) (see Fig. 11). Hence we impose the following bound
$$\begin{aligned} k \le 2 (M^\#-X_m) +(X_{m}-X_{m+k}). \end{aligned}$$(167)The first term on the right-hand side of this inequality (167), \(2 (M^\#-X_m)\), is larger than \(2 n^{1/6}\), while the second term, \(X_{m}-X_{m+k}\), is bounded by \(c \sqrt{k \log n}\), thanks to SV (163)—as stated above, if \(X_{m+k}-X_m\) does not satisfy the SV inequality (163), we abort this approach and probe every position. Hence we can choose k slightly smaller than \(2(M^\#-X_m)\). If \(m+k >n\), we probe \(X_n\) and stop. Once the full path has been scanned, the maximum \(M_n\) of the RW has been found by the algorithm.
For a RW which satisfies SV (163), one can show [30] that the major contribution to the cost of the algorithm comes from the probes with \(M^{\#}-X_m > n^{1/6}\). Indeed, the contribution of the probes of type (i) to the cost of the algorithm is of order \(\mathcal{O}(n^{1/3})\). More generally, if the estimate \(M^*\) of \(M_n\) is such that \(M_n - M^* < n^{\alpha }\), the cost of these contributions is of order \(\mathcal{O}(n^{2\alpha })\). Requiring this cost to be smaller than that of the last phase, which is of order \(\mathcal{O}(\sqrt{n})\), imposes \(2\alpha < 1/2\), for instance \(2\alpha = 1/3\), hence the choice \(\alpha = 1/6\) made by Odlyzko [30] [see Eq. (166)]. The step size k is slightly smaller than \(2(M^\#-X_m)\), and a single probe controls the k positions between m and \(m+k\). Since k can be interpreted as the velocity of the algorithm [see Eq. (159)], the average cost of the algorithm is, at leading order when n goes to infinity, \(\langle C(\mathrm{Od})\rangle \) given by
$$\begin{aligned} \langle C(\mathrm{Od})\rangle =\frac{1}{2} \left\langle \sum _{i=0}^{n} \frac{1}{M_n-X_i+1}\right\rangle , \end{aligned}$$(168)where we recall that \(\langle \cdots \rangle \) denotes an average over the different realizations of the RW \(X_i\). When n goes to infinity, the RW converges to a BM and
$$\begin{aligned} \frac{C(\mathrm{Od})}{\sqrt{n}} \underset{n \rightarrow \infty }{\rightarrow } I = \frac{1}{2} \int _0^1 \frac{{\mathrm {d}}\tau }{ x_{\max }-x(\tau )}, \end{aligned}$$(169)as described in the text in (4).
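As a rough numerical illustration of the discrete cost formula (168), one can average it over sampled \(\pm 1\) walks (a Monte Carlo sketch; the function name, seed, and sample sizes are our choices, and the printed ratio is only a crude finite-n estimate of the limiting constant):

```python
import math
import random

def odlyzko_cost(X):
    """Leading-order cost (168): (1/2) * sum_i 1/(M_n - X_i + 1)."""
    M = max(X)
    return 0.5 * sum(1.0 / (M - x + 1) for x in X)

random.seed(1)
n, samples = 10_000, 50
total = 0.0
for _ in range(samples):
    X, pos = [0], 0
    for _ in range(n):
        pos += random.choice((-1, 1))
        X.append(pos)
    total += odlyzko_cost(X)

# The ratio <C(Od)>/sqrt(n) should approach an order-one constant.
print(total / samples / math.sqrt(n))
```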
1.3 Odlyzko’s Algorithm for the Bridge
It is easy to check that the arguments presented above can be transposed to the case of a random walk bridge. In particular, given that the bridge is pinned at both extremities, \(X_{0,B} = X_{n,B}=0\), its variations are typically smaller than those of the free walk, and hence the property of “slow variations” (163), which plays a crucial role in this algorithm, follows naturally. Therefore we conjecture that Odlyzko’s algorithm would also be the optimal one to find the maximum \(M_{n,B}\), and that its cost would be given by \((1/2) T_{\alpha =-1}^B(t)\) in Eq. (13).
Appendix 4: Some Useful Integrals Involving Confluent Hypergeometric Functions Relevant for the Case \(V(x) = 1/x\)
1.1 An Integral Involving a Single Confluent Hypergeometric Function
For the analysis of the functional \(T_{\alpha = -1}(t)\) [see Eq. (127)], a useful integral involving the confluent hypergeometric function U(a, 2, z) is the following (see [50] as well as Mathematica):
where \(\mathrm{csc}(x) = 1/\sin {x}\) and H(x) is the harmonic number, \(H(x) = \psi (x) + \gamma _E\), where \(\psi (x) = \Gamma '(x)/\Gamma (x)\) is the digamma function and \(\gamma _E\) the Euler constant. The function H(x) admits the following series expansion
where \(\zeta (x)\) is the Riemann zeta function. Combining (170) with (172), one arrives straightforwardly at the formula given in Eq. (128) in the text.
1.2 An Integral Involving the Product of Two Confluent Hypergeometric Functions
To compute the amplitudes \(c_E\) such that the functions \(\phi _E(x)\) in (138) with \(d_E =0\) satisfy the orthogonality condition in Eq. (139) we used the following relation, derived by Landau and Lifshitz [51] (see formula (f.9) in Appendix f):
where \(\,_2F_1(\alpha ,\alpha ',\gamma ,z)\) is the Gauss hypergeometric function. Integrals such as (173) arise naturally in the study of certain matrix elements of quantum Hamiltonians involving Coulomb interactions. In our case (138), one has \(\alpha = 1- i s/\sqrt{E}\), \(\alpha ' = 1- i s/\sqrt{E'}\), \(\gamma = \gamma ' = 2\), \(k = 2i\sqrt{2E}\), \(k' = 2i\sqrt{2E'}\) and \(\lambda = 2\sqrt{2 s}\). Hence the desired formula (139) can be obtained by differentiating (173) once with respect to \(\lambda \) and carefully analyzing the limit \(k \rightarrow k'\) of the resulting formula. These somewhat cumbersome manipulations yield the expression for \(c_E\) given in (140).
Appendix 5: Numerical Simulations of Constrained Brownian Motion
In this appendix, we describe the algorithms that we have used here to simulate various constrained Brownian motions (Fig. 13). We refer the interested reader to [52] for an extended discussion of these algorithms.
1.1 Brownian Motion
In order to simulate a Brownian motion \(x(\tau )\), we consider the discrete random walk
where the \(\eta _i\)’s are independent and identically distributed standard Gaussian variables of unit variance. When N goes to infinity, \(X_{[\tau N]} \rightarrow x(\tau )\), where \(x(\tau )\) is a Brownian motion on \(\tau \in [0,1]\): \(\dot{x}(\tau )= \zeta (\tau )\), with \(\zeta (\tau )\) a Gaussian white noise, \(\langle \zeta (\tau ) \zeta (\tau ') \rangle = \delta (\tau -\tau ')\). The random walk (174) is the building block used to simulate the different constrained Brownian motions.
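A minimal NumPy sketch of the walk (174) follows; the function name and the diffusive rescaling by \(1/\sqrt{N}\) (so that the path lives on \(\tau \in [0,1]\)) are our conventions:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_motion(N):
    """Discrete walk (174): X_k = eta_1 + ... + eta_k with i.i.d. standard
    Gaussian steps, rescaled by 1/sqrt(N) so that X_[tau*N] approximates
    x(tau) for tau in [0, 1]."""
    steps = rng.standard_normal(N)
    return np.concatenate(([0.0], np.cumsum(steps))) / np.sqrt(N)

x = brownian_motion(10_000)  # x[int(tau * N)] ~ x(tau)
print(x[0], len(x))          # starts at 0, N + 1 points
```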
1.2 Brownian Bridge
For a Brownian bridge \(x_B(\tau )\), which is a Brownian motion starting and ending at the origin, \(x_B(0)=x_B(1)=0\), we use the identity (in law) \(x_B(\tau ) = x(\tau )-\tau x(1)\) and simulate
where \(X_i\)’s are generated by (174). One can show that \(Y_{[\tau N]}\) converges to a Brownian bridge \(x_B(\tau )\).
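A self-contained sketch of this construction (the helper name and the \(1/\sqrt{N}\) rescaling are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(N):
    """Bridge via (175): build X_k as a cumulative sum of Gaussian steps
    (174), then pin both ends with Y_k = X_k - (k/N) X_N."""
    x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(N)))) / np.sqrt(N)
    k = np.arange(N + 1)
    return x - (k / N) * x[-1]

y = brownian_bridge(10_000)
print(y[0], y[-1])  # both endpoints pinned to 0
```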
1.3 Brownian Excursion
For a Brownian excursion \(x_E(\tau )\), which is a Brownian motion that starts and ends at the origin, \(x_E(0)=x_E(1)=0\), and stays positive on the interval [0, 1], we use the identity (in law) \(x_E(\tau ) = \sqrt{[x_{B,1}(\tau )]^2 + [x_{B,2}(\tau )]^2+[x_{B,3}(\tau )]^2}\), where \(x_{B,1}, x_{B,2}\) and \(x_{B,3}\) are three independent Brownian bridges [53, 54]. Hence we simulate
where \(Y_{1,i},Y_{2,i}\) and \(Y_{3,i}\) are three independent realizations of (175). \(E_{[\tau N]}\) converges to a Brownian excursion \(x_E(\tau )\).
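A self-contained sketch of this three-bridge construction (helper names are ours; each block rebuilds the bridge of (175) so it can run on its own):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(N):
    """Bridge via (175): Gaussian-step walk pinned at both ends."""
    x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(N)))) / np.sqrt(N)
    return x - (np.arange(N + 1) / N) * x[-1]

def brownian_excursion(N):
    """Excursion as the Euclidean norm of three independent bridges."""
    b1, b2, b3 = (brownian_bridge(N) for _ in range(3))
    return np.sqrt(b1**2 + b2**2 + b3**2)

e = brownian_excursion(10_000)
print(e[0], e[-1], bool((e >= 0).all()))  # pinned at 0, nonnegative
```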
1.4 Brownian Meander
For a Brownian meander \(x_{Me}(\tau )\), a Brownian motion which starts at the origin and stays positive on [0, 1], one can show that the PDF of its final position \(x_F > 0\) at time 1 is \(p(x_F) = x_F e^{-x_F^2/2}\). One can then use the following representation of the meander ending at \(x_F\) [53, 54]: \(x_{Me}(\tau ) = \sqrt{[x_{B,1}(\tau )]^2 + [x_{B,2}(\tau )]^2+[x_{B,3}(\tau )+\tau \, x_F]^2}\), where \(x_{{B},1},x_{{B},2}\) and \(x_{{B},3}\) are three independent Brownian bridges and \(x_F\) is a random variable drawn from \(p(x_F) = x_F e^{-x_F^2/2}\). Hence the Brownian meander \(x_{Me}(\tau )\) can be generated numerically as
where \(Y_1,Y_2\) and \(Y_3\) are three independent realizations of (175), and \(f>0\) is a random variable whose PDF is given by \(p(f) = f e^{-f^2/2}\). \(M_{[\tau N]}\) converges to a Brownian meander \(x_{Me}(\tau )\).
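A self-contained sketch of the meander construction (helper names are ours; note that \(p(f) = f e^{-f^2/2}\) is the standard Rayleigh distribution, which NumPy can sample directly):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(N):
    """Bridge via (175): Gaussian-step walk pinned at both ends."""
    x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(N)))) / np.sqrt(N)
    return x - (np.arange(N + 1) / N) * x[-1]

def brownian_meander(N):
    """Meander from three bridges and a Rayleigh endpoint:
    sqrt(b1^2 + b2^2 + (b3 + tau * x_F)^2), with x_F ~ f * exp(-f^2/2)."""
    b1, b2, b3 = (brownian_bridge(N) for _ in range(3))
    x_f = rng.rayleigh(1.0)  # samples the PDF p(f) = f * exp(-f^2/2)
    tau = np.arange(N + 1) / N
    return np.sqrt(b1**2 + b2**2 + (b3 + tau * x_f)**2)

m = brownian_meander(10_000)
print(m[0], bool((m >= 0).all()))  # starts at 0, stays nonnegative
```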
Perret, A., Comtet, A., Majumdar, S.N. et al. On Certain Functionals of the Maximum of Brownian Motion and Their Applications. J Stat Phys 161, 1112–1154 (2015). https://doi.org/10.1007/s10955-015-1377-8