
On Certain Functionals of the Maximum of Brownian Motion and Their Applications

  • Published in: Journal of Statistical Physics

Abstract

We consider a Brownian motion (BM) \(x(\tau )\) and its maximal value \(x_{\max } = \max _{0 \le \tau \le t} x(\tau )\) on a fixed time interval [0, t]. We study functionals of the maximum of the BM of the form \(\mathcal{O}_{\max }(t)=\int _0^t\, V(x_{\max } - x(\tau )) {\mathrm {d}}\tau \), where V(x) is an arbitrary function, and develop various analytical tools to compute their statistical properties. These tools rely in particular on (i) a “counting paths” method and (ii) a path-integral approach. In particular, we focus on the case \(V(x) = \delta (x-r)\), with r a real parameter, which is relevant to the study of the density of near-extreme values of the BM (the so-called density of states), \(\rho (r,t)\), which is the local time of the BM spent at a given distance r from the maximum. We also provide a thorough analysis of the family of functionals \({T}_{\alpha }(t)=\int _0^t (x_{\max } - x(\tau ))^\alpha \, {{\mathrm {d}}}\tau \), corresponding to \(V(x) = x^\alpha \), with \(\alpha \) real. As \(\alpha \) is varied, \(T_\alpha (t)\) interpolates between different interesting observables. For instance, for \(\alpha =1\), \(T_{\alpha = 1}(t)\) is a random variable of the “area”, or “Airy”, type, while for \(\alpha =-1/2\) it corresponds to the maximum time spent by a ballistic particle through a Brownian random potential. On the other hand, for \(\alpha = -1\), it corresponds to the cost of the optimal algorithm to find the maximum of a discrete random walk, proposed by Odlyzko. We revisit here, using tools of theoretical physics, the statistical properties of this algorithm, which had previously been studied by probabilistic methods. Finally, we extend our methods to constrained BM, including in particular the Brownian bridge, i.e., the Brownian motion starting and ending at the origin.
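
As an aside not in the original text, the simplest of these observables can be checked by direct simulation: for \(\alpha = 1\) one has \(\langle T_1(t)\rangle = t\,\langle x_{\max }\rangle = t^{3/2}\sqrt{2/\pi }\). A minimal Monte Carlo sketch (all names are ours; the time discretization slightly underestimates the true maximum):

```python
import math
import random

def simulate_T_alpha(alpha, t=1.0, n_steps=500, n_samples=4000, seed=1):
    """Monte Carlo estimate of <T_alpha(t)> = <int_0^t (x_max - x(tau))^alpha dtau>
    for a Brownian motion sampled on n_steps time steps."""
    random.seed(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_samples):
        x, xmax, path = 0.0, 0.0, []
        for _ in range(n_steps):
            x += random.gauss(0.0, math.sqrt(dt))
            path.append(x)
            if x > xmax:
                xmax = x
        # discretized integral of (x_max - x)^alpha; the point x = x_max is
        # skipped, which only matters for negative alpha
        total += sum((xmax - y) ** alpha for y in path if y < xmax) * dt
    return total / n_samples

est = simulate_T_alpha(1.0)
print(est)  # should be close to sqrt(2/pi) ~ 0.798
```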


References

  1. Chandrasekhar, S.: Stochastic problems in physics and astronomy. Rev. Mod. Phys. 15, 1 (1943)

  2. Feller, W.: An Introduction to Probability Theory and its Applications. Wiley, New York (1968)

  3. Hughes, B.: Random Walks and Random Environments. Clarendon Press, Oxford (1995)

  4. Koshland, D.E.: Bacterial Chemotaxis as a Model Behavioral System. Raven, New York (1980)

  5. Asmussen, S.: Applied Probability and Queues. Springer, New York (2003)

  6. Kearney, M.J.: On a random area variable arising in discrete-time queues and compact directed percolation. J. Phys. A 37, 8421 (2004)

  7. Kearney, M.J., Majumdar, S.N.: On the area under a continuous time Brownian motion till its first-passage time. J. Phys. A: Math. Gen. 38, 4097 (2005)

  8. Majumdar, S.N.: Brownian functionals in physics and computer science. Curr. Sci. 89, 2076 (2005)

  9. Majumdar, S.N.: Universal first-passage properties of discrete-time random walks and Lévy flights on a line: statistics of the global maximum and records. Physica A 389, 4299 (2010)

  10. Williams, R.J.: Introduction to the Mathematics of Finance. AMS, Providence (2006)

  11. Majumdar, S.N., Bouchaud, J.P.: Optimal time to sell a stock in the Black-Scholes model: comment on “Thou shalt buy and hold”, by A. Shiryaev, Z. Xu and X.Y. Zhou. Quant. Fin. 8, 753 (2008)

  12. Comtet, A., Desbois, J., Texier, C.: Functionals of Brownian motion, localization and metric graphs. J. Phys. A 38, R341 (2005)

  13. Yor, M.: Exponential Functionals of Brownian Motion and Related Topics. Springer, Berlin (2000)

  14. Pitman, J.: The Distribution of Local Times of Brownian Bridge. Lecture Notes in Mathematics, vol. 1709, pp. 388–394. Springer, Berlin (1999)

  15. Darling, D.A.: On the supremum of certain Gaussian processes. Ann. Probab. 11, 803 (1983)

  16. Louchard, G.: Kac’s formula, Levy’s local time and Brownian excursion. J. Appl. Prob. 21, 479 (1984)

  17. Flajolet, P., Poblete, P., Viola, A.: On the analysis of linear probing hashing. Algorithmica 22, 490 (1998)

  18. Janson, S., Louchard, G.: Tail estimates for the Brownian excursion area and other Brownian areas. Electronic J. Probab. 12, 1600 (2007)

  19. Majumdar, S.N., Comtet, A.: Exact maximal height distribution of fluctuating interfaces. Phys. Rev. Lett. 92, 225501 (2004)

  20. Majumdar, S.N., Comtet, A.: Airy distribution function: from the area under a Brownian excursion to the maximal height of fluctuating interfaces. J. Stat. Phys. 119, 777 (2005)

  21. Kessler, D.A., Medalion, S., Barkai, E.: The distribution of the area under a Bessel excursion and its moments. J. Stat. Phys. 156, 686 (2014)

  22. Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Pol. Econ. 81, 637 (1973)

  23. Kesten, H., Kozlov, M.V., Spitzer, F.: A limit law for random walk in a random environment. Compos. Math. 30, 145 (1975)

  24. Oshanin, G., Mogutov, A.: Steady flux in a continuous-space Sinai chain. J. Stat. Phys. 73, 379 (1993)

  25. Monthus, C., Comtet, A.: On the flux distribution in a one dimensional disordered system. J. Phys. I (France) 4, 635 (1994)

  26. Oshanin, G., Rosso, A., Schehr, G.: Anomalous fluctuations of currents in Sinai-type random chains with strongly correlated disorder. Phys. Rev. Lett. 110, 100602 (2013)

  27. Kac, M.: On distributions of certain Wiener functionals. Trans. Am. Math. Soc. 65, 1 (1949)

  28. Sabhapandit, S., Majumdar, S.N.: Density of near-extreme events. Phys. Rev. Lett. 98, 140201 (2007)

  29. Perret, A., Comtet, A., Majumdar, S.N., Schehr, G.: Near-extreme statistics of Brownian motion. Phys. Rev. Lett. 111, 240601 (2013)

  30. Odlyzko, A.M.: Search for the maximum of a random walk. Random Struct. Algor. 6, 275 (1995)

  31. Hwang, H.K.: A constant arising from the analysis of algorithms for determining the maximum of a random walk. Random Struct. Algor. 10, 333 (1997)

  32. Chassaing, P.: How many probes are needed to compute the maximum of a random walk? Stoch. Proc. Appl. 81, 129 (1999)

  33. Chassaing, P., Marckert, J.F., Yor, M.: A stochastically quasi-optimal search algorithm for the maximum of the simple random walk. Ann. Appl. Probab. 13, 1264 (2003)

  34. Vervaat, W.: A relation between Brownian bridge and Brownian excursion. Ann. Probab. 7, 143 (1979)

  35. Biane, P., Yor, M.: Valeurs principales associées aux temps locaux browniens. Bull. Sci. Math. 111, 23 (1987)

  36. Chassaing, P., Marckert, J.F., Yor, M.: The height and width of simple trees. In: Mathematics and Computer Science, pp. 17–30. Birkhäuser, Basel (2000)

  37. Takács, L.: A Bernoulli excursion and its various applications. Adv. Appl. Prob. 23, 557 (1991)

  38. Takács, L.: Limit distributions for the Bernoulli meander. J. Appl. Prob. 32, 375 (1995)

  39. Takács, L.: Brownian local times. J. Appl. Math. Stoch. Anal. 8, 209 (1995)

  40. Burkhardt, T.W., Györgyi, G., Moloney, N.R., Racz, Z.: Extreme statistics for time series: distribution of the maximum relative to the initial value. Phys. Rev. E 76(4), 041119 (2007)

  41. Lévy, P.: Sur certains processus stochastiques homogènes. Compos. Math. 7, 283 (1940)

  42. Krivine, H.: Exercices de mathématiques pour physiciens, corrigés et commentés. Cassini, Paris (2003)

  43. Feller, W.: The asymptotic distribution of the range of sums of independent random variables. Ann. Math. Stat. 22, 427 (1951)

  44. Kundu, A., Majumdar, S.N., Schehr, G.: Exact distributions of the number of distinct and common sites visited by N independent random walkers. Phys. Rev. Lett. 110, 220602 (2013)

  45. Chung, K.L.: Excursions in Brownian motion. Ark. Mat. 14(2), 155 (1976)

  46. Takács, L.: Limit theorems for random trees. Proc. Natl. Acad. Sci. USA 89(11), 5011 (1992)

  47. Schehr, G., Majumdar, S.N., Comtet, A., Randon-Furling, J.: Exact distribution of the maximal height of p vicious walkers. Phys. Rev. Lett. 101, 150601 (2008)

  48. Chassaing, P., Louchard, G.: Reflected Brownian bridge area conditioned on its local time at the origin. J. Algorithm 44(1), 29 (2002)

  49. Bollobás, B.: Random Graphs. Academic Press, Boston (1985)

  50. Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, 6th edn. Academic Press, San Diego, CA (2000)

  51. Landau, L.D., Lifshitz, E.M.: Quantum Mechanics: Non-Relativistic Theory. Pergamon, London (1981)

  52. Devroye, L.: On exact simulation algorithms for some distributions related to Brownian motion and Brownian meanders. In: Recent Developments in Applied Probability and Statistics, vol. 1. Springer, Berlin (2010)

  53. Williams, D.: Decomposing the Brownian path. Bull. Am. Math. Soc. 76, 871 (1970)

  54. Imhof, J.P.: Density factorizations for Brownian motion, meander and the three-dimensional Bessel process, and applications. J. Appl. Probab. 21, 500 (1984)

Acknowledgments

We acknowledge support by the Indo-French Centre for the Promotion of Advanced Research under Project 4604-3. We acknowledge a useful correspondence with Philippe Chassaing.


Corresponding author

Correspondence to Grégory Schehr.

Appendices

Appendix 1: Some Useful Functions

We introduce the family of functions \(\Phi ^{(j)}\), \(j \in {\mathbb N}\), which satisfy

$$\begin{aligned} \frac{e^{-\sqrt{2s}u}}{(\sqrt{2s})^{j+1}}= \int _0^{\infty } {{t}}^{\frac{j-1}{2}} \Phi ^{(j)}\left( \frac{u}{\sqrt{t}}\right) e^{-s t} {{\mathrm {d}}}t . \end{aligned}$$
(142)

These functions can be obtained explicitly by induction, using [48]

$$\begin{aligned} \Phi ^{(0)}(x)=\frac{1}{\sqrt{2\pi }}e^{-\frac{x^2}{2}}, \Phi ^{(j+1)}(x)=\int _{x}^{\infty } \Phi ^{(j)}(u) {{\mathrm {d}}}u . \end{aligned}$$
(143)

The first few functions are easily computed as

$$\begin{aligned} \Phi ^{(0)}(x)= & {} \frac{e^{-\frac{x^2}{2}}}{\sqrt{2 \pi }} ,\end{aligned}$$
(144)
$$\begin{aligned} \Phi ^{(1)}(x)= & {} \frac{1}{2} \, \text {erfc}\left( \frac{x}{\sqrt{2}}\right) ,\end{aligned}$$
(145)
$$\begin{aligned} \Phi ^{(2)}(x)= & {} \frac{e^{-\frac{x^2}{2}}}{\sqrt{2 \pi }}-\frac{1}{2} x \, \text {erfc}\left( \frac{x}{\sqrt{2}}\right) . \end{aligned}$$
(146)

More generally, one can show [48] that they can be written in the form

$$\begin{aligned} \Phi ^{(j)}(x)=p_j(x) \frac{1}{\sqrt{2\pi }}e^{-\frac{x^2}{2}}+q_j(x) \mathrm {erfc}\left( \frac{x}{\sqrt{2}}\right) , \end{aligned}$$
(147)

where \(p_j(x)\) and \(q_j(x)\) are polynomials with rational coefficients, of degree \(j-2\) and \(j-1\), respectively, for \(j\ge 2\) [48]. We refer the interested reader to Ref. [48] for efficient algorithms, which can be implemented numerically, to compute these polynomials in a systematic way.
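
The recursion (143) is easy to implement numerically, which gives a quick consistency check of the closed forms (145)-(146). A minimal sketch (ours, not from [48]) using a Simpson rule on a truncated integration range:

```python
import math

def phi0(x):
    """Eq. (144): the standard Gaussian density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def phi_next(phi_j, x, cutoff=10.0, n=4000):
    """One step of the recursion (143): integrate Phi^(j) from x to infinity
    (Simpson rule; the integrands decay like a Gaussian, so cutoff=10 is ample)."""
    h = (cutoff - x) / n
    s = phi_j(x) + phi_j(cutoff)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * phi_j(x + i * h)
    return s * h / 3.0

def phi1_exact(x):   # Eq. (145)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def phi2_exact(x):   # Eq. (146)
    return phi0(x) - 0.5 * x * math.erfc(x / math.sqrt(2.0))

x0 = 0.7
phi1_num = phi_next(phi0, x0)
phi2_num = phi_next(phi1_exact, x0)
print(phi1_num, phi2_num)
```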

Appendix 2: Average DOS for Reflected Brownian Motion

Using the method based on propagators explained in Sect. 3.1 [see Eq. (27)], we can also compute the average DOS for the reflected Brownian motion \(x_{R}(\tau )\), which is the absolute value of the Brownian motion, \(x_R(\tau ) = |x(\tau )|\). The expression in (27), see also Fig. 4, indicates that we need the propagator of the reflected Brownian motion such that \(x_R(\tau ) \le M\), or equivalently \(-M \le x(\tau ) \le M\). Therefore, we compute the propagator of a Brownian particle confined to the interval \([-M,M]\), with absorbing boundary conditions at both \(x=-M\) and \(x=M\). Denoting by \(G_M^R(\alpha |\beta ,t)\) the propagator of such a particle starting at \(\alpha \) and ending, at time t, at \(\beta \), its Laplace transform (LT) with respect to t is given by

$$\begin{aligned} \tilde{G}^{R}_M(\alpha |\beta ,s) = \frac{2 \sinh {\left( \sqrt{2s}(M-\max (\alpha ,\beta )) \right) } \sinh {\left( \sqrt{2s}(M+\min (\alpha ,\beta )) \right) } }{\sqrt{2s} \sinh {\left( \sqrt{2s}2M \right) } } . \end{aligned}$$
(148)
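
Eq. (148) can be checked against the standard sine-series (spectral) representation of the absorbing-box propagator for a process with generator \(\frac{1}{2}\partial _x^2\). The comparison below is our own sanity check, not part of the derivation:

```python
import math

def G_tilde_closed(a, b, s, M):
    """Eq. (148): Laplace transform of the absorbing-box propagator on [-M, M]."""
    q = math.sqrt(2.0 * s)
    return (2.0 * math.sinh(q * (M - max(a, b))) * math.sinh(q * (M + min(a, b)))
            / (q * math.sinh(2.0 * q * M)))

def G_tilde_series(a, b, s, M, nmax=50000):
    """Same quantity from the sine eigenfunction expansion of the propagator of
    dx = dW (generator (1/2) d^2/dx^2) with absorbing walls at x = -M, M."""
    total = 0.0
    for n in range(1, nmax + 1):
        lam = n * n * math.pi * math.pi / (8.0 * M * M)  # eigenvalue of -(1/2) d^2/dx^2
        total += (math.sin(n * math.pi * (a + M) / (2.0 * M))
                  * math.sin(n * math.pi * (b + M) / (2.0 * M)) / (M * (s + lam)))
    return total

a, b, s, M = 0.2, -0.3, 0.7, 1.0
print(G_tilde_closed(a, b, s, M), G_tilde_series(a, b, s, M))
```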

In order to compute the average DOS \(\langle \rho _R(r,t) \rangle \) for the reflected BM, we evaluate the “number” of Brownian trajectories satisfying the following constraints: the process reaches its maximum M or its minimum \(-M\) at time \(t_{{\mathrm {ext}}}\), passes through \(M-r\) or \(-M+r\) at time \(\tau \), and ends at \(x_F \in [-M,M]\) at time t. The total number of such trajectories is then obtained by integrating over \(x_F\), M and \(t_{{\mathrm {ext}}}\). When the time interval [0, t] is divided into three parts delimited by \(\tau \) and \(t_{{\mathrm {ext}}}\), 8 different cases may arise: \(\tau < t_{{\mathrm {ext}}}\) or \(\tau > t_{{\mathrm {ext}}}\), \(x( t_{{\mathrm {ext}}})=\pm M\), and \(x(\tau )=\pm (M-r)\). Using the invariance of the process under the reflection \(x \rightarrow - x\), we only have to consider four different cases (each with multiplicity 2):

$$\begin{aligned} \langle \rho _R(r,t) \rangle= & {} \underset{\varepsilon \rightarrow 0}{\lim }\frac{2}{Z_R(\epsilon )}\int _{r}^\infty {\mathrm {d}}M \int _0^t {\mathrm {d}}t_{{\mathrm {ext}}}\int _{-M}^{M} {\mathrm {d}}x_F \nonumber \\&\times \,\Big [ \int _{0}^{t_{{\mathrm {ext}}}} {\mathrm {d}}\tau G^{R}_M(0 | M-r,\tau ) G^{R}_M(M-r|M-\varepsilon ,t_{{\mathrm {ext}}}-\tau )G^{R}_M(M -\varepsilon |x_F,t-t_{{\mathrm {ext}}})\nonumber \\&+\,\int _{t_{{\mathrm {ext}}}}^{t} {\mathrm {d}}\tau G^{R}_M(0 | M-\varepsilon ,t_{{\mathrm {ext}}}) G^{R}_M(M-\varepsilon | M-r,\tau -t_{{\mathrm {ext}}})G^{R}_M(M-r|x_F,t-\tau ) \nonumber \\&+\, \int _{0}^{t_{{\mathrm {ext}}}} {\mathrm {d}}\tau G^{R}_M(0 | r-M,\tau ) G^{R}_M(r-M|M-\varepsilon ,t_{{\mathrm {ext}}}-\tau )G^{R}_M(M-\varepsilon |x_F,t-t_{{\mathrm {ext}}})\nonumber \\&+\,\int _{t_{{\mathrm {ext}}}}^{t} {\mathrm {d}}\tau G^{R}_M(0 | M-\varepsilon ,t_{{\mathrm {ext}}}) G^{R}_M(M-\varepsilon | r-M,\tau -t_{{\mathrm {ext}}})G^{R}_M(r-M|x_F,t-\tau ) \Big ] ,\nonumber \\ \end{aligned}$$
(149)

where we have used the Markov property of the BM and where \(Z_R(\varepsilon )\) is the normalization constant (such that \(\int _0^\infty {\mathrm {d}}r \, \langle \rho _R(r,t)\rangle = t\))

$$\begin{aligned} Z_R(\varepsilon )=2 \int _{0}^\infty {\mathrm {d}}M \int _0^t {\mathrm {d}}t_{{\mathrm {ext}}}\int _{-M}^{M} {\mathrm {d}}x_F G^{R}_M(0 |M-\varepsilon ,t_{{\mathrm {ext}}})G^{R}_M(M-\varepsilon |x_F,t-t_{{\mathrm {ext}}}) .\qquad \end{aligned}$$
(150)

The normalization is easily computed as \(Z_R(\varepsilon ) \sim 2 \varepsilon ^2\) when \(\varepsilon \rightarrow 0\). Using the same kind of calculations as in Sect. 3.1, exploiting the convolution structure of the integrals in Eq. (149), we find, after some manipulations,

$$\begin{aligned} \langle \rho _{R}(r,t=1) \rangle \,= & {} 8 \sum _{n=0}^{\infty } \frac{(-1)^{n+1} }{-3-2 n+12 n^2+8 n^3} \nonumber \\&\times \left( \sum _{k=0}^1(3+(-1)^k (2n+k)^{k+1})(2n+k)^2 \Phi ^{(2)}((2n+k)r)\right) ,\qquad \end{aligned}$$
(151)

where \(\Phi ^{(2)}(x)\) is given in Eq. (146).
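
A direct Monte Carlo estimate of \(\langle \rho _R(r,1)\rangle \), against which the series (151) can be plotted, is straightforward. The sketch below is ours; by construction the histogram satisfies the sum rule \(\int _0^\infty \langle \rho _R \rangle \,{\mathrm {d}}r = t\), up to the small mass beyond the last bin:

```python
import math
import random

def mc_dos_reflected(t=1.0, n_steps=300, n_paths=8000, dr=0.1, rmax=3.0, seed=2):
    """Histogram estimate of the mean DOS <rho_R(r, t)>: time spent by the
    reflected walk x_R = |x| at distance r from its own maximum."""
    random.seed(seed)
    nbins = int(rmax / dr)
    hist = [0.0] * nbins
    dt = t / n_steps
    for _ in range(n_paths):
        x, path = 0.0, []
        for _ in range(n_steps):
            x += random.gauss(0.0, math.sqrt(dt))
            path.append(abs(x))
        m = max(path)
        for y in path:
            b = int((m - y) / dr)
            if b < nbins:
                hist[b] += dt
    return [h / (n_paths * dr) for h in hist]

rho = mc_dos_reflected()
norm = sum(rho) * 0.1
print(norm, rho[5])  # rho[5] estimates <rho_R(r ~ 0.55, t = 1)>
```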

Similarly, we can study the DOS of the reflected Brownian bridge \(x_{RB}(\tau )\), which is the absolute value of a Brownian bridge, \(x_{RB}(\tau ) = |x_{BB}(\tau )|\). The calculation of the DOS is very similar to that of the free reflected BM in (149), except that there is no integral over \(x_F\), which is instead fixed to \(x_F = 0\). Using time reversal symmetry, we can show that the average DOS \(\langle \rho _{RB}(r,t) \rangle \) is given by

$$\begin{aligned} \langle \rho _{RB}(r,t) \rangle \,= & {} \underset{\varepsilon \rightarrow 0}{\lim }\frac{4}{Z_{RB}(\varepsilon )}\int _{r}^\infty {\mathrm {d}}M \int _0^t {\mathrm {d}}t_{{\mathrm {ext}}} \int _{t_{{\mathrm {ext}}}}^{t} {\mathrm {d}}\tau \nonumber \\&\times \,\Big [ G_M^R(0 | M-\varepsilon , t_{\mathrm {ext}}) G_M^R(M-\varepsilon | r-M,\tau -t_{{\mathrm {ext}}})G_M^R(r-M|0,t-\tau )\nonumber \\&+ \,G_M^R(0 | M-\varepsilon ,t_{{\mathrm {ext}}}) G_M^R(M-\varepsilon | M-r,\tau -t_{{\mathrm {ext}}})G_M^R(M-r|0,t-\tau )\Big ] ,\nonumber \\ \end{aligned}$$
(152)

where \(Z_{RB}(\varepsilon )\) is the normalization constant, given by

$$\begin{aligned} Z_{RB}(\varepsilon )=2 \int _{0}^\infty {\mathrm {d}}M \int _0^t {\mathrm {d}}t_{{\mathrm {ext}}}\, G_M^R(0 | M-\varepsilon ,t_{{\mathrm {ext}}}) G_M^R(M-\varepsilon | 0,t-t_{{\mathrm {ext}}}). \end{aligned}$$
(153)

The normalization is easily computed as \(Z_{RB}(\varepsilon ) \sim 2 \varepsilon ^2/\sqrt{2 \pi t}\) as \(\varepsilon \rightarrow 0\), and eventually the average DOS \(\langle \rho _{RB}(r,t) \rangle \) is obtained as:

$$\begin{aligned} \langle \rho _{RB}(r,t=1) \rangle \, = 2\sqrt{2 \pi } \left( 4\sum _{n=0}^{\infty } n (-1)^{n+1} \Phi ^{(1)}(2nr) - \Phi ^{(1)}(2r)\right) \end{aligned}$$
(154)

where \(\Phi ^{(1)}(x)\) is given in Eq. (145).

Fig. 11

Illustration of the main idea of Odlyzko’s optimal algorithm. The RW cannot exceed \((X_m+X_{m+k}+k)/2\) between m and \(m+k\). If this quantity is smaller than \(M^\#\), a new probe between m and \(m+k\) is useless

Appendix 3: Odlyzko’s Algorithm

1.1 Main Ideas Behind Odlyzko’s Algorithm

To get familiar with this algorithm, it is useful to consider a simpler search algorithm, denoted by u, belonging to \(A_n\) (the ensemble of algorithms that find the maximum \(M_n\) of an n-step random walk), which proceeds as follows: u always probes the random walk at the step where the upper envelope of the (still) possible trajectories reaches its maximum. This algorithm is based on the observation, illustrated in Fig. 11, that if \(X_m\) and \(X_{m+k}\) have been probed, then the searcher knows for sure that, between step m and step \(m+k\), the position of the random walker cannot exceed \((X_{m}+X_{m+k}+k)/2\). This can be shown as follows. Let us denote by \(n_+\) the number of up-steps (\(+1\)) and by \(n_-\) the number of down-steps (\(-1\)) between step m and step \(m+k\). Then \(n_+\) and \(n_-\) satisfy the equations

$$\begin{aligned}&n_+ + n _- = k \end{aligned}$$
(155)
$$\begin{aligned}&n_+-n_- = X_{m+k} - X_{m}. \end{aligned}$$
(156)

Hence one has

$$\begin{aligned}&n_+ = \frac{X_{m+k} - X_{m} + k}{2} \end{aligned}$$
(157)
$$\begin{aligned}&n_- = \frac{X_{m} - X_{m+k} + k}{2}. \end{aligned}$$
(158)

Therefore the position of the random walker cannot exceed \(X_{m} + n_+ = (X_{m}+X_{m+k}+k)/2\), as shown in Fig. 11. This simple algorithm is illustrated in Fig. 12 on a realization of the RW for \(n=14\) steps. This basic idea is at the heart of the algorithm proposed by Odlyzko.

Fig. 12

An example of the algorithm u finding \(M_{14}=7\) for a RW in 4 probes. a A typical realization of a 14-step RW, whose maximum we want to find. Without any probe we know that \(0\le M_{14}\le 14\); if \(M_{14}=14\), the maximum would be at position 14 (a RW with only \(+1\) jumps), so we probe position 14. b The first probe gives \(X_{14}=6\), and we now know (see Fig. 11) that \(6\le M_{14}\le 10\); if \(M_{14}=10\), the maximum would be at position 10 (dashed line), so we probe position 10. c The second probe gives \(X_{10}=4\), and we now know that \(6\le M_{14}\le 7\); if \(M_{14}=7\), the maximum would be at position 6 or 13 (dashed line), so we probe position 6. d The third probe gives \(X_6=4\), and we still know that \(6\le M_{14}\le 7\); if \(M_{14}=7\), the maximum would be at position 13 (dashed line), so we probe position 13 and find the maximum \(M_{14}=X_{13}=7\) in 4 probes
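
The algorithm u can be implemented in a few lines: keep track of the probed steps, repeatedly probe where the piecewise-linear upper envelope is highest, and stop as soon as the envelope cannot exceed the best probed value. The sketch below (names ours) reproduces this logic:

```python
import random

def find_max_u(X):
    """Algorithm u: always probe where the upper envelope of the still-possible
    trajectories is highest; stop when it cannot exceed the best probed value."""
    n = len(X) - 1
    probed = {0: X[0]}
    while True:
        best = max(probed.values())
        idx = sorted(probed)
        cand_val, cand_pos = None, None
        # envelope peak inside each gap between consecutive probed steps
        for i, j in zip(idx, idx[1:]):
            if j - i < 2:
                continue
            m = max(i + 1, min(j - 1, (i + j + X[j] - X[i]) // 2))
            v = min(probed[i] + (m - i), probed[j] + (j - m))
            if cand_val is None or v > cand_val:
                cand_val, cand_pos = v, m
        # envelope to the right of the last probed step
        last = idx[-1]
        if last < n:
            v = probed[last] + (n - last)
            if cand_val is None or v > cand_val:
                cand_val, cand_pos = v, n
        if cand_val is None or cand_val <= best:
            return best, len(probed) - 1  # probes made after the free X[0]
        probed[cand_pos] = X[cand_pos]

random.seed(3)
X = [0]
for _ in range(200):
    X.append(X[-1] + random.choice((-1, 1)))
m, cost = find_max_u(X)
print(m, max(X), cost)
```

On the 200-step walk generated above the algorithm returns the true maximum with a number of probes far smaller than n.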

Here we also want to explain briefly the occurrence of the particular functional of the maximum I in (4) in the analysis of this optimal algorithm, following the line of reasoning of [32, 33]. To understand this, let us consider a traveler moving on a line, whose position is denoted by y. Suppose that the traveler's velocity v(y) at position y is bounded by some function z(y), such that \(0<v(y) \le z(y)\). Then the time t to reach a point x starting from the origin satisfies the bound

$$\begin{aligned} t = \int _0^x \frac{{{\mathrm {d}}}y}{v(y)} \ge \int _0^x \frac{{{\mathrm {d}}}y}{z(y)}. \end{aligned}$$
(159)

Now let us consider an algorithm a with cost C(a), and denote by \({m_1, \ldots , m_{C(a)}}\) the steps at which the RW has been probed by the searcher, which has eventually found the maximum \(M_n\) after C(a) probes. To be sure that the maximum is not in the interval \([m_i, m_{i+1}]\), the potential maximum of the RW between these two steps, which is \((X_{m_i} + X_{m_{i+1}} + m_{i+1} - m_{i})/2\) (see Fig. 11), must be smaller than \(M_n\) (by definition of the maximum). Hence this yields the following inequality

$$\begin{aligned} m_{i+1} - m_i \le 2 M_n - X_{m_i} - X_{m_{i+1}}. \end{aligned}$$
(160)

Notice that \(m_{i+1} - m_i\) can be seen as the velocity \(v(m_i)\) of the algorithm at point \(m_i\). One can further argue [30], using the fact that most RWs are “slowly varying” [see Eq. (163) below], that \(2 M_n - X_{m_i} - X_{m_{i+1}} \sim 2(M_n - X_{m_i})\) when n is large. Hence

$$\begin{aligned} Z_k = 2(M_n-X_k) \end{aligned}$$
(161)

can be viewed as the speed limit at step k of the random walk. Finally, by analogy with (159), C(a) satisfies

$$\begin{aligned} C(a) \ge \sum _{k=1}^n \frac{1}{Z_k} = \frac{1}{2} \sum _{k=1}^n \frac{1}{M_n-X_k}, \end{aligned}$$
(162)

which in the continuum limit yields the functional of the maximum I in Eq. (4). These heuristic arguments leading to Eq. (162) can be straightforwardly extended to the case of the random walk bridge, \(X_{i,B}\), which is a RW conditioned to start and end at the origin, \(X_{0,B} = X_{n,B} = 0\). Of course, in this case the maximum \(M_n\) in (162) is replaced by the maximum of the random walk bridge, \(M_{n,B} = \max _{0\le i \le n} X_{i,B}\).
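
The scaled lower bound (162) is easy to probe numerically. The following Monte Carlo sketch (ours) shows that \(\frac{1}{2\sqrt{n}}\sum _k (M_n - X_k + 1)^{-1}\) is already roughly n-independent for moderate n, consistent with convergence to the continuum functional I:

```python
import math
import random

def scaled_lower_bound(n, n_samples=400, seed=4):
    """Monte Carlo average of (1/2) sum_k 1/(M_n - X_k + 1), scaled by 1/sqrt(n):
    the heuristic lower bound (162) on the cost of any search algorithm."""
    random.seed(seed)
    acc = 0.0
    for _ in range(n_samples):
        x, path = 0, []
        for _ in range(n):
            x += random.choice((-1, 1))
            path.append(x)
        m = max(0, max(path))  # the maximum M_n (X_0 = 0 included)
        acc += 0.5 * sum(1.0 / (m - y + 1) for y in path)
    return acc / (n_samples * math.sqrt(n))

c400, c1600 = scaled_lower_bound(400), scaled_lower_bound(1600)
print(c400, c1600)  # nearly equal: the scaled bound approaches a constant
```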

1.2 Description of Odlyzko’s Algorithm

Here we describe in more detail Odlyzko’s algorithm, which finds the maximum of a random walk \(X_{i+1} = X_i \pm 1\) (each step with probability 1/2, starting from \(X_0=0\)). Let c be a sufficiently large positive real number. The algorithm relies on the fact that most RWs have “slow variations” (SV), i.e., satisfy the inequality [30]:

$$\begin{aligned} |X_{i+k}-X_i| \le c \sqrt{k \log {n}}, \; \forall i, k \; \mathrm{with} \; i + k \le n. \end{aligned}$$
(163)

Indeed, if c is large enough, the probability that a realization of the RW does not satisfy the SV property (163) decays as \(n^{-1}\). This statement can easily be shown, as in [30, 33], by using the fact that for fixed j, \(\mathrm{Pr} (|X_j| > x) \le 2 \exp (-x^2/(2j))\) [the so-called Chernoff bound, see [49], p. 12]. Although the realizations of the RW that do not satisfy (163) necessitate a large number of probes \(\sim n\), their contribution to the average cost of the algorithm turns out to be negligible, as they occur with a very small probability \(\propto 1/n\). On the other hand, as we shall see below, it is relatively easy to find the maximum of a RW which satisfies the SV property.

The algorithm proposed by Odlyzko consists of two stages:

  • In a first stage, one seeks a good estimate \(M^*\) of \(M_n\). This is done by probing \(X_N\), \(X_{2N}\), \(X_{3N}, \ldots \), where \(N=\lfloor \sqrt{n} \log n \rfloor \) and \(\lfloor x \rfloor \) denotes the largest integer not larger than x. If the algorithm finds, here or later, a violation of the SV inequality (163), one probes all the positions of the RW (but this happens very rarely). We set \(M'=\max \{X_0,X_N,X_{2N},X_{3N},\ldots \}\le M_n\). If the RW satisfies SV (163), then

    $$\begin{aligned} M_n-M' \le c \sqrt{N \log n} = c n^{1/4} \log n. \end{aligned}$$
    (164)

    Indeed, if the maximum \(M_n\) is attained at a step j with \(k_{\max } N\le j \le (k_{\max }+1) N\), then \(M_n - \max (X_{k_{\max } N},X_{(k_{\max }+1) N}) \le c \sqrt{N \log n}\), which follows from (163) and implies (164), since \(M' \ge \max (X_{k_{\max } N},X_{(k_{\max }+1) N})\). As we discuss below, this estimate \(M'\) of \(M_n\) (164) is however not precise enough for the forthcoming steps of the algorithm. It is indeed necessary to scan the neighborhood of the large \(X_{rN}\)’s on a finer grid. If for some integer r one finds

    $$\begin{aligned} X_{rN} \ge M' - c n^{1/4} \log n, \end{aligned}$$
    (165)

    we probe \(X_{rN \pm j K}\), \(j=1,2,\ldots , \lfloor N/K \rfloor \), with \(K=\lfloor n^{1/4}\rfloor \). If the RW has SV, then any k with \(X_k=M_n\) must be within N of some rN for which (165) holds. We now denote by \(M^*\) the maximum of all the probes found so far. Because we scan with intervals \(\le n^{1/4} \log n\) around the maximum, the SV inequality (163) gives

    $$\begin{aligned} 0 \le M_n-M^* \le c \sqrt{n^{1/4} \log ^2 n} \le n^{1/6}. \end{aligned}$$
    (166)

    One can prove [30] that the average cost of this first stage of the algorithm is of order \(\mathcal{O}(\sqrt{n}/\log n)\), negligible compared to the cost of the second stage, which we now describe and which is of order \(\mathcal{O}(\sqrt{n})\).

  • With this estimate \(M^*\) of the actual maximum \(M_n\), the second stage eventually finds \(M_n\) in a number of probes of order \(\mathcal{O}(\sqrt{n})\), which is the leading contribution to the cost of the algorithm. To do this, we scan the sample path from left to right as follows. We introduce m, the index of the RW position \(X_m\) currently probed by the algorithm. We start with \(m=0\) and denote by \(M^\#\) the greatest value probed so far by the algorithm, including \(M^*\). At each step of this stage, two cases may occur:

    1. (i)

      If \(M^{\#}-X_m \le n^{1/6}\), the algorithm will probe the right neighbor of \(X_m\) and m is incremented by 1, \(m \rightarrow m+1\).

    2. (ii)

      If \(M^{\#}-X_m > n^{1/6}\), the algorithm is still far from the maximum, because we know that \(M_n-M^{\#}\le n^{1/6}\). In this case, the immediate vicinity of \(X_m\) does not need to be explored, and the strategy is to jump from \(X_{m}\) to \(X_{m+k}\), where k is still to be determined. In order to be sure that the RW does not exceed \(M^\#\) between m and \(m+k\), we must keep in mind the upper envelope of the RW on the interval \([m, m+k]\) (see Fig. 11). Hence we impose the following bound

      $$\begin{aligned} k \le 2 (M^\#-X_m) +(X_{m}-X_{m+k}). \end{aligned}$$
      (167)

      The first term on the right-hand side of this inequality (167), \(2 (M^\#-X_m)\), is larger than \(2 n^{1/6}\), while the second term, \(X_{m}-X_{m+k}\), is bounded by \(c \sqrt{k \log n}\) thanks to SV (163); as stated above, if \(X_{m+k}-X_m\) does not satisfy the SV inequality (163), we abort this approach and probe every position. Hence we can choose k slightly smaller than \(2(M^\#-X_m)\). If \(m+k >n\), we probe \(X_n\) and stop. When the full path has been scanned, the maximum \(M_n\) of the RW has been found by the algorithm.

    For a RW which satisfies SV (163), one can show [30] that the major contribution to the cost of the algorithm comes from the case \(M^{\#}-X_m > n^{1/6}\). Indeed, the contribution of probes of type (i) to the cost of the algorithm is of order \(\mathcal{O}(n^{1/3})\). In fact, one can show that if the estimate \(M^*\) of \(M_n\) is such that \(M_n - M^* < n^{\alpha }\), then the cost of these contributions is of order \(\mathcal{O}(n^{2\alpha })\). If we want the cost of this part of the algorithm to be smaller than the cost of the last one, which is of order \(\mathcal{O}(\sqrt{n})\), this requires \(2\alpha < 1/2\), for instance \(2\alpha = 1/3\), hence the choice \(\alpha = 1/6\) made by Odlyzko [30] [see Eq. (166)]. The step size k is slightly smaller than \(2(M_n-X_m)\), and we need only one probe to control the k positions between m and \(m+k\). Since k can be interpreted as the velocity of the algorithm [see Eq. (159)], the average cost of the algorithm is, at leading order when n goes to infinity, given by

    $$\begin{aligned} \langle C(\mathrm{Od})\rangle =\frac{1}{2} \left\langle \sum _{i=0}^{n} \frac{1}{M_n-X_i+1}\right\rangle , \end{aligned}$$
    (168)

    where we recall that \(\langle \cdots \rangle \) denotes an average over the different realizations of the RW \(X_i\)’s. When n goes to infinity, the RW becomes a BM and

    $$\begin{aligned} \frac{C(\mathrm{Od})}{\sqrt{n}} \underset{n \rightarrow \infty }{\rightarrow } I = \frac{1}{2} \int _0^1 \frac{{\mathrm {d}}\tau }{ x_{\max }-x(\tau )}, \end{aligned}$$
    (169)

    as described in the text in (4).
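
A full implementation of Odlyzko's algorithm is somewhat delicate. The following simplified variant of the scanning stage (our sketch, not the algorithm of [30]) makes each jump unconditionally safe by choosing \(k = M^\# - X_m\) instead of slightly less than \(2(M^\# - X_m)\): since the walk can climb at most k over k steps, the skipped stretch can never exceed \(X_m + k = M^\#\), so no SV assumption is needed, at the price of roughly doubling the cost:

```python
import math
import random

def find_max_safe_jump(X):
    """Simplified, provably safe scanning: from step m, jump ahead by
    k = M# - X_m (at least 1), where M# is the best value probed so far.
    Interior skipped values are bounded by X_m + (k - 1) < M#, so no
    maximum is ever missed."""
    n = len(X) - 1
    m, best, probes = 0, X[0], 0
    while m < n:
        k = max(1, best - X[m])
        m = min(n, m + k)
        probes += 1
        best = max(best, X[m])
    return best, probes

random.seed(5)
n = 10000
X = [0]
for _ in range(n):
    X.append(X[-1] + random.choice((-1, 1)))
m_found, cost = find_max_safe_jump(X)
print(m_found == max(X), cost, math.isqrt(n))
```

For a typical walk the probe count is observed to be far smaller than n, in line with the \(\mathcal{O}(\sqrt{n})\) scaling of Eq. (168).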

1.3 Odlyzko’s Algorithm for the Bridge

It is easy to check that the arguments presented above can be transposed to the case of a random walk bridge. In particular, given that the bridge is pinned at both extremities, \(X_{0,B} = X_{n,B}=0\), its variations are typically smaller than those of the free walk, and hence the property of “slow variations” (163), which plays a crucial role in this algorithm, follows naturally. Therefore we conjecture that Odlyzko’s algorithm is also optimal for finding the maximum \(M_{n,B}\), with a cost given by \((1/2) T_{\alpha =-1}^B(t)\) as in Eq. (13).

Appendix 4: Some Useful Integrals Involving Confluent Hypergeometric Functions Relevant for the Case \(V(x) = 1/x\)

1.1 An Integral Involving a Single Confluent Hypergeometric Function

For the analysis of the functional \(T_{\alpha = -1}(t)\) [see Eq. (127)], a useful integral involving the confluent hypergeometric function U(a, 2, z) is the following (see [50] as well as Mathematica):

$$\begin{aligned} {\tilde{\varphi }}(s)= & {} 2^{3/2} \lambda \Gamma (\lambda /\sqrt{2 s}) \int _0^\infty \; e^{-\sqrt{2s} y} \, y \,U\left( 1+\frac{\lambda }{\sqrt{2s}},2,2 \sqrt{2s} y\right) {{\mathrm {d}}}y\end{aligned}$$
(170)
$$\begin{aligned}= & {} \frac{1}{\sqrt{2}s} \left( \sqrt{2s} - 2 \pi \,\lambda \, \mathrm{csc}\left( \frac{\pi \lambda }{\sqrt{2s}} \right) - \lambda \, H\left( -\frac{1}{2} - \frac{\lambda }{2 \sqrt{2s}} \right) + \lambda \, H\left( - \frac{\lambda }{2\sqrt{2s}} \right) \right) ,\nonumber \\ \end{aligned}$$
(171)

where \(\mathrm{csc}(x) = 1/\sin {x}\) and H(x) is the harmonic number, \(H(x) = \psi (x+1) + \gamma _E\), where \(\psi (x) = \Gamma '(x)/\Gamma (x)\) is the di-gamma function and \(\gamma _E\) is the Euler constant. The function H(x) admits the following series expansion, valid for \(|x| < 1\),

$$\begin{aligned} H(x) = \sum _{j=0}^\infty (-1)^j \zeta (j+2) \, x^{j+1}, \end{aligned}$$
(172)

where \(\zeta (x)\) is the Riemann zeta function. Combining (171) with the expansion (172), one arrives straightforwardly at the formula given in Eq. (128) in the text.
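The small-x expansion of the harmonic number, \(H(x) = \zeta(2)\,x - \zeta(3)\,x^2 + \zeta(4)\,x^3 - \cdots\) for \(|x|<1\), can be checked numerically against a closed-form value such as \(H(1/2) = \gamma_E + \psi(3/2) = 2 - 2\ln 2\). The sketch below is ours (the Euler-Maclaurin truncation of \(\zeta\) and all names are illustrative choices):

```python
import math

def zeta(s, N=2000):
    """Riemann zeta for real s > 1: truncated sum plus Euler-Maclaurin tail."""
    return (sum(n ** -s for n in range(1, N + 1))
            + N ** (1 - s) / (s - 1) - 0.5 * N ** -s + s / 12.0 * N ** (-s - 1))

def harmonic_series(x, jmax=60):
    """Partial sum of H(x) = sum_j (-1)^j zeta(j+2) x^(j+1), valid for |x| < 1."""
    return sum((-1) ** j * zeta(j + 2) * x ** (j + 1) for j in range(jmax))

exact = 2.0 - 2.0 * math.log(2.0)  # H(1/2) = gamma_E + psi(3/2) = 2 - 2 ln 2
approx = harmonic_series(0.5)
```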

1.2 An Integral Involving the Product of Two Confluent Hypergeometric Functions

To compute the amplitudes \(c_E\) such that the functions \(\phi _E(x)\) in (138) with \(d_E =0\) satisfy the orthogonality condition in Eq. (139) we used the following relation, derived by Landau and Lifshitz [51] (see formula (f.9) in Appendix f):

$$\begin{aligned} J= & {} \int _0^\infty e^{-\lambda z} z^{\gamma -1} \,_1 F_1(\alpha ,\gamma ,k z) _1 F_1(\alpha ',\gamma ',k' z) {{\mathrm {d}}} z \nonumber \\= & {} \Gamma (\gamma ) \lambda ^{\alpha +\alpha '-\gamma } (\lambda -k)^{-\alpha }(\lambda -k')^{-\alpha '} \,_2F_1\left( \alpha ,\alpha ',\gamma ,\frac{k k'}{(\lambda -k)(\lambda -k')}\right) , \end{aligned}$$
(173)

where \(\,_2F_1(\alpha ,\alpha ',\gamma ,z)\) is the Gauss hypergeometric function. Integrals such as (173) arise naturally in the study of certain matrix elements of quantum Hamiltonians involving Coulomb interactions. In our case (138), one has \(\alpha = 1- i s/\sqrt{E}\), \(\alpha ' = 1- i s/\sqrt{E'}\), \(\gamma = \gamma ' = 2\), \(k = 2i\sqrt{2E}\), \(k' = 2i\sqrt{2E'}\) and \(\lambda = 2\sqrt{2 s}\). Hence the desired formula in our case (139) can be obtained by differentiating (173) once with respect to \(\lambda \) and analyzing in detail the limit \(k \rightarrow k'\) of the resulting formula. These somewhat cumbersome manipulations yield the expression for \(c_E\) given in (140).
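As a sanity check of (173), both sides can be evaluated for real parameter values for which the power series of \(_1F_1\) and \(_2F_1\) converge and simple quadrature applies. The parameter values in the sketch below are ours, chosen purely for illustration (they are not those of Eq. (138), which are complex):

```python
import math

def f1(a, c, z, terms=120):
    """Kummer 1F1(a; c; z) by its (everywhere convergent) power series."""
    s = t = 1.0
    for n in range(terms):
        t *= (a + n) / (c + n) * z / (n + 1)
        s += t
    return s

def f2(a, b, c, z, terms=400):
    """Gauss 2F1(a, b; c; z) by its power series (requires |z| < 1)."""
    s = t = 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / (c + n) * z / (n + 1)
        s += t
    return s

# illustrative real test values, with gamma = gamma' = 2 as in the text
a, ap, g, k, kp, lam = 0.5, 1.5, 2.0, 0.3, 0.4, 2.0

def integrand(z):
    return math.exp(-lam * z) * z ** (g - 1) * f1(a, g, k * z) * f1(ap, g, kp * z)

# left-hand side: composite Simpson rule on [0, 40] (integrand decays ~ exp(-1.3 z))
Z, m = 40.0, 4000
h = Z / m
lhs = integrand(0.0) + integrand(Z)
for i in range(1, m):
    lhs += (4 if i % 2 else 2) * integrand(i * h)
lhs *= h / 3

# right-hand side of Eq. (173)
w = k * kp / ((lam - k) * (lam - kp))
rhs = (math.gamma(g) * lam ** (a + ap - g)
       * (lam - k) ** (-a) * (lam - kp) ** (-ap) * f2(a, ap, g, w))
```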

Appendix 5: Numerical Simulations of Constrained Brownian Motion

In this appendix, we describe the algorithms that we have used here to simulate various constrained Brownian motions (Fig. 13). We refer the interested reader to [52] for an extended discussion of these algorithms.

Fig. 13: Example of the different constrained Brownian motions studied in the present paper

1.1 Brownian Motion

In order to simulate a Brownian motion \(x(\tau )\), we consider the discrete random walk

$$\begin{aligned} \left\{ \begin{array}{ll} X_0 = &{} 0\\ X_i = &{} X_{i-1} + \frac{\eta _i}{\sqrt{N}}, \, i\in [1,N] \end{array} \right. \end{aligned}$$
(174)

where the \(\eta _i\)’s are independent and identically distributed standard Gaussian variables, of zero mean and unit variance. When N goes to infinity, \(X_{[\tau N]} \rightarrow x(\tau )\), where \(x(\tau )\) is a Brownian motion on \(\tau \in [0,1]\): \(\dot{x}(\tau )= \zeta (\tau )\), where \(\zeta (\tau )\) is a Gaussian white noise with \(\langle \zeta (\tau ) \zeta (\tau ') \rangle = \delta (\tau -\tau ')\). The random walk (174) is the building block used to simulate the different constrained Brownian motions below.

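A minimal pure-Python implementation of the walk (174) may look as follows (the function name and parameter values are our own illustrative choices):

```python
import random

def brownian(N, rng):
    """Random walk of Eq. (174): X_0 = 0, X_i = X_{i-1} + eta_i / sqrt(N)."""
    x, path = 0.0, [0.0]
    for _ in range(N):
        x += rng.gauss(0.0, 1.0) / N ** 0.5
        path.append(x)
    return path

rng = random.Random(0)
X = brownian(1000, rng)

# the 1/sqrt(N) scaling ensures Var(X_N) = 1: check on a sample of walks
finals = [brownian(200, rng)[-1] for _ in range(2000)]
var = sum(f * f for f in finals) / len(finals)
```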

1.2 Brownian Bridge

For a Brownian bridge \(x_B(\tau )\), which is a Brownian motion starting and ending at the origin, \(x_B(0)=x_B(1)=0\), we use the identity \(x_B(\tau )=x(\tau )-\tau \, x(1)\) and simulate

$$\begin{aligned} Y_i = X_i-\frac{i}{N}X_N , \, i\in [0,N], \end{aligned}$$
(175)

where \(X_i\)’s are generated by (174). One can show that \(Y_{[\tau N]}\) converges to a Brownian bridge \(x_B(\tau )\).

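The transformation (175) acts directly on the discrete walk (174); a minimal sketch (helper names are ours):

```python
import random

def brownian(N, rng):
    """Random walk of Eq. (174)."""
    x, path = 0.0, [0.0]
    for _ in range(N):
        x += rng.gauss(0.0, 1.0) / N ** 0.5
        path.append(x)
    return path

def bridge(N, rng):
    """Eq. (175): Y_i = X_i - (i/N) X_N, pinned at both ends."""
    X = brownian(N, rng)
    return [x - (i / N) * X[N] for i, x in enumerate(X)]

Y = bridge(1000, random.Random(1))
```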

1.3 Brownian Excursion

For a Brownian excursion \(x_E(\tau )\), which is a Brownian motion that starts and ends at the origin, \(x_E(0)=x_E(1)=0\), and stays positive on the interval [0, 1], we use the identity \(x_E(\tau )=\sqrt{[x_{B,1}(\tau )]^2 + [x_{B,2}(\tau )]^2+[x_{B,3}(\tau )]^2}\), where \(x_{B,1}, x_{B,2}\) and \(x_{B,3}\) are three independent Brownian bridges [53, 54]. Hence we simulate

$$\begin{aligned} E_i = \sqrt{Y_{1,i}^2+Y_{2,i}^2+Y_{3,i}^2}, \, i\in [0,N] \end{aligned}$$
(176)

where \(Y_{1,i},Y_{2,i}\) and \(Y_{3,i}\) are three independent realizations of (175). Then \(E_{[\tau N]}\) converges to a Brownian excursion \(x_E(\tau )\).

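The construction (176) combines three independent bridges; a minimal sketch (helper names are ours):

```python
import random

def brownian(N, rng):
    """Random walk of Eq. (174)."""
    x, path = 0.0, [0.0]
    for _ in range(N):
        x += rng.gauss(0.0, 1.0) / N ** 0.5
        path.append(x)
    return path

def bridge(N, rng):
    """Eq. (175): a Brownian bridge from the walk."""
    X = brownian(N, rng)
    return [x - (i / N) * X[N] for i, x in enumerate(X)]

def excursion(N, rng):
    """Eq. (176): Euclidean norm of three independent bridges."""
    y1, y2, y3 = bridge(N, rng), bridge(N, rng), bridge(N, rng)
    return [(a * a + b * b + c * c) ** 0.5 for a, b, c in zip(y1, y2, y3)]

E = excursion(1000, random.Random(2))
```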

1.4 Brownian Meander

For a Brownian meander \(x_{Me}(\tau )\), a Brownian motion which begins at the origin and stays positive on [0, 1], one can show that the PDF of its final position \(x_F > 0\) at time 1 is \(p(x_F) = x_F e^{-x_F^2/2}\). One can then use the following representation of the meander ending at \(x_F\) [53, 54]: \(x_{Me}(\tau )=\sqrt{[x_{B,1}(\tau )]^2 + [x_{B,2}(\tau )]^2+[x_{B,3}(\tau )+\tau \, x_F]^2}\), where \(x_{{B},1},x_{{B},2}\) and \(x_{{B},3}\) are three independent Brownian bridges and \(x_F\) is a random variable drawn from \(p(x_F) = x_F e^{-x_F^2/2}\). Hence the Brownian meander \(x_{Me}(\tau )\) can be generated numerically as

$$\begin{aligned} M_i = \sqrt{Y_{1,i}^2+Y_{2,i}^2+\left( Y_{3,i}+f \frac{i}{N}\right) ^2} , \, i\in [0,N] \end{aligned}$$
(177)

where \(Y_{1,i},Y_{2,i}\) and \(Y_{3,i}\) are three independent realizations of (175) and \(f>0\) is a random variable whose PDF is given by \(p(f) = f e^{-f^2/2}\). Then \(M_{[\tau N]}\) converges to a Brownian meander \(x_{Me}(\tau )\).

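The construction (177) can be implemented as below; note that \(p(f) = f e^{-f^2/2}\) is a Rayleigh distribution, so f can be drawn by inverting its CDF (helper names are ours):

```python
import math
import random

def brownian(N, rng):
    """Random walk of Eq. (174)."""
    x, path = 0.0, [0.0]
    for _ in range(N):
        x += rng.gauss(0.0, 1.0) / N ** 0.5
        path.append(x)
    return path

def bridge(N, rng):
    """Eq. (175): a Brownian bridge from the walk."""
    X = brownian(N, rng)
    return [x - (i / N) * X[N] for i, x in enumerate(X)]

def meander(N, rng):
    """Eq. (177): three bridges plus a linear drift towards a random endpoint f."""
    f = (-2.0 * math.log(1.0 - rng.random())) ** 0.5  # inverse CDF of p(f) = f exp(-f^2/2)
    y1, y2, y3 = bridge(N, rng), bridge(N, rng), bridge(N, rng)
    path = [(a * a + b * b + (c + f * i / N) ** 2) ** 0.5
            for i, (a, b, c) in enumerate(zip(y1, y2, y3))]
    return path, f

M, f = meander(1000, random.Random(3))
```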


Cite this article

Perret, A., Comtet, A., Majumdar, S.N. et al. On Certain Functionals of the Maximum of Brownian Motion and Their Applications. J Stat Phys 161, 1112–1154 (2015). https://doi.org/10.1007/s10955-015-1377-8
