A Time-Dependent Markovian Model of a Limit Order Book

Abstract

This paper considers a Markovian model of a limit order book in which time-dependent rates are allowed. With the objective of understanding the mechanisms through which a microscopic model of an order book can converge to a more general diffusion than a Brownian motion with constant coefficients, a simple time-dependent model is proposed. The model starts by describing the processes that govern the arrival of the different orders, such as limit orders, market orders and cancellations. In this sense, it is a microscopic model rather than a “mesoscopic” model, whose starting point is usually the point processes describing the times at which price changes occur and which aggregates in them all the information pertaining to the arrival of individual orders. Furthermore, several empirical studies are performed to shed light on the validity of the modeling assumptions and to verify whether certain stocks satisfy the conditions for their price process to converge to a more complex diffusion.
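
For intuition, the following minimal Python sketch simulates the kind of microscopic dynamics described above: limit orders join the best-ask queue while market orders and cancellations deplete it, with all events driven by time-dependent Poisson rates and simulated by thinning. The rate functions and parameters are illustrative assumptions of this sketch (not the paper's model or calibration), and price changes with queue replenishment at depletion are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative time-dependent rates (assumptions of this sketch only):
# limit orders add to the queue, market orders and cancellations remove from it.
lam = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * t / 60.0)   # limit-order rate
mu  = lambda t: 0.9 + 0.5 * np.cos(2 * np.pi * t / 60.0)   # market-order + cancellation rate

def simulate_ask_queue(q0=5, T=300.0):
    """Simulate the best-ask queue size on [0, T] by thinning an
    inhomogeneous Poisson stream of order-book events."""
    Lmax = 3.0                           # upper bound on lam(t) + mu(t)
    t, q = 0.0, q0
    path = [(0.0, q0)]
    while True:
        t += rng.exponential(1.0 / Lmax)          # candidate event time
        if t > T:
            break
        total = lam(t) + mu(t)
        if rng.random() > total / Lmax:           # thinning: reject candidate
            continue
        if rng.random() < lam(t) / total:         # limit order arrives
            q += 1
        elif q > 0:                               # market order / cancellation
            q -= 1
        path.append((t, q))
    return path

path = simulate_ask_queue()
print("events:", len(path) - 1, "final queue size:", path[-1][1])
```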

References

  • Bouchaud, J. P., Farmer, J. D., & Lillo, F. (2009). How markets slowly digest changes in supply and demand. In Handbook of Financial Markets: Dynamics and Evolution (pp. 57–160).

  • Harris, L. (2003). Trading and exchanges: Market microstructure for practitioners. Oxford: Oxford University Press.

  • Cont, R., & de Larrard, A. (2013). Price dynamics in a Markovian limit order market. SIAM Journal on Financial Mathematics, 4(1), 1–25.

  • Foucault, T., Kadan, O., & Kandel, E. (2003). Limit order book as a market for liquidity. Discussion Paper Series, The Federmann Center for the Study of Rationality, The Hebrew University, Jerusalem.

  • Gould, M. D., Porter, M. A., Williams, S., McDonald, M., Fenn, D. J., & Howison, S. D. (2013). Limit order books. Quantitative Finance, 13(11), 1709–1742.

  • Harris, L., & Hasbrouck, J. (1996). Market vs. limit orders: The SuperDOT evidence on order submission strategy. Journal of Financial and Quantitative Analysis, 31(2), 213–231.

  • Obizhaeva, A. A., & Wang, J. (2013). Optimal trading strategy and supply/demand dynamics. Journal of Financial Markets, 16(1), 1–32.

  • Law, B. (2015). A pure-jump market-making model for high-frequency trading. PhD thesis, Purdue University.

  • Cartea, A., & Jaimungal, S. (2015). Optimal execution with limit and market orders. Quantitative Finance, 15(8), 1279–1291.

  • Eisler, Z., Bouchaud, J.-P., & Kockelkoren, J. (2012). The price impact of order book events: Market orders, limit orders and cancellations. Quantitative Finance, 12, 1395–1419.

  • Engle, R. F., Ferstenberg, R., & Russell, J. R. (2006). Measuring and modeling execution cost and risk. NYU Working Paper.

  • Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2011). The flash crash: The impact of high frequency trading on an electronic market.

  • Cont, R., Stoikov, S., & Talreja, R. (2010). A stochastic model for order book dynamics. Operations Research, 58(3), 549–563.

  • Chávez-Casillas, J. A., Elliott, R. J., Rémillard, B., & Swishchuk, A. V. (2019). A level-1 limit order book with time dependent arrival rates. Methodology and Computing in Applied Probability, 21(3), 699–719.

  • Billingsley, P. (1995). Probability and Measure (3rd ed.). New York: Wiley.

  • Jaisson, T., & Rosenbaum, M. (2015). Limit theorems for nearly unstable Hawkes processes. The Annals of Applied Probability, 25(2), 600–631.

  • Olver, F. W., Lozier, D. W., Boisvert, R. F., & Clark, C. W. (2010). NIST Handbook of Mathematical Functions (1st ed.). USA: Cambridge University Press.

  • Gut, A. (2013). Probability: A Graduate Course (Vol. 75). USA: Springer.

Funding

The author declares that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Contributions

All steps needed to complete this manuscript were carried out by the sole author.

Corresponding author

Correspondence to Jonathan A. Chávez Casillas.

Ethics declarations

Competing Interests

The author has no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A Proofs of Section 3

Proof of Proposition 1

The arrival processes \(L_t\) and \(M_t\) are Markov processes. Moreover, the queue process \(q_t^a\), describing the number of orders at the ask, is also a Markov process, with its generator given by Eq. 1. That is, for any function \(u\in \text {Dom}({\mathscr {L}}_t)\), \(t\in {\mathbb {R}}^+\) and \(z\in {\mathbb {N}}\),

$$\begin{aligned} {\mathscr {L}}_tu(t,z)=\lambda _t u(t,z+1) + \mu _t u(t,z-1) -(\lambda _t+\mu _t)u(t,z). \end{aligned}$$
(A1)

Let \({\bar{u}}(t,x)\) be an arbitrary bounded function such that \(t\mapsto {\bar{u}}(t,x)\) is \(C^1\) for all x and \((t,x)\mapsto \partial {\bar{u}}(t,x)/\partial t\) is bounded. Fix \(T>0\) and let \(f(t,x)={\bar{u}}(T-t,x)\). Under the stated conditions, \({\bar{u}}\) belongs to the domain of the generator \({\mathscr {L}}_t\) and, thus, the process

$$\begin{aligned} f(t,q_t^a)-f(0,q_0^a)-\int _0^t\left( \frac{\partial }{\partial r}+{\mathscr {L}}_r\right) f(r,q_r^a)dr,\quad t\in [0,T] \end{aligned}$$

is a local martingale. Therefore,

$$\begin{aligned} M_t:={\bar{u}}(T-t,q_t)-{\bar{u}}(T,q_0)-\int _0^t\left( \frac{\partial }{\partial {r}}+{\mathscr {L}}_r\right) {\bar{u}}(T-r,q_r)dr,\quad {t\in [0,T]}, \end{aligned}$$

is a martingale with \(M_0=0\). Let \(\varsigma :=T\wedge \sigma _a^1\). By the Optional Sampling Theorem,

$$\begin{aligned} {\bar{u}}(T,x)={\mathbb {E}}_x[{\bar{u}}(T-\varsigma ,{q_\varsigma })]-{\mathbb {E}}_x\left[ \int _0^\varsigma \left( \frac{\partial }{\partial {r}}+{\mathscr {L}}_r\right) {\bar{u}}(T-r,q_r)dr\right] , \end{aligned}$$
(A2)

where \({\mathbb {E}}_x[\cdot ]:={\mathbb {E}}[\;\cdot \;\mid \;q_0=x]\).

On the other hand, suppose that \({\bar{u}}(t,x)\) solves the initial value problem 2. That is, \({\bar{u}}(t,x)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{rcl} \left( \frac{\partial }{\partial r} + {\mathscr {L}}_r\right) {\bar{u}} (T-r,z)=0 &{} \text {for} &{} 0<r<T,\; z\in {\mathbb {N}}.\\ {\bar{u}}(T-r,0)=0&{} \text {for} &{} 0\le r<T.\\ {\bar{u}}(0,z)=1 &{} \text {for} &{} z\in {\mathbb {N}}.\end{array}\right. \end{aligned}$$

In that case, by (A2),

$$\begin{aligned} {\bar{u}}(T,x)= & {} {\mathbb {E}}[{\bar{u}}(T-\varsigma ,{q_\varsigma })]\\= & {} {\mathbb {E}}[{\bar{u}}(T-\varsigma ,q_\varsigma )\mathbbm {1}_{\left\{ {\sigma _a^1\le T} \right\} }+{\bar{u}}(T-\varsigma ,q_\varsigma ) \mathbbm {1}_{\left\{ {\sigma _a^1>T} \right\} }]\\= & {} {\mathbb {E}}[{\bar{u}}(T-\sigma _a^1,0)\mathbbm {1}_{\left\{ {\sigma _a^1\le T} \right\} } + {\bar{u}}(0,q_T)\mathbbm {1}_{\left\{ {\sigma _a^1>T} \right\} }]\\= & {} {\mathbb {P}}[\sigma _a^1(x)>T]. \end{aligned}$$

This implies that \({\bar{u}}(T,x)={\mathbb {P}}[\sigma _a^1(x)> T]\).\(\square \)
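
As a numerical sanity check on Proposition 1, the following sketch integrates the survival probability of the queue on a truncated state space and compares it with a Monte Carlo estimate obtained by thinning. The rate functions, horizon and truncation level are illustrative assumptions, and the backward equation below is one standard discretized reading of the initial value problem above, not the paper's own implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative time-dependent rates; these are assumptions of this sketch only.
lam = lambda t: 1.0 + 0.5 * np.sin(t)   # limit-order arrival rate, lambda_t
mu  = lambda t: 1.3 + 0.5 * np.sin(t)   # market-order / cancellation rate, mu_t

T, x0, N = 5.0, 4, 200                  # horizon, initial queue size, truncation level

# Backward equation for the survival probability u(r, z), with r = time to the horizon:
#   du/dr (r, z) = lam(T - r) [u(r, z+1) - u(r, z)] + mu(T - r) [u(r, z-1) - u(r, z)],
#   u(0, z) = 1 for z >= 1,  u(r, 0) = 0  (state 0 is absorbing).
# Then u(T, x0) approximates P[sigma_a^1 > T | q_0^a = x0].
dr = 1e-3
u = np.ones(N + 1)
u[0] = 0.0
for k in range(int(T / dr)):
    r = k * dr
    du = np.zeros_like(u)
    du[1:N] = lam(T - r) * (u[2:] - u[1:N]) + mu(T - r) * (u[:N - 1] - u[1:N])
    u += dr * du                        # explicit Euler step; u[N] is frozen (crude truncation)
    u[0] = 0.0
print("ODE estimate of P[sigma > T | q0 = x]:", u[x0])

# Monte Carlo estimate obtained by thinning an inhomogeneous Poisson event stream.
Lmax = 3.3                              # upper bound for lam(t) + mu(t) on [0, T]
def survives(x):
    t, q = 0.0, x
    while True:
        t += rng.exponential(1.0 / Lmax)
        if t > T:
            return True
        tot = lam(t) + mu(t)
        if rng.random() > tot / Lmax:
            continue                    # rejected (thinned) candidate event
        q += 1 if rng.random() < lam(t) / tot else -1
        if q == 0:
            return False                # queue depleted before T
print("Monte Carlo estimate              :", np.mean([survives(x0) for _ in range(20000)]))
```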

Proof of Lemma 3

According to Olver et al. (2010, Formula 10.30.4), for fixed \(\nu \),

$$\begin{aligned} I_\nu (x)\sim \frac{e^x}{\sqrt{2\pi x}}\qquad \qquad \text {as } x\rightarrow \infty . \end{aligned}$$

Thus, as \(T\rightarrow \infty \),

$$\begin{aligned} {\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>T\;\mid \;q_0^a=x]= & {} \left( \frac{\mu }{\lambda }\right) ^{x/2}\int _T^{\infty }\frac{x}{s}I_x\left( 2s\sqrt{\lambda \mu }\right) e^{-s(\lambda +\mu )}ds\\\sim & {} \left( \frac{\mu }{\lambda }\right) ^{x/2}\int _T^{\infty }\frac{x}{s} \frac{e^{2s\sqrt{\lambda \mu }}}{\sqrt{4s\pi \sqrt{\lambda \mu }}}e^{-s(\lambda +\mu )}ds\\\sim & {} \left( \frac{\mu }{\lambda }\right) ^{x/2}\int _T^{\infty }\frac{x}{2\sqrt{\pi \sqrt{\lambda \mu }}}s^{-3/2}e^{-s(\sqrt{\mu }-\sqrt{\lambda })^2}ds\\ \end{aligned}$$

Consequently, if \(\lambda =\mu \),

$$\begin{aligned} {\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>T\;\mid \;q_0^a=x]&\sim \int _T^{\infty }\frac{x}{2\lambda \sqrt{\pi }}s^{-3/2}ds\\&\sim \frac{x}{2\lambda \sqrt{\pi }} \frac{2}{\sqrt{T}}\\&\sim \frac{x}{\lambda \sqrt{\pi }} \frac{1}{\sqrt{T}}. \end{aligned}$$

This agrees with the result proved in Cont and de Larrard (2013). However, if \(\lambda <\mu \),

$$\begin{aligned} {\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>T\;\mid \;q_0^a=x]&\sim \left( \frac{\mu }{\lambda }\right) ^{x/2}\int _{T(\sqrt{\mu }-\sqrt{\lambda })^2}^{\infty }\frac{x}{2\sqrt{\pi \sqrt{\lambda \mu }}} \frac{(\sqrt{\mu }-\sqrt{\lambda })^3}{u^{3/2}}e^{-u}\frac{du}{(\sqrt{\mu }-\sqrt{\lambda })^2}\\&\sim \left( \frac{\mu }{\lambda }\right) ^{x/2} \frac{x(\sqrt{\mu }-\sqrt{\lambda })}{2\sqrt{\pi \sqrt{\lambda \mu }}} \int _{T(\sqrt{\mu }-\sqrt{\lambda })^2}^{\infty }u^{-3/2}e^{-u}du\\&= \left( \frac{\mu }{\lambda }\right) ^{x/2} \frac{x(\sqrt{\mu }-\sqrt{\lambda })}{\sqrt{\pi \sqrt{\lambda \mu }}} \left[ \frac{e^{-T(\sqrt{\mu }-\sqrt{\lambda })^2}}{(\sqrt{\mu }-\sqrt{\lambda })\sqrt{T}} -\Gamma \left( \frac{1}{2},T(\sqrt{\mu }-\sqrt{\lambda })^2\right) \right] \\&\sim \left( \frac{\mu }{\lambda }\right) ^{x/2} \frac{x(\sqrt{\mu }-\sqrt{\lambda })}{\sqrt{\pi \sqrt{\lambda \mu }}} \left[ \frac{e^{-T(\sqrt{\mu }-\sqrt{\lambda })^2}}{(\sqrt{\mu }-\sqrt{\lambda })\sqrt{T}} - T^{-3/2}e^{-T}+O(T^{-1})\right] \\&= \left( \frac{\mu }{\lambda }\right) ^{x/2} \frac{x(\sqrt{\mu }-\sqrt{\lambda })}{\sqrt{\pi \sqrt{\lambda \mu }}} \left[ \frac{e^{-T(\sqrt{\mu }-\sqrt{\lambda })^2}}{(\sqrt{\mu }-\sqrt{\lambda })\sqrt{T}} +o(T^{-1/2})\right] \\&\sim \left( \frac{\mu }{\lambda }\right) ^{x/2} \frac{x(\sqrt{\mu }-\sqrt{\lambda })}{\sqrt{\pi \sqrt{\lambda \mu }}}\frac{e^{-T{\mathcal {C}}}}{(T{\mathcal {C}})^{1/2}}, \end{aligned}$$

where in the second-to-last asymptotic expansion we used Formula 8.11.2 in Olver et al. (2010).

Above and in what follows, \({\mathcal {C}}:=(\sqrt{\mu }-\sqrt{\lambda })^2\). To compute the expectation in the case where \(\lambda =\mu \), notice that, for large enough T,

$$\begin{aligned} {\mathbb {E}}\left[ \sigma _{a,{\mathcal {Q}}}^1\;\mid \;q_0^a=x\right] = \int _0^\infty {\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>t\;\mid \;q_0^a=x] dt\ge \int _T^\infty {\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>t\;\mid \;q_0^a=x] dt\ge \frac{x}{2\lambda \sqrt{\pi }} \int _T^\infty \frac{1}{\sqrt{t}}dt = \infty , \end{aligned}$$

whereas if \(\lambda <\mu \), for a sufficiently large T, there are finite constants \(\widehat{C_1}\) and \(\widehat{C_2}\) such that for any \(n\ge 1\),

$$\begin{aligned} {\mathbb {E}}\left[ \left( \sigma _{a,{\mathcal {Q}}}^1\right) ^n\;\mid \;q_0^a=x\right]&= n\int _0^\infty t^{n-1}{\mathbb {P}}[\sigma _{a,{\mathcal {Q}}}^1>t\;\mid \;q_0^a=x] dt\\&\le \widehat{C_1} + \widehat{C_2}\int _T^\infty t^{n-1}\Big ( (t{\mathcal {C}})^{-1/2}e^{-t{\mathcal {C}}}\Big )dt\\&\le \widehat{C_1} + \widehat{C_2}\int _T^\infty t^{n-1 - 1/2}e^{-t{\mathcal {C}}}dt\\&= \widehat{C_1} + \widehat{C_2}\Gamma (n-1/2,T{\mathcal {C}}) <\infty . \end{aligned}$$

\(\square \)
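
A quick Monte Carlo illustration of the critical-case tail in Lemma 3 (the values \(\lambda =\mu =1\) and \(x=3\) are arbitrary choices for this sketch): the empirical survival probability of the homogeneous queue is compared with the asymptotic \(x/(\lambda \sqrt{\pi T})\) obtained above.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = mu = 1.0    # critical case lambda = mu (arbitrary value for this sketch)
x = 3             # initial queue size

def survives(T):
    """One path of the homogeneous birth-death queue; True if it stays positive on [0, T]."""
    t, q = 0.0, x
    while True:
        t += rng.exponential(1.0 / (lam + mu))
        if t > T:
            return True
        q += 1 if rng.random() < lam / (lam + mu) else -1
        if q == 0:
            return False

for T in (50.0, 200.0, 400.0):
    mc = np.mean([survives(T) for _ in range(10000)])
    tail = x / (lam * np.sqrt(np.pi * T))
    print(f"T = {T:5.0f}   Monte Carlo: {mc:.4f}   x/(lam*sqrt(pi*T)): {tail:.4f}")
```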

Proof of Proposition 4

For \(0\le t\le A_T\), let \({\bar{w}}(t,x)\) be the function defined in Eq. 7. For \(0\le t\le T\), set \({\bar{v}}(t,x)={\bar{w}}(A_{t},x)\). Then, \({\bar{v}}(t,x)\) belongs to the domain of \({\mathscr {L}}_t\) and

$$\begin{aligned} \frac{\partial }{\partial r}{\bar{v}}(T-r,x) = \frac{\partial }{\partial r}\Bigg [{\bar{w}}(A_{T-r},x)\Bigg ] = \frac{\partial }{\partial r}{\bar{w}}(A_{T-r},x)\cdot (-a_r)= -Q{\bar{w}}(A_{T-r},x)a_r, \end{aligned}$$

for all \(r\in (0,T)\) and \(x\in {\mathbb {N}}\). However, since \({\mathscr {L}}_r=Qa_r\),

$$\begin{aligned} \frac{\partial }{\partial t}{\bar{v}}(t,x) + {\mathscr {L}}_r{\bar{v}}(t,x) = 0 \end{aligned}$$

for all \(x\in {\mathbb {N}}\), \(t\in (0,T)\). Moreover, since \({\bar{w}}(t,x)\) satisfies the IVP 6, for \(0\le r<T\), \({\bar{v}}(T-r,0)= {\bar{w}}(A_{T-r},0) = 0\) and for any \(z\in {\mathbb {N}}\), \({\bar{v}}(0,z)={\bar{w}}(A_{0},z)={\bar{w}}(0,z)=1\). Thus, by Proposition 1, the result follows.\(\square \)
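
As a concrete instance of the time change in Proposition 4 (the rate \(\alpha _t\) below is only an illustrative choice, and \({\bar{w}}\) is read, as Proposition 1 suggests in the constant-rate case, as the survival probability of the homogeneous queue): taking \(\alpha _t=1+t\), so that \(A_t=\int _0^t\alpha _u\,du=t+t^2/2\),

$$\begin{aligned} {\mathbb {P}}\left[ \sigma _{a}^1>T\;\mid \;q_0^a=x\right] ={\bar{w}}\left( T+\tfrac{T^2}{2},x\right) ={\mathbb {P}}\left[ \sigma _{a,{\mathcal {Q}}}^1>T+\tfrac{T^2}{2}\;\mid \;q_0^a=x\right] , \end{aligned}$$

that is, the survival probability of the time-dependent queue is the homogeneous one evaluated at the accelerated clock \(A_T\).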

Proof of Lemma 5

By Remark 2 and Lemma 3, the first part is straightforward. For the second part, consider the following cases.

  • If \(\alpha _t\sim t^s\log ^m(t)\) as \(t\rightarrow \infty \) for some \(s\ne -1\) and \(m\in {\mathbb {N}}\cup \{0\}\), then, for sufficiently large t, \(A_t\ge {\hat{c}}t^{s+1}\). Therefore,

    • If \(\lambda <\mu \) (by the proof of Lemma 3), there are finite constants \({\mathcal {C}}\) and \(C_1,\ldots ,C_4\) such that,

      $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]= & {} n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\\le & {} T^n+n\int _T^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\\le & {} T^n + C_1 \int _T^\infty t^{n-1}\left[ \left( {\mathcal {C}}A_t\right) ^{-1/2}e^{-{\mathcal {C}}A_t} \right] ^2 dt\\\le & {} T^n + C_1 \int _T^\infty t^{n-1}\left[ \left( {\mathcal {C}}t^{s+1}\right) ^{-1/2}e^{-{\mathcal {C}}t^{s+1}} \right] ^2 dt\\\le & {} T^n + C_2 \int _T^\infty t^{n-s-2}e^{-2{\mathcal {C}}t^{s+1}} dt\\= & {} T^n + C_3 \int _T^\infty \left( u/2{\mathcal {C}}\right) ^{(n-2(s+1))/(s+1)}e^{-u}du\\= & {} T^n + C_4 \int _T^\infty u^{n/(s+1)-2}e^{-u}du<\infty . \end{aligned}$$
    • If \(\lambda =\mu \), there are finite constants \(C_1\), \(C_2\) and \(C_3\) such that,

      $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]= & {} n\int _0^\infty t^{n-1} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} C_1+n\int _T^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{A_t} dt\\= & {} C_1 + C_2 \int _T^\infty \frac{xy}{\lambda ^2\pi }t^{n-s-2}dt\\= & {} C_3\mathbbm {1}_{\left\{ {n<s+1} \right\} }+\infty \mathbbm {1}_{\left\{ {n\ge s+1} \right\} }. \end{aligned}$$
  • If \(\alpha _t\sim k/t\) for some \(k>0\), then \(A_t\sim k\log (t)\).

    • If \(\lambda <\mu \), for sufficiently large T, there are finite constants \({\mathcal {C}}\), \(C_1\), \(C_2\) and \(C_3\) such that,

      $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]= & {} n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\= & {} C_1 + n\int _T^\infty t^{n-1} \left[ A_t^{-1/2}e^{-{\mathcal {C}}A_t}\right] ^2dt\\= & {} C_1 + n\int _T^\infty t^{n-1} \left[ (k\log (t))^{-1/2}e^{-k{\mathcal {C}}\log (t)}\right] ^2dt\\= & {} C_1 + C_2\int _T^\infty \frac{t^{n-1-2k{\mathcal {C}}}}{k\log (t)} dt\\= & {} C_3\mathbbm {1}_{\left\{ {n<2{\mathcal {C}}k} \right\} }+ \infty \mathbbm {1}_{\left\{ {n\ge 2{\mathcal {C}}k} \right\} }. \end{aligned}$$
    • If \(\lambda =\mu \), there are finite constants \(C_1\), \(C_2\) and \(C_3\) such that,

      $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]&=n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&=C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{A_t} dt\\&=C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{k\log (t)} dt\\&=C_1 + C_3 \int _T^\infty \frac{t^{n-1}}{\log (t)} dt=\infty . \end{aligned}$$

      \(\square \)
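
For a concrete reading of these conditions (the choice \(\alpha _t\sim t\), that is \(s=1\) and \(m=0\), is only an example): if \(\lambda <\mu \), every moment of \(\tau _{{\mathcal {H}}}^1\) is finite, whereas in the critical case the computation above gives

$$\begin{aligned} \lambda =\mu ,\quad \alpha _t\sim t:\qquad {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right] <\infty \quad \Longleftrightarrow \quad n<s+1=2, \end{aligned}$$

so the first price-change time then has a finite mean but an infinite variance.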

Proof of Proposition 6

Let \(F_{n,{\mathcal {Q}}}(t)\) and \(F_{n,{\mathcal {H}}}(t)\) denote the cdfs of \(S^n_{{\mathcal {Q}}}\) and \(S^n_{{\mathcal {H}}}\), respectively, and let \(f_{n,{\mathcal {Q}}}(t)\) and \(f_{n,{\mathcal {H}}}(t)\) denote their corresponding densities. The result will be proven by induction. The base case, \(n=1\), is given in Corollary 1. Assume the result is true for every \(m\le n\), with \(n\in {\mathbb {N}}\). Then, by Corollary 1 and the induction hypothesis,

$$\begin{aligned} F_{n,{\mathcal {H}}}(t)=F_{n,{\mathcal {Q}}}(A_t)\qquad \qquad \text {and}\qquad \qquad f_{n,{\mathcal {H}}}(t)=f_{n,{\mathcal {Q}}}(A_t)\alpha _t. \end{aligned}$$
(A3)

Furthermore, by the definition of \(\tau ^n\) and \(S^n\)

$$\begin{aligned} {\mathbb {P}}_{x,y}[S^{n+1}_{{\mathcal {H}}}\le t]&={\mathbb {P}}_{x,y}[S^{n}_{{\mathcal {H}}}\le t, \tau ^{n+1}_{{\mathcal {H}}}\le t-S^{n}_{{\mathcal {H}}}] \nonumber \\&=\int _0^{t} {\mathbb {P}}_{x,y}[S^{n}_{{\mathcal {H}}}\le t, \tau ^{n+1}_{{\mathcal {H}}}\le t-S^{n}_{{\mathcal {H}}}\;\mid \;S^{n}_{{\mathcal {H}}}=u]{\mathbb {P}}_{x,y}[S^{n}_{{\mathcal {H}}}=u]du \nonumber \\&=\int _0^{t} {\mathbb {P}}[\tau ^{n+1}_{{\mathcal {H}}}\le t-S^{n}_{{\mathcal {H}}}\;\mid \;S^{n}_{{\mathcal {H}}}=u]{\mathbb {P}}_{x,y}[S^{n}_{{\mathcal {H}}}=u]du\end{aligned}$$
(A4)
$$\begin{aligned}&=\int _0^{t} {\mathbb {P}}[\tau ^{n+1}_{{\mathcal {H}}}\le t-u]f_{n,{\mathcal {H}}}(u)du \nonumber \\&=\int _0^{t} {\mathbb {P}}[\tau ^{n+1}_{{\mathcal {Q}}}\le A_t-A_u]f_{n,{\mathcal {Q}}}(A_u)\alpha _u du\nonumber \\&=\int _0^{t} F_{1,{\mathcal {Q}}}(A_t-A_u)f_{n,{\mathcal {Q}}}(A_u)\alpha _u du \nonumber \\&=\int _0^{A_t} F_{1,{\mathcal {Q}}}(A_t-u)f_{n,{\mathcal {Q}}}(u) du \nonumber \\&=\int _0^{A_t} F_{1,{\mathcal {Q}}}(A_t-u)dF_{n,{\mathcal {Q}}}(u) \nonumber \\&={\mathbb {P}}_{x,y}[S^{n+1}_{{\mathcal {Q}}}\le A_t]. \end{aligned}$$
(A5)

In the last equality we used the facts that \(S^{n+1}_{{\mathcal {Q}}}=S^{n}_{{\mathcal {Q}}}+\tau ^{n+1}_{{\mathcal {Q}}}\) and that, for non-negative independent random variables X and Y,

$$\begin{aligned} F_{X+Y}(t)={\mathbb {P}}[X+Y\le t]=F_X*F_Y(t)=\int _0^tF_X(t-x)dF_Y(x), \end{aligned}$$

with \(F_X\) and \(F_Y\) denoting the cdfs of X and Y.\(\square \)
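
As a purely illustrative instance of the convolution identity used in the last step (the exponential law is not one arising in the model): if X and Y are independent and exponentially distributed with parameter \(\theta \), then

$$\begin{aligned} F_{X+Y}(t)=\int _0^t F_X(t-x)\,dF_Y(x)=\int _0^t\left( 1-e^{-\theta (t-x)}\right) \theta e^{-\theta x}\,dx =1-e^{-\theta t}(1+\theta t),\qquad t\ge 0, \end{aligned}$$

which is the Gamma\((2,\theta )\) distribution function, as expected.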

Proof of Theorem 7

By the dynamics of the order book described in Sect. 3, the sequence of random price changes \(X_i\in \{-1,1\}\) is independent. Suppose now that \(\lambda <\mu \).

  • If \(\alpha _t\sim t^{s}\log ^m(t)\) for \(s\ne -1\), \(m\ge 0\), or if \(\dfrac{\alpha _t}{t^{-1}} \rightarrow k\) as \(t\rightarrow \infty \) with \(2{\mathcal {C}}k> 1\), then, by Lemma 5, for every \(x,y\in {\mathbb {N}}\),

    $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\Big \vert \;q_0^a=x, q_0^b=y\right] <\infty . \end{aligned}$$

    Since \(N_t=\max \{n\ge 0\;\mid \; \tau _1+\tau _2+\ldots +\tau _n\le t\}\), we have

    $$\begin{aligned} \tau _1+\tau _2+\ldots +\tau _{N_t}\le t \le \tau _1+\tau _2+\ldots +\tau _{N_t+1}. \end{aligned}$$

    Dividing the previous inequality by \(N_t\) and using that \(N_t\rightarrow \infty \) as \(t\rightarrow \infty \), the Strong Law of Large Numbers yields that, almost surely,

    $$\begin{aligned} \frac{t}{N_t}\rightarrow {\mathbb {E}}[\tau ]:=\sum \limits _{x,y\in {\mathbb {N}}} {\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\;\Big \vert \;q_0^a=x, q_0^b=y\right] f(x,y)\qquad \qquad \text { as }t\rightarrow \infty . \end{aligned}$$

    Therefore, by using the sequence \(t_n=tn\), we decompose the rescaled process \(s_{t_n}/\sqrt{n}\), where \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_j\), as:

    $$\begin{aligned} \frac{s_{t_n}}{\sqrt{n}}= \underbrace{\frac{s_0}{\sqrt{n}}}_{\hbox {I}_n}+\underbrace{\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}\left( X_j\right) }_{\hbox {II}_n} + \underbrace{\left( \frac{1}{\sqrt{n}}\sum \limits _{j=1}^{N_{t_n}}X_j-\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}X_j\right) }_{\hbox {III}_n} \end{aligned}$$

    As \(n\rightarrow \infty \), clearly, I\(_n\Rightarrow 0\). Also, by Donsker’s Invariance principle,

    $$\begin{aligned} \hbox {II}_n&\Rightarrow \sigma W_t, \end{aligned}$$

    where \(\sigma \) is a constant. Now, since \(X_j\in \left\{ 1,-1\right\} \), for any \(\epsilon >0\),

    $$\begin{aligned} {\mathbb {P}}\left( \left| \sum \limits _{j=1}^{N_{t_n}}X_j-\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}X_j\right| \ge \epsilon \sqrt{n}\right)&\le {\mathbb {P}}\left( \left| \sum \limits _{j=N_{t_n}\wedge [tn/{\mathbb {E}}[\tau _1]]}^{N_{t_n}\vee [tn/{\mathbb {E}}[\tau _1]]}X_j\right| \ge \epsilon \sqrt{n}\right) \\&\le {\mathbb {P}}\left( \frac{1}{2}\left| N_{t_n}-[tn/{\mathbb {E}}[\tau _1]]\right| \ge \epsilon \sqrt{n}\right) \\&\le {\mathbb {P}}\left( \left| \frac{N_{t_n}}{[tn/{\mathbb {E}}[\tau _1]]}-1\right| \ge \frac{2\epsilon \sqrt{n}}{[tn/{\mathbb {E}}[\tau _1]]}\right) , \end{aligned}$$

    which converges to 0 as \(n\rightarrow \infty \). Thus, III\(_n\) converges to 0 in probability and we conclude the proof.

  • If \(\dfrac{\alpha _t}{t^{-1}} \rightarrow k\) with \(2{\mathcal {C}}k\le 1\) as \(t\rightarrow \infty \), then \(A_T\sim k\log (T)\). By Lemma 5,

    $$\begin{aligned} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>T\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{\exp (-2{\mathcal {C}}k\log (T))}{k\log (T)}\\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{T^{-2{\mathcal {C}}k}}{k\log (T)}\\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{kT^{2{\mathcal {C}}k}\log (T)} \end{aligned}$$

    Therefore,

    $$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/2k{\mathcal {C}}}\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim n\left[ \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{kn\log (n^{1/2k{\mathcal {C}}})}\right] \\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{k\log (n^{1/2k{\mathcal {C}}})}. \end{aligned}$$

    Thus,

    $$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/2k{\mathcal {C}}}\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$

    Hence, by Theorem 6.4.2 in Gut (2013), in probability,

    $$\begin{aligned} \frac{S_n-n{\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\mathbbm {1}_{\left\{ {\tau _{{\mathcal {H}}}^1<n^{1/2k{\mathcal {C}}}} \right\} }\right] }{n^{1/2k{\mathcal {C}}}}\rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$

    By Proposition 9, \(\hat{{\mathcal {A}}}:=\lim _{n\rightarrow \infty }\frac{n{\mathbb {E}}\left[ \tau \mathbbm {1}_{\left\{ {\tau <n^{1/2k{\mathcal {C}}}} \right\} }\right] }{n^{1/2k{\mathcal {C}}}}\) is a constant. Thus, by a similar argument as in the previous bullet, in probability,

    $$\begin{aligned} \frac{t}{N_t^{1/2k{\mathcal {C}}}}\rightarrow \hat{{\mathcal {A}}} \qquad \qquad \text { as } t\rightarrow \infty , \end{aligned}$$

    or equivalently,

    $$\begin{aligned} N_t\sim \left( \frac{t}{\hat{{\mathcal {A}}}}\right) ^{2k{\mathcal {C}}} \qquad \qquad \text { as } t\rightarrow \infty . \end{aligned}$$

    As before, by using the sequence \(t_n=tn^{1/2k{\mathcal {C}}}\), we decompose the rescaled process \(s_{t_n}/\sqrt{n}\), where \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_j\), as:

    $$\begin{aligned} \frac{s_{t_n}}{\sqrt{n}}= \underbrace{\frac{s_0}{\sqrt{n}}}_{\hbox {I}_n}+\underbrace{\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[n(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}]}\left( X_j\right) }_{\hbox {II}_n} + \underbrace{\left( \frac{1}{\sqrt{n}}\sum \limits _{j=1}^{N_{t_n}}X_j-\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[n(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}]}X_j\right) }_{\hbox {III}_n} \end{aligned}$$

    By similar arguments as above, as \(n\rightarrow \infty \),

    $$\begin{aligned} \hbox {I}_n&\Rightarrow 0\\ \hbox {III}_n&\Rightarrow 0\\ \hbox {II}_n&\Rightarrow W_{(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}}= W_{2k{\mathcal {C}}\,\hat{{\mathcal {A}}}^{-{2k{\mathcal {C}}}}\int _0^t \frac{1}{u^{1-2k{\mathcal {C}}}}du}. \end{aligned}$$

    Moreover, in distribution,

    $$\begin{aligned} W_{(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}}=\sqrt{2k{\mathcal {C}}}\,\hat{{\mathcal {A}}}^{-k{\mathcal {C}}}\int _0^t \sqrt{\frac{1}{u^{1-2{\mathcal {C}}k}}}\,dW_u, \end{aligned}$$

    which concludes the proof.\(\square \)
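
To illustrate the renewal-type rescaling used in the first case of the proof above, the following sketch checks numerically that the variance of \(s_{t_n}/\sqrt{n}\) is approximately \(t/{\mathbb {E}}[\tau ]\), the variance of the limit \(\sigma W_t\) when the price moves have unit variance. The Gamma waiting times, the symmetric \(\pm 1\) moves and all parameter values are assumptions of this example, not the model's law of \(\tau \).

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumptions of this sketch: Gamma waiting times with finite mean E[tau],
# and symmetric +/-1 price moves (so the limiting sigma^2 equals 1/E[tau]).
mean_tau = 2.0
t, n, n_paths = 1.0, 4000, 2000

def rescaled_price(t, n):
    """Return s_{tn} / sqrt(n), where s_u is the sum of +/-1 moves up to calendar time u."""
    horizon = t * n
    m = int(2 * horizon / mean_tau) + 100                       # generous bound on the number of renewals
    waits = rng.gamma(shape=2.0, scale=mean_tau / 2.0, size=m)  # i.i.d. waiting times, mean = mean_tau
    steps = int(np.searchsorted(np.cumsum(waits), horizon))     # N_{tn}: renewals up to the horizon
    moves = rng.choice((-1, 1), size=steps)                     # independent +/-1 price changes
    return moves.sum() / np.sqrt(n)

samples = np.array([rescaled_price(t, n) for _ in range(n_paths)])
print("empirical variance of s_{tn}/sqrt(n):", round(samples.var(), 4))
print("t / E[tau]                          :", t / mean_tau)
```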

Proof of Theorem 8

By the dynamics of the order book described in Sect. 3, the sequence of random price changes \(X_i\in \{-1,1\}\) is independent. Suppose now that \(\lambda =\mu \).

  • If \(\alpha _t\sim t^{s}\log ^m(t)\) as \(t\rightarrow \infty \), for any \(s>0, m\ge 0\), by Lemma 5, for every \(x,y\in {\mathbb {N}}\),

    $$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\Big \vert \;q_0^a=x, q_0^b=y\right] <\infty . \end{aligned}$$

    The proof then follows in the same way as in the proof of Theorem 7.

  • If \(\alpha _t\sim t^{-1+s}\) as \(t\rightarrow \infty \) for some \(s\in (0,1]\), then \(A_t\sim t^s/s\) and, by Lemma 5,

    $$\begin{aligned} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>T\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{T^s} \end{aligned}$$

    Therefore,

    $$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/s}\log (n)\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim n\left[ \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{n\log ^s(n)}\right] \\&\sim \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{\log ^s(n)}. \end{aligned}$$

    Thus, since \(s>0\),

    $$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/s}\log (n)\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$

    Hence, by Theorem 6.4.2 in Gut (2013), in probability,

    $$\begin{aligned} \frac{S_n-n{\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\mathbbm {1}_{\left\{ {\tau _{{\mathcal {H}}}^1<n^{1/s}\log (n)} \right\} }\right] }{n^{1/s}\log (n)}\rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty , \end{aligned}$$

    where, by Proposition 10, \(\hat{{\mathcal {B}}}:=\lim _{n\rightarrow \infty }\frac{n{\mathbb {E}}\left[ \tau \mathbbm {1}_{\left\{ {\tau <n^{1/s}\log (n)} \right\} }\right] }{n^{1/s}\log (n)}\) is a constant. Thus, by a similar argument as in the proof of Theorem 7, in probability,

    $$\begin{aligned} \frac{t}{N_t^{s}\ln (N_t)}\sim \hat{{\mathcal {B}}} \qquad \qquad \text { as } t\rightarrow \infty , \end{aligned}$$

    or what is the same,

    $$\begin{aligned} N_t^s\log (N_t)\sim \frac{t}{\hat{{\mathcal {B}}}} \qquad \qquad \text { as } t\rightarrow \infty . \end{aligned}$$
    (A6)

    Let \(\psi (t)\) be the inverse function of \(f(t):=t^s\log (t)\). Notice that \(\psi \) is well defined since f is strictly increasing. By definition, \(\psi (t)=u\) implies that \(f(u)=t\) or, what is the same, \(u^s\log (u)=t\), or \(\psi (t)^s\log (\psi (t))=t\). Thus,

    $$\begin{aligned} \psi (t)&\sim \frac{t^{1/s}}{\log (\psi (t))}\\&\sim \frac{t^{1/s}}{(1/s)\log (\psi (t)^s)}\\&\sim \frac{st^{1/s}}{\log (\psi (t)^s\log (\psi (t)))}\\&\sim \frac{st^{1/s}}{\log (t)} \end{aligned}$$

    Therefore, by Eq. A6,

    $$\begin{aligned} f(N_t)\sim \frac{t}{\hat{{\mathcal {B}}}} \end{aligned}$$

    and since \(\psi \) is the inverse of f, then

    $$\begin{aligned} N_t\sim \psi \left( \frac{t}{\hat{{\mathcal {B}}}}\right) \sim \frac{s\left( \frac{t}{\hat{{\mathcal {B}}}}\right) ^{1/s}}{\log \left( \frac{t}{\hat{{\mathcal {B}}}}\right) }\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}}\frac{t^{1/s}}{\log \left( t\right) -\log (\hat{{\mathcal {B}}})} \end{aligned}$$

    Using the sequence \(t_n=t(n\log (n))^s\), we have that,

    $$\begin{aligned} N_{t_n}&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}}\frac{(tn^{s}\log ^s(n))^{1/s}}{\log \left( tn^{s}\log ^s(n)\right) -\log (\hat{{\mathcal {B}}})}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n\log (n)}{\log \left( tn^{s}\log ^s(n)\right) -\log (\hat{{\mathcal {B}}})}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n}{\frac{\log \left( tn^{s}\log ^s(n)\right) }{\log (n)} -\frac{\log (\hat{{\mathcal {B}}})}{\log (n)}}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n}{s}\sim \frac{t^{1/s}n}{\hat{{\mathcal {B}}}^{1/s}} \end{aligned}$$

    To conclude, we decompose the rescaled process \(s_{t_n}/\sqrt{n}\), where \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_j\), as in the proof of Theorem 7 and use the same arguments therein.

  • If \(\alpha _t\sim t^s\log ^m(t)\) with \(s<0\), or if \(\alpha _t\sim k/t\), then, for any sequence \(b_n\) that is regularly varying at infinity with exponent \(1/\rho \) for some \(\rho \in (0,1]\),

    $$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>b_n\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow \infty \qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$

    Thus, \(N_t\) cannot be rescaled to ensure a Law of Large Numbers and the price process does not converge.\(\square \)
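
For a concrete instance of the second case above (the value \(s=1/2\) is only an example): with \(\lambda =\mu \) and \(\alpha _t\sim t^{-1/2}\), one has \(A_t\sim 2\sqrt{t}\) and, by Lemma 5,

$$\begin{aligned} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>T\;\Big \vert \;q_0^a=x, q_0^b=y\right] \sim \frac{xy}{\lambda ^2\pi }\,\frac{1}{2\sqrt{T}},\qquad \qquad n^{1/s}\log (n)=n^{2}\log (n), \end{aligned}$$

so the inter-event times have an infinite mean and the truncated weak law of large numbers is applied at the level \(n^{2}\log (n)\), as in the argument above.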

Appendix B Auxiliary Results

Proposition 9

Let \(\tau \) be a positive random variable such that \({\mathbb {P}}[\tau >t]\sim \frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}\), where \(2k{\mathcal {C}}\le 1\) and \(\Theta \) is a constant, and define the sequence

$$\begin{aligned} X_n=\frac{n}{n^{1/2k{\mathcal {C}}}}\tau \mathbbm {1}_{\left\{ {\tau <n^{1/2k{\mathcal {C}}}} \right\} }. \end{aligned}$$

Then, \(\Psi _n:={\mathbb {E}}[X_n]\) converges as \(n\rightarrow \infty \).

Proof

Let \(F_\tau (t)\) denote the CDF of \(\tau \). Then, since \(\tau \) is a positive random variable,

$$\begin{aligned} {\mathbb {E}}[g(\tau )]=\int _0^\infty g(x)dF_\tau (x), \end{aligned}$$

where the last integral should be understood as a Riemann-Stieltjes integral. Letting \(g(x)=\frac{n}{n^{1/2k{\mathcal {C}}}}x\mathbbm {1}_{\left\{ {x<n^{1/2k{\mathcal {C}}}} \right\} }\) and substituting in the above formula,

$$\begin{aligned} {\mathbb {E}}\left[ \frac{n}{n^{1/2k{\mathcal {C}}}}\tau \mathbbm {1}_{\left\{ {\tau <n^{1/2k{\mathcal {C}}}} \right\} }\right]&=\frac{n}{n^{1/2k{\mathcal {C}}}}\int _0^{n^{1/2k{\mathcal {C}}}}xdF_\tau (x)\\&=-\frac{n}{n^{1/2k{\mathcal {C}}}}\int _0^{n^{1/2k{\mathcal {C}}}}xd(1-F_\tau (x))\\ \end{aligned}$$

Noting that the left-hand side is \(\Psi _n\) and integrating by parts,

$$\begin{aligned} \Psi _n&=-\frac{n}{n^{1/2k{\mathcal {C}}}}\cdot x{\mathbb {P}}[\tau>x]\Big \vert _{x=0}^{n^{1/2k{\mathcal {C}}}} +\frac{n}{n^{1/2k{\mathcal {C}}}}\int _{0}^{n^{1/2k{\mathcal {C}}}}{\mathbb {P}}[\tau>t]dt\nonumber \\&=-n{\mathbb {P}}\left[ \tau>n^{1/2k{\mathcal {C}}}\right] +n^{1-1/2k{\mathcal {C}}}\int _{0}^{n^{1/2k{\mathcal {C}}}}{\mathbb {P}}[\tau >t]dt\nonumber \\&\sim -\left. n\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}\right| _{t=n^{1/2k{\mathcal {C}}}} +n^{1-1/2k{\mathcal {C}}}\int _{0}^{n^{1/2k{\mathcal {C}}}}\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt\nonumber \\&\sim -\frac{\Theta }{2k{\mathcal {C}}\log (n)}+n^{1-1/2k{\mathcal {C}}}\int _{0}^{n^{1/2k{\mathcal {C}}}} \frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt \end{aligned}$$
(B7)

Then, by using L'Hôpital's rule, considering \(\Psi _n\) as a (differentiable) function of n, and setting \({\widehat{\Theta }}=\Theta /2k{\mathcal {C}}\),

$$\begin{aligned} \frac{d}{dn}\Psi _n\sim \frac{{\widehat{\Theta }}}{n\log ^2(n)} + n^{-1/2k{\mathcal {C}}}\int _{0}^{n^{1/2k{\mathcal {C}}}}\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt + n^{1-1/2k{\mathcal {C}}} \frac{{\widehat{\Theta }}}{n\log (n)} \end{aligned}$$

which is clearly positive and thus \(\Psi _n\) is an increasing sequence. Furthermore, by (B7), there exist constants \(\epsilon \) and \(T_\epsilon \) such that if \(n^{1/2k{\mathcal {C}}}>T_\epsilon \),

$$\begin{aligned} \Psi _n\le -\frac{\Theta }{2k{\mathcal {C}}\log (n)}+n^{1-1/2k{\mathcal {C}}}\int _{0}^{T_\epsilon }\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt + n^{1-1/2k{\mathcal {C}}}\int _{T_\epsilon }^{n^{1/2k{\mathcal {C}}}}\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt \end{aligned}$$

and since \(2k{\mathcal {C}}<1\), then \(1-1/2k{\mathcal {C}}\le 0\) and thus

$$\begin{aligned} \Psi _n&\le -\frac{\Theta }{2k{\mathcal {C}}}+\int _{0}^{T_\epsilon }\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt+ n^{1-1/2k{\mathcal {C}}}\int _{T_\epsilon }^{n^{1/2k{\mathcal {C}}}}\frac{\Theta }{t^{2k{\mathcal {C}}}}dt\\&=-\frac{\Theta }{2k{\mathcal {C}}}+\int _{0}^{T_\epsilon }\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt+ n^{1-1/2k{\mathcal {C}}}\frac{n^{-1+1/2k{\mathcal {C}}} + C(T_\epsilon )}{-2k{\mathcal {C}}+1}\\&\le -\frac{\Theta }{2k{\mathcal {C}}}+\int _{0}^{T_\epsilon }\frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}dt+ \frac{1}{1-2k{\mathcal {C}}} + \frac{C(T_\epsilon )}{1-2k{\mathcal {C}}}\\&<\infty \end{aligned}$$

Therefore, \(\Psi _n\) is also bounded and by the monotone convergence theorem for sequences it converges to a constant limit. \(\square \)

Proposition 10

Let \(\tau \) be a positive random variable such that \({\mathbb {P}}[\tau >t]\sim \frac{\Theta }{t^s}\), where \(0<s\le 1\) and \(\Theta \) is a constant. Define the sequence

$$\begin{aligned} X_n=\frac{n}{n^{1/s}\log (n)}\tau \mathbbm {1}_{\left\{ {\tau <n^{1/s}\log (n)} \right\} }. \end{aligned}$$

Then, \(\Phi _n:={\mathbb {E}}[X_n]\) converges as \(n\rightarrow \infty \).

Proof

As in the previous result, let \(F_\tau (t)\) denote the CDF of \(\tau \). Then, again, since \(\tau \) is a positive random variable,

$$\begin{aligned} {\mathbb {E}}[g(\tau )]=\int _0^\infty g(x)dF_\tau (x). \end{aligned}$$

Substituting \(g(x)=\frac{n}{n^{1/s}\log (n)}x\mathbbm {1}_{\left\{ {x<n^{1/s}\log (n)} \right\} }\) in the above formula,

$$\begin{aligned} {\mathbb {E}}\left[ \frac{n}{n^{1/s}\log (n)}\tau \mathbbm {1}_{\left\{ {\tau <n^{1/s}\log (n)} \right\} }\right]&=\frac{n}{n^{1/s}\log (n)}\int _0^{n^{1/s}\log (n)}xdF_\tau (x)\\&=-\frac{n}{n^{1/s}\log (n)}\int _0^{n^{1/s}\log (n)}xd(1-F_\tau (x))\\ \end{aligned}$$

Noting that the left-hand side is \(\Phi _n\) and integrating by parts,

$$\begin{aligned} \Phi _n&=-\frac{n}{n^{1/s}\log (n)}\cdot x{\mathbb {P}}[\tau>x]\Big \vert _{x=0}^{n^{1/s}\log (n)} +\frac{n}{n^{1/s}\log (n)}\int _{0}^{n^{1/s}\log (n)}{\mathbb {P}}[\tau>t]dt\\&=-n{\mathbb {P}}\left[ \tau>n^{1/s}\log (n)\right] +\frac{n^{1-1/s}}{\log (n)}\int _{0}^{n^{1/s}\log (n)}{\mathbb {P}}[\tau >t]dt\\&\sim -\left. n\frac{\Theta }{t^{s}}\right| _{t=n^{1/s}\log (n)}+\frac{n^{1-1/s}}{\log (n)}\int _{0}^{n^{1/s}\log (n)}\frac{\Theta }{t^s}dt\\&\sim -\frac{\Theta }{\log ^s(n)}+\frac{n^{1-1/s}}{\log (n)}\int _{0}^{n^{1/s}\log (n)}\frac{\Theta }{t^s}dt\\&\sim -\frac{\Theta }{\log ^s(n)}+\left. \frac{n^{1-1/s}}{\log (n)}\cdot \frac{\Theta t^{-s+1}}{-s+1}\right| _{t=0}^{n^{1/s}\log (n)}\\&\sim -\frac{\Theta }{\log ^s(n)}+\frac{n^{1-1/s}}{\log (n)}\cdot \frac{\Theta n^{-1+1/s}\log ^{-s+1}(n)}{-s+1}\\&\sim -\frac{\Theta }{\log ^s(n)}+\frac{\Theta }{(-s+1)\log ^{s}(n)}\\&\sim \frac{s\,\Theta }{(1-s)\log ^s(n)} \end{aligned}$$

which shows that \(\Phi _n\) decreases asymptotically to zero and, thus, converges, by arguments similar to those in the previous proposition. \(\square \)
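
A direct numerical evaluation of \(\Phi _n\) illustrates this behaviour. In the sketch below, the survival function \({\mathbb {P}}[\tau >t]=\min (1,\Theta /t^s)\) with \(s=1/2\) and \(\Theta =1\) is an arbitrary choice with the prescribed tail; \(\Phi _n\) is computed by quadrature from the integration-by-parts formula above and compared with the asymptotic just derived.

```python
import numpy as np

s, Theta = 0.5, 1.0                      # arbitrary choices with tail Theta / t^s

def tail(t):
    """Survival function P[tau > t] = min(1, Theta / t^s)."""
    return np.minimum(1.0, Theta / np.maximum(t, 1e-300) ** s)

def Phi(n):
    """Phi_n = (n / b_n) * int_0^{b_n} P[tau > t] dt - n * P[tau > b_n], with b_n = n^{1/s} log(n)."""
    b = n ** (1.0 / s) * np.log(n)
    grid = np.concatenate(([0.0], np.geomspace(1e-6, b, 20001)))   # grid refined near the origin
    y = tail(grid)
    integral = np.sum((y[1:] + y[:-1]) * np.diff(grid)) / 2.0      # trapezoidal rule
    return (n / b) * integral - n * tail(b)

for n in (10**2, 10**3, 10**4, 10**5):
    asym = s * Theta / ((1.0 - s) * np.log(n) ** s)                # asymptotic derived above
    print(f"n = {n:>7d}   Phi_n = {Phi(n):.4f}   s*Theta/((1-s)*log^s(n)) = {asym:.4f}")
```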

Cite this article

Chávez Casillas, J.A. A Time-Dependent Markovian Model of a Limit Order Book. Comput Econ 63, 679–709 (2024). https://doi.org/10.1007/s10614-023-10356-9
