Abstract
This paper considers a Markovian model of a limit order book in which time-dependent arrival rates are allowed. The goal is to understand the mechanisms through which a microscopic model of an order book can converge to a more general diffusion than a Brownian motion with constant coefficient; to this end, a simple time-dependent model is proposed. The model starts by describing the processes that govern the arrival of the different order types: limit orders, market orders and cancellations. In this sense, it is a microscopic rather than a “mesoscopic” model, where the starting point is usually the point process of the times at which price changes occur, aggregating into it all the information pertaining to the arrival of individual orders. Furthermore, several empirical studies are performed to shed some light on the validity of the modeling assumptions and to verify whether certain stocks satisfy the conditions for their price process to converge to a more complex diffusion.
References
Bouchaud, J. P., Farmer, J. D., & Lillo, F. (2009). How markets slowly digest changes in supply and demand. In Handbook of Financial Markets: Dynamics and Evolution (pp. 57–160).
Harris, L. (2003). Trading and Exchanges: Market Microstructure for Practitioners. Oxford: Oxford University Press.
Cont, R., & de Larrard, A. (2013). Price dynamics in a Markovian limit order market. SIAM Journal on Financial Mathematics, 4(1), 1–25.
Foucault, T., Kadan, O., & Kandel, E. (2003). Limit order book as a market for liquidity. Discussion paper series, The Federmann Center for the Study of Rationality, The Hebrew University, Jerusalem.
Gould, M. D., Porter, M. A., Williams, S., McDonald, M., Fenn, D. J., & Howison, S. D. (2013). Limit order books. Quantitative Finance, 13(11), 1709–1742.
Harris, L., & Hasbrouck, J. (1996). Market vs. limit orders: The SuperDOT evidence on order submission strategy. Journal of Financial and Quantitative Analysis, 31(2), 213–231.
Obizhaeva, A. A., & Wang, J. (2013). Optimal trading strategy and supply/demand dynamics. Journal of Financial Markets, 16(1), 1–32.
Law, B. (2015). A pure-jump market-making model for high-frequency trading. PhD thesis, Purdue University.
Cartea, A., & Jaimungal, S. (2015). Optimal execution with limit and market orders. Quantitative Finance, 15(8), 1279–1291.
Eisler, Z., Bouchaud, J.-P., & Kockelkoren, J. (2012). The price impact of order book events: Market orders, limit orders and cancellations. Quantitative Finance, 12, 1395–1419.
Engle, R. F., Ferstenberg, R., & Russell, J. R. (2006). Measuring and modeling execution cost and risk. NYU Working Paper.
Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2011). The flash crash: The impact of high frequency trading on an electronic market. Working paper.
Cont, R., Stoikov, S., & Talreja, R. (2010). A stochastic model for order book dynamics. Operations Research, 58(3), 549–563.
Chávez-Casillas, J. A., Elliott, R. J., Rémillard, B., & Swishchuk, A. V. (2019). A level-1 limit order book with time dependent arrival rates. Methodology and Computing in Applied Probability, 21(3), 699–719.
Billingsley, P. (1995). Probability and Measure (3rd ed.). New York: Wiley.
Jaisson, T., & Rosenbaum, M. (2015). Limit theorems for nearly unstable Hawkes processes. The Annals of Applied Probability, 25(2), 600–631.
Olver, F. W., Lozier, D. W., Boisvert, R. F., & Clark, C. W. (2010). NIST Handbook of Mathematical Functions (1st ed.). Cambridge University Press.
Gut, A. (2013). Probability: A Graduate Course (Vol. 75). New York: Springer.
Funding
The author declares that no funds, grants, or other support were received during the preparation of this manuscript.
Author information
Authors and Affiliations
Contributions
All the steps needed to complete this manuscript were done by the single author.
Corresponding author
Ethics declarations
Competing Interests
The author has no relevant financial or non-financial interests to disclose.
Appendices
Appendix A Proofs of Section 3
Proof of Proposition 1
The arrival processes \(L_t\) and \(M_t\) are Markov processes. Moreover, the queue process \(q_t^a\), describing the number of orders at the ask, is also a Markov process with generator given by Eq. 1. That is, for any function \(u\in \text {Dom}({\mathscr {L}}_t)\), \(t\in {\mathbb {R}}^+\) and \(z\in {\mathbb {N}}\),
Let \({\bar{u}}(t,x)\) be an arbitrary bounded function such that \(t\mapsto {\bar{u}}(t,x)\) is \(C^1\) for all x and \((t,x)\mapsto \partial {\bar{u}}(t,x)/\partial t\) is bounded. Fix \(T>0\) and let \(f(t,x)={\bar{u}}(T-t,x)\). Under the stated conditions, \({\bar{u}}\) belongs to the domain of the generator \({\mathscr {L}}_t\) and, thus, the process
is a local martingale. Therefore,
is a martingale with \(M_0=0\). Let \(\varsigma :=T\wedge \sigma _a^1\). By the Optional Sampling Theorem,
where \({\mathbb {E}}_x[\cdot ]:={\mathbb {E}}[\;\cdot \;\mid \;q_0=x]\).
On the other hand, suppose that \({\bar{u}}(t,x)\) solves the initial value problem 2. That is, \({\bar{u}}(t,x)\) satisfies
In that case, by (A2),
This implies that \({\bar{u}}(T,x)={\mathbb {P}}[\sigma _a^1(x)> T]\).\(\square \)
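As a numerical sanity check (not part of the proof), the survival probability \({\mathbb {P}}[\sigma _a^1(x)>T]\) can be estimated by simulating the ask queue as a birth–death process. The sketch below is a minimal Python illustration; the constant rates \(\lambda \), \(\mu \), the initial queue size and the horizons are hypothetical values chosen only for illustration.

```python
import random

def survival_prob(lam, mu, x0, T, paths=20000, rng=None):
    """Monte Carlo estimate of P[sigma_a^1 > T]: the probability that a
    birth-death queue (orders arrive at rate lam, are removed at rate mu)
    started at x0 has not hit 0 by time T."""
    rng = rng or random.Random(42)
    total = lam + mu
    alive = 0
    for _ in range(paths):
        t, q = 0.0, x0
        while q > 0:
            t += rng.expovariate(total)      # time of the next event
            if t > T:                        # queue stayed positive on [0, T]
                alive += 1
                break
            q += 1 if rng.random() < lam / total else -1
    return alive / paths

p1 = survival_prob(lam=1.0, mu=2.0, x0=3, T=1.0)
p5 = survival_prob(lam=1.0, mu=2.0, x0=3, T=5.0)
print(p1, p5)
```

With \(\lambda <\mu \) the queue empties almost surely, so the estimated survival probability decreases in T.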
Proof of Lemma 3
According to Olver et al. (2010, Formula 10.30.4), for fixed \(\nu \),
Thus, as \(T\rightarrow \infty \),
Consequently, if \(\lambda =\mu \),
This agrees with the result proved in Cont and de Larrard (2013). However, if \(\lambda <\mu \),
where in the second-to-last asymptotic expansion we used Formula 8.11.2 in Olver et al. (2010).
Let \({\mathcal {C}}=(\sqrt{\mu }-\sqrt{\lambda })^2\). To compute the expectation in the case where \(\lambda =\mu \), notice that, for large enough T,
whereas if \(\lambda <\mu \), for a sufficiently large T, there are finite constants \(\widehat{C_1}\) and \(\widehat{C_2}\) such that for any \(n\ge 1\),
\(\square \)
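The asymptotic expansion invoked above (Olver et al. 2010, Formula 10.30.4: \(I_\nu (z)\sim e^z/\sqrt{2\pi z}\) as \(z\rightarrow \infty \), for fixed \(\nu \)) can be checked numerically against the power series of the modified Bessel function. The sketch below is only an illustration; the order \(\nu =1\) and argument \(z=40\) are arbitrary choices.

```python
import math

def bessel_I(nu, z, terms=200):
    """Modified Bessel function of the first kind via its power series,
    I_nu(z) = sum_m (z/2)^(2m+nu) / (m! * Gamma(m+nu+1)), for integer nu."""
    term = (z / 2) ** nu / math.factorial(nu)   # m = 0 term
    total = term
    for m in range(terms):
        # ratio of consecutive series terms
        term *= (z / 2) ** 2 / ((m + 1) * (m + 1 + nu))
        total += term
    return total

z = 40.0
series_value = bessel_I(1, z)
asymptotic = math.exp(z) / math.sqrt(2 * math.pi * z)   # Formula 10.30.4
rel_err = abs(series_value - asymptotic) / series_value
print(rel_err)
```

The leading-order relative error is of size \((4\nu ^2-1)/(8z)\), under one percent at \(z=40\).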
Proof of Proposition 4
For \(0\le t\le A_T\), let \({\bar{w}}(t,x)\) be the function defined in Eq. 7. For \(0\le t\le T\), set \({\bar{v}}(t,x)={\bar{w}}(A_{t},x)\). Then, \({\bar{v}}(t,x)\) belongs to the domain of \({\mathscr {L}}_t\) and
for all \(r\in (0,T)\) and \(x\in {\mathbb {N}}\). However, since \({\mathscr {L}}_r=Qa_r\),
for all \(x\in {\mathbb {N}}\), \(t\in (0,T)\). Moreover, since \({\bar{w}}(t,x)\) satisfies the IVP 6, for \(0\le r<T\), \({\bar{v}}(T-r,0)= {\bar{w}}(A_{T-r},0) = 0\) and for any \(z\in {\mathbb {N}}\), \({\bar{v}}(0,z)={\bar{w}}(A_{0},z)={\bar{w}}(0,z)=1\). Thus, by Proposition 1, the result follows.\(\square \)
Proof of Lemma 5
By Remark 2 and Lemma 3, the first part is straightforward.

- Suppose \(\alpha _t\sim t^s\log ^m(t)\) as \(t\rightarrow \infty \) for some \(s\ne -1\), \(m\in {\mathbb {N}}\cup \{0\}\). Then, for sufficiently large t, \(A_t\ge {\hat{c}}t^{s+1}\). Therefore:

  - If \(\lambda <\mu \), by the proof of Lemma 3, there are finite constants \({\mathcal {C}}\), \(C_1\), \(C_2\), \(C_3\) and \(C_4\) such that, for any \(n\ge 1\),
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]&= n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&= n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&\le T^n+n\int _T^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&\le T^n + C_1 \int _T^\infty t^{n-1}\left[ \left( {\mathcal {C}}A_t\right) ^{-1/2}e^{-{\mathcal {C}}A_t} \right] ^2 dt\\&\le T^n + C_2 \int _T^\infty t^{n-1}\left[ \left( {\mathcal {C}}t^{s+1}\right) ^{-1/2}e^{-{\mathcal {C}}t^{s+1}} \right] ^2 dt\\&\le T^n + C_3 \int _T^\infty t^{n-s-2}e^{-2{\mathcal {C}}t^{s+1}} dt\\&= T^n + C_4 \int _{2{\mathcal {C}}T^{s+1}}^\infty u^{n/(s+1)-2}e^{-u}du<\infty , \end{aligned}$$
    where the last equality follows from the substitution \(u=2{\mathcal {C}}t^{s+1}\).
  - If \(\lambda =\mu \), there are finite constants \(C_1\), \(C_2\) and \(C_3\) such that
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]&= n\int _0^\infty t^{n-1} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&= n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&= C_1+n\int _T^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&= C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{A_t} dt\\&= C_1 + C_2 \int _T^\infty \frac{xy}{\lambda ^2\pi }t^{n-s-2}dt\\&= C_3\mathbbm {1}_{\left\{ n<s+1 \right\} }+\infty \,\mathbbm {1}_{\left\{ n\ge s+1 \right\} }. \end{aligned}$$
- Suppose \(\alpha _t\sim k/t\) for some \(k>0\). Then \(A_t\sim k\log (t)\).

  - If \(\lambda <\mu \), for a sufficiently large T there are finite constants \({\mathcal {C}}\), \(C_1\), \(C_2\) and \(C_3\) such that
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]&= n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&= n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&\le C_1 + n\int _T^\infty t^{n-1} \left[ A_t^{-1/2}e^{-{\mathcal {C}}A_t}\right] ^2dt\\&\le C_1 + n\int _T^\infty t^{n-1} \left[ (k\log (t))^{-1/2}e^{-k{\mathcal {C}}\log (t)}\right] ^2dt\\&= C_1 + C_2\int _T^\infty \frac{t^{n-1-2k{\mathcal {C}}}}{k\log (t)} dt\\&= C_3\mathbbm {1}_{\left\{ n<2k{\mathcal {C}} \right\} }+ \infty \,\mathbbm {1}_{\left\{ n\ge 2k{\mathcal {C}} \right\} }. \end{aligned}$$
  - If \(\lambda =\mu \), there are finite constants \(C_1\), \(C_2\) and \(C_3\) such that
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\mid \;q_0^a=x, q_0^b=y\right]&=n\int _0^\infty t^{n-1}{\mathbb {P}}\left[ \tau _{{\mathcal {Q}}}^1>A_t\;\mid \;q_0^a=x, q_0^b=y\right] dt\\&=C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{A_t} dt\\&=C_1 + C_2 \int _T^\infty \dfrac{xy}{\lambda ^2\pi } t^{n-1}\dfrac{1}{k\log (t)} dt\\&=C_1 + C_3 \int _T^\infty \frac{t^{n-1}}{\log (t)} dt=\infty . \end{aligned}$$\(\square \)
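The moment dichotomy established above can be illustrated numerically. The sketch below uses the stand-in tail \({\mathbb {P}}[\tau >t]=t^{-a}\), \(t\ge 1\) (a simplification of the model's exact tail), and compares the partial moment integrals \(n\int _1^M t^{n-1}{\mathbb {P}}[\tau >t]\,dt\) at two cutoffs: they stabilize when \(n<a\) and keep growing when \(n>a\).

```python
import math

def partial_moment(n, a, M, steps=20000):
    """Approximate n * integral_1^M t^(n-1) * t^(-a) dt by the midpoint
    rule on a logarithmic grid (substitute t = e^u, so dt = t du)."""
    h = math.log(M) / steps
    total = 0.0
    for i in range(steps):
        t = math.exp((i + 0.5) * h)
        total += t ** (n - a) * h       # t^(n-1-a) * t  (Jacobian of t = e^u)
    return n * total

# stand-in tail exponent a = 2.5: the 2nd moment (n = 2 < a) stabilizes,
# while the 3rd moment (n = 3 > a) keeps growing with the cutoff M
m2_small, m2_big = partial_moment(2, 2.5, 1e3), partial_moment(2, 2.5, 1e6)
m3_small, m3_big = partial_moment(3, 2.5, 1e3), partial_moment(3, 2.5, 1e6)
print(m2_small, m2_big, m3_small, m3_big)
```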
Proof of Proposition 6
Let \(F_{n,{\mathcal {Q}}}(t)\) and \(F_{n,{\mathcal {H}}}(t)\) denote the cdf of \(S^n_{{\mathcal {Q}}}\) and \(S^n_{{\mathcal {H}}}\), respectively. Moreover, let \(f_{n,{\mathcal {Q}}}(t)\) and \(f_{n,{\mathcal {H}}}(t)\) denote their corresponding densities. The result will be proven by induction. The base case, \(n=1\), is given in Corollary 1. Assume the result is true for any \(m\le n\in {\mathbb {N}}\). Then, by Corollary 1 and the induction hypothesis,
Furthermore, by the definition of \(\tau ^n\) and \(S^n\)
In the last equality we used the facts that \(S_{n+1}=S_n+\tau _{n+1}\) and that for X and Y, non-negative independent random variables,
with \(F_X\) and \(F_Y\) denoting the cdfs of X and Y.\(\square \)
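The convolution identity used in the last equality can be checked numerically in a case with a closed form: for independent Exp(1) variables, \(X+Y\sim \text {Gamma}(2,1)\), whose cdf is \(1-e^{-t}(1+t)\). This is only a sanity check of the bookkeeping, not part of the proof.

```python
import math

def cdf_sum_exp(t, steps=100000):
    """F_{X+Y}(t) = integral_0^t F_X(t - y) f_Y(y) dy for X, Y ~ Exp(1) i.i.d.,
    evaluated by the midpoint rule."""
    h = t / steps
    acc = 0.0
    for i in range(steps):
        y = (i + 0.5) * h
        acc += (1.0 - math.exp(-(t - y))) * math.exp(-y) * h
    return acc

t = 2.0
numeric = cdf_sum_exp(t)
closed_form = 1.0 - math.exp(-t) * (1.0 + t)   # Gamma(2,1) cdf at t
print(numeric, closed_form)
```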
Proof of Theorem 7
By the dynamics of the order book described in Sect. 3, the sequence of random price changes \(X_i\in \{-1,1\}\) is independent. Suppose \(\lambda <\mu \).
- If \(\alpha _t\sim t^{s}\log ^m(t)\) for \(s\ne -1\), \(m\ge 0\), or if \(\dfrac{\alpha _t}{t^{-1}} \rightarrow K\) as \(t\rightarrow \infty \) with \(2{\mathcal {C}}K> 1\), then by Lemma 5, for every \(x,y\in {\mathbb {N}}\),
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\Big \vert \;q_0^a=x, q_0^b=y\right] <\infty . \end{aligned}$$Since \(N_t=\max \{n\ge 0\;\mid \; \tau _1+\tau _2+\ldots +\tau _n\le t\}\) then,
$$\begin{aligned} \tau _1+\tau _2+\ldots +\tau _{N_t}\le t \le \tau _1+\tau _2+\ldots +\tau _{N_t+1}. \end{aligned}$$Dividing the previous inequality by \(N_t\), since \(N_t\rightarrow \infty \) as \(t\rightarrow \infty \), by using the Strong Law of Large Numbers we obtain that a.s.
$$\begin{aligned} \frac{t}{N_t}\rightarrow {\mathbb {E}}[\tau ]:=\sum \limits _{x,y\in {\mathbb {N}}} {\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\;\Big \vert \;q_0^a=x, q_0^b=y\right] f(x,y)\qquad \qquad \text { as }t\rightarrow \infty . \end{aligned}$$Therefore, by using the sequence \(t_n=tn\), we decompose the process \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_j\) as:
$$\begin{aligned} s_{t_n}= \underbrace{\frac{s_0}{\sqrt{n}}}_{\hbox {I}_n}+\underbrace{\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}\left( X_j\right) }_{\hbox {II}_n} + \underbrace{\left( \frac{1}{\sqrt{n}}\sum \limits _{j=1}^{N_{t_n}}X_j-\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}X_j\right) }_{\hbox {III}_n} \end{aligned}$$As \(n\rightarrow \infty \), clearly, I\(_n\Rightarrow 0\). Also, by Donsker’s Invariance principle,
$$\begin{aligned} \hbox {II}_n&\Rightarrow \sigma W_t, \end{aligned}$$where \(\sigma \) is a constant. Now, since \(X_j\in \left\{ 1,-1\right\} \), for any \(\epsilon >0\),
$$\begin{aligned} {\mathbb {P}}\left( \left| \sum \limits _{j=1}^{N_{t_n}}X_j-\sum \limits _{j=1}^{[tn/{\mathbb {E}}[\tau _1]]}X_j\right| \ge \epsilon \sqrt{n}\right)&\le {\mathbb {P}}\left( \left| \sum \limits _{j=N_{t_n}\wedge [tn/{\mathbb {E}}[\tau _1]]}^{N_{t_n}\vee [tn/{\mathbb {E}}[\tau _1]]}X_j\right| \ge \epsilon \sqrt{n}\right) \\&\le {\mathbb {P}}\left( \frac{1}{2}\left| N_{t_n}-[tn/{\mathbb {E}}[\tau _1]]\right| \ge \epsilon \sqrt{n}\right) \\&\le {\mathbb {P}}\left( \left| \frac{N_{t_n}}{[tn/{\mathbb {E}}[\tau _1]]}-1\right| \ge \frac{2\epsilon \sqrt{n}}{[tn/{\mathbb {E}}[\tau _1]]}\right) , \end{aligned}$$which converges to 0 as \(n\rightarrow \infty \). Thus, III\(_n\) converges to 0 in probability and we conclude the proof.
- If \(\dfrac{\alpha _t}{t^{-1}} \rightarrow k\) as \(t\rightarrow \infty \) with \(2{\mathcal {C}}k\le 1\), then \(A_T\sim k\log (T)\). By Lemma 5,
$$\begin{aligned} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>T\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{\exp (-2{\mathcal {C}}k\log (T))}{k\log (T)}\\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{T^{-2{\mathcal {C}}k}}{k\log (T)}\\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{kT^{2{\mathcal {C}}k}\log (T)} \end{aligned}$$Therefore,
$$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/2k{\mathcal {C}}}\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim n\left[ \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{kn\log (n^{1/2k{\mathcal {C}}})}\right] \\&\sim \left( \frac{\mu }{\lambda }\right) ^{(x+y)/2} \frac{xy}{\pi {\mathcal {C}}^2\sqrt{\lambda \mu }} \frac{1}{k\log (n^{1/2k{\mathcal {C}}})}. \end{aligned}$$Thus,
$$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/2k{\mathcal {C}}}\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$and by Theorem 6.4.2 in Gut (2013), in probability,
$$\begin{aligned} \frac{S_n-n{\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\mathbbm {1}_{\left\{ \tau _{{\mathcal {H}}}^1<n^{1/2k{\mathcal {C}}} \right\} }\right] }{n^{1/2k{\mathcal {C}}}}\rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$By Proposition 9, \(\hat{{\mathcal {A}}}:=\lim _{n\rightarrow \infty }\frac{n{\mathbb {E}}\left[ \tau \mathbbm {1}_{\left\{ \tau <n^{1/2k{\mathcal {C}}} \right\} }\right] }{n^{1/2k{\mathcal {C}}}}\) is a constant. Thus, by a similar argument as in the previous bullet, in probability,
$$\begin{aligned} \frac{t}{N_t^{1/2k{\mathcal {C}}}}\rightarrow \hat{{\mathcal {A}}} \qquad \qquad \text { as } t\rightarrow \infty , \end{aligned}$$or equivalently,
$$\begin{aligned} N_t\sim \left( \frac{t}{\hat{{\mathcal {A}}}}\right) ^{2k{\mathcal {C}}} \qquad \qquad \text { as } t\rightarrow \infty . \end{aligned}$$As before, by using the sequence \(t_n=tn^{1/2k{\mathcal {C}}}\), we decompose the process \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_j\) as:
$$\begin{aligned} s_{t_n}= \underbrace{\frac{s_0}{\sqrt{n}}}_{\hbox {I}_n}+\underbrace{\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[n(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}]}\left( X_j\right) }_{\hbox {II}_n} + \underbrace{\left( \frac{1}{\sqrt{n}}\sum \limits _{j=1}^{N_{t_n}}X_j-\frac{1}{\sqrt{n}}\sum \limits _{j=1}^{[n(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}]}X_j\right) }_{\hbox {III}_n} \end{aligned}$$By similar arguments as above, as \(n\rightarrow \infty \),
$$\begin{aligned} \hbox {I}_n&\Rightarrow 0\\ \hbox {III}_n&\Rightarrow 0\\ \hbox {II}_n&\Rightarrow W_{(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}}= W_{2k{\mathcal {C}}\hat{{\mathcal {A}}}^{-{2k{\mathcal {C}}}}\int _0^t u^{2k{\mathcal {C}}-1}du}. \end{aligned}$$Moreover, in distribution,
$$\begin{aligned} W_{(t/\hat{{\mathcal {A}}})^{{2k{\mathcal {C}}}}}=\sqrt{2k{\mathcal {C}}}\,\hat{{\mathcal {A}}}^{-k{\mathcal {C}}}\int _0^t u^{(2k{\mathcal {C}}-1)/2}\,dW_u, \end{aligned}$$which concludes the proof.\(\square \)
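The mechanism driving this non-standard limit is that heavy-tailed inter-event times make the number of price changes grow sublinearly, \(N_t\asymp t^{2k{\mathcal {C}}}\). The simulation below illustrates this with a stand-in Pareto tail \({\mathbb {P}}[\tau >x]=x^{-1/2}\), \(x\ge 1\) (playing the role of tail index \(2k{\mathcal {C}}=1/2\) and ignoring the logarithmic correction); all parameter values are illustrative.

```python
import random

def mean_count(t, tail_index, paths, rng):
    """Average number of renewals by time t when inter-event times are
    Pareto, P[tau > x] = x^(-tail_index) for x >= 1, sampled by inversion."""
    total = 0
    for _ in range(paths):
        s, n = 0.0, 0
        while True:
            u = 1.0 - rng.random()                  # u in (0, 1]
            s += u ** (-1.0 / tail_index)           # inverse-cdf Pareto draw
            if s > t:
                break
            n += 1
        total += n
    return total / paths

rng = random.Random(7)
n_small = mean_count(1e2, 0.5, 4000, rng)
n_big = mean_count(1e4, 0.5, 4000, rng)
ratio = n_big / n_small    # t grew by 100; sublinear growth predicts ~ 100**0.5 = 10
print(n_small, n_big, ratio)
```

A linear-growth renewal process would give a ratio near 100; the observed ratio near 10 is the sublinear counting that, after rescaling, produces the time-changed Brownian limit.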
Proof of Theorem 8
By the dynamics of the order book described in Sect. 3, the sequence of random price changes \(X_i\in \{-1,1\}\) is independent. Suppose \(\lambda =\mu \).
- If \(\alpha _t\sim t^{s}\log ^m(t)\) as \(t\rightarrow \infty \), for any \(s>0\), \(m\ge 0\), then by Lemma 5, for every \(x,y\in {\mathbb {N}}\),
$$\begin{aligned} {\mathbb {E}}\left[ \left( \tau _{{\mathcal {H}}}^1\right) ^n\;\Big \vert \;q_0^a=x, q_0^b=y\right] <\infty . \end{aligned}$$and the proof follows in the same way as in the proof of Theorem 7.
- If \(\alpha _t\sim t^{-1+s}\) as \(t\rightarrow \infty \) for any \(s\in (0,1]\), then \(A_t\sim t^s/s\) and, by Lemma 5,
$$\begin{aligned} {\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>T\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{T^s} \end{aligned}$$Therefore,
$$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/s}\log (n)\;\Big \vert \;q_0^a=x, q_0^b=y\right]&\sim n\left[ \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{n\log ^s(n)}\right] \\&\sim \dfrac{xy}{\lambda ^2\pi } \dfrac{s}{\log ^s(n)}. \end{aligned}$$Thus, since \(s>0\),
$$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>n^{1/s}\log (n)\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty , \end{aligned}$$and by Theorem 6.4.2 in Gut (2013), in probability,
$$\begin{aligned} \frac{S_n-n{\mathbb {E}}\left[ \tau _{{\mathcal {H}}}^1\mathbbm {1}_{\left\{ \tau _{{\mathcal {H}}}^1<n^{1/s}\log (n) \right\} }\right] }{n^{1/s}\log (n)}\rightarrow 0\qquad \qquad \text { as } n\rightarrow \infty , \end{aligned}$$where by Proposition 10, \(\hat{{\mathcal {B}}}:=\lim _{n\rightarrow \infty }\frac{n{\mathbb {E}}\left[ \tau \mathbbm {1}_{\left\{ \tau <n^{1/s}\log (n) \right\} }\right] }{n^{1/s}\log (n)}\) is a constant. Thus, by a similar argument as in the proof of Theorem 7, in probability,
$$\begin{aligned} \frac{t}{N_t^{s}\ln (N_t)}\sim \hat{{\mathcal {B}}} \qquad \qquad \text { as } t\rightarrow \infty , \end{aligned}$$or, equivalently,
$$\begin{aligned} N_t^s\log (N_t)\sim \frac{t}{\hat{{\mathcal {B}}}} \qquad \qquad \text { as } t\rightarrow \infty . \end{aligned}$$(A6)

Let \(\psi (t)\) be the inverse function of \(f(t):=t^s\log (t)\). Notice that \(\psi \) is well defined since f is strictly increasing. By definition, \(\psi (t)=u\) implies that \(f(u)=t\), that is, \(u^s\log (u)=t\), or \(\psi (t)^s\log (\psi (t))=t\). Thus,
$$\begin{aligned} \psi (t)&\sim \frac{t^{1/s}}{\log (\psi (t))}\\&\sim \frac{t^{1/s}}{(1/s)\log (\psi (t)^s)}\\&\sim \frac{st^{1/s}}{\log (\psi (t)^s\log (\psi (t)))}\\&\sim \frac{st^{1/s}}{\log (t)} \end{aligned}$$Therefore, by Eq. A6,
$$\begin{aligned} f(N_t)\sim \frac{t}{\hat{{\mathcal {B}}}} \end{aligned}$$and since \(\psi \) is the inverse of f, then
$$\begin{aligned} N_t\sim \psi \left( \frac{t}{\hat{{\mathcal {B}}}}\right) \sim \frac{s\left( \frac{t}{\hat{{\mathcal {B}}}}\right) ^{1/s}}{\log \left( \frac{t}{\hat{{\mathcal {B}}}}\right) }\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}}\frac{t^{1/s}}{\log \left( t\right) -\log (\hat{{\mathcal {B}}})} \end{aligned}$$Using the sequence \(t_n=t(n\log (n))^s\), we have that,
$$\begin{aligned} N_{t_n}&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}}\frac{(tn^{s}\log ^s(n))^{1/s}}{\log \left( tn^{s}\log ^s(n)\right) -\log (\hat{{\mathcal {B}}})}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n\log (n)}{\log \left( tn^{s}\log ^s(n)\right) -\log (\hat{{\mathcal {B}}})}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n}{\frac{\log \left( tn^{s}\log ^s(n)\right) }{\log (n)} -\frac{\log (\hat{{\mathcal {B}}})}{\log (n)}}\\&\sim \frac{s}{\hat{{\mathcal {B}}}^{1/s}} \frac{t^{1/s}n}{s}\sim \frac{t^{1/s}n}{\hat{{\mathcal {B}}}^{1/s}} \end{aligned}$$To conclude, we decompose the process \(s_{t_n}:=\sum \limits _{j=1}^{N_{t_n}}X_i\) as in the proof of Theorem 7 and use the same arguments therein.
- If \(\alpha _t\sim t^s\log ^m(t)\) for any \(s<0\), or if \(\alpha _t\sim k/t\), then for any sequence \(b_n\) regularly varying at infinity with exponent \(1/\rho \) for some \(\rho \in (0,1]\),
$$\begin{aligned} n{\mathbb {P}}\left[ \tau _{{\mathcal {H}}}^1>b_n\;\Big \vert \;q_0^a=x, q_0^b=y\right] \rightarrow \infty \qquad \qquad \text { as } n\rightarrow \infty . \end{aligned}$$Thus, \(N_t\) cannot be rescaled to ensure a Law of Large Numbers and the price process does not converge.\(\square \)
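The failure in the last case can be made concrete: with a tail decaying only logarithmically, \(n{\mathbb {P}}[\tau >b_n]\) diverges for every polynomial normalization \(b_n=n^{1/\rho }\). A small deterministic illustration with the stand-in tail \({\mathbb {P}}[\tau >t]=1/\log (t)\) (a simplification of the model's exact tail):

```python
import math

def n_tail(n, rho):
    """n * P[tau > n^(1/rho)] under the stand-in tail P[tau > t] = 1/log(t)."""
    b_n = n ** (1.0 / rho)
    return n / math.log(b_n)     # = rho * n / log(n), which grows without bound

values = [n_tail(10 ** j, 0.5) for j in (2, 4, 6)]
print(values)
```

The computed values increase without bound, so no polynomial rescaling of \(N_t\) yields a law of large numbers.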
Appendix B Auxiliary Results
Proposition 9
Let \(\tau \) be a positive random variable such that \({\mathbb {P}}[\tau >t]\sim \frac{\Theta }{t^{2k{\mathcal {C}}}\log (t)}\), where \(2k{\mathcal {C}}\le 1\) and \(\Theta \) is a constant, and define the sequence
Then, \(\Psi _n:={\mathbb {E}}[X_n]\) converges as \(n\rightarrow \infty \).
Proof
Let \(F_\tau (t)\) denote the CDF of \(\tau \). Then, since \(\tau \) is a positive random variable,
where the last integral should be understood as a Riemann-Stieltjes integral. Letting \(g(x)=\frac{n}{n^{1/2k{\mathcal {C}}}}x\mathbbm {1}_{\left\{ {x<n^{1/2k{\mathcal {C}}}} \right\} }\) and substituting in the above formula,
Noting that the Left Hand Side is \(\Psi _n\) and integrating by parts,
Then, by using L’Hôpital’s rule, considering \(\Psi _n\) as a (differentiable) function of n and setting \({\widehat{\Theta }}=\Theta /2k{\mathcal {C}}\),
which is clearly positive and thus \(\Psi _n\) is an increasing sequence. Furthermore, by (B7), there exist constants \(\epsilon \) and \(T_\epsilon \) such that if \(n^{1/2k{\mathcal {C}}}>T_\epsilon \),
and since \(2k{\mathcal {C}}\le 1\), then \(1-1/(2k{\mathcal {C}})\le 0\) and thus
Therefore, \(\Psi _n\) is also bounded and by the monotone convergence theorem for sequences it converges to a constant limit. \(\square \)
Proposition 10
Let \(\tau \) be a positive random variable such that \({\mathbb {P}}[\tau >t]\sim \frac{\Theta }{t^s}\) for some \(0<s\le 1\), with \(\Theta \) a constant. Define the sequence
Then, \(\Phi _n:={\mathbb {E}}[X_n]\) converges as \(n\rightarrow \infty \).
Proof
As in the previous result, let \(F_\tau (t)\) denote the CDF of \(\tau \). Then, again, since \(\tau \) is a positive random variable,
Substituting \(g(x)=\frac{n}{n^{1/s}\log (n)}x\mathbbm {1}_{\left\{ {x<n^{1/s}\log (n)} \right\} }\) in the above formula,
Noting that the Left Hand Side is \(\Phi _n\) and integrating by parts,
which shows that the sequence is asymptotically decreasing, and thus \(\Phi _n\) converges by similar arguments as in the previous proposition. \(\square \)
About this article
Cite this article
Chávez Casillas, J.A. A Time-Dependent Markovian Model of a Limit Order Book. Comput Econ 63, 679–709 (2024). https://doi.org/10.1007/s10614-023-10356-9