
Likelihood Inference for Exponential-Trawl Processes

Chapter in The Fascination of Probability, Statistics and their Applications

Abstract

Integer-valued trawl processes are a class of serially correlated, stationary and infinitely divisible processes that Ole E. Barndorff-Nielsen has been working on in recent years. In this chapter, we provide the first analysis of likelihood inference for trawl processes by focusing on the so-called exponential-trawl process, which is also a continuous-time hidden Markov process with a countable state space. The core ideas include prediction decomposition, filtering and smoothing, complete-data analysis and the EM algorithm. These methods scale up readily to more general trawl processes, though with increasing computational effort.



Author information

Correspondence to Neil Shephard.


Appendix: Proofs and Derivations

6.1 Heuristic Proof of Theorem 1

Our heuristic derivation starts from the following prediction decomposition of the Radon-Nikodym derivative:

$$\begin{aligned} \log \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d}\mathbb {Q}}\right) _{\mathscr {F}_{T}^{X}|X_{0}}=\int _{t\in (0,T]}\log \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d}\mathbb {Q}}\right) _{X_{t}|\mathscr {F}_{t-}^{X}}, \end{aligned}$$
(20)

where the integral over \(t\in (0,T]\) means a continuous sum of the integrand random variables. Thus,

$$\begin{aligned} \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d}\mathbb {Q}}\right) _{X_{t}|\mathscr {F}_{t-}^{X}}= & {} \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d}\mathbb {Q}}\right) _{\varDelta X_{t}|\mathscr {F}_{t-}^{X}} \\= & {} \sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\dfrac{\mathbb {P}\left( \varDelta X_{t}=y|\mathscr {F}_{t-}^{X}\right) }{\mathbb {Q}\left( \varDelta X_{t}=y|\mathscr {F}_{t-}^{X}\right) }1_{\left\{ \varDelta X_{t}=y\right\} }+\dfrac{\mathbb {P}\left( \varDelta X_{t}=0|\mathscr {F}_{t-}^{X}\right) }{\mathbb {Q}\left( \varDelta X_{t}=0|\mathscr {F}_{t-}^{X}\right) }1_{\left\{ \varDelta X_{t}=0\right\} } \\= & {} \sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb {P}}\mathrm {d}t}{\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}\mathrm {d}t}1_{\left\{ \varDelta X_{t}=y\right\} }+\dfrac{1-\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb {P}}\mathrm {d}t}{1-\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}\mathrm {d}t}1_{\left\{ \varDelta X_{t}=0\right\} }, \end{aligned}$$

where the first equality follows because \(X_{t-}\) is known in \(\mathscr {F} _{t-}^{X}\); the third equality follows from (5). Therefore, (20) can be rewritten as

$$\begin{aligned} \int _{t\in \left( 0,T\right] }\log \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d}\mathbb {Q}}\right) _{X_{t}|\mathscr {F}_{t-}^{X}}= & {} \sum _{0<t\le T}\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\log \left( \dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb {P}}\mathrm {d}t}{\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}\mathrm {d}t}\right) 1_{\left\{ \varDelta X_{t}=y\right\} } \\&+\int _{\left\{ t\in \left( 0,T\right] :\varDelta X_{t}=0\right\} }\log \left( \dfrac{1-\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb {P}}\mathrm {d}t}{1-\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}\mathrm {d}t}\right) \\= & {} \sum _{0<t\le T}\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\log \left( \dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb {P}}}{\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}}\right) 1_{\left\{ \varDelta X_{t}=y\right\} } \\&-\int _{t\in \left( 0,T\right] }\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }\left( \lambda _{t-}^{\left( y\right) ,\mathbb {P}}-\lambda _{t-}^{\left( y\right) ,\mathbb {Q}}\right) \mathrm {d}t, \end{aligned}$$

where the second equality follows from \(\log \left( 1-x\right) \approx -x\) for small x and the fact that \(\{ t\in \left( 0,T\right] :\varDelta X_{t}\ne 0\} \) has Lebesgue measure 0.
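For concreteness, here is a minimal Python sketch that evaluates this log-likelihood ratio from a record of jump times and sizes. The interface (the callables lam_P and lam_Q for the two intensity processes, a finite collection sizes standing in for \(\mathbb {Z}\backslash \left\{ 0\right\} \), and a Riemann-sum approximation of the compensator integral) is our illustrative assumption, not part of the chapter.

```python
import math

def log_lik_ratio(jumps, lam_P, lam_Q, sizes, T, n_grid=10_000):
    """Sketch of the Theorem-1 log-likelihood ratio on F_T^X given X_0.

    jumps : iterable of (t, y) pairs, the observed jump times and sizes
    lam_P, lam_Q : callables (t, y) -> left-limit intensity of size-y jumps
    sizes : finite collection of sizes standing in for the nonzero integers
    """
    # Jump term: sum over observed jumps of log(lambda^P / lambda^Q).
    jump_term = sum(math.log(lam_P(t, y) / lam_Q(t, y)) for t, y in jumps)
    # Compensator term: - int_(0,T] sum_y (lambda^P - lambda^Q) dt,
    # approximated by a left-endpoint Riemann sum on a regular grid.
    dt = T / n_grid
    comp = sum(lam_P(k * dt, y) - lam_Q(k * dt, y)
               for k in range(n_grid) for y in sizes) * dt
    return jump_term - comp
```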

6.2 Heuristic Proof of Theorem 2

6.2.1 Update by Inactivity

We want to update \(p_{\tau ,\tau }\left( \mathbf {j}\right) \) by incorporating the information \(\mathscr {F}_{\left( \tau ,t\right) }\triangleq \sigma \left( \left\{ \varDelta Y_{s}=0, \tau <s<t\right\} \right) \) using Bayes’ Theorem:

$$\begin{aligned} \mathbb {P}\left( \left. \mathbf {C}_{t-}=\mathbf {j}\right| \mathscr {F} _{t-}\right)= & {} \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j} \right| \mathscr {F}_{t-}\right) =\mathbb {P}\left( \left. \mathbf {C} _{\tau }=\mathbf {j}\right| \mathscr {F}_{\tau },\mathscr {F}_{\left( \tau ,t\right) }\right) \\\propto & {} \mathbb {P}\left( \mathscr {F}_{\left( \tau ,t\right) }\left| \mathscr {F}_{\tau },\mathbf {C}_{\tau }=\mathbf {j}\right. \right) \mathbb {P} \left( \left. \mathbf {C}_{\tau }=\mathbf {j}\right| \mathscr {F}_{\tau }\right) , \end{aligned}$$

where the first equality holds because there is no activity of \(Y_{s}\) for \( s\in \left( \tau ,t\right) \) and hence the hidden state \(\mathbf {C}\) must stay the same.

Using the prediction decomposition, we have

$$\begin{aligned} \log \mathbb {P}\left( \mathscr {F}_{\left( \tau ,t\right) }\left| \mathscr {F}_{\tau },\mathbf {C}_{\tau }=\mathbf {j}\right. \right)= & {} \int _{s\in \left( \tau ,t\right) }\log \mathbb {P}\left( \varDelta Y_{s}=0| \mathscr {F}_{\tau },\mathscr {F}_{\left( \tau ,s\right) },\mathbf {C}_{\tau }= \mathbf {j}\right) \\= & {} \int _{s\in \left( \tau ,t\right) }\log \left( 1-\sum _{y\in \mathbb {Z} \backslash \left\{ 0\right\} }\nu \left( y\right) \mathrm {d}s-\sum _{y\in \mathbb {Z} \backslash \left\{ 0\right\} }\phi j_{y}\mathrm {d}s\right) \\= & {} -\sum _{y\in \mathbb {Z} \backslash \left\{ 0\right\} }\nu \left( y\right) \left( t-\tau \right) -\phi \left\| \mathbf {j}\right\| _{1}\left( t-\tau \right) , \end{aligned}$$

where the second equality intuitively holds because we know the instantaneous departure probability of a size y event at time s is \(\phi C_{s-}^{\left( y\right) }\mathrm {d}s\) but \(C_{s-}^{\left( y\right) }=C_{\tau }^{\left( y\right) }=j_{y}\) under \(\mathscr {F}_{\left( \tau ,s\right) }\); the third equality follows from \(\log \left( 1-x\right) \approx -x\) for small x. Therefore,

$$\begin{aligned} \mathbb {P}\left( \left. \mathbf {C}_{t-}=\mathbf {j}\right| \mathscr {F} _{t-}\right) \propto e^{-\phi \left\| \mathbf {j}\right\| _{1}\left( t-\tau \right) }\mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j} \right| \mathscr {F}_{\tau }\right) , \end{aligned}$$

where we discard the term \(\exp \left( -\sum _{y\in \mathbb {Z} \backslash \left\{ 0\right\} }\nu \left( y\right) \left( t-\tau \right) \right) \) because it does not depend on \(\mathbf {j}\). Normalizing the equation above leads to the desired result.
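For concreteness, a minimal sketch of this inactivity update, assuming the filtered distribution is stored as a dictionary from configuration tuples \(\mathbf {j}\) to probabilities (the representation is ours):

```python
import math

def filter_inactivity(p_tau, phi, dt):
    """Filtering update over a quiet interval of length dt = t - tau (sketch).

    p_tau : dict mapping a configuration tuple j (counts of live events,
            one entry per jump size) to its filtered probability at tau.
    """
    # Damp each configuration by exp(-phi * ||j||_1 * (t - tau));
    # the counts are nonnegative, so ||j||_1 = sum(j).
    w = {j: p * math.exp(-phi * sum(j) * dt) for j, p in p_tau.items()}
    z = sum(w.values())  # renormalize, as in the theorem
    return {j: v / z for j, v in w.items()}
```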

6.2.2 Update by Jump

We want to update \(p_{\tau -,\tau -}\left( \mathbf {j}\right) \) by incorporating the new piece of information \(\varDelta Y_{\tau }=y\). First note that

$$\begin{aligned} \mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {j}|\mathscr {F}_{\tau }\right)= & {} \mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {j}|\mathscr {F}_{\tau -},\varDelta Y_{\tau }=y\right) \\= & {} \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j},\mathbf {C}_{\tau -}= \mathbf {j}-\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{\tau -},\varDelta Y_{\tau }=y\right) \\&+\,\mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j},\mathbf {C}_{\tau -}= \mathbf {j}+\mathbf {1}^{\left( -y\right) }\right| \mathscr {F}_{\tau -},\varDelta Y_{\tau }=y\right) , \end{aligned}$$

whose two terms correspond to the arrival of a new event of size y and the departure of an old event of size \(-y\), respectively.

For the first term,

$$\begin{aligned}&\mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j},\mathbf {C}_{\tau -}= \mathbf {j}-\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{\tau -},\varDelta Y_{\tau }=y\right) \\&\quad =\dfrac{\mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j},\mathbf {C} _{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) },\varDelta Y_{\tau }=y\right| \mathscr {F}_{\tau -}\right) }{\mathbb {P}\left( \left. \varDelta Y_{\tau }=y\right| \mathscr {F}_{\tau -}\right) } \\&\quad =\dfrac{\mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {j},\varDelta Y_{\tau }=y\left| \mathbf {C}_{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) }, \mathscr {F}_{\tau -}\right. \right) \mathbb {P}\left( \left. \mathbf {C}_{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{\tau -}\right) }{\mathbb {P}\left( \left. \varDelta Y_{\tau }=y\right| \mathscr {F} _{\tau -}\right) } \\&\quad =\dfrac{\mathbb {P}\left( \varDelta \mathbf {C}_{\tau }=\mathbf {1}^{\left( y\right) }\left| \mathbf {C}_{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) },\mathscr {F}_{\tau -}\right. \right) \mathbb {P}\left( \left. \mathbf {C}_{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{\tau -}\right) }{\mathbb {P}\left( \left. \varDelta Y_{\tau }=y\right| \mathscr {F}_{\tau -}\right) } \\&\quad =\dfrac{\nu \left( y\right) }{\lambda _{\tau -}^{\left( y\right) }}\mathbb { P}\left( \left. \mathbf {C}_{\tau -}=\mathbf {j}-\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{\tau -}\right) {,} \end{aligned}$$

where the fourth equality follows from (3) (using \(\mathscr {C}_{\tau -}\supseteq \mathscr {F}_{\tau -}\)) and (5).

Using similar arguments, the second term is

$$\begin{aligned} \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j},\mathbf {C}_{\tau -}= \mathbf {j}+\mathbf {1}^{\left( -y\right) }\right| \mathscr {F}_{\tau -},\varDelta Y_{\tau }=y\right) =\dfrac{\phi \left( j_{-y}+1\right) }{\lambda _{\tau -}^{\left( y\right) }}\mathbb {P}\left( \left. \mathbf {C}_{\tau -}= \mathbf {j}+\mathbf {1}^{\left( -y\right) }\right| \mathscr {F}_{\tau -}\right) . \end{aligned}$$

Combining all of these gives us the required result.
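A matching sketch of the jump update in the same dictionary representation; we push mass forward from each configuration \(\mathbf {j}\), which is equivalent to the two-term formula above, and let the final normalization absorb \(\lambda _{\tau -}^{\left( y\right) }\):

```python
def filter_jump(p_prev, y, nu, phi, sizes):
    """Filtering update at a jump time tau with Delta Y_tau = y (sketch).

    p_prev : dict {configuration tuple -> prob} for C_{tau-} given F_{tau-}
    sizes  : ordered tuple of jump sizes indexing each configuration
    nu     : callable y -> nu(y)
    """
    post = {}
    for j, p in p_prev.items():
        # Case 1: a new size-y event arrives, C_tau = j + 1^(y), weight nu(y).
        if y in sizes:
            k = list(j); k[sizes.index(y)] += 1
            post[tuple(k)] = post.get(tuple(k), 0.0) + nu(y) * p
        # Case 2: an old size-(-y) event departs, C_tau = j - 1^(-y),
        # weight phi * j_{-y}.
        if -y in sizes and j[sizes.index(-y)] > 0:
            imy = sizes.index(-y)
            k = list(j); k[imy] -= 1
            post[tuple(k)] = post.get(tuple(k), 0.0) + phi * j[imy] * p
    z = sum(post.values())  # normalizer plays the role of lambda_{tau-}^{(y)}
    return {k: v / z for k, v in post.items()}
```

Chaining filter_inactivity and filter_jump over the observed event times gives the full forward pass.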

6.3 Heuristic Proof of Theorem 3

The case of updating the smoothing distribution \(p_{\tau -,T}\left( \mathbf {j}\right) \) due to inactivity is trivial: the hidden configuration \(\mathbf {C}\) must stay unchanged, owing to the inactivity during the time period \([t,\tau )\).

6.3.1 Update by Jump

We now consider the case of (backward) updating the smoothing distribution \( p_{\tau ,T}\left( \mathbf {j}\right) \) due to the jump \(\varDelta Y_{\tau }=y\). Then

$$\begin{aligned} \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}|\mathscr {F}_{T}\right)= & {} \mathbb {P}\left( \left. \mathbf {C}_{\tau -}=\mathbf {j},\mathbf {C}_{\tau }= \mathbf {j}+\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{T}\right) + \mathbb {P}\left( \left. \mathbf {C}_{\tau -}=\mathbf {j},\mathbf {C}_{\tau }= \mathbf {j}-\mathbf {1}^{\left( -y\right) }\right| \mathscr {F}_{T}\right) \\= & {} \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}\left| \mathscr {F}_{T}, \mathbf {C}_{\tau }=\mathbf {j}+\mathbf {1}^{\left( y\right) }\right. \right) \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j}+\mathbf {1}^{\left( y\right) }\right| \mathscr {F}_{T}\right) \\&+\,\mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}\left| \mathscr {F}_{T}, \mathbf {C}_{\tau }=\mathbf {j}-\mathbf {1}^{\left( -y\right) }\right. \right) \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j}-\mathbf {1}^{\left( -y\right) }\right| \mathscr {F}_{T}\right) . \end{aligned}$$

Note that

$$\begin{aligned} \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}\left| \mathscr {F}_{T}, \mathbf {C}_{\tau }=\mathbf {k}\right. \right)= & {} \mathbb {P}\left( \mathbf {C} _{\tau -}=\mathbf {j}\left| \mathscr {F}_{\tau },\mathbf {C}_{\tau }= \mathbf {k}\right. \right) \\= & {} \dfrac{\mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {k}|\mathbf {C}_{\tau -}= \mathbf {j},\mathscr {F}_{\tau }\right) \mathbb {P}\left( \mathbf {C}_{\tau -}= \mathbf {j}|\mathscr {F}_{\tau }\right) }{\mathbb {P}\left( \mathbf {C}_{\tau }= \mathbf {k}|\mathscr {F}_{\tau }\right) } \nonumber \\= & {} \dfrac{ \begin{array}{c} \mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {k}|\mathbf {C}_{\tau -}=\mathbf {j} ,\mathscr {F}_{\tau }\right) \mathbb {P}\left( \varDelta Y_{\tau }=y|\mathbf {C} _{\tau -}=\mathbf {j},\mathscr {F}_{\tau -}\right) \\ \times \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}|\mathscr {F}_{\tau -}\right) \end{array} }{\mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {k}|\mathscr {F}_{\tau }\right) \mathbb {P}\left( \varDelta Y_{\tau }=y|\mathscr {F}_{\tau -}\right) } \nonumber \\= & {} \dfrac{\mathbb {P}\left( \mathbf {C}_{\tau }=\mathbf {k},\varDelta Y_{\tau }=y| \mathbf {C}_{\tau -}=\mathbf {j},\mathscr {F}_{\tau -}\right) }{\lambda _{\tau -}^{\left( y\right) }\mathrm {d}t}\dfrac{\mathbb {P}\left( \mathbf {C}_{\tau -}= \mathbf {j}|\mathscr {F}_{\tau -}\right) }{\mathbb {P}\left( \mathbf {C}_{\tau }= \mathbf {k}|\mathscr {F}_{\tau }\right) }, \nonumber \end{aligned}$$
(21)

where the first equality holds by the Markov property of \(\mathbf {C}_{t}\) (a heuristic derivation is given in 6.3.2 below); the second and third equalities follow from Bayes’ Theorem. Since

$$\begin{aligned} \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j}+\mathbf {1}^{\left( y\right) },\varDelta Y_{\tau }=y\right| \mathbf {C}_{\tau -}=\mathbf {j}, \mathscr {F}_{\tau -}\right)= & {} \mathbb {P}\left( \left. \varDelta \mathbf {C} _{\tau }=\mathbf {1}^{\left( y\right) }\right| \mathbf {C}_{\tau -}= \mathbf {j},\mathscr {F}_{\tau -}\right) \\= & {} \nu \left( y\right) \mathrm {d}t, \\ \mathbb {P}\left( \left. \mathbf {C}_{\tau }=\mathbf {j}-\mathbf {1}^{\left( -y\right) },\varDelta Y_{\tau }=y\right| \mathbf {C}_{\tau -}=\mathbf {j}, \mathscr {F}_{\tau -}\right)= & {} \mathbb {P}\left( \left. \varDelta \mathbf {C} _{\tau }=-\mathbf {1}^{\left( -y\right) }\right| \mathbf {C}_{\tau -}= \mathbf {j},\mathscr {F}_{\tau -}\right) \\= & {} \phi j_{-y}\mathrm {d}t, \end{aligned}$$

combining these with (21) gives us the required result.
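A sketch of the corresponding backward step, reusing the dictionary representation of the filtering sketches; here p_filt_prev and p_filt are the stored filtered distributions at \(\tau -\) and \(\tau \), and dropping the common factor \(\lambda _{\tau -}^{\left( y\right) }\mathrm {d}t\) into the final normalization is our shortcut:

```python
def smooth_jump(p_filt_prev, p_filt, p_smooth, y, nu, phi, sizes):
    """Backward smoothing across a jump time tau with Delta Y_tau = y (sketch).

    p_filt_prev : filtered distribution of C_{tau-} given F_{tau-}
    p_filt      : filtered distribution of C_{tau}  given F_{tau}
    p_smooth    : smoothed distribution of C_{tau}  given F_T
    """
    out = {}
    for j, pj in p_filt_prev.items():
        w = 0.0
        # Transition 1: a new size-y event arrived at tau (weight nu(y)).
        if y in sizes:
            k = list(j); k[sizes.index(y)] += 1; k = tuple(k)
            if k in p_smooth:
                w += nu(y) * pj / p_filt[k] * p_smooth[k]
        # Transition 2: an old size-(-y) event departed (weight phi * j_{-y}).
        if -y in sizes and j[sizes.index(-y)] > 0:
            k = list(j); k[sizes.index(-y)] -= 1; k = tuple(k)
            if k in p_smooth:
                w += phi * j[sizes.index(-y)] * pj / p_filt[k] * p_smooth[k]
        out[j] = w
    z = sum(out.values())  # the common factor lambda_{tau-}^{(y)} cancels here
    return {j: v / z for j, v in out.items()}
```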

6.3.2 Derivation of (21)

Let \(\mathscr {F}_{(\tau ,T]}\triangleq \sigma \left( \left\{ Y_{t}\right\} _{\tau <t\le T}\right) \) and \(\mathscr {C}_{(\tau ,T]}\triangleq \sigma \left( \left\{ \mathbf {C}_{t}\right\} _{\tau <t\le T}\right) \). Note that, heuristically, Bayes’ Theorem implies

$$\begin{aligned} \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}\left| \mathscr {F}_{T}, \mathbf {C}_{\tau }=\mathbf {k}\right. \right)= & {} \mathbb {P}\left( \mathbf {C} _{\tau -}=\mathbf {j}\left| \mathscr {F}_{\tau },\mathscr {F}_{(\tau ,T]}, \mathbf {C}_{\tau }=\mathbf {k}\right. \right) \\= & {} \dfrac{\left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d} \mathbb {Q} }\right) _{\mathscr {F}_{(\tau ,T]}\left| \mathscr {F}_{\tau },\mathbf {C} _{\tau }=\mathbf {k},\mathbf {C}_{\tau -}=\mathbf {j}\right. }}{\left( \dfrac{ \mathrm {d}\mathbb {P}}{\mathrm {d} \mathbb {Q} }\right) _{\mathscr {F}_{(\tau ,T]}\left| \mathscr {F}_{\tau },\mathbf {C} _{\tau }=\mathbf {k}\right. }}\mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j} \left| \mathscr {F}_{\tau },\mathbf {C}_{\tau }=\mathbf {k}\right. \right) . \end{aligned}$$

Since \(\mathscr {F}_{(\tau ,T]}\subseteq \mathscr {C}_{(\tau ,T]}\) (each \( Y_{t}=\sum _{y\in \mathbb {Z} \backslash \left\{ 0\right\} }C_{t}^{\left( y\right) }\)), the Markov property of \(\mathbf {C}_{t}\) implies

$$\begin{aligned} \left( \dfrac{\mathrm {d}\mathbb {P}}{\mathrm {d} \mathbb {Q} }\right) _{\mathscr {F}_{(\tau ,T]}\left| \mathscr {F}_{\tau },\mathbf {C} _{\tau }=\mathbf {k},\mathbf {C}_{\tau -}=\mathbf {j}\right. }=\left( \dfrac{ \mathrm {d}\mathbb {P}}{\mathrm {d} \mathbb {Q} }\right) _{\mathscr {F}_{(\tau ,T]}\left| \mathscr {F}_{\tau },\mathbf {C} _{\tau }=\mathbf {k}\right. }, \end{aligned}$$

because, given the current state \(\mathbf {C}_{\tau }\), the past state \(\mathbf {C}_{\tau -}\) is irrelevant. This then proves that

$$\begin{aligned} \mathbb {P}\left( \mathbf {C}_{\tau -}=\mathbf {j}\left| \mathscr {F}_{T}, \mathbf {C}_{\tau }=\mathbf {k}\right. \right) =\mathbb {P}\left( \mathbf {C} _{\tau -}=\mathbf {j}\left| \mathscr {F}_{\tau },\mathbf {C}_{\tau }= \mathbf {k}\right. \right) . \end{aligned}$$

6.4 Proof of Theorem 4

Since the processes \(C_{t}^{\left( y\right) }\) are independent across different y, the complete-data log-likelihood can be written as

$$\begin{aligned} l_{\mathscr {C}_{T}}\left( {\theta }\right) =\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }l_{\mathscr {C}_{T}^{\left( y\right) }|C_{0}^{\left( y\right) }}\left( {\theta }\right) +\sum _{y\in \mathbb {Z}\backslash \left\{ 0\right\} }l_{C_{0}^{\left( y\right) }}\left( {\theta }\right) , \end{aligned}$$

where we recall that \(\mathscr {C}_{t}^{\left( y\right) }\) is the natural filtration generated by \(C_{t}^{\left( y\right) }\),

$$\begin{aligned} l_{\mathscr {C}_{T}^{\left( y\right) }|C_{0}^{\left( y\right) }}\left( {\theta }\right)= & {} \sum _{0<t\le T}\left( \log \left( \nu \left( y\right) \right) 1_{\left\{ \varDelta C_{t}^{\left( y\right) }=1\right\} }+\log \left( \phi C_{t-}^{\left( y\right) }\right) 1_{\left\{ \varDelta C_{t}^{\left( y\right) }=-1\right\} }\right) \\&-\int _{t\in (0,T]}\left( \nu \left( y\right) +\phi C_{t-}^{\left( y\right) }\right) \mathrm {d}t \\= & {} \log \left( \nu \left( y\right) \right) N_{T}^{\left( y\right) ,\mathrm {A} }-\nu \left( y\right) T+\log \left( \phi \right) N_{T}^{\left( y\right) , \mathrm {D}}-\phi \int _{t\in (0,T]}C_{t-}^{\left( y\right) }\mathrm {d}t, \end{aligned}$$

where the first equality follows directly from Theorem 1 (ignoring the constant), and

$$\begin{aligned} l_{C_{0}^{\left( y\right) }}\left( {\theta }\right) =C_{0}^{\left( y\right) }\left( \log \nu \left( y\right) -\log \phi \right) -\dfrac{\nu \left( y\right) }{\phi } \end{aligned}$$

because \(C_{0}^{\left( y\right) }\sim \mathrm {Poisson}\left( \nu \left( y\right) /\phi \right) \). Collecting terms then gives the required result (16). The derivations of the MCLE are elementary.
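The collected form is simple enough to evaluate directly. A sketch for a single size y, taking the sufficient statistics as plain numbers (the interface is ours):

```python
import math

def complete_data_loglik(nu_y, phi, N_A, N_D, int_C, C0, T):
    """Complete-data log-likelihood for one jump size y (sketch of Theorem 4).

    N_A, N_D : arrival and departure counts of size-y events on (0, T]
    int_C    : the pathwise integral of C_{t-}^{(y)} over (0, T]
    C0       : initial number of live size-y events
    """
    # Path term: log nu(y) N^A - nu(y) T + log phi N^D - phi int C dt.
    path = math.log(nu_y) * N_A - nu_y * T + math.log(phi) * N_D - phi * int_C
    # Initial term from C_0^(y) ~ Poisson(nu(y) / phi).
    init = C0 * (math.log(nu_y) - math.log(phi)) - nu_y / phi
    return path + init
```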

Let

$$\begin{aligned} \left\| \nu \right\| \triangleq \int \nu \left( \mathrm {d}y\right) =\sum _{y=1}^{\infty }\nu \left( y\right) . \end{aligned}$$

The ergodicity of \(D_{t-}\) implies that as \(T\rightarrow \infty \)

$$\begin{aligned} \dfrac{1}{T}\int _{t\in (0,T]}D_{t-}\mathrm {d}t\rightarrow \mathbb {E}\left( D_{t-}\right) =\dfrac{\left\| \nu \right\| }{\phi }. \end{aligned}$$

Since \(\dfrac{N_{T}^{\mathrm {D}}}{T}\approx \dfrac{N_{T}^{\mathrm {A}}}{T} \rightarrow \left\| \nu \right\| \), we have

$$\begin{aligned} \dfrac{\varXi _{T}}{T}=\dfrac{N_{T}^{\mathrm {D}}}{T}-\dfrac{D_{0}+T^{-1}\int _{t \in (0,T]}D_{t-}\mathrm {d}t}{T}\rightarrow \left\| \nu \right\| \text {, too.} \end{aligned}$$

Thus,

$$\begin{aligned} \hat{\phi }_{\mathrm {MCLE}}= & {} \frac{\dfrac{\varXi _{T}}{T}+\sqrt{\left( \dfrac{ \varXi _{T}}{T}\right) ^{2}+4T^{-1}\dfrac{N_{T}^{\mathrm {A}}+N_{T}^{\mathrm {D}} }{T}T^{-1}\int _{t\in (0,T]}D_{t-}\mathrm {d}t}}{2T^{-1}\int _{t\in (0,T]}D_{t-} \mathrm {d}t} \\\rightarrow & {} \frac{\left\| \nu \right\| +\sqrt{\left\| \nu \right\| ^{2}+0}}{2\dfrac{\left\| \nu \right\| }{\phi }}=\phi . \end{aligned}$$

Finally, for any \(y\in \mathbb {Z} \backslash \left\{ 0\right\} \), \(\dfrac{N_{T}^{\left( y\right) }}{T} \rightarrow \nu \left( y\right) \) and \(\hat{\phi }_{\mathrm {MCLE} }^{-1}\rightarrow \phi ^{-1}<\infty \), so we easily have

$$\begin{aligned} \hat{\nu }_{\mathrm {MCLE}}\left( y\right) =\dfrac{\dfrac{N_{T}^{\left( y\right) }}{T}+\dfrac{C_{0}^{\left( y\right) }}{T}}{1+\dfrac{\hat{\phi }_{ \mathrm {MCLE}}^{-1}}{T}}\rightarrow \nu \left( y\right) \text { as well.} \end{aligned}$$
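As a numerical sanity check on these limits, both estimators can be computed from a handful of summary statistics; the function signatures and the name int_D for \(\int _{t\in (0,T]}D_{t-}\mathrm {d}t\) are ours:

```python
import math

def mcle_phi(Xi_T, N_A, N_D, int_D, T):
    """phi-hat_MCLE from the displayed root formula (sketch)."""
    a = Xi_T / T      # -> ||nu||        as T -> infinity
    b = int_D / T     # -> ||nu|| / phi  as T -> infinity
    return (a + math.sqrt(a * a + 4.0 * (N_A + N_D) * b / T ** 2)) / (2.0 * b)

def mcle_nu(N_y, C0_y, phi_hat, T):
    """nu-hat_MCLE(y); the O(1/T) corrections vanish in the limit."""
    return (N_y / T + C0_y / T) / (1.0 + 1.0 / (phi_hat * T))
```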

6.5 Proof of Proposition 1

As \(C_{t}^{(y)}\ge 0\), (19) implies that

$$\begin{aligned} C_{0}^{\left( y\right) }\ge C_{0,T}^{\left( y\right) ,\mathrm {L} }=\sup _{t\in \left[ 0,T\right] }\left( N_{t}^{(-y)}-N_{t}^{(y)}\right) ,\ \ \ \ y=1,2,\ldots , \end{aligned}$$

where, by convention, we set \(N_{0}^{\left( y\right) }\triangleq 0\). Now

$$\begin{aligned} C_{0}^{\left( y\right) }=\frac{Y_{0}-\sum _{y^{\prime }\ne y}y^{\prime }C_{0}^{(y^{\prime })}}{y}\le \left\lfloor \frac{Y_{0}-\sum _{y^{\prime }\ne y}y^{\prime }C_{0,T}^{\left( y^{\prime }\right) ,\mathrm {L}}}{y} \right\rfloor =C_{0,T}^{\left( y\right) ,\mathrm {U}}, \end{aligned}$$

so we have

$$\begin{aligned} C_{0,T}^{\left( y\right) ,\mathrm {U}}\ge C_{0}^{\left( y\right) }\ge C_{0,T}^{\left( y\right) ,\mathrm {L}}. \end{aligned}$$

Let \(N_{t}^{\left( -y\right) ,*}\) be the counting process of \(-y\) jumps resulting from the departures of the initial size-y events that constitute \(C_{0}^{\left( y\right) }\). Let \(\tau \) be the time at which \(N^{\left( -y\right) ,*}\) reaches \(C_{0}^{\left( y\right) }\). Then we have

$$\begin{aligned} C_{0,T}^{\left( y\right) ,\mathrm {L}}= & {} C_{0,\tau }^{\left( y\right) , \mathrm {L}}\vee \sup \limits _{t\in (\tau ,T]}\left( N_{t}^{\left( -y\right) ,*}-\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,*}\right) \right) \right) \\= & {} C_{0,\tau }^{\left( y\right) ,\mathrm {L}}\vee \left( C_{0}^{\left( y\right) }-\inf \limits _{t\in (\tau ,T]}\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,*}\right) \right) \right) . \end{aligned}$$

Observe that \(N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,*}\right) \) is an M/G/\(\infty \) queue initiated at state 0, so by ergodicity we must have, with probability 1,

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\inf \limits _{t\in (\tau ,T]}\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,*}\right) \right) =0. \end{aligned}$$

This then shows that

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm {L} }=C_{0,\tau }^{\left( y\right) ,\mathrm {L}}\vee C_{0}^{\left( y\right) }=C_{0}^{\left( y\right) }, \end{aligned}$$

where the last equality follows because \(C_{0,\tau }^{\left( y\right) , \mathrm {L}}\le C_{0}^{\left( y\right) }\). Correspondingly,

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm {U}}=\left\lfloor \frac{Y_{0}-\sum _{y^{\prime }\ne y}y^{\prime }C_{0}^{\left( y^{\prime }\right) }}{y}\right\rfloor =C_{0}^{\left( y\right) }\text {.} \end{aligned}$$
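The bounds themselves are computable in one pass over the observed jumps. A sketch under the proposition's setting of positive event sizes \(y=1,2,\ldots \); the list-of-pairs input format is our assumption:

```python
from collections import defaultdict

def initial_bounds(jumps, Y0, sizes):
    """Pathwise bounds C^L <= C_0^(y) <= C^U of Proposition 1 (sketch).

    jumps : time-ordered list of (t, dy) observed jump times and sizes
    Y0    : observed initial value Y_0 = sum_y y * C_0^(y)
    sizes : positive event sizes y = 1, 2, ... to be bounded
    """
    net = defaultdict(int)       # running value of N_t^(-y) - N_t^(y)
    low = {y: 0 for y in sizes}  # the sup starts at 0 since N_0 = 0
    for _, dy in jumps:
        a = abs(dy)
        net[a] += 1 if dy < 0 else -1   # a -y jump raises the net count
        if a in low and net[a] > low[a]:
            low[a] = net[a]
    # Upper bound: solve Y_0 = sum_{y'} y' C_0^(y') for C_0^(y), replace the
    # other counts by their lower bounds, and take the floor.
    up = {y: (Y0 - sum(yp * low[yp] for yp in sizes if yp != y)) // y
          for y in sizes}
    return low, up
```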


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Shephard, N., Yang, J.J. (2016). Likelihood Inference for Exponential-Trawl Processes. In: Podolskij, M., Stelzer, R., Thorbjørnsen, S., Veraart, A. (eds) The Fascination of Probability, Statistics and their Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-25826-3_12
