
Novel advancements in the Markov chain stock model: analysis and inference

  • Research Article
  • Published in: Annals of Finance

Abstract

In this paper we propose further advancements in the Markov chain stock model. First, we provide a formula for the second order moment of the fundamental price process, together with transversality conditions that rule out speculative bubbles. Second, we assume that the dividend growth process is governed by a finite state discrete time Markov chain and, under this hypothesis, we compute the moments of the price process. We impose assumptions on the dividend growth process that guarantee finiteness of price and risk and the fulfilment of the transversality conditions. Subsequently, we develop non-parametric statistical techniques for the inferential analysis of the model. We propose estimators of the price, the risk and the forecasted prices; for each estimator we prove strong consistency and show that, properly centred and normalized, it converges in distribution to a normal random variable; we also provide the corresponding interval estimators. An application demonstrating the practical implementation of the methods and results on real dividend data concludes the paper.


Notes

  1. “Monthly dividend and earnings data are computed from the S&P four-quarter totals for the quarter since 1926, with linear interpolation to monthly figures. Dividend and earnings data before 1926 are from Cowles and associates (Common Stock Indexes, 2nd ed. [Bloomington, Ind.: Principia Press, 1939]), interpolated from annual data”
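
The note describes a linear interpolation of quarterly figures to a monthly grid. A minimal sketch of that step, with hypothetical quarterly values in place of the actual S&P series, could look as follows.

```python
import numpy as np

# Hypothetical four-quarter dividend totals observed at the end of each quarter
# (illustrative numbers only, not the actual S&P data).
quarterly_totals = np.array([4.20, 4.35, 4.50, 4.40, 4.55])
quarter_end_months = np.arange(0, 3 * len(quarterly_totals), 3)   # months 0, 3, 6, ...

# Linear interpolation to a monthly grid, as described in the note.
months = np.arange(quarter_end_months[-1] + 1)
monthly_dividends = np.interp(months, quarter_end_months, quarterly_totals)

print(np.round(monthly_dividends, 3))
```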


Acknowledgements

The research work of Vlad Stefan Barbu was partially supported by the projects RISC–Interaction Networks and Complex Systems (2010–2014), XTerM–Complex Systems, Territorial Intelligence and Mobility (2014–2018) and MOUSTIC–Random Models and Statistical, Informatics and Combinatorics Tools (2016–2019), within the Large Scale Research Networks of the Region of Normandy, France. The research work of Guglielmo D’Amico was partially supported by the Federation Normandy-Mathematics, France, which offered him the opportunity to spend several periods as a visiting professor in the Laboratory of Mathematics Raphaël Salem, Department of Mathematics, University of Rouen, and in the Laboratory of Mathematics Nicolas Oresme, Department of Mathematics, University of Caen, France. The authors would like to express their gratitude to Prof. Terry Walter, Chief Research Officer at the Capital Market Cooperative Research Centre (CMCRC) in Sydney, for his attentive reading of the manuscript and for useful comments. The authors also thank an anonymous referee for useful suggestions and constructive comments that improved both the quality and the presentation of the manuscript.

Author information

Correspondence to Guglielmo D’Amico.

Appendix: proofs

1.1 Proof of Proposition 2.3

For \(k,j,i\in \mathbb N\) with \(j>i\) we consider the following expectation:

$$\begin{aligned} \begin{aligned}&\mathbb E_{k}\left[ \prod _{h=1}^{i}G(k+h)\prod _{w=1}^{j}G(k+w)\right] = \mathbb E_{k}\left[ \prod _{h=1}^{i}G^{2}(k+h)\prod _{w=i+1}^{j}G(k+w)\right] \\&\quad =\mathbb E_{k}\left[ \prod _{h=1}^{i}G^{2}(k+h)\prod _{w=i+1}^{j-1}G(k+w)\mathbb E_{k+j-1}[G(k+j)]\right] \\&\quad \le \mathbb E_{k}\left[ \prod _{h=1}^{i}G^{2}(k+h)\prod _{w=i+1}^{j-1}G(k+w)\right] \overline{g}, \end{aligned} \end{aligned}$$

where the last inequality follows from (2.9). Iterating this argument to remove, one at a time, the factors \(G(k+j-1),\ldots ,G(k+i+1)\), each time using (2.9), we get

$$\begin{aligned} \begin{aligned}&\mathbb E_{k}\left[ \prod _{h=1}^{i}G(k+h)\prod _{w=1}^{j}G(k+w)\right] \le \mathbb E_{k}\left[ \prod _{h=1}^{i}G^{2}(k+h)\right] (\overline{g})^{j-i}\\&\quad = \mathbb E_{k}\left[ \prod _{h=1}^{i-1}G^{2}(k+h)\mathbb E_{k+i-1}[G^{2}(k+i)]\right] (\overline{g})^{j-i} \\&\quad \le \mathbb E_{k}\left[ \prod _{h=1}^{i-1}G^{2}(k+h)\right] \overline{g}^{(2)}(\overline{g})^{j-i}, \end{aligned} \end{aligned}$$
(5.1)

where the last inequality follows from (2.11). Iterating once more to remove the remaining factors \(G^{2}(k+i-1),\ldots ,G^{2}(k+1)\), each time using (2.11), we get

$$\begin{aligned} \mathbb E_{k}\left[ \prod _{h=1}^{i}G(k+h)\prod _{w=1}^{j}G(k+w)\right] \le (\overline{g}^{(2)})^{i}(\overline{g})^{j-i}. \end{aligned}$$
(5.2)

On the other hand, expanding the square of the fundamental price and taking the conditional expectation term by term, we obtain

$$\begin{aligned} p^{(2)}(k)=\sum _{i=1}^{+\infty }\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}+2\sum _{i=1}^{+\infty }\sum _{j>i}\frac{\mathbb E_{k}[D(k+i)D(k+j)]}{r^{i+j}}. \end{aligned}$$
(5.3)

Consequently, we obtain

$$\begin{aligned} \begin{aligned} p^{(2)}(k) =&\sum _{i=1}^{+\infty }\frac{\mathbb E_{k}[\prod _{j=1}^{i}G^{2}(k+j)]}{r^{2i}}d^{2}(k) \\&+2\sum _{i=1}^{+\infty }\sum _{j>i}\frac{\mathbb E_{k}[\prod _{h=1}^{i}G(k+h)\prod _{w=1}^{j}G(k+w)]}{r^{i+j}}d^{2}(k)\\ \le&\sum _{i=1}^{+\infty }\frac{(\overline{g}^{(2)})^{i}}{r^{2i}}d^{2}(k)+2\sum _{i=1}^{+\infty }\sum _{j>i}\frac{(\overline{g}^{(2)})^{i}(\overline{g})^{j-i}}{r^{i+j}}d^{2}(k). \end{aligned} \end{aligned}$$
(5.4)

From (5.4), A1 and A2, using the properties of geometric series, we obtain that \(p^{(2)}(k):=\mathbb E_{k}[P^{2}(k)]<+ \infty \).
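
Explicitly, setting \(a:=\overline{g}^{(2)}/r^{2}\) and \(b:=\overline{g}/r\), both smaller than one under A1 and A2, the geometric series appearing in (5.4) can be summed in closed form:

$$\begin{aligned} p^{(2)}(k)\le d^{2}(k)\left[ \sum _{i=1}^{+\infty }a^{i}+2\sum _{i=1}^{+\infty }a^{i}\sum _{m=1}^{+\infty }b^{m}\right] =d^{2}(k)\,\frac{a}{1-a}\left( 1+\frac{2b}{1-b}\right) <+\infty . \end{aligned}$$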

Since, once \(d^{2}(k)\) is factored out, the expected values in Formula (5.3) depend only on g(k), the second order moment can be expressed in the compact form \(p^{(2)}(k)=\psi _{2}(g(k))d^{2}(k)\), where the second order price-dividend ratio is defined by

$$\begin{aligned} \psi _{2}(g(k)):=\frac{p^{(2)}(k)}{d^{2}(k)}. \end{aligned}$$
(5.5)

Let \(\overline{\psi }_{2}=\max _{i}(\psi _{2}(g_{i}));\) then \( 0\le \mathbb E_{k}[P^{2}(k+i)]\le \overline{\psi }_{2}\mathbb E_{k}[D^{2}(k+i)], \) which is equivalent to

$$\begin{aligned} \frac{\mathbb E_{k}[P^{2}(k+i)]}{r^{2i}}\le \overline{\psi }_{2}\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}. \end{aligned}$$
(5.6)

Since \(p^{(2)}(k)=\mathbb E_{k}[P^{2}(k)]<+ \infty ,\) we have that \(\lim _{i\rightarrow +\infty }\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}=0\) and hence from (5.6) we get

$$\begin{aligned} \lim _{i\rightarrow +\infty }\frac{\mathbb E_{k}[P^{2}(k+i)]}{r^{2i}}=0. \end{aligned}$$
(5.7)

It remains to prove that \(\lim _{N\rightarrow +\infty } \sum _{i=1}^{N}\frac{\mathbb E_{k}[D(k+i)P(k+N)]}{r^{i+N}}=0\). From the Cauchy-Schwarz inequality we have that

$$\begin{aligned} \begin{aligned}&\lim _{N\rightarrow +\infty } \sum _{i=1}^{N}\frac{\mathbb E_{k}[D(k+i)P(k+N)]}{r^{i+N}} \\&\quad \le \lim _{N\rightarrow +\infty } \sum _{i=1}^{N}\Big (\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}\Big )^{\frac{1}{2}}\Big (\frac{\mathbb E_{k}[P^{2}(k+N)]}{r^{2N}}\Big )^{\frac{1}{2}}\\&\quad =\lim _{N\rightarrow +\infty }\Big (\frac{\mathbb E_{k}[P^{2}(k+N)]}{r^{2N}}\Big )^{\frac{1}{2}}\lim _{N\rightarrow +\infty }\sum _{i=1}^{N}\Big (\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}\Big )^{\frac{1}{2}}=0, \end{aligned} \end{aligned}$$

where the last equality holds true because the first factor is zero by (5.7), while the second one is finite: as in (5.4), \(\mathbb E_{k}[D^{2}(k+i)]\le (\overline{g}^{(2)})^{i}d^{2}(k)\), so its terms are dominated by a geometric series with ratio \((\overline{g}^{(2)})^{1/2}/r\), which is smaller than one under A2.

1.2 Proof of Proposition 2.4

Let \(k\in \mathbb N\) be the current time. At time k the dividend process and the dividend growth process take known values, denoted by \(D(k)=d(k)\in \mathbb R\) and \(G(k)=g(k)\in E\), respectively. Let us first consider the case \(g(k)=g_{1}\). By combining Equations (5.5), (2.7) and (2.4) we get

$$\begin{aligned} \begin{aligned}&\psi _{2}(g_{1})d^{2}(k)=\mathbb E_{k}\left[ \frac{(G(k+1)d(k)+P(k+1))^{2}}{r^{2}}\right] \\&\quad =\mathbb E_{k}\left[ \frac{G^{2}(k+1)d^{2}(k)}{r^{2}}\right] +\mathbb E_{k}\left[ \frac{P^{2}(k+1)}{r^{2}}\right] +\mathbb E_{k}\left[ \frac{2G(k+1)d(k)P(k+1)}{r^{2}}\right] . \end{aligned}\nonumber \\ \end{aligned}$$
(5.8)

Now, let us compute these three expectations:

$$\begin{aligned}&\mathbb E_{k}[G^{2}(k+1)d^{2}(k)]=d^{2}(k)(p_{11}g_{1}^{2}+p_{12}g_{2}^{2}); \end{aligned}$$
(5.9)
$$\begin{aligned}&\mathbb E_{k}[P^{2}(k+1)]=\mathbb E_{k}[\mathbb E_{k+1}[P^{2}(k+1)|G(k+1)]]=\mathbb E_{k}[\psi _{2}(g(k+1))d^{2}(k+1)]\nonumber \\&\quad = \mathbb E_{k}[\psi _{2}(g(k+1))G^{2}(k+1)d^{2}(k)]= d^{2}(k)(p_{11}\psi _{2}(g_{1})g_{1}^{2}+p_{12}\psi _{2}(g_{2})g_{2}^{2});\nonumber \\ \end{aligned}$$
(5.10)
$$\begin{aligned}&\mathbb E_{k}[G(k+1)d(k)P(k+1)]=d(k)\mathbb E_{k}[\mathbb E_{k+1}[G(k+1)P(k+1)|G(k+1)]]\nonumber \\&\quad = d(k)\mathbb E_{k}[G(k+1)\mathbb E_{k+1}[P(k+1)|G(k+1)]]\nonumber \\&\quad = d(k)\mathbb E_{k}[G(k+1)\psi _{1}(g(k+1))d(k+1)]\nonumber \\&\quad = d(k)\mathbb E_{k}[G(k+1)\psi _{1}(g(k+1))G(k+1)d(k)]\nonumber \\&\quad = d^{2}(k)(p_{11}g_{1}\psi _{1}(g_{1})g_{1}+p_{12}g_{2}\psi _{1}(g_{2})g_{2})\nonumber \\&\quad = d^{2}(k)(p_{11}g_{1}^{2}\psi _{1}(g_{1})+p_{12}g_{2}^{2}\psi _{1}(g_{2})). \end{aligned}$$
(5.11)

A substitution of (5.9), (5.10) and (5.11) in (5.8) leads to

$$\begin{aligned} \begin{aligned}&\psi _{2}(g_{1})d^{2}(k)=\frac{1}{r^{2}}\bigg (d^{2}(k)(p_{11}g_{1}^{2}+p_{12}g_{2}^{2})\\&\quad + d^{2}(k)(p_{11}\psi _{2}(g_{1})g_{1}^{2}+p_{12}\psi _{2}(g_{2})g_{2}^{2}) + 2d^{2}(k)(p_{11}g_{1}^{2}\psi _{1}(g_{1})+p_{12}g_{2}^{2}\psi _{1}(g_{2}))\bigg ). \end{aligned} \end{aligned}$$

Some computations yield

$$\begin{aligned}&\psi _{2}(g_{1})\big (r^{2}-p_{11}g_{1}^{2}\big )-\psi _{2}(g_{2})p_{12}g_{2}^{2}\nonumber \\&\quad =p_{11}g_{1}^{2}\big (1+2\psi _{1}(g_{1})\big )+p_{12}g_{2}^{2}\big (1+2\psi _{1}(g_{2})\big ). \end{aligned}$$
(5.12)

Symmetric arguments produce the second equation of the system (2.13). Concerning the uniqueness of the solution, it is sufficient to note that the matrix of the coefficients of the system has the form \(A=\left[ \begin{array}{cc} \ r^{2}-p_{11}g_{1}^{2} &{} -p_{12}g_{2}^{2} \\ \ -p_{21}g_{1}^{2} &{} r^{2}-p_{22}g_{2}^{2} \\ \end{array} \right] \) and then

$$\begin{aligned} det(A)=\left( r^{2}-p_{11}g_{1}^{2}\right) \left( r^{2}-p_{22}g_{2}^{2}\right) -\left( -p_{12}g_{2}^{2}\right) \left( -p_{21}g_{1}^{2}\right) . \end{aligned}$$

Using assumption A2, we have that

$$\begin{aligned} \begin{aligned}&det(A){>}(p_{11}g_{1}^{2}{+}p_{12}g_{2}^{2}-p_{11}g_{1}^{2})(p_{21}g_{1}^{2}+p_{22}g_{2}^{2}-p_{22}g_{2}^{2})-(-p_{12}g_{2}^{2})(-p_{21}g_{1}^{2})\\&\quad =p_{12}g_{2}^{2}p_{21}g_{1}^{2}-(-p_{12}g_{2}^{2})(-p_{21}g_{1}^{2})=0. \end{aligned} \end{aligned}$$

The non-negativity of the solution follows from the fact that \(g_{1},g_{2}\ge 0\).
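
To make Proposition 2.4 concrete, the following sketch solves the linear system formed by (5.12) and its symmetric counterpart for a hypothetical two-state specification; all numerical values of \(p_{ij}\), \(g_{i}\) and \(r\) are made up for the illustration. The first-order price-dividend ratios are computed from \(\varvec{\Psi }_{1}=\left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1} \mathbf {P} \mathbf {g}\), the expression differentiated later in (5.19), here reading \(\mathbf {I}_{r}\) as \(r\) times the identity matrix and \(\mathbf {I}_{\mathbf {g}}\) as \(\mathrm {diag}(\mathbf {g})\).

```python
import numpy as np

# Hypothetical two-state specification (all values made up for the illustration).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])       # transition matrix of the dividend growth chain
g = np.array([1.01, 0.98])       # growth values g_1, g_2
r = 1.05                         # discount factor

# First-order price-dividend ratios: Psi_1 = (I_r - P I_g)^{-1} P g.
psi1 = np.linalg.solve(r * np.eye(2) - P @ np.diag(g), P @ g)

# Second-order system, i.e. (5.12) and its symmetric counterpart:
# (r^2 I - P diag(g^2)) Psi_2 = P (g^2 * (1 + 2 * Psi_1)).
A = r ** 2 * np.eye(2) - P @ np.diag(g ** 2)
b = P @ (g ** 2 * (1.0 + 2.0 * psi1))
psi2 = np.linalg.solve(A, b)

print("psi_1:", psi1)                 # first-order price-dividend ratios
print("psi_2:", psi2)                 # second-order price-dividend ratios
print("det(A):", np.linalg.det(A))    # positive under A2, as shown in the proof
```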

1.3 Proof of Proposition 2.5

For \(n=1\) it is simple to compute the expectation

$$\begin{aligned} E^{(1)}p(d_{k},g_{a})=\sum _{j\in E}p_{aj}p(d_{k}g_{j},g_{j})=\sum _{j\in E}p_{aj}g_{j}p(d_{k},g_{j}). \end{aligned}$$
(5.13)

Consequently, we have the matrix form

$$\begin{aligned} \left( \begin{array}{c} E^{(1)}p(d_{k},g_{1})\\ \vdots \\ E^{(1)}p(d_{k},g_{s}) \end{array}\right)= & {} \mathbf {P} \mathbf {I}_{\mathbf {g}} \left( \begin{array}{c} p(d_{k},g_{1})\\ \vdots \\ p(d_{k},g_{s}) \end{array}\right) , \end{aligned}$$

and, taking into account (cf. (2.8)) that \( p(d_{k},g_{a}) = p(d(k), g(k)) = d(k) \psi _1(g(k)) = d_k \psi _1(g_a), \) we obtain (2.22) for \(n=1\). For \(n=2\) we have

$$\begin{aligned} E^{(2)}p(d_{k},g_{a})= & {} E_{(d(k),g_{a})}[P(D(k+2),g(k+2))]\nonumber \\= & {} E_{(d(k),g_{a})}[E_{(D(k+1),g(k+1))}[P(D(k+2),g(k+2))]]\nonumber \\= & {} E_{(d(k),g_{a})}[E^{(1)}P(D(k+1),g(k+1))]\nonumber \\= & {} \sum _{j\in E}p_{aj}E^{(1)}P(g_{j}d_{k},g_{j})=\sum _{j\in E}p_{aj}\sum _{h\in E}p_{jh}p(d_{k}g_{j}g_{h},g_{h})\nonumber \\= & {} \sum _{j\in E}\sum _{h\in E}p_{aj}p_{jh}g_{j}g_{h}p(d_{k},g_{h}). \end{aligned}$$
(5.14)

As we did previously for the case \(n=1,\) taking into account that \(\left( \mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^2 = \mathbf {P}^2 \mathbf {I}_{\mathbf {g}^2}\) we obtain the matrix form given in (2.22) for \(n=2\). An iteration up to the nth step produces the result given in (2.21) and the corresponding matrix form (2.22).
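
A direct numerical transcription of this proof, with the same hypothetical inputs as in the previous sketch, iterates the one-step relation (5.13): starting from the vector of current prices \(p(d_{k},g_{j})=d_{k}\psi _{1}(g_{j})\), each additional forecasting step amounts to a multiplication by \(\mathbf {P} \mathbf {I}_{\mathbf {g}}\).

```python
import numpy as np

# Same hypothetical inputs as in the previous sketch.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
g = np.array([1.01, 0.98])
r, d_k, n = 1.05, 2.0, 12        # discount factor, current dividend, forecasting horizon

# Current fundamental prices p(d_k, g_j) = d_k * psi_1(g_j).
psi1 = np.linalg.solve(r * np.eye(2) - P @ np.diag(g), P @ g)
forecast = d_k * psi1

# Iterate the one-step relation (5.13): E^{(m)} p(d_k, .) = P I_g E^{(m-1)} p(d_k, .).
for _ in range(n):
    forecast = P @ (g * forecast)

print("n-step expected fundamental prices, one entry per current growth state:", forecast)
```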

1.4 Proof of Lemma 3.3

Note that we have

$$\begin{aligned} \left( \mathbf {I}_{r}^{n}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{n}\right) ^{-1}= & {} \Big (\mathbf {I}_{r}^{n}\cdot \big (\mathbf {I}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{n}\mathbf {I}_{r}^{-n}\big )\Big )^{-1}\end{aligned}$$
(5.15)
$$\begin{aligned}= & {} \mathbf {I}_{r}^{-n} \sum _{s\ge 0} \mathbf {P}^s \big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{s} \big (\mathbf {I}_{r}^{-n}\big )^{s} \end{aligned}$$
(5.16)

where we used the fact that matrices like \(\mathbf {I}_{r}^{n}\) or \(\mathbf {I}_{\mathbf {g}}^{n}\) commute with any matrix. Taking into account Lemma 3.2, we have

$$\begin{aligned}&\frac{\partial \left( \mathbf {I}_{r}^{n}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{n}\right) ^{-1}}{\partial p_{ij}}\\&\quad = \mathbf {I}_{r}^{-n} \sum _{l\ge 0} \Big (\frac{\partial \mathbf {P}^l }{\partial p_{ij}}\Big ) \big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{l} \big (\mathbf {I}_{r}^{-n}\big )^{l} \\&\quad = \mathbf {I}_{r}^{-n} \sum _{l\ge 0} \sum _{k=1}^{l} \mathbf {P}^{k-1} \Big (\frac{\partial \mathbf {P}}{\partial p_{ij}}\Big ) \mathbf {P}^{l-k} \big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{l} \big (\mathbf {I}_{r}^{-n}\big )^{l}\\&\quad = \mathbf {I}_{r}^{-n} \sum _{k\ge 1}\mathbf {P}^{k-1}\Big (\frac{\partial \mathbf {P}}{\partial p_{ij}}\Big ) \sum _{l\ge k}\mathbf {P}^{l-k} \big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{l} \big (\mathbf {I}_{r}^{-n}\big )^{l}\\&\quad = \mathbf {I}_{r}^{-n} \sum _{k\ge 1}\mathbf {P}^{k-1}\big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{k-1}\big (\mathbf {I}_{r}^{-n}\big )^{k-1}\Big (\frac{\partial \mathbf {P}}{\partial p_{ij}}\Big ) \sum _{l\ge k}\mathbf {P}^{l-k} \big (\mathbf {I}_{\mathbf {g}}^{n}\big )^{l-k} \big (\mathbf {I}_{r}^{-n}\big )^{l-k}\big (\mathbf {I}_{\mathbf {g}}^{n}\big )\big (\mathbf {I}_{r}^{-n}\big )\\&\quad = \mathbf {I}_{\mathbf {g}}^{n} \left( \mathbf {I}_{r}^{n}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{n}\right) ^{-1}\cdot \frac{\partial \mathbf {P}}{\partial p_{ij}} \cdot \left( \mathbf {I}_{r}^{n}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{n}\right) ^{-1}. \end{aligned}$$

1.5 Proof of Theorem 3.5

First, note that we have

$$\begin{aligned} \widehat{\varvec{\Psi }}_{1}(m) = \varPhi _{1} (\widehat{p}_{ij}(m), i = 1, \ldots , s, j =1, \ldots , s-1). \end{aligned}$$
(5.17)

Using the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of \(\widehat{\varvec{\Psi }}_{1}(m).\)

Second, taking into account the expressions (3.3) and (5.17) for \(\varvec{\Psi }_{1}\) and \(\widehat{\varvec{\Psi }}_{1}(m),\) respectively, together with the asymptotic normality of the vector \((\sqrt{m}\big (\widehat{p}_{ij}(m)-p_{ij}\big ))_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.9) using the delta method because \(\varvec{\Psi }_{1}\) is a differentiable function of \(\mathbf {P}\).

It remains to give an expression for the partial derivative matrix \(\varvec{\Phi }_{1}'.\) Note that we have

$$\begin{aligned} \varvec{\Phi }_{1}' =\begin{pmatrix} \frac{\partial \varPhi _1^1}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varPhi _1^1}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varPhi _1^1}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varPhi _1^1}{\partial p_{s (s-1)}}\\ \vdots &{} &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \frac{\partial \varPhi _1^s}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varPhi _1^s}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varPhi _1^s}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varPhi _1^s}{\partial p_{s (s-1)}} \end{pmatrix} \in {\mathcal {M}}_{s \times s(s-1)}. \end{aligned}$$
(5.18)

Note that in the computation of the derivative of \(\varPhi _{1}\) with respect to its argument \((p_{ij}, i = 1, \ldots , s, j =1, \ldots , s-1),\) this argument is ordered in the lexicographic order: \((p_{11}, \ldots , p_{1(s-1)}, p_{21}, \ldots , p_{2(s-1)}, \ldots , p_{s 1}, \ldots , p_{s(s-1)}) \in \mathbb R^{s(s-1)}.\)

An arbitrary column of this matrix \(\varvec{\Phi }_{1}'\) corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by

$$\begin{aligned} \frac{\partial \varPhi _1}{\partial p_{ij}}= & {} \frac{\partial }{\partial p_{ij}} \left( \left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1} \mathbf {P} \mathbf {g} \right) \nonumber \\= & {} \frac{\partial \left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1}}{\partial p_{ij}} \mathbf {P} \mathbf {g} + \left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1} \frac{\partial \mathbf {P} }{\partial p_{ij}} \mathbf {g}. \end{aligned}$$
(5.19)

Using the expression of \(\frac{\partial \mathbf {P} }{\partial p_{ij}}\) given in (3.5), together with the computation of \(\frac{\partial \left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1}}{\partial p_{ij}}\) given in Lemma 3.3 (for \(n=1\)), allows us to completely compute \(\frac{\partial \varPhi _1}{\partial p_{ij}}\) and thus \(\varvec{\Phi }_{1}'.\)
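
The construction in this proof can be mirrored numerically. The sketch below simulates a trajectory of the growth chain, forms the empirical transition probabilities \(\widehat{p}_{ij}(m)\), plugs them into \(\varPhi _{1}\) and builds delta-method standard errors with a finite-difference Jacobian in the free parameters, ordered lexicographically as above. Since the expression of \(\widetilde{{\varvec{\varGamma }}}\) in (3.2) is not reproduced here, the covariance of the \(\widehat{p}_{ij}(m)\)'s is taken in the classical multinomial form, row by row and divided by the occupation frequency of the row state; this covariance, like all numerical inputs, is an assumption made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = np.array([[0.7, 0.3],
                   [0.4, 0.6]])      # hypothetical "true" transition matrix
g = np.array([1.01, 0.98])
r, s, m = 1.05, 2, 5000              # discount factor, number of states, trajectory length

def phi1(p_free):
    """Phi_1 of the free parameters: complete the last column of P, then (I_r - P I_g)^{-1} P g."""
    P = np.column_stack([p_free.reshape(s, s - 1),
                         1.0 - p_free.reshape(s, s - 1).sum(axis=1)])
    return np.linalg.solve(r * np.eye(s) - P @ np.diag(g), P @ g)

# Simulate the dividend growth chain and estimate the transition probabilities.
states = [0]
for _ in range(m):
    states.append(rng.choice(s, p=P_true[states[-1]]))
counts = np.zeros((s, s))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
p_hat_free = P_hat[:, :s - 1].ravel()
psi1_hat = phi1(p_hat_free)                      # plug-in estimator of Psi_1

# Finite-difference Jacobian of Phi_1 in the free parameters.
eps = 1e-6
J = np.zeros((s, s * (s - 1)))
for k in range(s * (s - 1)):
    dp = np.zeros_like(p_hat_free)
    dp[k] = eps
    J[:, k] = (phi1(p_hat_free + dp) - phi1(p_hat_free - dp)) / (2 * eps)

# Assumed covariance of sqrt(m)(p_hat - p): block diagonal over rows i,
# (diag(p_i) - p_i p_i^T) / pi_i, restricted to the first s-1 entries of each row.
pi_hat = counts.sum(axis=1) / counts.sum()
Gamma = np.zeros((s * (s - 1), s * (s - 1)))
for i in range(s):
    row = P_hat[i, :s - 1]
    block = (np.diag(row) - np.outer(row, row)) / pi_hat[i]
    Gamma[i * (s - 1):(i + 1) * (s - 1), i * (s - 1):(i + 1) * (s - 1)] = block

# Delta-method standard errors and asymptotic 95% interval estimates.
se = np.sqrt(np.diag(J @ Gamma @ J.T) / m)
for a in range(s):
    print(f"psi_1(g_{a + 1}) = {psi1_hat[a]:.2f},  95% interval: "
          f"[{psi1_hat[a] - 1.96 * se[a]:.2f}, {psi1_hat[a] + 1.96 * se[a]:.2f}]")
```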

1.6 Proof of Theorem 3.7

First, note that we have

$$\begin{aligned} \widehat{\varvec{\Psi }}_{2}(m) = \varPhi _{2} (\widehat{p}_{ij}(m), i = 1, \ldots , s, j =1, \ldots , s-1). \end{aligned}$$
(5.20)

Using the strong consistency of estimator \(\widehat{\varvec{\Psi }}_{1}(m)\) (cf. Theorem 3.5), the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of \(\widehat{\varvec{\Psi }}_{2}(m).\)

Second, taking into account the expressions (3.12) and (5.20) for \(\varvec{\Psi }_{2}\) and \(\widehat{\varvec{\Psi }}_{2}(m), \) respectively, together with the asymptotic normality of the vector \((\sqrt{m}\big (\widehat{p}_{ij}(m)-p_{ij}\big ))_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.14) using the delta method, because \(\varvec{\Psi }_{2}\) is a differentiable function of \(\mathbf {P}\), \(\varvec{\Psi }_{1}\) itself being a differentiable function of \(\mathbf {P}\).

It remains to give an expression for the partial derivative matrix \(\varvec{\Phi }_{2}'.\) Note that we have

$$\begin{aligned} \varvec{\Phi }_{2}' =\begin{pmatrix} \frac{\partial \varPhi _2^1}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varPhi _2^1}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varPhi _2^1}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varPhi _2^1}{\partial p_{s (s-1)}}\\ \vdots &{} &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \frac{\partial \varPhi _2^s}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varPhi _2^s}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varPhi _2^s}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varPhi _2^s}{\partial p_{s (s-1)}} \end{pmatrix} \in {\mathcal {M}}_{s \times s(s-1)}. \end{aligned}$$
(5.21)

An arbitrary column of this matrix corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by

$$\begin{aligned} \frac{\partial \varPhi _2}{\partial p_{ij}}= & {} \frac{\partial }{\partial p_{ij}} \left( \left( \mathbf {I}_{r}^{2}-\mathbf {P} \cdot \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1} \cdot \mathbf {P}\cdot \left( \mathbf {g} \diamond \mathbf {g}+2\varvec{\Psi }_{1}\diamond \mathbf {g} \diamond \mathbf {g}\right) \right) \nonumber \\= & {} \frac{\partial \left( \mathbf {I}_{r}^{2}-\mathbf {P} \cdot \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1}}{\partial p_{ij}} \cdot \mathbf {P}\cdot \left( \mathbf {g} \diamond \mathbf {g}+2\varvec{\Psi }_{1}\diamond \mathbf {g} \diamond \mathbf {g}\right) \nonumber \\&+ \left( \mathbf {I}_{r}^{2}-\mathbf {P} \cdot \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1} \cdot \frac{\partial \mathbf {P}}{\partial p_{ij}}\cdot \left( \mathbf {g} \diamond \mathbf {g}+2\varvec{\Psi }_{1}\diamond \mathbf {g} \diamond \mathbf {g}\right) \nonumber \\&+ \left( \mathbf {I}_{r}^{2}-\mathbf {P} \cdot \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1} \cdot \mathbf {P}\cdot \left( \mathbf {g} \diamond \mathbf {g}+2 \frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}\diamond \mathbf {g} \diamond \mathbf {g}\right) . \end{aligned}$$
(5.22)

Using the computation of \(\frac{\partial \left( \mathbf {I}_{r}^{2}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1}}{\partial p_{ij}}\) given in Lemma 3.3 (for \(n=2\)), the expression of \(\frac{\partial \mathbf {P} }{\partial p_{ij}}\) given in (3.5), together with the computation of \(\frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}\) obtained in (5.19), we completely compute the value of \(\frac{\partial \varPhi _2}{\partial p_{ij}}\) and thus of \(\varvec{\Phi }_{2}'.\)
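
The plug-in step for the second-order ratios is equally short: given an estimated transition matrix, \(\widehat{\varvec{\Psi }}_{2}(m)\) follows from the expression of \(\varPhi _{2}\) displayed in (5.22), here reading \(\diamond \) as the entrywise product of vectors. A minimal self-contained sketch, with hypothetical estimated values:

```python
import numpy as np

# Hypothetical estimated transition matrix p_hat_ij(m), growth values and discount factor.
P_hat = np.array([[0.68, 0.32],
                  [0.41, 0.59]])
g = np.array([1.01, 0.98])
r = 1.05

# Psi_1_hat = (I_r - P_hat I_g)^{-1} P_hat g, then
# Psi_2_hat = (I_r^2 - P_hat I_g^2)^{-1} P_hat (g*g + 2 * Psi_1_hat * g*g).
psi1_hat = np.linalg.solve(r * np.eye(2) - P_hat @ np.diag(g), P_hat @ g)
psi2_hat = np.linalg.solve(r ** 2 * np.eye(2) - P_hat @ np.diag(g ** 2),
                           P_hat @ (g ** 2 * (1.0 + 2.0 * psi1_hat)))

print("Psi_1 estimate:", psi1_hat)
print("Psi_2 estimate:", psi2_hat)
```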

1.7 Proof of Theorem 3.8

First, note that we have

$$\begin{aligned}&\left( \widehat{E^{(n)}}(m)p(d_{k},g_{1}), \ldots , \widehat{E^{(n)}}(m)p(d_{k},g_{s})\right) ^\top \nonumber \\&\quad = \varTheta (\widehat{p}_{ij}(m), i = 1, \ldots , s, j =1, \ldots , s-1). \end{aligned}$$
(5.23)

Using the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of estimator (3.16).

Second, taking into account the expressions (3.17) and (5.23) for the expected forecast fundamental prices and the corresponding estimator, together with the asymptotic normality of the vector \((\sqrt{m}\left( \widehat{p}_{ij}(m)-p_{ij}\right) )_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.19) using the delta method.

It remains to give an expression for the partial derivative matrix \(\varvec{\Theta }'.\) Note that we have

$$\begin{aligned} \varvec{\Theta }' =\begin{pmatrix} \frac{\partial \varTheta ^1}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varTheta ^1}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varTheta ^1}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varTheta ^1}{\partial p_{s (s-1)}}\\ \vdots &{} &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \frac{\partial \varTheta ^s}{\partial p_{11}} &{} \cdots &{} \frac{\partial \varTheta ^s}{\partial p_{1 (s-1)}} &{} \cdots &{} \frac{\partial \varTheta ^s}{\partial p_{s1}} &{} \cdots &{} \frac{\partial \varTheta ^s}{\partial p_{s (s-1)}} \end{pmatrix} \in {\mathcal {M}}_{s \times s(s-1)}. \end{aligned}$$
(5.24)

An arbitrary column of this matrix corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by

$$\begin{aligned} \frac{\partial \varTheta }{\partial p_{ij}}= & {} \frac{\partial }{\partial p_{ij}} \left( d_k \mathbf {P}^n \mathbf {I}_{\mathbf {g}^n} \varvec{\Psi }_{1}\right) \nonumber \\= & {} d_k \frac{\partial \mathbf {P}^n}{\partial p_{ij}} \mathbf {I}_{\mathbf {g}^n} \varvec{\Psi }_{1} + d_k \mathbf {P}^n \mathbf {I}_{\mathbf {g}^n} \frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}. \end{aligned}$$
(5.25)

Using the computation of \(\frac{\partial \mathbf {P}^n}{\partial p_{ij}}\) given in Lemma 3.2, of \(\frac{\partial \mathbf {P}}{\partial p_{ij}}\) given in (3.5) and of \(\frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}\) obtained in (5.19), we are able to compute the value of \(\frac{\partial \varTheta }{\partial p_{ij}} \) and thus of \(\varvec{\Theta }'.\)


About this article


Cite this article

Barbu, V.S., D’Amico, G. & De Blasis, R. Novel advancements in the Markov chain stock model: analysis and inference. Ann Finance 13, 125–152 (2017). https://doi.org/10.1007/s10436-017-0297-9

