Abstract
In this paper we propose further advancements in the Markov chain stock model. First, we provide a formula for the second-order moment of the fundamental price process, with transversality conditions that rule out speculative bubbles. Second, we assume that the dividend growth process is governed by a finite-state discrete-time Markov chain and, under this hypothesis, we compute the moments of the price process. We impose assumptions on the dividend growth process that guarantee finiteness of price and risk and the fulfilment of the transversality conditions. Subsequently, we develop non-parametric statistical techniques for the inferential analysis of the model. We propose estimators of price, risk and forecasted prices; for each estimator we demonstrate that it is strongly consistent and that, properly centered and normalized, it converges in distribution to a normal random variable, and we also provide interval estimators. An application that demonstrates the practical implementation of the methods and results on real dividend data concludes the paper.
Notes
“Monthly dividend and earnings data are computed from the S&P four-quarter totals for the quarter since 1926, with linear interpolation to monthly figures. Dividend and earnings data before 1926 are from Cowles and associates (Common Stock Indexes, 2nd ed. [Bloomington, Ind.: Principia Press, 1939]), interpolated from annual data”
Acknowledgements
The research work of Vlad Stefan Barbu was partially supported by the projects RISC–Interaction Networks and Complex Systems (2010–2014), XTerM–Complex Systems, Territorial Intelligence and Mobility (2014–2018) and MOUSTIC–Random Models and Statistical, Informatics and Combinatorics Tools (2016–2019), within the Large Scale Research Networks of the Region of Normandy, France. The research work of Guglielmo D’Amico was partially supported by the Federation Normandy-Mathematics, France, which offered the opportunity to spend several periods as a visiting professor in the Laboratory of Mathematics Raphaël Salem, Department of Mathematics, University of Rouen, and in the Laboratory of Mathematics Nicolas Oresme, Department of Mathematics, University of Caen, France. The authors would like to express their gratitude to Prof. Terry Walter, Chief Research Officer at the Capital Market Cooperative Research Centre (CMCRC) in Sydney, for his attentive reading of the manuscript and for useful comments. The authors also thank an anonymous referee for useful suggestions and constructive comments that improved both the quality and the presentation of the manuscript.
Appendix: proofs
1.1 Proof of Proposition 2.3
For \(k,j,i\in \mathbb N\) with \(j>i\) we consider the following expectation:
where the last inequality follows from (2.9). If we proceed to compute the expectation by conditioning successively down to time \(i+1\) and use (2.9) at each step, we get
where the last inequality follows from (2.11). If we proceed to compute the expectation by conditioning successively down to time 1 and use (2.11) at each step, we get
Therefore
Consequently, we obtain
From (5.4), A1 and A2, using the properties of geometric series, we obtain that \(p^{(2)}(k):=\mathbb E_{k}[P^{2}(k)]<+ \infty \).
Since the expected values in Formula (5.3) depend only on g(k), the second-order moment can be expressed in the compact form \(p^{(2)}(k)=\psi _{2}(g(k))d^{2}(k)\). Let us denote the second-order price-dividend ratio by
Let \(\overline{\psi }_{2}=\max _{i}(\psi _{2}(g_{i}));\) then \( 0\le \mathbb E_{k}[P^{2}(k)]\le \overline{\psi }_{2}\mathbb E_{k}[D^{2}(k+i)], \) which is equivalent to
Since \(p^{(2)}(k)=\mathbb E_{k}[P^{2}(k)]<+ \infty ,\) we have that \(\lim _{i\rightarrow +\infty }\frac{\mathbb E_{k}[D^{2}(k+i)]}{r^{2i}}=0\) and hence from (5.6) we get
It remains to prove that \(\lim _{N\rightarrow +\infty } \sum _{i=1}^{N}\frac{\mathbb E_{k}[D(k+i)P(k+N)]}{r^{i+N}}=0\). From the Cauchy-Schwarz inequality we have that
where the last equality holds because the first factor tends to zero by (5.7), while the second one is finite, as follows from the finiteness of (5.3).
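Since the display containing the Cauchy-Schwarz step did not survive extraction, the bound it presumably states can be sketched as follows (our reconstruction, using only the standard Cauchy-Schwarz inequality and the paper's notation):

\[ \sum_{i=1}^{N}\frac{\mathbb E_{k}[D(k+i)P(k+N)]}{r^{i+N}} \le \frac{\sqrt{\mathbb E_{k}[P^{2}(k+N)]}}{r^{N}}\, \sum_{i=1}^{N}\frac{\sqrt{\mathbb E_{k}[D^{2}(k+i)]}}{r^{i}}, \]

where the first factor vanishes as \(N\rightarrow +\infty \) and the series in the second factor converges.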
1.2 Proof of Proposition 2.4
Let \(k\in \mathbb N\) be the current time. At time k the dividend process and the dividend growth process take two known values, denoted by \(D(k)=d(k)\in \mathbb {R }\) and \(G(k)=g(k)\in E\), respectively. Let us first consider the case \(g(k)=g_{1}\). By combining Equations (5.5), (2.7) and (2.4) we get
Now, let us compute these three expectations:
A substitution of (5.9), (5.10) and (5.11) in (5.8) leads to
Some computations yield
Symmetric arguments produce the second equation of the system (2.13). Concerning the uniqueness of the solution, it suffices to note that the coefficient matrix of the system has the form \(A=\left[ \begin{array}{cc} r^{2}-p_{11}g_{1}^{2} & -p_{12}g_{2}^{2} \\ -p_{21}g_{1}^{2} & r^{2}-p_{22}g_{2}^{2} \end{array} \right] \) and therefore
Using assumption A2, we have that
The non-negativity of the solution follows from the fact that \(g_{1},g_{2}\ge 0\).
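The determinant and non-negativity claims can be checked numerically. A minimal sketch, in which the growth factors, discount factor, transition probabilities and the right-hand side are all illustrative values of our own (not taken from the paper), with the discount factor chosen so that \(r^{2}>g_{1}^{2},g_{2}^{2}\) in the spirit of assumption A2:

```python
import numpy as np

# Illustrative two-state parameters (NOT from the paper): growth
# factors g1, g2 and a discount factor r with r^2 > g1^2, g2^2,
# mimicking assumption A2.
g1, g2 = 1.01, 0.98
r = 1.05
p11, p22 = 0.7, 0.6
p12, p21 = 1 - p11, 1 - p22

# Coefficient matrix A of the linear system for the second-order
# price-dividend ratios (psi_2(g1), psi_2(g2)), as in the proof.
A = np.array([
    [r**2 - p11 * g1**2, -p12 * g2**2],
    [-p21 * g1**2,       r**2 - p22 * g2**2],
])

det = float(np.linalg.det(A))
print(det > 0)                  # A2 should make the determinant positive

# With a non-negative right-hand side (placeholder values here, since
# the paper's exact RHS is not reproduced), the solution is non-negative
# because g1, g2 >= 0 makes the off-diagonal entries of A non-positive.
b = np.array([1.0, 1.0])
psi2 = np.linalg.solve(A, b)
print(bool((psi2 >= 0).all()))
```

The positive determinant confirms that the system has a unique solution for these parameter values.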
1.3 Proof of Proposition 2.5
For \(n=1\) it is simple to compute the expectation
Consequently, we have the matrix form
and, taking into account (cf. (2.8)) that \( p(d_{k},g_{a}) = p(d(k), g(k)) = d(k) \psi _1(g(k)) = d_k \psi _1(g_a), \) we obtain (2.22) for \(n=1\). For \(n=2\) we have
As we did previously for the case \(n=1,\) taking into account that \(\left( \mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^2 = \mathbf {P}^2 \mathbf {I}_{\mathbf {g}^2}\) we obtain the matrix form given in (2.22) for \(n=2\). An iteration up to the nth step produces the result given in (2.21) and the corresponding matrix form (2.22).
1.4 Proof of Lemma 3.3
Note that we have
where we used the fact that matrices like \(\mathbf {I}_{r}^{n}\) or \(\mathbf {I}_{\mathbf {g}}^{n}\) commute with any matrix. Taking into account Lemma 3.2, we have
1.5 Proof of Theorem 3.5
First, note that we have
Using the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of \(\widehat{\varvec{\Psi }}_{1}(m).\)
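The two steps just described (empirical transition probabilities, then the continuous mapping theorem) can be sketched with a small simulation. Everything below is an illustrative assumption of ours: the transition matrix, growth states, discount factor, and the plug-in formula \(\varvec{\Psi }_{1}=(\mathbf {I}_{r}-\mathbf {P}\mathbf {I}_{\mathbf {g}})^{-1}\mathbf {P}\mathbf {g}\) standing in for the paper's (3.3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state example; P, g and r are illustrative only.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                  # true transition matrix
g = np.array([1.02, 0.98])                  # growth states
r = 1.2                                     # discount factor
s = 2

# Simulate one trajectory of the growth chain of length m.
m = 200_000
chain = np.empty(m, dtype=np.int64)
chain[0] = 0
u = rng.random(m)
for t in range(1, m):
    chain[t] = int(u[t] < P[chain[t - 1], 1])   # P(next = 1 | current)

# Maximum likelihood estimator of the transition probabilities:
# p_hat_ij = N_ij / N_i (transition counts over occupation counts).
N = np.zeros((s, s))
for a, b in zip(chain[:-1], chain[1:]):
    N[a, b] += 1
P_hat = N / N.sum(axis=1, keepdims=True)

# Plug-in estimate of the price-dividend ratio vector; the formula is
# our assumed stand-in for (3.3), with I_r = r*I and I_g = diag(g).
def psi1(P):
    return np.linalg.inv(r * np.eye(s) - P @ np.diag(g)) @ (P @ g)

print(np.abs(P_hat - P).max())              # shrinks as m grows
print(np.abs(psi1(P_hat) - psi1(P)).max())  # continuous mapping at work
```

As m grows the estimation error of \(\widehat{p}_{ij}(m)\) shrinks, and the plug-in estimator inherits this consistency through the continuous map.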
Second, taking into account the expressions (3.3) and (5.17) for \(\varvec{\Psi }_{1}\) and \(\widehat{\varvec{\Psi }}_{1}(m),\) respectively, together with the asymptotic normality of the vector \((\sqrt{m}\big (\widehat{p}_{ij}(m)-p_{ij}\big ))_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.9) using the delta method because \(\varvec{\Psi }_{1}\) is a differentiable function of \(\mathbf {P}\).
It remains to give an expression for the partial derivative matrix \(\varvec{\Phi }_{1}'.\) Note that we have
Note that in the computation of the derivative of \(\varPhi _{1}\) with respect to its argument \((p_{ij}, i = 1, \ldots , s, j =1, \ldots , s-1),\) this argument is ordered in the lexicographic order: \((p_{11}, \ldots , p_{1(s-1)}, p_{21}, \ldots , p_{2(s-1)}, \ldots , p_{s 1}, \ldots , p_{s(s-1)}) \in \mathbb R^{s(s-1)}.\)
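The lexicographic ordering of the free parameters can be made concrete with a short sketch; the 3-state transition matrix below is a hypothetical example of ours:

```python
import numpy as np

# Hypothetical 3-state transition matrix; the last column of each row
# is determined by normalization, so the free parameters are the
# (p_ij), j = 1, ..., s-1, listed row by row in lexicographic order.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
s = P.shape[0]

# Lexicographic flattening: (p_11, ..., p_1(s-1), p_21, ..., p_s(s-1))
theta = P[:, : s - 1].ravel()

# Reconstruct the full stochastic matrix from the free parameters.
free = theta.reshape(s, s - 1)
P_rec = np.column_stack([free, 1 - free.sum(axis=1)])
print(bool(np.allclose(P_rec, P)))
```

The vector `theta` has the \(s(s-1)\) components in exactly the order used for the columns of \(\varvec{\Phi }_{1}'\).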
An arbitrary column of this matrix \(\varvec{\Phi }_{1}'\) corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by
Using the expression of \(\frac{\partial \mathbf {P} }{\partial p_{ij}}\) given in (3.5), together with the computation of \(\frac{\partial \left( \mathbf {I}_{r}-\mathbf {P} \mathbf {I}_{\mathbf {g}}\right) ^{-1}}{\partial p_{ij}}\) given in Lemma 3.3 (for \(n=1\)), allows us to completely compute \(\frac{\partial \varPhi _1}{\partial p_{ij}}\) and thus \(\varvec{\Phi }_{1}'.\)
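The inverse-derivative step of Lemma 3.3 is the standard identity \(\partial A^{-1}=-A^{-1}(\partial A)A^{-1}\), and it can be sanity-checked against a finite difference. In the sketch below everything is an illustrative assumption: \(\mathbf {I}_{r}=r\mathbf {I}\), \(\mathbf {I}_{\mathbf {g}}=\mathrm{diag}(g_{1},\ldots ,g_{s})\), and our reading of (3.5) as a matrix with \(+1\) in entry \((i,j)\) and \(-1\) in entry \((i,s)\), since row i must remain stochastic:

```python
import numpy as np

# Illustrative values; r, g and P are NOT taken from the paper.
r = 1.10
g = np.array([1.02, 0.99, 0.97])           # hypothetical growth states
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])            # hypothetical transition matrix
s = len(g)
I_r, I_g = r * np.eye(s), np.diag(g)       # assuming I_r = r*I, I_g = diag(g)

def inv_term(P):
    return np.linalg.inv(I_r - P @ I_g)

# Our reading of (3.5): perturbing the free parameter p_ij moves mass
# out of the last column, so dP/dp_ij has +1 at (i, j) and -1 at (i, s).
i, j = 0, 1
dP = np.zeros((s, s))
dP[i, j], dP[i, s - 1] = 1.0, -1.0

# Closed form from d(A^{-1}) = -A^{-1} (dA) A^{-1} with A = I_r - P I_g,
# so that d(A^{-1})/dp_ij = A^{-1} (dP/dp_ij) I_g A^{-1}.
A_inv = inv_term(P)
closed = A_inv @ dP @ I_g @ A_inv

# Central finite difference in the direction dP
h = 1e-6
fd = (inv_term(P + h * dP) - inv_term(P - h * dP)) / (2 * h)

max_err = np.abs(closed - fd).max()
print(max_err < 1e-5)
```

The closed-form and finite-difference derivatives agree to numerical precision, which is exactly what the chain-rule computation in the proof relies on.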
1.6 Proof of Theorem 3.7
First, note that we have
Using the strong consistency of estimator \(\widehat{\varvec{\Psi }}_{1}(m)\) (cf. Theorem 3.5), the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of \(\widehat{\varvec{\Psi }}_{2}(m).\)
Second, taking into account the expressions (3.12) and (5.20) for \(\varvec{\Psi }_{2}\) and \(\widehat{\varvec{\Psi }}_{2}(m), \) respectively, together with the asymptotic normality of the vector \((\sqrt{m}\big (\widehat{p}_{ij}(m)-p_{ij}\big ))_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.14) using the delta method, because \(\varvec{\Psi }_{2}\) is a differentiable function of \(\mathbf {P}\), \(\varvec{\Psi }_{1}\) being itself a differentiable function of \(\mathbf {P}\).
It remains to give an expression for the partial derivative matrix \(\varvec{\Phi }_{2}'.\) Note that we have
An arbitrary column of this matrix corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by
Using the computation of \(\frac{\partial \left( \mathbf {I}_{r}^{2}-\mathbf {P} \mathbf {I}_{\mathbf {g}}^{2}\right) ^{-1}}{\partial p_{ij}}\) given in Lemma 3.3 (for \(n=2\)), the expression of \(\frac{\partial \mathbf {P} }{\partial p_{ij}}\) given in (3.5), together with the computation of \(\frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}\) obtained in (5.19), we completely compute the value of \(\frac{\partial \varPhi _2}{\partial p_{ij}}\) and thus of \(\varvec{\Phi }_{2}'.\)
1.7 Proof of Theorem 3.8
First, note that we have
Using the strong consistency of \(\widehat{p}_{ij}(m),\) as m goes to infinity (cf. Proposition 3.1), and the continuous mapping theorem, we obtain the strong consistency of estimator (3.16).
Second, taking into account the expressions (3.17) and (5.23) for the expected forecast fundamental prices and the corresponding estimator, together with the asymptotic normality of the vector \((\sqrt{m}\left( \widehat{p}_{ij}(m)-p_{ij}\right) )_{i = 1, \ldots , s, j = 1, \ldots , s-1}\) (cf. Proposition 3.1) with covariance matrix \({\varvec{\varGamma }}\) defined as the restriction of \(\widetilde{{\varvec{\varGamma }}}\) given in (3.2) to \(s(s-1)\times s(s-1),\) we obtain the asymptotic normality stated in (3.19) using the delta method.
It remains to give an expression for the partial derivative matrix \(\varvec{\Theta }'.\) Note that we have
An arbitrary column of this matrix corresponding to \((i, j), i = 1, \ldots , s, j =1, \ldots , s-1,\) is given by
Using the computation of \(\frac{\partial \mathbf {P}^n}{\partial p_{ij}}\) given in Lemma 3.2, of \(\frac{\partial \mathbf {P}}{\partial p_{ij}}\) given in (3.5) and of \(\frac{\partial \varvec{\Psi }_{1}}{\partial p_{ij}}\) obtained in (5.19), we can compute the value of \(\frac{\partial \varTheta }{\partial p_{ij}} \) and thus of \(\varvec{\Theta }'.\)
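The asymptotic normality results above translate into interval estimators in the standard way. A minimal sketch of the generic construction (the numbers are purely illustrative, not results from the paper):

```python
import math
from statistics import NormalDist

# Generic interval estimator implied by asymptotic normality results
# such as (3.19): if sqrt(m)(theta_hat - theta) -> N(0, sigma^2), an
# approximate (1 - alpha) confidence interval is
# theta_hat -/+ z_{1 - alpha/2} * sigma_hat / sqrt(m).
def normal_ci(theta_hat, sigma_hat, m, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * sigma_hat / math.sqrt(m)
    return theta_hat - half, theta_hat + half

# Purely illustrative numbers: a price estimate of 12.4 with estimated
# asymptotic standard deviation 3.0 based on m = 400 observations.
lo, hi = normal_ci(12.4, 3.0, 400)
print(lo < 12.4 < hi)
```

In practice \(\sigma \) is replaced by a plug-in estimate built from the delta-method covariance, e.g. \(\varvec{\Phi }_{1}'{}^{\top }{\varvec{\varGamma }}\varvec{\Phi }_{1}'\) evaluated at \(\widehat{\mathbf {P}}(m)\).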
Cite this article
Barbu, V.S., D’Amico, G. & De Blasis, R. Novel advancements in the Markov chain stock model: analysis and inference. Ann Finance 13, 125–152 (2017). https://doi.org/10.1007/s10436-017-0297-9