Abstract
In the setting of exponential investors and uncertainty governed by Brownian motions, we first prove the existence of an incomplete equilibrium for a general class of models. We then introduce a tractable class of exponential–quadratic models and prove that the corresponding incomplete equilibrium is characterized by a coupled set of Riccati equations. Finally, we prove that these exponential–quadratic models can be used to approximate the incomplete models we studied in the first part.
References
Biagini, S., Sîrbu, M.: A note on admissibility when the credit line is infinite. Stochastics 84, 157–169 (2012)
Christensen, P.O., Larsen, K., Munk, C.: Equilibrium in securities markets with heterogeneous investors and unspanned income risk. J. Econ. Theory 147, 1035–1063 (2012)
Christensen, P.O., Larsen, K.: Incomplete continuous-time securities markets with stochastic income volatility. Rev. Asset Pricing Stud. 4, 247–285 (2014)
Cuoco, D., He, H.: Dynamic equilibrium in infinite-dimensional economies with incomplete financial markets. Working paper (1994). Unpublished
Dana, R., Jeanblanc, M.: Financial Markets in Continuous Time. Springer, Berlin (2003)
Duffie, D.: Dynamic Asset Pricing Theory, 3rd edn. Princeton University Press, Princeton (2001)
Duffie, D., Kan, R.: A yield-factor model of interest rates. Math. Finance 6, 379–406 (1996)
Evans, L.C.: Partial Differential Equations, 2nd edn. AMS, Providence (2010)
Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)
Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, Berlin (1991)
Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance. Springer, Berlin (1998)
Krylov, N.V.: Lectures on Elliptic and Parabolic Equations in Hölder Spaces. Graduate Studies in Mathematics, vol. 12. AMS, Providence (1996)
Ladyženskaja, O.A., Solonnikov, V.A., Ural’ceva, N.N.: Linear and Quasilinear Equations of Parabolic Type. Translations of Mathematical Monographs, vol. 23. AMS, Providence (1968)
Shiryaev, A.N., Cherny, A.S.: Vector stochastic integrals and the fundamental theorems of asset pricing. Proc. Steklov Inst. Math. 237, 12–56 (2002)
Zhao, Y.: Stochastic equilibria in a general class of incomplete Brownian market environments. Ph.D. Thesis from UT-Austin (2012). Available online https://repositories.lib.utexas.edu/bitstream/handle/2152/ETD-UT-2012-05-5064/ZHAO-DISSERTATION.pdf?sequence=1
Žitković, G.: An example of a stochastic equilibrium with incomplete markets. Finance Stoch. 16, 177–206 (2012)
Acknowledgements
The second author has been supported by the National Science Foundation under Grant No. DMS-1411809 (2014–2017). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). We should like to thank Hao Xing, Steve Shreve, the anonymous referee, the anonymous Associate Editor, and the Co-Editor Pierre Collin-Dufresne for their constructive comments.
Appendix A: Proofs
For \(x\in\mathbb{R}^{d}\), we denote by \(x_{j}\) the \(j\)th coordinate and by \(|x|\) the usual Euclidean 2-norm. If \(X\in\mathbb{R}^{d\times d}\) has an inverse \(X^{-1}\), we denote by \(X^{-T}\) the transpose of \(X^{-1}\). We use the letter \(c\) to denote various constants depending only on \(\underline {\delta},\overline{\delta} ,D,\alpha,a_{i},I,N\). If the constant also depends on some Hölder norm, we use the letter \(C\). The constants \(c\) and \(C\) never depend on any time variable and can change from line to line.
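The notation \(X^{-T}\) is unambiguous because transposition and inversion commute; a minimal numerical illustration (the matrix below is an arbitrary well-conditioned example):

```python
import numpy as np

# X^{-T} is well defined: the transpose of the inverse equals the inverse
# of the transpose.  The matrix below is an arbitrary, generically
# invertible example (Gaussian entries shifted by 3 on the diagonal).
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
assert np.allclose(np.linalg.inv(X).T, np.linalg.inv(X.T))
print("inv(X).T == inv(X.T)")
```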
A.1 Hölder spaces
In this section, we briefly recall the standard notation related to Hölder spaces of bounded continuous functions; see, e.g., [12, Sects. 3.1 and 8.5]. We fix \(\alpha\in(0,1)\) in what follows. The norm \(|g|_{0}\) and the seminorm \([g]_{\alpha}\) are defined by
$$|g|_{0} := \sup_{y\in\mathbb{R}^{D}}|g(y)| \quad\text{and}\quad [g]_{\alpha} := \sup_{y\neq y'}\frac{|g(y)-g(y')|}{|y-y'|^{\alpha}}. $$
We denote by \(\partial_{y} g\) the vector of \(g\)’s derivatives and by \(\partial_{yy}g\) the matrix of \(g\)’s second order derivatives. The Hölder norms are defined by
$$|g|_{\alpha} := |g|_{0} + [g]_{\alpha},\qquad |g|_{1+\alpha} := |g|_{0} + |\partial_{y} g|_{0} + [\partial_{y} g]_{\alpha},\qquad |g|_{2+\alpha} := |g|_{1+\alpha} + |\partial_{yy} g|_{0} + [\partial_{yy} g]_{\alpha}, $$
and the corresponding Hölder spaces are denoted by \(C^{k+\alpha }(\mathbb{R} ^{D})\) for \(k=0,1,2\). In these expressions, we sum whenever the involved quantity is a vector or a matrix. So, e.g., \(|\partial_{y} g|_{0}\) denotes \(\sum_{d=1}^{D}|\partial_{y_{d}} g|_{0}\) for a function \(g=g(y) \in C^{1}(\mathbb{R}^{D})\).
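These (semi)norms can be illustrated numerically. The sketch below estimates \(|g|_{0}\), \([g]_{\alpha}\) and their sum for the sample choice \(g(y)=\sin y\) in dimension \(D=1\); note that a finite grid only approximates suprema over all of \(\mathbb{R}\) from below:

```python
import numpy as np

# Illustrative only: estimate |g|_0, [g]_alpha and |g|_0 + [g]_alpha for
# g(y) = sin(y) on a finite grid (suprema over all of R are only
# approximated from below by a grid).
alpha = 0.5
y = np.linspace(-5.0, 5.0, 201)
g = np.sin(y)

sup_norm = np.abs(g).max()                       # |g|_0 restricted to the grid
dy = np.abs(y[:, None] - y[None, :])             # all pairwise distances |y - y'|
dg = np.abs(g[:, None] - g[None, :])             # all increments |g(y) - g(y')|
mask = dy > 0
seminorm = (dg[mask] / dy[mask] ** alpha).max()  # [g]_alpha restricted to the grid
holder_norm = sup_norm + seminorm                # |g|_0 + [g]_alpha

print(sup_norm, seminorm, holder_norm)
```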
We also need the parabolic Hölder spaces for functions of both time and state. For such functions, the parabolic supremum norm is defined by
$$|u|_{0} := \sup_{(t,y)\in[0,T]\times\mathbb{R}^{D}}|u(t,y)|. $$
We denote by \(\partial_{t}u\) the partial derivative with respect to time of a function \(u = u(t,x)\). The parabolic versions of the above Hölder norms are defined as
for \(u\in C([0,T]\times\mathbb{R}^{D}), u\in C^{0,1}([0,T]\times \mathbb{R}^{D})\) and \(u\in C^{1,2}([0,T]\times\mathbb{R}^{D})\), respectively. Here \(\partial _{y} u\) and \(\partial_{yy} u\) denote the first and second order derivatives with respect to the state variable and
The corresponding parabolic Hölder spaces are denoted by \(C^{k+\alpha }([0,T]\times\mathbb{R}^{D})\) for \(k=0,1,2\).
We conclude this section with a simple inequality which we need later.
Lemma A.1
For \(h_{1},h_{2}, \tilde{h}_{1}\) and \(\tilde{h}_{2}\) in \(C^{\alpha}([0,T]\times\mathbb{R}^{D})\), we have
Proof
Equation (3.1.6) in [12] gives for \(h_{1},h_{2}\in C^{\alpha}([0,T]\times\mathbb{R}^{D})\) the inequality
From this inequality and the definition of the parabolic norm \(|\cdot |_{\alpha}\), we get
Consequently, since
the triangle inequality produces (A.1). □
A.2 Estimates from linear algebra
We start with a result from linear algebra which we need in the next section. For a \(D\times D\) positive definite matrix \(X\), we denote by \(\Vert X\Vert_{F}\) the Frobenius norm, i.e.,
$$\Vert X\Vert_{F} := \bigg(\sum_{i,j=1}^{D} X_{ij}^{2}\bigg)^{1/2}. $$
We note that the Cauchy–Schwarz inequality holds for the Frobenius norm; that is, \(|Xx| \le\Vert X\Vert_{F} |x|\) for \(x\in\mathbb{R}^{D}\).
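This compatibility bound is easy to verify numerically; the sketch below checks \(|Xx| \le \Vert X\Vert_{F}\,|x|\) on random positive definite matrices:

```python
import numpy as np

# The Frobenius norm dominates the operator norm, hence |X x| <= ||X||_F |x|
# (the Cauchy-Schwarz-type bound used in the text).  Checked on random
# positive definite matrices.
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 4))
    X = A @ A.T + 4.0 * np.eye(4)                  # positive definite
    x = rng.standard_normal(4)
    frob = np.sqrt((X ** 2).sum())                 # ||X||_F
    assert np.linalg.norm(X @ x) <= frob * np.linalg.norm(x) + 1e-9
print("|Xx| <= ||X||_F |x| on all samples")
```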
Lemma A.2
Let \(C\) satisfy Assumption 2.3. We define the \(D\times D\)-matrix
1. The function \(\varSigma\) is symmetric, positive definite and satisfies
$$|\varSigma(t,s)_{ij}| \le\overline{\delta}(s-t), \qquad \underline{\delta}(s-t)\le\varSigma(t,s)_{ii}, \qquad i,j=1,\dots,D. $$
2. The inverse \(\varSigma(t,s)^{-1}\) exists and is symmetric, positive definite and satisfies
$$\frac{1}{\overline{\delta}(s-t)} \le\varSigma(t,s)^{-1}_{ii} \le\frac{1}{\underline{\delta}(s-t)}, \qquad i=1,\dots,D. $$
Consequently, \(|\varSigma(t,s)_{ij}^{-1}| \le\frac{1}{\underline{\delta}(s-t)}\) for \(i,j=1,\dots,D\).
3. The lower triangular matrix \(L(t,s)\) appearing in the Cholesky decomposition \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) satisfies
$$|L(t,s)_{ij}| \le\sqrt{\overline{\delta}(s-t)},\qquad L(t,s)_{ii} \ge\sqrt{\underline{\delta}(s-t)},\qquad i,j=1,\dots,D. $$
4. For \(0\le t_{1} < t_{2} < s\), we have \(\Vert L(t_{1},s)-L(t_{2},s)\Vert_{F} \le c \sqrt{t_{2}-t_{1}}\), where \(c\) is a constant depending only on \(\underline{\delta},\overline{\delta}, D\).
5. There exists a constant \(c\), depending only on \(\underline{\delta},\overline{\delta}, D\), such that for \(i=1,\dots,D\) and \(0\le t_{1} < t_{2} < s\), we have
$$\bigg| \sqrt{\varSigma(t_{1},s)_{ii}^{-1}} - \sqrt{\varSigma(t_{2},s)^{-1}_{ii}}\bigg| \le c \min\bigg\{ \frac{1}{\sqrt{s-t_{2}}}, \frac{t_{2}-t_{1}}{(s-t_{2})^{3/2}}\bigg\}. $$
Proof
Claim 1. The symmetry follows from (A.2). For \(y\in\mathbb{R}^{D}\), condition (2.7) of Assumption 2.3 yields
Therefore, \(\varSigma(t,s)\) is also positive definite. By letting \(y\) be the \(i\)th basis vector \(e_{i}\in\mathbb{R}^{D}\), we get
Finally, the inequality \(|\varSigma(t,s)_{ij}| \le\sqrt{\varSigma (t,s)_{ii}\varSigma(t,s)_{jj}}\) (see Problem 7.1.P1 in [9]) yields Claim 1.
Claim 2. Because \(\varSigma(t,s)^{-1}\) is positive definite, the eigenvalues of \(\varSigma(t,s)^{-1}\) are the reciprocals of the eigenvalues of \(\varSigma(t,s)\). The claimed inequalities then follow from Claim 1 and Problem 4.2.P3 in [9]. The last estimate follows as in the proof of Claim 1 from \(|\varSigma(t,s)^{-1}_{ij}| \le\sqrt{\varSigma(t,s)^{-1}_{ii}\varSigma (t,s)^{-1}_{jj}}\).
Claim 3. To see the first claim, we note that \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) and Claim 1 produce
For the second, we use Corollary 3.5.6, Theorem 4.3.17 and Corollary 7.2.9 in [9] to get
Claim 4. We prove this by induction. By Claim 1, we have
For the induction step, we suppose there is a constant \(c\) such that for \(j=1,\dots,k-1\) and \(i=j,\dots,D\), we have \(|L(t_{1},s)_{ij} - L(t_{2},s)_{ij}| \le c \sqrt{t_{2}-t_{1}}\). For \(j=i=k\), we have
Above, the first inequality follows from Claim 3, the second from Claim 1, and the last from Claim 3 and the induction hypothesis. The last term is bounded by \(c \sqrt{t_{2}-t_{1}}\) for some constant \(c\).
For \(j=k\) and \(i = k+1,\dots,D\), we can use \(\varSigma= LL^{T}\) to obtain the representation
and arguments similar to the previous diagonal case to obtain the upper bound. All in all, we have the Frobenius norm estimate \(\Vert L(t_{1},s)-L(t_{2},s)\Vert_{F} \le c \sqrt{t_{2}-t_{1}}\).
Claim 5. By using \(\tfrac{\partial}{\partial t} \varSigma(t,s)^{-1} = -\varSigma(t,s)^{-1} \tfrac{\partial}{\partial t} \varSigma(t,s) \varSigma (t,s)^{-1}\), we see for \(0\le t< s\) that
Therefore, Claim 2 gives us the bound
for some constant \(c\). The mean value theorem then produces
This inequality combined with Claim 2 concludes the proof. □
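The claims of Lemma A.2 can be sanity-checked numerically. Because the defining formula (A.2) is not reproduced above, the sketch assumes \(\varSigma(t,s)=\int_{t}^{s}C(r)C(r)^{T}\,dr\) (the covariance matrix of \(\int_{t}^{s}C(r)\,dW_{r}\)); the function `C_mat` below is a hypothetical, uniformly elliptic stand-in for the model's \(C\):

```python
import numpy as np

def C_mat(r):
    # hypothetical stand-in for the model's volatility matrix C(r):
    # bounded and uniformly elliptic on any finite interval
    return np.array([[1.0 + 0.5 * np.sin(r), 0.2],
                     [0.2, 1.0 + 0.5 * np.cos(r)]])

def Sigma(t, s, n=4000):
    # midpoint-rule approximation of int_t^s C(r) C(r)^T dr  (assumed (A.2))
    r = t + (np.arange(n) + 0.5) * (s - t) / n
    vals = np.array([C_mat(ri) @ C_mat(ri).T for ri in r])
    return vals.mean(axis=0) * (s - t)

t, s = 0.1, 0.9
S = Sigma(t, s)

assert np.allclose(S, S.T)                    # Claim 1: symmetric ...
assert np.all(np.linalg.eigvalsh(S) > 0)      # ... and positive definite
Sinv = np.linalg.inv(S)
assert np.all(np.linalg.eigvalsh(Sinv) > 0)   # Claim 2: inverse is SPD
L = np.linalg.cholesky(S)                     # Claim 3: Sigma = L L^T
assert np.allclose(L @ L.T, S)
assert np.all(np.diag(L) > 0)
print("Claims 1-3 hold for this sample C")
```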
A.3 Regularity of the heat equation
We define \(\varSigma(t,s)\) by (A.2) and let \(\varGamma\) denote the \(D\)-dimensional (inhomogeneous) Gaussian kernel
Lemma A.3
For \(f_{0}\in C^{\alpha }([0,T]\times\mathbb{R}^{D})\), we have for all \(t\in[0,T]\) and \(y\in \mathbb{R}^{D}\)
for \(d=1,\dots,D\).
Proof
We first assume that \(f_{0}\) is continuously differentiable with compact support. In that case, the dominated convergence theorem and integration by parts produce the claim. For \(f_{0}\) merely continuous and bounded, we approximate as follows. We first fix \(R>0\). Since both \(\varGamma(t,\cdot,\cdot)\) and \(\varGamma_{y_{d}}(t,\cdot,\cdot)\) are integrable over \([t,T]\times\mathbb{R}^{D}\), we can find, for each \(n\in\mathbb{N}\), a constant \(M_{n}>R\) such that
For each \(n\in\mathbb{N}\), the density of compactly supported functions allows us to find a continuously differentiable function \(f_{n}\) with compact support such that
For \(|y|\le R\), we have \(\{x\in\mathbb{R}^{D}: |x+y| > M_{n}\} \subset\{ x\in\mathbb{R} ^{D}: |x| > M_{n} -R\}\); hence
A similar estimate (also uniform in \(y\)) is found by replacing \(\varGamma \) with \(\varGamma_{y_{d}}\). For \(|y|\le R\) and \(t\in[0,T]\), we define the functions
Since \(f_{n}\) has compact support, we have \(\partial_{y_{d}} g_{n} = - \int _{t}^{T} \int_{\mathbb{R}^{D}}\varGamma_{y_{d}} f_{n}\,dx\,ds\). Therefore,
The fundamental theorem of calculus yields for \(|y|\le R\) that
Since \(\partial_{y_{d}} g_{n}\) is continuous and converges uniformly to \(h\) on \(\{|y|\le R\}\), we know that \(h\) is also continuous. We can then apply \(\partial_{y_{d}}\) to obtain \(\partial_{y_{d}}g_{0} = h\). Since \(R>0\) was arbitrary, the claim follows. □
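The interchange of differentiation and integration established in Lemma A.3 can be checked numerically in a one-dimensional analogue with a fixed-variance Gaussian kernel \(G_{v}\) (the kernel, grid and integrand below are illustrative choices, not the paper's \(\varGamma\)):

```python
import numpy as np

# 1-D sanity check of the interchange in Lemma A.3: for the Gaussian kernel
# G_v(z) with variance v and a bounded continuous f,
#   d/dy  int G_v(y - x) f(x) dx  =  int (d/dy G_v)(y - x) f(x) dx.
v = 0.3
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.cos(x)                              # bounded continuous integrand

def G(z):
    return np.exp(-z ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def smoothed(y):
    return np.sum(G(y - x) * f) * dx       # quadrature for (G_v * f)(y)

y0, h = 0.4, 1e-5
lhs = (smoothed(y0 + h) - smoothed(y0 - h)) / (2 * h)   # finite difference
Gy = -(y0 - x) / v * G(y0 - x)                          # d/dy G_v(y - x)
rhs = np.sum(Gy * f) * dx                               # differentiate inside

print(lhs, rhs)
```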
Lemma A.4
Under Assumption 2.3, for \(\alpha\in(0,1)\) and \(T\in[0,1]\), suppose that \(f\in C^{\alpha}([0,T]\times\mathbb{R}^{D})\) and \(g \in C^{2+\alpha}(\mathbb{R}^{D})\) are given. Then there exist a constant \(c = c(\underline{\delta},\overline{\delta}, \alpha, D)\) and a unique solution \(u \in C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\) of
which satisfies the parabolic norm estimate
Proof
Theorem 4.5.1 in [13] ensures the existence of a unique \(C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\)-solution \(u\) of (A.4). From Sect. 5.7B in [10], we get the Feynman–Kac representation
From the representation (A.6), we immediately obtain \(|u(t,y)| \le\vert g \vert_{0} + (T-t) \vert f \vert_{0}\), which estimates the parabolic Hölder norm \(|u|_{0}\) via the Hölder norm \(|g|_{0}\) by
$$|u|_{0} \le |g|_{0} + T |f|_{0}. $$
Because \(\varSigma(t,s)\) is positive definite, there exists a unique Cholesky decomposition \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) for a lower non-singular triangular matrix \(L(t,s)\). Furthermore, \(\varSigma(t,s)^{-1} = L(t,s)^{-T}L(t,s)^{-1}\). By using \((\operatorname{det}L(t,s))^{2} =\operatorname{det}\varSigma(t,s)\) when changing variables, we can rewrite (A.6) as
Since \(g\in C^{2+\alpha}\), we can apply the dominated convergence theorem to the \(g\)-integral, and we can apply Lemma A.3 to the \(f\)-integral in (A.8) to produce
after substituting \(z= L(t,s)^{-1}(y-x)\) in the \(f\)-integral. Since Claim 2 of Lemma A.2 gives \(\Vert L(t,s)^{-T}\Vert ^{2}_{F} ={\mathrm{tr}} (\varSigma^{-1}(t,s))\le\tfrac{D}{\underline {\delta}(s-t)}\), the Cauchy–Schwarz inequality produces
By computing the integrals, we obtain the estimate
To estimate the parabolic seminorm \([\partial_{y} u]_{\alpha}\), we provide four estimates which when combined produce the estimate. We start by fixing \(0< t_{1}< t_{2}< T\) and \(y_{1},y_{2}\in\mathbb{R}^{D}\). For the first estimate, we have
The first inequality is due to the interpolation inequality which ensures that we have \([g_{y_{d}}]_{\alpha}<\infty\); see, e.g., Theorem 3.2.1 in [12]. The second inequality uses the Cauchy–Schwarz inequality, whereas the last inequality is from Claim 4 of Lemma A.2.
The second estimate reads
where the first inequality is found as before. The third estimate is similar and reads
For the fourth and last estimate, we first consider the case \(d=D\). By the triangular structure of \(L^{-1}\) and \(L^{-T}\), we have \(\sqrt {\varSigma^{-1}_{DD}}=L^{-1}_{DD}=L^{-T}_{DD}\). This gives us
where the inequality follows from Claim 5 of Lemma A.2. The case \(d < D\) can be reduced to the case \(d=D\) just considered by performing the following substitution: We let \(J\) be the \(D\times D\)-matrix obtained by interchanging the \(d\)th and \(D\)th rows of the \(D\times D\)-identity matrix and \(\tilde{L}\) the lower triangular matrix in the Cholesky factorization \(J\varSigma J = \tilde{L}\tilde {L}^{T}\). For \(z:= \tilde{L}^{-1}J(y-x)\), we have
where we used that \(JJ\) is the \(D\times D\)-identity matrix and \(J\varSigma ^{-1}J = \tilde{L}^{-T}\tilde{L}^{-1}\).
The above four estimates together with the triangle inequality as well as the representation (A.9) produce the parabolic seminorm estimate
Finally, by combining the three estimates (A.7), (A.10) and (A.11) and using \(T\le1\), we produce the parabolic norm estimate (A.5). □
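The Feynman–Kac representation (A.6) at the heart of this proof is easy to test by simulation. The sketch below assumes (A.4) is the heat-type equation \(\partial_{t}u+\tfrac{1}{2}\mathrm{tr}(C(t)C(t)^{T}\partial_{yy}u)+f=0\), \(u(T,\cdot)=g\), and takes \(D=1\), constant \(C\equiv\sigma\), \(f\equiv0\) and \(g(y)=y^{2}\), for which \(u(t,y)=y^{2}+\sigma^{2}(T-t)\) in closed form:

```python
import numpy as np

# Monte Carlo sketch of the Feynman-Kac representation (A.6) under the
# assumptions stated in the lead-in:
#   u(t, y) = E[ g(Y_T) ]  with  dY = sigma dW,  Y_t = y,  g(y) = y^2,
# so that u(t, y) = y^2 + sigma^2 (T - t) exactly.
rng = np.random.default_rng(1)
sigma, t, T, y = 0.7, 0.2, 1.0, 0.5
n_paths = 200_000

Y_T = y + sigma * np.sqrt(T - t) * rng.standard_normal(n_paths)
u_mc = np.mean(Y_T ** 2)                     # Monte Carlo estimate of u(t, y)
u_exact = y ** 2 + sigma ** 2 * (T - t)      # closed-form solution

print(u_mc, u_exact)
```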
Theorem A.5
Under the assumptions of Theorem 3.1, there exists \(T_{0}\in(0,1]\) such that for all \(T< T_{0}\), the nonlinear PDE system (3.2) possesses a unique solution \((u^{(i)})_{i=1}^{I} \subset C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\).
Proof
We define \({\mathcal{S}}_{T}:=(C^{1+\alpha}([0,T] \times \mathbb{R}^{D}))^{I}\) for \(I\in\mathbb{N}\), as well as the norm
Since \((C^{1+\alpha}([0,T] \times\mathbb{R}^{D}),|\cdot|_{1+\alpha })\) is a Banach space, we also have that \(({\mathcal{S}}_{T}, \Vert\cdot\Vert _{{\mathcal{S}}_{T}})\) is a Banach space.
In the following, we use the notation from Lemma A.4. For \(i=1,\dots,I\), we define the \(i\)th coordinate \(\varPi^{(i)}\) of the map \(\varPi: {\mathcal{S}}_{T} \to{\mathcal{S}}_{T}\) by
where \(f^{(i)}: {\mathcal{S}}_{T} \to C^{\alpha}([0,T] \times\mathbb {R}^{D})\) is defined by
Based on Lemma A.1, we have for \(v,\tilde{v}\in {\mathcal{S}} _{T}\) the estimates
for a constant \(c\). By combining (A.5) with (A.13), we produce the estimates
Therefore, by the definition of \(\varPi\), we obtain the estimates
To ensure that \(\varPi\) is a contraction map, we consider real numbers \(R>0\) and \(T_{0}\in(0,1]\) (such constants exist) satisfying
For \(T\in(0,T_{0}]\), we define the \(R\)-ball
The estimates (A.14) and the parameter restrictions (A.15) imply that \(\varPi\) maps \(\mathcal{B}_{T}\) to \(\mathcal{B}_{T}\) and that \(\varPi\) is a contraction map on \(\mathcal{B}_{T}\). Since the space \((\mathcal{S}_{T},\Vert\cdot\Vert_{{\mathcal{S}}_{T}})\) is complete, there exists a unique fixed point \(u\in\mathcal{B}_{T}\) of the map \(\varPi\). The fixed point property \(\varPi(u) = u\) implies that \(u^{(i)}\) is given by (A.6) with \(f := f^{(i)}(u)\in C^{\alpha}([0,T]\times \mathbb{R} ^{D})\). By uniqueness, we obtain \(u^{(i)} \in C^{2+\alpha}([0,T]\times \mathbb{R}^{D})\). Consequently, the functions \((u^{(i)})_{i=1}^{I}\subset C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\) solve the stated PDE system. □
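The contraction mechanism of this proof can be mimicked in a toy scalar setting: the map \(\varPi(u)(t)=g+\int_{t}^{T}\sin(u(s))\,ds\) has Lipschitz constant at most \(T\) in the supremum norm, so for small \(T\) the Picard iterates converge geometrically, mirroring the role of \(T_{0}\) (all choices below are illustrative):

```python
import numpy as np

# Toy fixed-point problem: u(t) = g + int_t^T sin(u(s)) ds on [0, T].
# The map Pi below is a contraction with constant <= T in the sup norm,
# so for T < 1 the Picard iterates converge to the unique fixed point.
T, g = 0.5, 1.0
ts = np.linspace(0.0, T, 101)
dt = ts[1] - ts[0]

def Pi(u):
    integrand = np.sin(u)
    # Riemann-sum approximation of t -> int_t^T sin(u(s)) ds
    tail = np.concatenate([np.cumsum(integrand[::-1])[::-1][1:] * dt, [0.0]])
    return g + tail

u = np.zeros_like(ts)
for _ in range(60):
    u_new = Pi(u)
    gap = np.max(np.abs(u_new - u))   # sup-norm distance between iterates
    u = u_new

print("final update size:", gap)
```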
A.4 Remaining proofs
We denote by \({I \choose 0}\) the \(D\times N\) matrix whose upper \(N\) rows are the identity matrix \(I_{N\times N}\), whereas all remaining entries are zero.
Proof of Theorem 3.1
We use Theorem A.5 and let \(T< T_{0}\). We can then define the function \(\lambda= \lambda(t,y)\) by (3.3) as well as the constant
The proof is split into the following two steps.
Step 1. For \(i=1,\dots,I\), we define the \(N\)-dimensional process
where \(\bar{C}(t)_{ij}\) denotes \(C(t)_{ij}\) for \(i=1,\dots,D\) and \(j=1,\dots,N\). We show that \(\hat{H}^{(i)}\) is admissible in some set \({\mathcal{A}}_{i}={\mathcal{A}}_{i}(\hat{\mathbb{Q}}^{(i)})\) and attains the supremum in
We note that the initial wealth is irrelevant in (A.18) because of the exponential preference structure. We define the function \(V^{(i)}\) in terms of \(u^{(i)}\) by (3.1) and the process \(d\hat{X}^{(i)}_{t} := r\hat{X}^{(i)}_{t}dt + (\hat {H}^{(i)}_{t})^{T} (\lambda(t,Y_{t}) dt +\binom{I}{0}^{T}dW_{t})\) with \(\hat {X}^{(i)}_{0}:=0\). Itô’s lemma yields the dynamics of \(V_{t}^{(i)} = V^{(i)}(t,\hat{X}^{(i)}_{t},Y_{t})\) as
Since the functions \(\partial_{y}u^{(i)}\) and \(\lambda\) are bounded, we can use Novikov’s condition to see that \(V^{(i)}\) is indeed a ℙ-martingale. Thus the terminal condition \(u^{(i)}(T,y) = g^{(i)}(y)\) yields \(q^{(i)}:= \mathbb{E}[e^{rT}U_{i}'(\hat{X}^{(i)}_{T}+ g^{(i)}(Y_{T}))]\in(0,\infty)\). We can then define the ℙ-equivalent probability measures \(\hat{\mathbb{Q}}^{(i)}\) via the Radon–Nikodým derivatives on \(\mathcal{F}_{T}\) by
We next prove that \(\hat{\mathbb{Q}}^{(i)}\in{\mathcal{M}}\). By the martingale property of \(V^{(i)}\), we have
Therefore, the dynamics (A.19) of \(V^{(i)}\) together with Girsanov’s theorem ensure that \(\tilde{S}:= S/S^{(0)}\) is an \(N\)-dimensional \(\hat{\mathbb{Q}}^{(i)}\)-martingale; hence \(\hat {\mathbb{Q} }^{(i)}\in{\mathcal{M}}\). Since the volatility of \(\tilde{S}\) is \(e^{-rt}\) and the process \(\hat{H}^{(i)}\) defined by (A.17) is uniformly bounded, we have that the process \((\hat{X}^{(i)}_{t}e^{-rt})\) is a \(\hat{\mathbb{Q}}^{(i)}\)-martingale for \(t\in[0,T]\); hence \(\hat {H}^{(i)}\in{\mathcal{A}}_{i}\).
Finally, the verification of the optimality of \(\hat{H}^{(i)}\) is fairly standard and can be seen as follows. Fenchel’s inequality produces \(U_{i}(x) \le U^{*}_{i}(y) + xy\) for all \(x\in\mathbb{R}\) and \(y>0\), where \(U^{*}_{i}\) is the convex conjugate of \(U_{i}\), i.e., \(U^{*}_{i}(y):= \sup_{x\in \mathbb{R}}( U_{i}(x) - xy)\). Therefore, for arbitrary \(H\in{\mathcal{A}}_{i}\), we have
Indeed, the second inequality comes from the \(\hat{\mathbb{Q} }^{(i)}\)-supermartingale property of \(H\cdot \tilde{S}\), the first from the \(\hat{\mathbb{Q}}^{(i)}\)-martingale property of \(\hat {H}^{(i)}\cdot \tilde{S}\), and the last equality follows from the first order condition in the definition of \(U^{*}\); see, e.g., Lemma 3.4.3(i) in [11]. This verifies that \(\hat{H}^{(i)}\) attains the supremum in (A.18).
Step 2. Based on the previous step, we can rewrite the optimization problem (2.3) as
It is straightforward to solve this problem for \(\hat{c}^{(i)}_{0}\) and see that (A.16) ensures the clearing condition \(\sum_{i=1}^{I}\hat {c}^{(i)}_{0}=0\). □
Proof of Theorem 4.1
We consider the ODE system in Sect. 4.1 in terms of \(\alpha^{(i)}(t) \in\mathbb{R}\), \(\beta ^{(i)}(t) \in\mathbb{R}^{D}\) and \(\gamma^{(i)}(t) \in\mathbb{R}^{D\times D}\) for \(t\ge0\) and \(i=1,2,\dots,I\). Since the right-hand side of this system is locally Lipschitz-continuous as a function of the left-hand side, there exists a unique solution up to some explosion time \(T^{\mathrm{Riccati}}_{0}\in (0,\infty]\) by the Picard–Lindelöf theorem. For \(i=1,2,\dots,I\), we consider the quadratic form (4.2). By computing the various derivatives, we see that (4.2) solves the coupled PDE system (3.2).
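Since the Riccati system of Sect. 4.1 is not reproduced here, a scalar stand-in illustrates the role of the explosion time \(T^{\mathrm{Riccati}}_{0}\): the equation \(\gamma'=\gamma^{2}\), \(\gamma(0)=1\) has the explicit solution \(\gamma(t)=1/(1-t)\), which exists only up to \(T^{\mathrm{Riccati}}_{0}=1\). A minimal Runge–Kutta sketch:

```python
# Classical fourth-order Runge-Kutta for y' = f(t, y); enough to integrate
# the scalar Riccati equation gamma' = gamma^2 accurately away from its
# explosion time at t = 1.
def rk4(f, y0, t0, t1, n=1000):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

gamma_num = rk4(lambda t, y: y ** 2, 1.0, 0.0, 0.5)   # gamma' = gamma^2
gamma_exact = 1.0 / (1.0 - 0.5)                       # gamma(t) = 1/(1 - t)
print(gamma_num, gamma_exact)
```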
It remains to perform the verification, and here we just point out how the proof of Theorem 3.1 can be adjusted to the present case where \(g^{(i)}\) is a quadratic function. The first issue is the martingale property of the process \(V^{(i)}\) with the dynamics (A.19). As in the proof of Theorem 3.1, the process \((V^{(i)}(t,\hat{X}^{(i)}_{t},Y_{t}))\)—with \(u^{(i)}\) defined by (4.2) and \(V^{(i)}\) defined by (3.1)—is a local martingale under ℙ with the dynamics (A.19). To see that \(V^{(i)}\) is a martingale, we note that the partial derivative \(\partial_{y}u^{(i)}\) is an affine function of \(y\) with deterministic continuous functions of time to maturity as coefficients. Therefore, the function \(\lambda\) defined by (3.3) is also an affine function of \(y\). Because \(dY_{t} = C(t)dW_{t}\), we can use Corollary 3.5.16 in [10] to see that \(V^{(i)}\) is a martingale on \([0,T]\) for \(T< T^{\mathrm{Riccati}}_{0}\).
Secondly, we must prove the \(\hat{\mathbb{Q}}^{(i)}\)-martingale property of the product process \(d(\hat{X}^{(i)}_{t}e^{-rt})= (\hat {H}^{(i)}_{t})^{T}d\tilde{S}_{t}\) with \(\hat{\mathbb{Q}}^{(i)}\) defined via \(V^{(i)}\) as in the proof of Theorem 3.1. The dynamics (A.19) and Girsanov’s theorem produce the \(\hat{\mathbb{Q}}^{(i)}\)-Brownian motions
Therefore, the drift in the \(\hat{\mathbb{Q}}^{(i)}\)-dynamics of \(dY_{t} = C(t)dW_{t}\) is an affine function of \(Y_{t}\) with bounded time-dependent coefficients.
Since the volatility of \(\tilde{S}\) is \(e^{-rt}\), it suffices to verify the square-integrability property \(\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}[\int_{0}^{T} |\hat{H}^{(i)}_{t}|^{2}\,dt]=\int_{0}^{T} \mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}[|\hat{H}^{(i)}_{t}|^{2}]\,dt<\infty\), where \(\hat{H}^{(i)}\) is defined by (A.17). To this end, we define the stopping times
Because the functions \(\lambda\) and \(\partial_{y} u^{(i)}\) are affine with uniformly bounded time-dependent coefficients, the above expression for \(W^{\hat{\mathbb{Q}}^{(i)},m}\) allows us to find two positive constants \(C_{1}\) and \(C_{2}\) (independent of \(k\)) such that
where we have used Tonelli’s theorem in the last inequality. Next, the mapping \([0,T]\ni t\mapsto\mathbb{E}^{\hat{\mathbb {Q}}^{(i)}}[|Y_{t\land \tau^{(k)}}|^{2}]\) is continuous by the dominated convergence theorem. Therefore, Gronwall’s inequality produces the bound
Fatou’s lemma then produces
Finally, the definition (A.17) of \(\hat{H}^{(i)}\) and the affineness of \(\lambda\) and \(\partial_{y} u^{(i)}\) ensure that there exists a constant \(C_{3}\) such that \(\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}[|\hat{H}^{(i)}_{t}|^{2}]\le C_{3}e^{C_{2}t}\). This latter expression is integrable on \([0,T]\) and the claim follows. □
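The Gronwall step used above can be illustrated numerically: if a continuous \(\varphi\) satisfies \(\varphi(t)\le C_{1}+C_{2}\int_{0}^{t}\varphi(s)\,ds\), then \(\varphi(t)\le C_{1}e^{C_{2}t}\). The function below is a sample sub-solution chosen for illustration:

```python
import numpy as np

# Gronwall's inequality in the form used in the proof: a function phi that
# satisfies the integral inequality phi <= C1 + C2 * int_0^t phi(s) ds is
# dominated by C1 * exp(C2 * t).  We check both on a sample sub-solution.
C1, C2 = 2.0, 1.5
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
phi = C1 * np.exp(C2 * t) * (1 - 0.1 * t)     # sample sub-solution

# trapezoid-rule cumulative integral int_0^t phi(s) ds
integral = np.concatenate([[0.0], np.cumsum(0.5 * (phi[1:] + phi[:-1]) * dt)])

assert np.all(phi <= C1 + C2 * integral + 1e-9)   # phi obeys the hypothesis
assert np.all(phi <= C1 * np.exp(C2 * t) + 1e-9)  # ... hence the Gronwall bound
print("Gronwall bound verified on the grid")
```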
Proof of Theorem 4.2
In this proof, the functions \(\tilde{\lambda}, \tilde{u}\) and \(\partial _{y}\tilde{u}\) refer to the functions from Theorem 4.1 (and its proof) when the endowment functions (4.1) are specified by the Taylor approximations (4.4).
The definitions (3.3) and (4.3) of \(\lambda\) and \(\tilde{\lambda}\) together with the triangle inequality give
Therefore, it is enough to show that
From the representation (A.9) of \(\partial_{y}u^{(i)}\) with \(f\) replaced by \(f^{(i)}(u)\), where \(f^{(i)}(u)\) is defined by (A.12), we get
The first term in (A.20) can be estimated by
Here, the first equality is produced by the mean value theorem, where \(s = s(z)\) lies on the line segment connecting \(y-L(t,T)z\) and \(y\). The third and fourth inequalities are due to the Cauchy–Schwarz inequality, \(|L(t,T)z| \le \Vert L(t,T)\Vert_{F} |z|\), combined with Claim 3 of Lemma A.2.
The second term in (A.20) can be estimated similarly by
By integrating \(s\) over \([t,T]\), we produce the overall estimate of (A.20) as
We also have
Here the first equality follows from the definition of \(\tilde {g}^{(i)}\). The second equality is produced by the mean value theorem for a point \(s = s(y)\) on the line segment connecting \(y\) and 0. Finally, we claim that there exists a constant \(C\) such that
To see this, we first note that
Since \(\beta^{(i)}(0)_{d} = \partial_{y_{d}}g^{(i)}(0)\), the mean value theorem gives an \(s \in[0,T-t]\) with
because the derivative \((\beta^{(i)})'\) is bounded on \([0,T]\) (the constant \(C\) does not depend on \(T\) as long as \(T< T_{0}^{\mathrm{Riccati}}\)). The estimate involving \(\gamma^{(i)}\) is similar and (A.23) follows.
By combining the estimates (A.21)–(A.23), we get
Finally, by taking expectations via (A.24), we obtain
The last inequality holds because we are considering \(T\in(0,1]\) and because
for \(t\in[0,T]\). This estimate follows from the definition (A.3) of \(\varGamma\) and the bounds provided in Lemma A.2. □
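The driving idea of Theorem 4.2, replacing the endowments by their second-order Taylor polynomials as in (4.4), can be illustrated in one dimension: for \(g\in C^{3}\), the quadratic expansion around 0 has error at most \(|y|^{3}\sup|g'''|/6\). A sketch with the sample choice \(g(y)=\cos y\):

```python
import numpy as np

# Quadratic Taylor approximation error: for g(y) = cos(y), the expansion
# g2(y) = 1 - y^2/2 satisfies |g(y) - g2(y)| <= |y|^3 / 6  (Lagrange
# remainder with g''' = sin, so |g'''| <= 1).
y = np.linspace(-1.0, 1.0, 401)
g = np.cos(y)
g2 = 1.0 - y ** 2 / 2.0                 # second-order Taylor polynomial at 0
err = np.abs(g - g2)

assert np.all(err <= np.abs(y) ** 3 / 6.0 + 1e-15)
print("max Taylor error on [-1, 1]:", err.max())
```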
Choi, J.H., Larsen, K. Taylor approximation of incomplete Radner equilibrium models. Finance Stoch 19, 653–679 (2015). https://doi.org/10.1007/s00780-015-0268-9