
Taylor approximation of incomplete Radner equilibrium models


Abstract

In the setting of exponential investors and uncertainty governed by Brownian motions, we first prove the existence of an incomplete equilibrium for a general class of models. We then introduce a tractable class of exponential–quadratic models and prove that the corresponding incomplete equilibrium is characterized by a coupled set of Riccati equations. Finally, we prove that these exponential–quadratic models can be used to approximate the incomplete models we studied in the first part.


References

  1. Biagini, S., Sîrbu, M.: A note on admissibility when the credit line is infinite. Stochastics 84, 157–169 (2012)


  2. Christensen, P.O., Larsen, K., Munk, C.: Equilibrium in securities markets with heterogeneous investors and unspanned income risk. J. Econ. Theory 147, 1035–1063 (2012)


  3. Christensen, P.O., Larsen, K.: Incomplete continuous-time securities markets with stochastic income volatility. Rev. Asset Pricing Stud. 4, 247–285 (2014)


  4. Cuoco, D., He, H.: Dynamic equilibrium in infinite-dimensional economies with incomplete financial markets. Working paper (1994). Unpublished

  5. Dana, R., Jeanblanc, M.: Financial Markets in Continuous Time. Springer, Berlin (2003)


  6. Duffie, D.: Dynamic Asset Pricing Theory, 3rd edn. Princeton University Press, Princeton (2001)


  7. Duffie, D., Kan, R.: A yield-factor model of interest rates. Math. Finance 6, 379–406 (1996)


  8. Evans, L.C.: Partial Differential Equations, 2nd edn. AMS, Providence (2010)


  9. Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)


  10. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, Berlin (1991)


  11. Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance. Springer, Berlin (1998)


  12. Krylov, N.V.: Lectures on Elliptic and Parabolic Equations in Hölder Spaces. Graduate Studies in Mathematics, vol. 12. AMS, Providence (1996)


  13. Ladyženskaja, O.A., Solonnikov, V.A., Ural’ceva, N.N.: Linear and Quasilinear Equations of Parabolic Type. Translations of Mathematical Monographs, vol. 23. AMS, Providence (1968)


  14. Shiryaev, A.N., Cherny, A.S.: Vector stochastic integrals and the fundamental theorems of asset pricing. In: Proceedings of the Steklov Mathematical Institute, vol. 237, pp. 12–56 (2002)


  15. Zhao, Y.: Stochastic equilibria in a general class of incomplete Brownian market environments. Ph.D. Thesis, University of Texas at Austin (2012). Available online at https://repositories.lib.utexas.edu/bitstream/handle/2152/ETD-UT-2012-05-5064/ZHAO-DISSERTATION.pdf?sequence=1

  16. Žitković, G.: An example of a stochastic equilibrium with incomplete markets. Finance Stoch. 16, 177–206 (2012)



Acknowledgements

The second author has been supported by the National Science Foundation under Grant No. DMS-1411809 (2014–2017). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). We should like to thank Hao Xing, Steve Shreve, the anonymous referee, the anonymous Associate Editor, and the Co-Editor Pierre Collin-Dufresne for their constructive comments.

Author information


Corresponding author

Correspondence to Kasper Larsen.

Appendix A: Proofs

For \(x\in\mathbb{R}^{d}\), we denote by \(x_{j}\) the \(j\)th coordinate and by \(|x|\) the usual Euclidean 2-norm. If \(X\in\mathbb{R}^{d\times d}\) has an inverse \(X^{-1}\), we denote by \(X^{-T}\) the transpose of \(X^{-1}\). We use the letter \(c\) to denote various constants depending only on \(\underline {\delta},\overline{\delta} ,D,\alpha,a_{i},I,N\). If the constant also depends on some Hölder norm, we use the letter \(C\). The constants \(c\) and \(C\) never depend on any time variable and can change from line to line.

A.1 Hölder spaces

In this section, we briefly recall the standard notation related to Hölder spaces of bounded continuous functions; see, e.g., [12, Sects. 3.1 and 8.5]. We fix \(\alpha\in(0,1)\) in what follows. The norm \(|g|_{0}\) and the seminorm \([g]_{\alpha}\) are defined by

$$|g|_{0}:= \sup_{x\in\mathbb{R}^{D}} |g(x)|,\quad[g]_{\alpha}:= \sup _{x,y \in\mathbb{R} ^{D}, \ x\neq y}\frac{|g(x) - g(y)|}{|x-y|^{\alpha}},\quad g\in C(\mathbb{R}^{D}). $$

We denote by \(\partial_{y} g\) the vector of \(g\)’s derivatives and by \(\partial_{yy}g\) the matrix of \(g\)’s second order derivatives. The Hölder norms are defined by

$$\begin{aligned} |g|_{\alpha}&:= |g|_{0}+[g]_{\alpha}, \quad g \in C(\mathbb{R}^{D}),\\ |g|_{1+\alpha}&:= |g|_{0}+|\partial_{y} g|_{0}+[\partial_{y} g]_{\alpha}, \quad g \in C^{1}(\mathbb{R}^{D}),\\ |g|_{2+\alpha}&:= |g|_{0}+|\partial_{y} g|_{0}+|\partial_{yy} g|_{0}+ [\partial_{yy}g]_{\alpha}, \quad g \in C^{2}(\mathbb{R}^{D}), \end{aligned}$$

and the corresponding Hölder spaces are denoted by \(C^{k+\alpha }(\mathbb{R} ^{D})\) for \(k=0,1,2\). In these expressions, we sum whenever the involved quantity is a vector or a matrix. So, e.g., \(|\partial_{y} g|_{0}\) denotes \(\sum_{d=1}^{D}|\partial_{y_{d}} g|_{0}\) for a function \(g=g(y) \in C^{1}(\mathbb{R}^{D})\).
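For readers who wish to experiment with these norms, the following minimal sketch (in Python with numpy; an illustration, not part of the paper) estimates \(|g|_{0}\) and \([g]_{\alpha}\) for a sample one-dimensional function by maximizing over a finite grid. Grid maxima only lower-bound the true suprema.

```python
# Grid-based estimates of |g|_0 and [g]_alpha for a sample function g on R.
# Illustrative only: maxima over a finite grid lower-bound the true suprema.
import numpy as np

alpha = 0.5
x = np.linspace(-5.0, 5.0, 400)
g = np.exp(-x**2)                          # sample bounded smooth function

sup_norm = np.abs(g).max()                 # estimate of |g|_0

# Hölder seminorm: max over distinct grid pairs of |g(x)-g(y)| / |x-y|^alpha
dx = np.abs(x[:, None] - x[None, :])
dg = np.abs(g[:, None] - g[None, :])
mask = dx > 0
holder_semi = (dg[mask] / dx[mask]**alpha).max()

print(sup_norm, holder_semi, sup_norm + holder_semi)   # |g|_0, [g]_a, |g|_a
```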

We also need the parabolic Hölder spaces for functions of both time and state. For such functions, the parabolic supremum norm is defined by

$$|u|_{0}:= \sup_{(t,x)\in[0,T]\times\mathbb{R}^{D}} |u(t,x)|,\quad u\in C([0,T]\times\mathbb{R}^{D}). $$

We denote by \(\partial_{t}u\) the partial derivative with respect to time of a function \(u = u(t,x)\). The parabolic versions of the above Hölder norms are defined as

$$\begin{aligned} |u|_{\alpha}&:= |u|_{0} +[ u]_{\alpha},\\ |u|_{1+\alpha}&:= |u|_{0}+|\partial_{y}u|_{0}+ [\partial_{y} u]_{\alpha},\\ |u|_{2+\alpha}&:= |\partial_{t}u|_{0} + [\partial_{t} u]_{\alpha}+ |u|_{0}+|\partial_{y} u|_{0}+|\partial_{yy} u|_{0} + [\partial_{yy}u]_{\alpha}, \end{aligned}$$

for \(u\in C([0,T]\times\mathbb{R}^{D}), u\in C^{0,1}([0,T]\times \mathbb{R}^{D})\) and \(u\in C^{1,2}([0,T]\times\mathbb{R}^{D})\), respectively. Here \(\partial _{y} u\) and \(\partial_{yy} u\) denote the first and second order derivatives with respect to the state variable and

$$[h]_{\alpha}:= \sup_{(t,x),(s,y) \in[0,T]\times\mathbb{R}^{D}, (t,x)\neq (s,y)}\frac{|h(t,x) - h(s,y)|}{(\sqrt{|t-s|}+|x-y|)^{\alpha}},\ h \in \{ \partial_{y} u, \partial_{yy}u, \partial_{t}u\}. $$

The corresponding parabolic Hölder spaces are denoted by \(C^{k+\alpha }([0,T]\times\mathbb{R}^{D})\) for \(k=0,1,2\).

We conclude this section with a simple inequality which we need later.

Lemma A.1

For \(h_{1},h_{2}, \tilde{h}_{1}\) and \(\tilde{h}_{2}\) in \(C^{\alpha}([0,T]\times\mathbb{R}^{D})\), we have

$$\begin{aligned} \vert h_{1}h_{2} - \tilde{h}_{1} \tilde{h}_{2} \vert_{\alpha} \leq\frac{1}{2}\big(\vert h_{1}-\tilde{h}_{1} \vert_{\alpha} \vert h_{2}+\tilde{h}_{2} \vert_{\alpha} +\vert h_{1}+\tilde{h}_{1} \vert _{\alpha} \vert h_{2}-\tilde{h}_{2} \vert_{\alpha} \big). \end{aligned}$$
(A.1)

Proof

Equation (3.1.6) in [12] gives for \(h_{1},h_{2}\in C^{\alpha}([0,T]\times\mathbb{R}^{D})\) the inequality

$$[h_{1} h_{2}]_{\alpha}\leq\vert h_{1} \vert_{0} [h_{2}]_{\alpha}+ [h_{1}]_{\alpha} \vert h_{2} \vert_{0}. $$

From this inequality and the definition of the parabolic norm \(|\cdot |_{\alpha}\), we get

$$\begin{aligned} |h_{1} h_{2}|_{\alpha}= |h_{1}h_{2}|_{0} + [h_{1}h_{2}]_{\alpha}\le|h_{1}|_{0}|h_{2}|_{0} + [h_{1}h_{2}]_{\alpha}\le|h_{1}|_{\alpha}|h_{2}|_{\alpha}. \end{aligned}$$

Consequently, since

$$\vert h_{1}h_{2} - \tilde{h}_{1} \tilde{h}_{2} \vert_{\alpha} = \frac{1}{2}\big|(h_{1}-\tilde{h}_{1})(h_{2}+\tilde{h}_{2})+(h_{1}+\tilde {h}_{1})(h_{2}-\tilde{h}_{2})\big|_{\alpha}, $$

the triangle inequality produces (A.1). □
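As a numerical sanity check (an illustration, not part of the proof), one can verify (A.1) for sample functions of one state variable using the grid-based norm estimates from the sketch above; since the pointwise estimates behind the proof hold on any metric space, (A.1) holds exactly for the grid-restricted norms as well.

```python
# Check of the product estimate (A.1) for sample functions, using
# grid-restricted Hölder norms (these satisfy (A.1) exactly, because the
# underlying pointwise inequalities hold on any finite grid).
import numpy as np

alpha = 0.5
x = np.linspace(-3.0, 3.0, 300)

def holder_norm(g):
    dx = np.abs(x[:, None] - x[None, :])
    dg = np.abs(g[:, None] - g[None, :])
    mask = dx > 0
    return np.abs(g).max() + (dg[mask] / dx[mask]**alpha).max()

h1, h2 = np.sin(x), np.exp(-x**2)
h1t, h2t = np.cos(x), 1.0/(1.0 + x**2)

lhs = holder_norm(h1*h2 - h1t*h2t)
rhs = 0.5*(holder_norm(h1 - h1t)*holder_norm(h2 + h2t)
           + holder_norm(h1 + h1t)*holder_norm(h2 - h2t))
print(lhs <= rhs + 1e-12, lhs, rhs)
```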

A.2 Estimates from linear algebra

We start with a result from linear algebra which we need in the next section. For a \(D\times D\) positive definite matrix \(X\), we denote by \(\Vert X\Vert_{F}\) the Frobenius norm, i.e.,

$$\Vert X\Vert_{F}^{2} := \sum_{i,j=1}^{D} X_{ij}^{2}.$$

We note that the Cauchy–Schwarz inequality holds for the Frobenius norm; that is, \(|Xx| \le\Vert X\Vert_{F} |x|\) for \(x\in\mathbb{R}^{D}\).
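A one-line numerical check of this compatibility estimate (illustrative, assuming numpy):

```python
# |Xx| <= ||X||_F |x|, i.e., Cauchy-Schwarz applied row by row.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
print(np.linalg.norm(X @ x) <= np.linalg.norm(X, 'fro')*np.linalg.norm(x))
```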

Lemma A.2

Let \(C\) satisfy Assumption 2.3. We define the \(D\times D\)-matrix

$$\begin{aligned} \varSigma(t,s) := \int_{t}^{s} C(u)C(u)^{T} \,du, \quad0\le t < s \le1. \end{aligned}$$
(A.2)
  1.

    The function \(\varSigma\) is symmetric, positive definite and satisfies

    $$|\varSigma(t,s)_{ij}| \le\overline{\delta}(s-t), \quad\underline {\delta}(s-t)\le\varSigma (t,s)_{ii},\quad i,j=1,\dots,D. $$
  2.

    The inverse \(\varSigma(t,s)^{-1}\) exists and is symmetric, positive definite and satisfies

    $$\frac{1}{\overline{\delta}(s-t)} \le\varSigma(t,s)^{-1}_{ii} \le\frac {1}{\underline{\delta} (s-t)},\quad i=1,\dots,D. $$

    Consequently, \(|\varSigma(t,s)_{ij}^{-1}| \le\frac{1}{\underline {\delta}(s-t)}\) for \(i,j=1,\dots,D\).

  3.

    The lower triangular matrix \(L(t,s)\) appearing in the Cholesky decomposition \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) satisfies

    $$|L(t,s)_{ij}| \le\sqrt{\overline{\delta}(s-t)},\quad L(t,s)_{ii} \ge\sqrt{\underline{\delta} (s-t)},\quad i,j=1,\dots,D. $$
  4.

    For \(0\le t_{1} < t_{2} < s\), we have \(\Vert L(t_{1},s)-L(t_{2},s)\Vert_{F} \le c \sqrt{t_{2}-t_{1}}\), where \(c\) is a constant depending only on \(\underline{\delta},\overline{\delta}, D\).

  5.

    There exists a constant \(c\), depending only on \(\underline {\delta},\overline{\delta}, D\), such that for \(i=1,\dots,D\) and \(0\le t_{1} < t_{2} < s\), we have

    $$\Big| \sqrt{\varSigma(t_{1},s)_{ii}^{-1}} - \sqrt{\varSigma (t_{2},s)^{-1}_{ii}}\Big| \le c \min\bigg\{ \frac{1}{\sqrt{s-t_{2}}}, \frac{t_{2}-t_{1}}{(s-t_{2})^{\frac{3}{2}}}\bigg\} . $$

Proof

Claim 1. The symmetry follows from (A.2). For \(y\in\mathbb{R}^{D}\), condition (2.7) of Assumption 2.3 yields

$$\begin{aligned} \frac{y^{T} \varSigma(t,s) y}{|y|^{2}} = \int_{t}^{s} \frac{y^{T} C(u)C(u)^{T} y}{|y|^{2}}\,du \in[\underline{\delta}(s-t),\overline{\delta}(s-t)]. \end{aligned}$$

Therefore, \(\varSigma(t,s)\) is also positive definite. By letting \(y\) be the \(i\)th basis vector \(e_{i}\in\mathbb{R}^{D}\), we get

$$\underline{\delta}(s-t) \le\varSigma(t,s)_{ii} \le\overline{\delta}(s-t). $$

Finally, the inequality \(|\varSigma(t,s)_{ij}| \le\sqrt{\varSigma (t,s)_{ii}\varSigma(t,s)_{jj}}\) (see Problem 7.1.P1 in [9]) yields Claim 1.

Claim 2. Because \(\varSigma(t,s)\) is symmetric and positive definite, its inverse exists, is symmetric and positive definite, and the eigenvalues of \(\varSigma(t,s)^{-1}\) are the reciprocals of the eigenvalues of \(\varSigma(t,s)\). The claimed inequalities then follow from Claim 1 and Problem 4.2.P3 in [9]. The last estimate follows as in the proof of Claim 1 from \(|\varSigma(t,s)^{-1}_{ij}| \le\sqrt{\varSigma(t,s)^{-1}_{ii}\varSigma (t,s)^{-1}_{jj}}\).

Claim 3. To see the first claim, we note that \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) and Claim 1 produce

$$\sum_{j=1}^{i} L(t,s)^{2}_{ij} = \varSigma(t,s)_{ii} \le\overline{\delta}(s-t). $$

For the second, we use Corollary 3.5.6, Theorem 4.3.17 and Corollary 7.2.9 in [9] to get

$$L(t,s)_{ii} = \sqrt{\frac{i\mathrm{th}\ \mathrm{leading}\ \mathrm{principal}\ \mathrm{minor}\ \mathrm{of}\ \varSigma(t,s)}{ (i-1)\mathrm{th}\ \mathrm{leading}\ \mathrm{principal}\ \mathrm{minor}\ \mathrm{of}\ \varSigma (t,s)}} \ge\sqrt{\underline{\delta}(s-t)}. $$

Claim 4. We prove this by induction. By Claim 1, we have

$$\begin{aligned} |L(t_{1},s)_{11}- L(t_{2},s)_{11}| &= \sqrt{\varSigma(t_{1},s)_{11}}-\sqrt {\varSigma(t_{2},s)_{11}}\\ &\le\sqrt{\varSigma(t_{1},s)_{11}- \varSigma(t_{2},s)_{11}} \\ &= \sqrt{\varSigma(t_{1},t_{2})_{11}} \le\sqrt{\overline{\delta}(t_{2}-t_{1})}. \end{aligned}$$

For the induction step, we suppose there is a constant \(c\) such that for \(j=1,\dots,k-1\) and \(i=j,\dots,D\), we have \(|L(t_{1},s)_{ij} - L(t_{2},s)_{ij}| \le c \sqrt{t_{2}-t_{1}}\). For \(j=i=k\), we have

$$\begin{aligned} &|L(t_{1},s)_{kk}- L(t_{2},s)_{kk}| \\ &\quad= \frac{|L(t_{1},s)_{kk}^{2}- L(t_{2},s)^{2}_{kk}|}{L(t_{1},s)_{kk}+ L(t_{2},s)_{kk}}\\ &\quad\le\frac{1}{\sqrt{\underline{\delta}(s-t_{1})}+\sqrt{\underline {\delta}(s-t_{2})}}\bigg| \varSigma (t_{1},t_{2})_{kk} -\sum_{j=1}^{k-1}\big(L(t_{1},s)_{kj}^{2} - L(t_{2},s)^{2}_{kj}\big)\bigg|\\ &\quad\le\frac{1}{\sqrt{\underline{\delta}(s-t_{1})}+\sqrt{\underline {\delta}(s-t_{2})}}\\ &\quad\quad{} \times\bigg( \overline{\delta}(t_{2}-t_{1}) +\sum _{j=1}^{k-1}|L(t_{1},s)_{kj} - L(t_{2},s)_{kj}| | L(t_{1},s)_{kj} + L(t_{2},s)_{kj}|\bigg)\\ &\quad\le\frac{1}{\sqrt{\underline{\delta}(s-t_{1})}+\sqrt{\underline {\delta}(s-t_{2})}}\Big( \overline{\delta} (t_{2}-t_{1}) +2c(k-1) \sqrt{t_{2}-t_{1}}\sqrt{\overline{\delta }(s-t_{1})}\Big). \end{aligned}$$

Above, the first inequality follows from Claim 3, the second from Claim 1, and the last from Claim 3 and the induction hypothesis. The last term is bounded by \(c \sqrt{t_{2}-t_{1}}\) for some constant \(c\).

For \(j=k\) and \(i = k+1,\dots,D\), we can use \(\varSigma= LL^{T}\) to obtain the representation

$$L(t,s)_{ik} = \frac{\varSigma(t,s)_{ik} - \sum_{j=1}^{k-1} L(t,s)_{kj}L(t,s)_{ij}}{L(t,s)_{kk}}, \quad0\le t < s, $$

and arguments similar to the previous diagonal case to obtain the upper bound. All in all, we have the Frobenius norm estimate \(\Vert L(t_{1},s)-L(t_{2},s)\Vert_{F} \le c \sqrt{t_{2}-t_{1}}\).

Claim 5. By using \(\tfrac{\partial}{\partial t} \varSigma(t,s)^{-1} = -\varSigma(t,s)^{-1} \tfrac{\partial}{\partial t} \varSigma(t,s) \varSigma (t,s)^{-1}\), we see for \(0\le t< s\) that

$$\begin{aligned} \Big|\frac{\partial}{\partial t} \sqrt{\varSigma(t,s)_{ii}^{-1}}\Big| &= \frac{1}{2} \bigg|\frac{1}{\sqrt{\varSigma(t,s)_{ii}^{-1}}} \big( \varSigma(t,s)^{-1} C(t)C(t)^{T}\varSigma(t,s)^{-1}\big)_{ii}\bigg|\\& \le \frac{1}{2} \Big|\sqrt{\overline{\delta}(s-t)} \overline{\delta}\big( \varSigma(t,s)^{-1} \varSigma (t,s)^{-1}\big)_{ii}\Big|. \end{aligned}$$

Therefore, Claim 2 gives us the bound

$$\begin{aligned} \Big|\frac{\partial}{\partial t} \sqrt{\varSigma(t,s)_{ii}^{-1}}\Big| \le c (s-t)^{-3/2} \end{aligned}$$

for some constant \(c\). The mean value theorem then produces

$$\Big| \sqrt{\varSigma(t_{1},s)_{ii}^{-1}} - \sqrt{\varSigma (t_{2},s)_{ii}^{-1}}\Big| \le c \frac{t_{2}-t_{1}}{(s-t_{2})^{3/2}}. $$

This inequality combined with Claim 2 concludes the proof. □
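The bounds of Lemma A.2 are easy to observe numerically. The following sketch (an illustration with a hypothetical volatility matrix \(C(u)\); the empirical eigenvalue band of \(\varSigma(t,s)/(s-t)\) plays the role of \((\underline{\delta},\overline{\delta})\)) checks the diagonal bounds in Claims 1–3.

```python
# Numerical illustration of Claims 1-3 of Lemma A.2 for a sample C(u).
import numpy as np

D = 3

def C(u):
    # hypothetical time-dependent volatility matrix (nonsingular for all u)
    A = np.eye(D)
    A[0, 1] = 0.3*np.sin(u)
    A[2, 0] = 0.2*np.cos(u)
    return A

def Sigma(t, s, n=400):
    # midpoint rule for the integral (A.2)
    h = (s - t)/n
    u = t + h*(np.arange(n) + 0.5)
    return sum(C(ui) @ C(ui).T for ui in u)*h

t, s = 0.2, 0.7
S = Sigma(t, s)
eigs = np.linalg.eigvalsh(S)/(s - t)
dlo, dhi = eigs.min(), eigs.max()       # empirical band for this sample C

L = np.linalg.cholesky(S)               # S = L L^T
Sinv = np.linalg.inv(S)

print(np.all((S.diagonal() >= dlo*(s-t) - 1e-9)
             & (S.diagonal() <= dhi*(s-t) + 1e-9)))         # Claim 1
print(np.all((Sinv.diagonal() >= 1/(dhi*(s-t)) - 1e-9)
             & (Sinv.diagonal() <= 1/(dlo*(s-t)) + 1e-9)))  # Claim 2
print(np.all(L.diagonal() >= np.sqrt(dlo*(s-t)) - 1e-9))    # Claim 3
```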

A.3 Regularity of the heat equation

We define \(\varSigma(t,s)\) by (A.2) and let \(\varGamma\) denote the \(D\)-dimensional (inhomogeneous) Gaussian kernel

$$\begin{aligned} \varGamma(t,s,y):= \frac{e^{-\frac{1}{2} y^{T}\varSigma(t,s)^{-1} y}}{(2\pi )^{D/2}\operatorname{det}(\varSigma(t,s))^{1/2} },\quad0\le t< s\le T, y\in \mathbb{R}^{D}. \end{aligned}$$
(A.3)
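Evaluating (A.3) is a direct transcription: \(\varGamma(t,s,\cdot)\) is the density of an \(N(0,\varSigma(t,s))\) vector. A minimal sketch (assuming numpy, with a hypothetical \(\varSigma\)):

```python
# The kernel (A.3): density of N(0, Sigma) evaluated at y.
import numpy as np

def gaussian_kernel(Sigma, y):
    D = len(y)
    quad = y @ np.linalg.inv(Sigma) @ y
    return np.exp(-0.5*quad)/((2*np.pi)**(D/2)*np.sqrt(np.linalg.det(Sigma)))

Sigma = np.array([[0.3, 0.1],
                  [0.1, 0.2]])          # a sample Sigma(t,s)
print(gaussian_kernel(Sigma, np.array([0.5, -0.2])))
```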

Lemma A.3

For \(f_{0}\in C^{\alpha }([0,T]\times\mathbb{R}^{D})\), we have for all \(t\in[0,T]\) and \(y\in \mathbb{R}^{D}\)

$$\begin{aligned} & \partial_{y_{d}} \int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma(t,s,x-y) f_{0}(s,x) \,dx\,ds \\ &\quad = -\int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma_{y_{d}}(t,s,x-y) f_{0}(s,x) \,dx\,ds, \end{aligned}$$

for \(d=1,\dots,D\).

Proof

We first assume that \(f_{0}\) is continuously differentiable with compact support. In that case, the dominated convergence theorem and integration by parts produce the claim. For \(f_{0}\) merely continuous and bounded, we approximate as follows. We first fix \(R>0\). Since both \(\varGamma(t,\cdot,\cdot)\) and \(\varGamma_{y_{d}}(t,\cdot,\cdot)\) are integrable over \([t,T]\times\mathbb{R}^{D}\), we can find \(M_{n}>R\) such that for \(n\in\mathbb{N}\),

$$\int_{t}^{T} \int_{|x|\ge M_{n}-R} \varGamma(t,s,x) \,dx\,ds\le\frac{1}{n},\quad \int_{t}^{T} \int_{|x|\ge M_{n}-R}| \varGamma_{y_{d}}(t,s,x) | \,dx\,ds\le\frac{1}{n}. $$

For each \(n\in\mathbb{N}\), the density of compactly supported functions allows us to find a continuously differentiable function \(f_{n}\) with compact support such that

$$|f_{n}|_{0} \le|f_{0}|_{0},\quad\sup_{|x|\le M_{n}, s\in[t,T]} |f_{n}(s,x)-f_{0}(s,x)| \le\frac{1}{n}. $$

For \(|y|\le R\), we have \(\{x\in\mathbb{R}^{D}: |x+y| > M_{n}\} \subset\{ x\in\mathbb{R} ^{D}: |x| > M_{n} -R\}\); hence

$$\begin{aligned} &\int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma(t,s,x) | f_{n}(s,y+x)-f_{0}(s,y+x)|\, dx \,ds\\ &\quad\le\int_{t}^{T} \int_{|x+y|\le M_{n}} \varGamma(t,s,x) | f_{n}(s,y+x)-f_{0}(s,y+x)|\, dx \,ds\\ &\quad\quad{}+\int_{t}^{T} \int_{|x|> M_{n}-R} \varGamma(t,s,x) | f_{n}(s,y+x)-f_{0}(s,y+x)| \,dx \,ds\\ &\quad\le\frac{T}{n} +\frac{2|f_{0}|_{0}}{n}. \end{aligned}$$

A similar estimate (also uniform in \(y\)) is found by replacing \(\varGamma \) with \(\varGamma_{y_{d}}\). For \(|y|\le R\) and \(t\in[0,T]\), we define the functions

$$\begin{aligned} &g_{n}(t,y) := \int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma(t,s,x-y) f_{n}(s,x)\,dx\,ds,\quad n=0,1,\dots,\\ & h(t,y) := -\int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma_{y_{d}}(t,s,x-y) f_{0}(s,x)\,dx\,ds. \end{aligned}$$

Since \(f_{n}\) has compact support, we have \(\partial_{y_{d}} g_{n} = - \int _{t}^{T} \int_{\mathbb{R}^{D}}\varGamma_{y_{d}} f_{n}\,dx\,ds\). Therefore,

$$0=\lim_{n\to\infty} \sup_{|y|\le R} |g_{n}(t,y) - g_{0}(t,y)| = \lim _{n\to\infty} \sup_{|y|\le R} |\partial_{y_{d}} g_{n}(t,y)- h(t,y)|. $$

The fundamental theorem of calculus yields for \(|y|\le R\) that

$$\begin{aligned} &g_{0}(t,y_{1},\dots,y_{d},\dots,y_{D}) - g_{0}(t,y_{1},\dots,0,\dots,y_{D})\\ &\quad=\lim_{n\to\infty}\big(g_{n}(t,y_{1},\dots,y_{d},\dots,y_{D}) - g_{n}(t,y_{1},\dots,0,\dots,y_{D})\big)\\ &\quad=\lim_{n\to\infty}\int_{0}^{y_{d}} \partial_{y_{d}}g_{n}(t,y_{1},\dots,\xi ,\dots,y_{D})\,d\xi\\ &\quad= \int_{0}^{y_{d}} h(t,y_{1},\dots,\xi,\dots,y_{D})\,d\xi. \end{aligned}$$

Since \(\partial_{y_{d}} g_{n}\) is continuous and converges uniformly to \(h\) on \(\{|y|\le R\}\), we know that \(h\) is also continuous. We can then apply \(\partial_{y_{d}}\) to obtain \(\partial_{y_{d}}g_{0} = h\). Since \(R>0\) was arbitrary, the claim follows. □

Lemma A.4

Under Assumption 2.3, for \(\alpha\in(0,1)\) and \(T\in[0,1]\), suppose that \(f\in C^{\alpha}([0,T]\times\mathbb{R}^{D})\) and \(g \in C^{2+\alpha}(\mathbb{R}^{D})\) are given. Then there exist a constant \(c = c(\underline{\delta},\overline{\delta}, \alpha, D)\) and a unique solution \(u \in C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\) of

$$ \begin{aligned} & u_{t} + \frac{1}{2} \mathrm{tr}(\partial_{yy} u\, CC^{T}) + f = 0,\\ & u(T,y)=g(y), \end{aligned} $$
(A.4)

which satisfies the parabolic norm estimate

$$\begin{aligned} \vert u \vert_{1+\alpha} \leq c\big(\vert g \vert_{1+\alpha} + \sqrt{T} \vert f \vert_{\alpha}\big). \end{aligned}$$
(A.5)

Proof

Theorem 4.5.1 in [13] ensures the existence of a unique \(C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\)-solution \(u\) of (A.4). From Sect. 5.7B in [10], we get the Feynman–Kac representation

$$ u(t,y)= \int_{\mathbb{R}^{D}} \varGamma(t,T,x-y) g(x) \,dx + \int_{t}^{T} \int _{\mathbb{R} ^{D}} \varGamma(t,s,x-y)f(s, x) \,dx\,ds. $$
(A.6)

From the representation (A.6), we immediately obtain \(|u(t,y)| \le\vert g \vert_{0} + (T-t) \vert f \vert_{0}\), which yields the parabolic supremum norm estimate

$$\begin{aligned} \vert u \vert_{0} \leq\vert g \vert_{0} + T \vert f \vert_{0}. \end{aligned}$$
(A.7)

Because \(\varSigma(t,s)\) is positive definite, there exists a unique Cholesky decomposition \(\varSigma(t,s) = L(t,s)L(t,s)^{T}\) for a lower non-singular triangular matrix \(L(t,s)\). Furthermore, \(\varSigma(t,s)^{-1} = L(t,s)^{-T}L(t,s)^{-1}\). By using \((\operatorname{det}L(t,s))^{2} =\operatorname{det}\varSigma(t,s)\) when changing variables, we can rewrite (A.6) as

$$\begin{aligned} u(t,y) =& \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} g\big(y-L(t,T)z\big) dz \\ &{}+ \int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma (t,s,x-y)f(s, x) \,dx\,ds. \end{aligned}$$
(A.8)

Since \(g\in C^{2+\alpha}\), we can apply the dominated convergence theorem to the \(g\)-integral, and we can apply Lemma A.3 to the \(f\)-integral in (A.8) to produce

$$\begin{aligned} u_{y_{d}}(t,y) =& \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} g_{y_{d}}\big(y-L(t,T)z\big) \,dz \\ &{}- \int_{t}^{T} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big(L(t,s)^{-T}z\big)_{d} f\big(s,y-L(t,s)z\big)\,dz\,ds, \end{aligned}$$
(A.9)

after substituting \(z= L(t,s)^{-1}(y-x)\) in the \(f\)-integral. Since Claim 2 of Lemma A.2 gives \(\Vert L(t,s)^{-T}\Vert ^{2}_{F} ={\mathrm{tr}} (\varSigma^{-1}(t,s))\le\tfrac{D}{\underline {\delta}(s-t)}\), the Cauchy–Schwarz inequality produces

$$\begin{aligned} \vert u_{y_{d}}(t,y) \vert &\leq\vert g_{y_{d}} \vert_{0} + |f|_{0}\int_{t}^{T} \int_{\mathbb{R} ^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big|\big(L(t,s)^{-T}z\big)_{d} \big|\,dz\,ds \\ &\leq\vert g_{y_{d}} \vert_{0} + |f|_{0}\int_{t}^{T} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \Vert L(t,s)^{-T}\Vert_{F}|z|\,dz\,ds \\ &\leq\vert g_{y_{d}} \vert_{0} + D|f|_{0}\int_{t}^{T}\frac{1}{\sqrt{\underline {\delta}(s-t)}} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} |z| \,dz\,ds. \end{aligned}$$

By computing the integrals, we obtain the estimate

$$\begin{aligned} &\vert u_{y_{d}} \vert_{0} \leq\vert g_{y_{d}} \vert_{0} + c\sqrt{T} \vert f \vert_{0} . \end{aligned}$$
(A.10)

To estimate the parabolic seminorm \([\partial_{y} u]_{\alpha}\), we provide four estimates which, when combined, produce the desired bound. We start by fixing \(0< t_{1}< t_{2}< T\) and \(y_{1},y_{2}\in\mathbb{R}^{D}\). For the first estimate, we have

$$\begin{aligned} &\Big| \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \Big(g_{y_{d}}\big(y_{1}-L(t_{1},T)z\big)-g_{y_{d}}\big(y_{2}-L(t_{2},T)z\big)\Big) \,dz\Big|\\ &\quad\le[g_{y_{d}}]_{\alpha}\int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} |y_{1}-L(t_{1},T)z-y_{2}+L(t_{2},T)z|^{\alpha}\,dz,\\ &\quad\le[g_{y_{d}}]_{\alpha}\int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \big(|y_{1}-y_{2}| +\Vert L(t_{1},T)-L(t_{2},T)\Vert_{F}|z|\big)^{\alpha}\,dz\\ &\quad\le[g_{y_{d}}]_{\alpha}\int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \big(|y_{1}-y_{2}| +c|t_{1}-t_{2}|^{1/2}|z|\big)^{\alpha}\,dz. \end{aligned}$$

The first inequality is due to the interpolation inequality which ensures that we have \([g_{y_{d}}]_{\alpha}<\infty\); see, e.g., Theorem 3.2.1 in [12]. The second inequality uses the Cauchy–Schwarz inequality, whereas the last inequality is from Claim 4 of Lemma A.2.

The second estimate reads

$$\begin{aligned} &\Big|\int_{t_{1}}^{t_{2}} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \big(L(t_{1},s)^{-T}z\big)_{d} f\big(s,y_{1}-L(t_{1},s)z\big)\,dz\,ds\Big|\\ &\quad=\Big|\int_{t_{1}}^{t_{2}} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big(L(t_{1},s)^{-T}z\big)_{d} \Big(f\big(s,y_{1}-L(t_{1},s)z\big)-f\big(t_{1},y_{1}\big)\Big)\,dz\,ds\Big|\\ &\quad\le c [f]_{\alpha}\int_{t_{1}}^{t_{2}}\frac{1}{\sqrt{s-t_{1}}} \int _{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} |z| \big(|s-t_{1}|^{1/2} + |s-t_{1}|^{1/2}|z|\big)^{\alpha}\,dz\,ds\\ &\quad\le c [f]_{\alpha}|t_{2}-t_{1}|^{(\alpha+1)/2}, \end{aligned}$$

where the first inequality is found as before. The third estimate is similar and reads

$$\begin{aligned} &\Big|\int_{t_{2}}^{T} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \big(L(t_{1},s)^{-T}z\big)_{d} \\ &\quad\quad{}\times\Big(f\big(s,y_{1}-L(t_{1},s)z\big)-f\big(s,y_{2}-L(t_{2},s)z\big)\Big)\,dz\,ds\Big|\\ & \quad\le c[f]_{\alpha}\int_{t_{2}}^{T} \frac{1}{\sqrt {s-t_{1}}} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} |z| \big(|y_{1}-y_{2}| +|t_{2}-t_{1}|^{1/2} |z| \big)^{\alpha}\,dz\,ds. \end{aligned}$$

For the fourth and last estimate, we first consider the case \(d=D\). By the triangular structure of \(L^{-1}\) and \(L^{-T}\), we have \(\sqrt {\varSigma^{-1}_{DD}}=L^{-1}_{DD}=L^{-T}_{DD}\). This gives us

$$\begin{aligned} &\Big|\int_{t_{2}}^{T} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \Big(\big(L(t_{1},s)^{-T}z\big)_{D} - \big(L(t_{2},s)^{-T}z\big)_{D}\Big) \\ &\quad\quad{}\times f\big(s,y_{2}-L(t_{2},s)z\big)\,dz\,ds\Big|\\ &\quad=\Big|\int_{t_{2}}^{T} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} \Big( \sqrt{\varSigma(t_{1},s)^{-1}_{DD}}-\sqrt{\varSigma(t_{2},s)^{-1}_{DD}} \Big)z_{D} \\&\quad\quad{}\times \Big( f\big(s,y_{2}-L(t_{2},s)z\big)-f\big(t_{2},y_{2}\big)\Big)\,dz\,ds\Big|\\ &\quad\le c [f]_{\alpha}\int_{t_{2}}^{T} \min\bigg\{ \frac{1}{\sqrt{s-t_{2}}}, \frac{t_{2}-t_{1}}{(s-t_{2})^{\frac{3}{2}}}\bigg\} \\&\quad\quad{}\times\int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}} |z_{D}| \big( |s-t_{2}|^{1/2}+ |s-t_{2}|^{1/2}|z|\big)^{\alpha}\,dz\,ds, \end{aligned}$$

where the inequality follows from Claim 5 of Lemma A.2. The case \(d < D\) can be reduced to the case \(d=D\) just considered by performing the following substitution: We let \(J\) be the \(D\times D\)-matrix obtained by interchanging the \(d\)th and \(D\)th rows of the \(D\times D\)-identity matrix and \(\tilde{L}\) the lower triangular matrix in the Cholesky factorization \(J\varSigma J = \tilde{L}\tilde {L}^{T}\). For \(z:= \tilde{L}^{-1}J(y-x)\), we have

$$\begin{aligned} \big(\varSigma(t,s)^{-1}(y-x)\big)_{d} &= \big(J\varSigma (t,s)^{-1}(y-x)\big)_{D}\\ &= \big(J\varSigma(t,s)^{-1}JJ(y-x)\big)_{D} = \big(\tilde {L}(t,s)^{-T}z\big)_{D}, \end{aligned}$$

where we used that \(JJ\) is the \(D\times D\)-identity matrix and \(J\varSigma ^{-1}J = \tilde{L}^{-T}\tilde{L}^{-1}\).

The above four estimates together with the triangle inequality as well as the representation (A.9) produce the parabolic seminorm estimate

$$\begin{aligned}{} [u_{y_{d}}]_{\alpha} \leq c \big(|g|_{1+\alpha} + \sqrt{T}|f|_{\alpha}\big). \end{aligned}$$
(A.11)

Finally, by combining the three estimates (A.7), (A.10) and (A.11) and using \(T\le1\), we produce the parabolic norm estimate (A.5). □
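To make the representation (A.8) concrete, the following minimal Monte Carlo sketch (assuming a scalar state \(D=1\), a constant volatility \(c\), \(f=0\) and \(g(y)=y^{2}\), all illustrative choices) reproduces the closed-form solution \(u(t,y)=y^{2}+c^{2}(T-t)\) of the terminal value problem \(u_{t}+\tfrac{1}{2}c^{2}u_{yy}=0\), \(u(T,y)=y^{2}\).

```python
# Monte Carlo evaluation of (A.8): u(t,y) = E[g(y - L(t,T) Z)], Z ~ N(0,1),
# with L(t,T) = c*sqrt(T-t) in the scalar constant-volatility case (f = 0).
import numpy as np

c, T, t, y = 0.8, 1.0, 0.3, 0.5
L = c*np.sqrt(T - t)

rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)
u_mc = np.mean((y - L*Z)**2)            # Monte Carlo value of (A.8)
u_exact = y**2 + c**2*(T - t)           # closed form for g(y) = y^2
print(u_mc, u_exact)                    # agree to ~3 decimals
```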

Theorem A.5

Under the assumptions of Theorem 3.1, there exists \(T_{0}\in(0,1]\) such that for all \(T< T_{0}\), the nonlinear PDE system (3.2) possesses a unique solution \((u^{(i)})_{i=1}^{I} \subset C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\).

Proof

We define \({\mathcal{S}}_{T}:=(C^{1+\alpha}([0,T] \times \mathbb{R}^{D}))^{I}\) for \(I\in\mathbb{N}\), as well as the norm

$$\begin{aligned} \Arrowvert v \Arrowvert_{\mathcal{S}_{T}}:=\max_{i\in\{1,2,\dots,I\} } \vert v^{(i)} \vert_{1+\alpha},\quad v\in{\mathcal{S}}_{T}. \end{aligned}$$

Since \((C^{1+\alpha}([0,T] \times\mathbb{R}^{D}),|\cdot|_{1+\alpha })\) is a Banach space, we also have that \(({\mathcal{S}}_{T}, \Vert\cdot\Vert _{{\mathcal{S}}_{T}})\) is a Banach space.

In the following, we use the notation from Lemma A.4. For \(i=1,\dots,I\), we define the \(i\)th coordinate \(\varPi^{(i)}\) of the map \(\varPi: {\mathcal{S}}_{T} \to{\mathcal{S}}_{T}\) by

$$\begin{aligned} \varPi^{(i)}(v)(t,y) :=&\int_{\mathbb{R}^{D}} \varGamma(t,T,x-y) g^{(i)}(x)\,dx \\ &{}+ \int_{t}^{T} \int_{\mathbb{R}^{D}} \varGamma(t,s,x-y)f^{(i)}(v)(s,x) \,dx\,ds, \end{aligned}$$

where \(f^{(i)}: {\mathcal{S}}_{T} \to C^{\alpha}([0,T] \times\mathbb {R}^{D})\) is defined by

$$\begin{aligned} \begin{aligned} f^{(i)}(v)&:= \frac{1}{2a_{i}} |\lambda(v)|^{2} - \lambda(v)^{T}\bar{C}^{T} \partial_{y} v^{(i)} + \frac{a_{i}}{2}\big( |\bar{C}^{T}\partial_{y} v^{(i)}|^{2}-|C^{T}\partial_{y} v^{(i)}|^{2}\big), \\ \lambda(v)&:= \frac{1}{\tau_{\varSigma}}\bar{C}^{T}\sum_{j=1}^{I}\partial_{y} v^{(j)}. \end{aligned} \end{aligned}$$
(A.12)

Based on Lemma A.1, we have for \(v,\tilde{v}\in {\mathcal{S}} _{T}\) the estimates

$$ \begin{aligned} \vert f^{(i)}(v) \vert_{\alpha}&\leq c \Arrowvert v \Arrowvert _{\mathcal{S}_{T}}^{2},\\ \vert f^{(i)}(v)- f^{(i)}(\tilde{v}) \vert_{\alpha} &\leq c \big(\Arrowvert v \Arrowvert_{\mathcal{S}_{T}}+\Arrowvert\tilde{v} \Arrowvert_{\mathcal{S}_{T}}\big) \Arrowvert v-\tilde{v} \Arrowvert _{\mathcal{S}_{T}}, \end{aligned} $$
(A.13)

for a constant \(c\). By combining (A.5) with (A.13), we produce the estimates

$$\begin{aligned} \vert\varPi^{(i)}(v) \vert_{1+\alpha} &\leq c \big(\vert g^{(i)} \vert_{1+\alpha} + \sqrt{T} \Arrowvert v \Arrowvert_{\mathcal {S}_{T}}^{2}\big), \\ \vert\varPi^{(i)}(v)-\varPi^{(i)}(\tilde{v}) \vert_{1+\alpha} & \leq c\sqrt {T} \big(\Arrowvert v \Arrowvert_{\mathcal{S}_{T}}+\Arrowvert\tilde {v} \Arrowvert_{\mathcal {S}_{T}}\big) \Arrowvert v-\tilde{v} \Arrowvert_{\mathcal{S}_{T}}. \end{aligned} $$

Therefore, by the definition of \(\varPi\), we obtain the estimates

$$ \begin{aligned} \Arrowvert\varPi(v) \Arrowvert_{\mathcal{S}_{T}} &\leq c \Big(\max _{1\leq i\leq I}\vert g^{(i)} \vert_{1+\alpha} + \sqrt{T} \Arrowvert v \Arrowvert _{\mathcal {S}_{T}}^{2}\Big),\\ \Arrowvert\varPi(v)-\varPi(\tilde{v}) \Arrowvert_{\mathcal{S}_{T}} & \leq c\sqrt{T} \big(\Arrowvert v \Arrowvert_{\mathcal{S}_{T}}+\Arrowvert\tilde{v} \Arrowvert_{\mathcal {S}_{T}}\big) \Arrowvert v-\tilde{v} \Arrowvert_{\mathcal{S}_{T}}. \end{aligned} $$
(A.14)

To ensure that \(\varPi\) is a contraction map, we choose real numbers \(R>0\) and \(T_{0}\in(0,1]\) (such constants exist) satisfying

$$ \begin{aligned} c\Big(\max_{1\leq i\leq I}\vert g^{(i)} \vert_{1+\alpha}+\sqrt{T_{0}} R^{2}\Big) &\leq R, \\ 2c \sqrt{T_{0}} R &\leq\frac{1}{2}. \end{aligned} $$
(A.15)

For \(T\in(0,T_{0}]\), we define the \(R\)-ball

$$\mathcal{B}_{T}:=\{ v\in\mathcal{S}_{T} : \Arrowvert v \Arrowvert _{\mathcal{S}_{T}} \leq R \}\subset\big(C^{1+\alpha}([0,T]\times\mathbb{R}^{D})\big)^{I}. $$

The estimates (A.14) and the parameter restrictions (A.15) imply that \(\varPi\) maps \(\mathcal{B}_{T}\) to \(\mathcal{B}_{T}\) and that \(\varPi\) is a contraction map on \(\mathcal{B}_{T}\). Since the space \((\mathcal{S}_{T},\Vert\cdot\Vert_{{\mathcal{S}}_{T}})\) is complete, there exists a unique fixed point \(u\in\mathcal{B}_{T}\) of the map \(\varPi\). The fixed point property \(\varPi(u) = u\) implies that \(u^{(i)}\) is given by (A.6) with \(f := f^{(i)}(u)\in C^{\alpha}([0,T]\times \mathbb{R} ^{D})\). By uniqueness, we obtain \(u^{(i)} \in C^{2+\alpha}([0,T]\times \mathbb{R}^{D})\). Consequently, the functions \((u^{(i)})_{i=1}^{I}\subset C^{2+\alpha}([0,T]\times\mathbb{R}^{D})\) solve the stated PDE system. □
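The contraction argument is constructive, and a toy version can be run directly. The sketch below (illustrative throughout: a single agent, one state variable, unit volatility, and the hypothetical nonlinearity \(f(v)=\varepsilon(\partial_{y}v)^{2}\) standing in for (A.12)) iterates the map \(\varPi\) on a space–time grid, evaluating the Gaussian representation by Gauss–Hermite quadrature; for this particular toy equation, the Cole–Hopf transform provides a closed form to compare against.

```python
# Picard iteration of a toy map Pi for u_t + 0.5*u_yy + eps*(u_y)^2 = 0,
# u(T,y) = g(y). Cole-Hopf: u(t,y) = log(E[exp(2*eps*g(y+W_{T-t}))])/(2*eps).
import numpy as np

eps, T = 0.2, 0.25                       # small horizon, as in Theorem A.5
y = np.linspace(-6.0, 6.0, 241); dy = y[1] - y[0]
t_grid = np.linspace(0.0, T, 26); dt = t_grid[1] - t_grid[0]
g = np.tanh(y)

nodes, weights = np.polynomial.hermite_e.hermegauss(21)  # E[h(Z)], Z~N(0,1)
weights = weights/np.sqrt(2.0*np.pi)

def smooth(vals, scale):
    # E[h(y + scale*Z)] for h given by grid values (linear interpolation)
    return sum(w*np.interp(y + scale*z, y, vals)
               for z, w in zip(nodes, weights))

u = np.tile(g, (len(t_grid), 1))         # initial iterate v(t,y) = g(y)
for _ in range(8):                       # Picard sweeps of the map Pi
    f = eps*np.gradient(u, dy, axis=1)**2
    new = np.empty_like(u)
    for i, t in enumerate(t_grid):
        val = smooth(g, np.sqrt(T - t))
        for j in range(i + 1, len(t_grid)):
            val = val + dt*smooth(f[j], np.sqrt(t_grid[j] - t))
        new[i] = val
    u = new

exact0 = np.log(smooth(np.exp(2.0*eps*g), np.sqrt(T)))/(2.0*eps)
print(np.max(np.abs(u[0] - exact0)))     # small (quadrature/time-step error)
```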

A.4 Remaining proofs

We denote by \({I \choose 0}\) the \(D\times N\) matrix whose upper \(N\) rows are the identity matrix \(I_{N\times N}\), whereas all remaining entries are zeros.

Proof of Theorem 3.1

We use Theorem A.5 and let \(T< T_{0}\). We can then define the function \(\lambda= \lambda(t,y)\) by (3.3) as well as the constant

$$\begin{aligned} r:= \frac{1}{\tau_{\varSigma}T}\sum_{i=1}^{I} \big(u^{(i)}(0,0) - g^{(i)}_{0}\big). \end{aligned}$$
(A.16)

The proof is split into the following two steps.

Step 1. For \(i=1,\dots,I\), we define the \(N\)-dimensional process

$$\begin{aligned} \hat{H}^{(i)}_{t}:= \frac{1}{a_{i} e^{r(T-t)}}\big(\lambda(t,Y_{t})- a_{i}\bar{C}(t)^{T}\partial_{y} u^{(i)}(t,Y_{t})\big),\quad t\in[0,T], \end{aligned}$$
(A.17)

where \(\bar{C}(t)\) denotes the \(D\times N\) matrix with entries \(\bar{C}(t)_{kj} := C(t)_{kj}\) for \(k=1,\dots,D\) and \(j=1,\dots,N\). We show that \(\hat{H}^{(i)}\) is admissible in some set \({\mathcal{A}}_{i}={\mathcal{A}}_{i}(\hat{\mathbb{Q}}^{(i)})\) and attains the supremum in

$$\begin{aligned} \sup_{H\in{\mathcal{A}}_{i}} \mathbb{E}\big[U_{i}\big(X^{0,H}_{T} + g^{(i)}(Y_{T})\big)\big]. \end{aligned}$$
(A.18)

We note that the initial wealth is irrelevant in (A.18) because of the exponential preference structure. We define the function \(V^{(i)}\) in terms of \(u^{(i)}\) by (3.1) and the process \(d\hat{X}^{(i)}_{t} := r\hat{X}^{(i)}_{t}dt + (\hat {H}^{(i)}_{t})^{T} (\lambda(t,Y_{t}) dt +\binom{I}{0}^{T}dW_{t})\) with \(\hat {X}^{(i)}_{0}:=0\). Itô’s lemma yields the dynamics of \(V_{t}^{(i)} = V^{(i)}(t,\hat{X}^{(i)}_{t},Y_{t})\) as

$$\begin{aligned} dV_{t}^{(i)} =& \partial_{x} V^{(i)} (\hat{H}^{(i)})^{T}\binom{I}{0}^{T}dW_{t} + (\partial_{y} V^{(i)})^{T}C(t)dW_{t} \\ =&-V_{t}^{(i)} \bigg(\big(\lambda^{T}- a_{i}(\partial_{y} u^{(i)})^{T}\bar {C}\big)\binom{I}{0}^{T}+a_{i}(\partial_{y} u^{(i)})^{T}C\bigg)dW_{t}. \end{aligned}$$
(A.19)

Since the functions \(\partial_{y}u^{(i)}\) and \(\lambda\) are bounded, we can use Novikov’s condition to see that \(V^{(i)}\) is indeed a ℙ-martingale. Thus the terminal condition \(u^{(i)}(T,y) = g^{(i)}(y)\) yields \(q^{(i)}:= \mathbb{E}[e^{rT}U_{i}'(\hat{X}^{(i)}_{T}+ g^{(i)}(Y_{T}))]\in(0,\infty)\). We can then define the ℙ-equivalent probability measures \(\hat{\mathbb{Q}}^{(i)}\) via the Radon–Nikodým derivatives on \(\mathcal{F}_{T}\) by

$$\frac{d\hat{\mathbb{Q}}^{(i)}}{d\mathbb{P}} := \frac {V^{(i)}(T,\hat {X}^{(i)}_{T},Y_{T})}{V^{(i)}(0,0,0)}= \frac{e^{rT}U_{i}'(\hat{X}^{(i)}_{T}+ g^{(i)}(Y_{T}))}{q^{(i)}},\quad i=1,2,\dots,I. $$

We next prove that \(\hat{\mathbb{Q}}^{(i)}\in{\mathcal{M}}\). By the martingale property of \(V^{(i)}\), we have

$$\mathbb{E}\bigg[\frac{d\hat{\mathbb{Q}}^{(i)}}{d\mathbb{P}}\bigg|\mathcal{F}_{t}\bigg] =\frac {V^{(i)}(t,\hat{X}^{(i)}_{t},Y_{t})}{V^{(i)}(0,0,0)},\quad t\in[0,T]. $$

Therefore, the dynamics (A.19) of \(V^{(i)}\) together with Girsanov’s theorem ensure that \(\tilde{S}:= S/S^{(0)}\) is an \(N\)-dimensional \(\hat{\mathbb{Q}}^{(i)}\)-martingale; hence \(\hat {\mathbb{Q} }^{(i)}\in{\mathcal{M}}\). Since the volatility of \(\tilde{S}\) is \(e^{-rt}\) and the process \(\hat{H}^{(i)}\) defined by (A.17) is uniformly bounded, we have that the process \((\hat{X}^{(i)}_{t}e^{-rt})\) is a \(\hat{\mathbb{Q}}^{(i)}\)-martingale for \(t\in[0,T]\); hence \(\hat {H}^{(i)}\in{\mathcal{A}}_{i}\).

Finally, the verification of the optimality of \(\hat{H}^{(i)}\) is fairly standard and can be seen as follows. Fenchel’s inequality produces \(U_{i}(x) \le U^{*}_{i}(y) + xy\) for all \(x\in\mathbb{R}\) and \(y>0\), where \(U^{*}_{i}\) is the convex conjugate of \(U_{i}\), i.e., \(U^{*}_{i}(y):= \sup_{x\in \mathbb{R}}( U_{i}(x) - xy)\). Therefore, for arbitrary \(H\in{\mathcal{A}}_{i}\), we have

$$\begin{aligned} &\mathbb{E}\big[U_{i}\big( X^{0,H}_{T} + g^{(i)}(Y_{T})\big)\big]\\ &\quad\le\mathbb{E}\bigg[U^{*}_{i}\bigg(q^{(i)}\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}\bigg)\bigg] + q^{(i)}\mathbb{E}\bigg[\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}\big( X^{0,H}_{T} + g^{(i)}(Y_{T})\big)\bigg] \\ &\quad\le\mathbb{E}\bigg[U^{*}_{i}\bigg(q^{(i)}\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}\bigg)\bigg] +q^{(i)} \mathbb{E}\bigg[\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}g^{(i)}(Y_{T})\bigg] \\ &\quad= \mathbb{E}\bigg[U^{*}_{i}\bigg(q^{(i)}\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}\bigg)\bigg] +q^{(i)} \mathbb{E}\bigg[\frac{d\hat{\mathbb {Q}}^{(i)}}{d\mathbb{P} }e^{-rT}\big( \hat{X}^{(i)}_{T} + g^{(i)}(Y_{T})\big)\bigg] \\ &\quad=\mathbb{E}\big[U_{i}\big( \hat{X}^{(i)}_{T} + g^{(i)}(Y_{T})\big)\big]. \end{aligned}$$

Indeed, the first inequality is Fenchel's inequality, the second comes from the \(\hat{\mathbb{Q}}^{(i)}\)-supermartingale property of \(H\cdot \tilde{S}\), the first equality from the \(\hat{\mathbb{Q}}^{(i)}\)-martingale property of \(\hat {H}^{(i)}\cdot \tilde{S}\), and the last equality follows from the first order condition in the definition of \(U^{*}_{i}\); see, e.g., Lemma 3.4.3(i) in [11]. This verifies that \(\hat{H}^{(i)}\) attains the supremum in (A.18).

Step 2. Based on the previous step, we can rewrite the optimization problem (2.3) as

$$\sup_{c_{0}\in\mathbb{R}} \big(-e^{-a_{i}(c_{0} + g^{(i)}_{0})} -e^{a_{i}e^{rT}c_{0} -a_{i} u^{(i)}(0,0)}\big). $$

It is straightforward to solve this problem for \(\hat{c}^{(i)}_{0}\) and see that (A.16) ensures the clearing condition \(\sum_{i=1}^{I}\hat {c}^{(i)}_{0}=0\). □
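As a concrete check of this last step, the strictly concave scalar problem can be solved numerically from its first order condition (a sketch with hypothetical parameter values):

```python
# First order condition for
#   sup_{c0} ( -exp(-a*(c0+g0)) - exp(a*e^{rT}*c0 - a*u0) ):
#   a*exp(-a*(c0+g0)) = a*e^{rT}*exp(a*e^{rT}*c0 - a*u0).
import numpy as np
from scipy.optimize import brentq

a, g0, r, T, u0 = 2.0, 0.3, 0.05, 1.0, 0.4   # hypothetical values
erT = np.exp(r*T)

def foc(c0):
    return a*np.exp(-a*(c0 + g0)) - a*erT*np.exp(a*erT*c0 - a*u0)

print(brentq(foc, -10.0, 10.0))              # the optimal c0
```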

Proof of Theorem 4.1

We consider the ODE system in Sect. 4.1 in terms of \(\alpha^{(i)}(t) \in\mathbb{R}\), \(\beta ^{(i)}(t) \in\mathbb{R}^{D}\) and \(\gamma^{(i)}(t) \in\mathbb{R}^{D\times D}\) for \(t\ge0\) and \(i=1,2,\dots,I\). Since the right-hand side of this system is locally Lipschitz-continuous as a function of the left-hand side, there exists a unique solution up to some explosion time \(T^{\mathrm{Riccati}}_{0}\in (0,\infty]\) by the Picard–Lindelöf theorem. For \(i=1,2,\dots,I\), we consider the quadratic form (4.2). By computing the various derivatives, we see that (4.2) solves the coupled PDE system (3.2).
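The paper's specific Riccati system is given in Sect. 4.1 and is not reproduced here; the following generic sketch (hypothetical coefficient matrices throughout) shows how such a coupled matrix Riccati ODE can be integrated numerically, with blow-up before the target horizon corresponding to the explosion time \(T^{\mathrm{Riccati}}_{0}\).

```python
# Integrating a generic matrix Riccati ODE
#   gamma'(t) = A gamma + gamma A^T + gamma B gamma + Q,  gamma(0) = 0,
# by flattening gamma for scipy's solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

D = 2
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.eye(D)                            # hypothetical coefficients
Q = 0.5*np.eye(D)

def rhs(t, v):
    G = v.reshape(D, D)
    return (A @ G + G @ A.T + G @ B @ G + Q).ravel()

sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(D*D), rtol=1e-8, atol=1e-10)
print(sol.status)                        # 0: reached t = 1 without blow-up
print(sol.y[:, -1].reshape(D, D))        # gamma(1)
```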

It remains to perform the verification, and here we just point out how the proof of Theorem 3.1 can be adjusted to the present case where \(g^{(i)}\) is a quadratic function. The first issue is the martingale property of the process \(V^{(i)}\) with the dynamics (A.19). As in the proof of Theorem 3.1, the process \((V^{(i)}(t,\hat{X}^{(i)}_{t},Y_{t}))\)—with \(u^{(i)}\) defined by (4.2) and \(V^{(i)}\) defined by (3.1)—is a local martingale under ℙ with the dynamics (A.19). To see that \(V^{(i)}\) is a martingale, we note that the partial derivative \(\partial_{y}u^{(i)}\) is an affine function of \(y\) with deterministic continuous functions of time to maturity as coefficients. Therefore, the function \(\lambda\) defined by (3.3) is also an affine function of \(y\). Because \(dY_{t} = C(t)dW_{t}\), we can use Corollary 3.5.16 in [10] to see that \(V^{(i)}\) is a martingale on \([0,T]\) for \(T< T^{\mathrm{Riccati}}_{0}\).

Secondly, we must prove the \(\hat{\mathbb{Q}}^{(i)}\)-martingale property of the product process \(d(\hat{X}^{(i)}_{t}e^{-rt})= (\hat {H}^{(i)}_{t})^{T}d\tilde{S}_{t}\) with \(\hat{\mathbb{Q}}^{(i)}\) defined via \(V^{(i)}\) as in the proof of Theorem 3.1. The dynamics (A.19) and Girsanov’s theorem produce the \(\hat{\mathbb{Q}}^{(i)}\)-Brownian motions

$$ \begin{aligned} dW^{\hat{\mathbb{Q}}^{(i)},m}_{t}:=\left\{ \textstyle\begin{array}{l@{\quad}l} dW_{t}^{(m)} + \lambda(t,Y_{t})_{m}dt, & m=1,\dots,N, \\ dW_{t}^{(m)} + a_{i}( (\partial_{y} u^{(i)}(t,Y_{t}))^{T}C(t))_{m}dt, & m = N+1,\dots,D. \end{array}\displaystyle \right. \end{aligned} $$

Therefore, the drift in the \(\hat{\mathbb{Q}}^{(i)}\)-dynamics of \(dY_{t} = C(t)dW_{t}\) is an affine function of \(Y_{t}\) with bounded time-dependent coefficients.

Since the volatility of \(\tilde{S}\) is \(e^{-rt}\), it suffices to verify the square-integrability property \(\mathbb{E}^{\hat{\mathbb {Q}}^{(i)}}[\int _{0}^{T} |\hat{H}^{(i)}_{t}|^{2}\,dt]=\int_{0}^{T} \mathbb{E}^{\hat{\mathbb {Q}}^{(i)}}[|\hat {H}^{(i)}_{t}|^{2}]\,dt<\infty\), where \(\hat{H}^{(i)}\) is defined by (A.17). To this end, we define the stopping times

$$\tau^{(k)} := \inf\{s>0:|Y_{s}|\ge k\}\land T\quad\mbox{for } k\in \mathbb{N}. $$

Because the functions \(\lambda\) and \(\partial_{y} u^{(i)}\) are affine with uniformly bounded time-dependent coefficients, the above expression for \(W^{\hat{\mathbb{Q}}^{(i)},m}\) allows us to find two positive constants \(C_{1}\) and \(C_{2}\) (independent of \(k\)) such that

$$\begin{aligned} \mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}\big[|Y_{t\land\tau ^{(k)}}|^{2}\big] &\le C_{1} + C_{2}\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}\bigg[\int_{0}^{t\land\tau ^{(k)}} |Y_{s}|^{2}\,ds \bigg] \\ &= C_{1} + C_{2}\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}\bigg[\int _{0}^{t\land\tau^{(k)}} |Y_{s\land\tau^{(k)}}|^{2}\,ds \bigg] \\ &\le C_{1} + C_{2}\int_{0}^{t} \mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}\big[|Y_{s\land\tau ^{(k)}}|^{2}\big] \,ds, \end{aligned}$$

where we have used Tonelli’s theorem in the last inequality. Next, the mapping \([0,T]\ni t\mapsto\mathbb{E}^{\hat{\mathbb {Q}}^{(i)}}[|Y_{t\land \tau^{(k)}}|^{2}]\) is continuous by the dominated convergence theorem. Therefore, Gronwall’s inequality produces the bound

$$\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}\big[|Y_{t\land\tau ^{(k)}}|^{2}\big] \le C_{1}e^{C_{2}t},\quad t\in[0,T]. $$

Fatou’s lemma then produces

$$\mathbb{E}^{\hat{\mathbb{Q}}^{(i)}}[|Y_{t}|^{2}] \le\liminf_{k\to \infty} \mathbb{E}^{\hat {\mathbb{Q}}^{(i)}}\big[|Y_{t\land\tau^{(k)}}|^{2}\big] \le C_{1}e^{C_{2}t},\quad t\in[0,T]. $$

Finally, the definition (A.17) of \(\hat{H}^{(i)}\) and the affineness of \(\lambda\) and \(\partial_{y} u^{(i)}\) ensure that there exists a constant \(C_{3}\) such that \(\mathbb{E}^{\hat{\mathbb {Q}}^{(i)}}[|\hat {H}^{(i)}_{t}|^{2}]\le C_{3}e^{C_{2}t}\). This latter expression is integrable on \([0,T]\) and the claim follows. □

Proof of Theorem 4.2

In this proof, the functions \(\tilde{\lambda}, \tilde{u}\) and \(\partial _{y}\tilde{u}\) refer to the functions from Theorem 4.1 (and its proof) when the endowment functions (4.1) are specified by the Taylor approximations (4.4).

The definitions (3.3) and (4.3) of \(\lambda\) and \(\tilde{\lambda}\) together with the triangle inequality give

$$\begin{aligned} \mathbb{E}[ \vert\lambda(t,Y_{t})- \tilde{\lambda}(t,Y_{t})\vert] \le\frac{1}{\tau_{\varSigma}} \Vert\bar{C}(t)\Vert_{F} \sum_{i=1}^{I} \mathbb{E}[ |\partial_{y} u^{(i)} (t,Y_{t}) -\partial_{y} \tilde{u}^{(i)} (t,Y_{t}) | ]. \end{aligned}$$

Therefore, it is enough to show that

$$ \mathbb{E}[ |\partial_{y_{d}} u^{(i)} (t,Y_{t}) -\partial_{y_{d}} \tilde {u}^{(i)} (t,Y_{t}) | ] \leq CT^{\frac{1+\alpha}{2}},\quad d=1,\dots ,D, i=1,\dots,I. $$

From the representation (A.9) of \(\partial_{y}u^{(i)}\) with \(f\) replaced by \(f^{(i)}(u)\), where \(f^{(i)}(u)\) is defined by (A.12), we get

$$\begin{aligned} &\vert\partial_{y_{d}}u^{(i)}(t,y)-\partial_{y_{d}}g^{(i)}(y) \vert \\ &\quad\leq\Big| \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \Big(g^{(i)}_{y_{d}}\big(y-L(t,T)z\big) -g^{(i)}_{y_{d}}(y)\Big) \,dz \Big\vert \\ &\quad\quad {}+ \Big\vert \int_{t}^{T} \int_{\mathbb{R}^{D}} \frac {e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big(L(t,s)^{-T}z\big)_{d} f^{(i)}(u)\big(s,y-L(t,s)z\big)\,dz\,ds\Big\vert . \end{aligned}$$
(A.20)

The first term in (A.20) can be estimated by

$$\begin{aligned} &\Big\vert \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \Big(g^{(i)}_{y_{d}}\big(y-L(t,T)z\big) -g^{(i)}_{y_{d}}(y)\Big) \,dz \Big\vert \\ &\quad=\Big\vert \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} g_{yy_{d}}^{(i)}(s)^{T} L(t,T)z \,dz \Big\vert \\ &\quad=\Big\vert \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi )^{D/2}}\big( g_{yy_{d}}^{(i)}(s)^{T}-g_{yy_{d}}^{(i)}(y)^{T} \big)L(t,T)z \,dz \Big\vert \\ &\quad\leq[\partial_{yy}g^{(i)}]_{\alpha}\int_{\mathbb{R}^{D}} \frac {e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} |s-y|^{\alpha}|L(t,T)z| \,dz \\ &\quad\leq[\partial_{yy}g^{(i)}]_{\alpha}\int_{\mathbb{R}^{D}} \frac {e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} |L(t,T)z|^{\alpha}|L(t,T)z| \,dz \\ &\quad\leq c[\partial_{yy}g^{(i)}]_{\alpha}(T-t)^{(1+\alpha)/2}\int _{\mathbb{R} ^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} |z|^{\alpha+1}\, dz \\ &\quad= c [\partial_{yy}g^{(i)}]_{\alpha}(T-t)^{(1+\alpha)/2}. \end{aligned}$$

Here, the first equality is produced by the mean value theorem, where \(s = s(z)\) lies on the line segment connecting \(y-L(t,T)z\) and \(y\). The second inequality uses that \(s\) lies on this segment, so that \(|s-y| \le |L(t,T)z|\), and the third inequality is due to the Cauchy–Schwarz inequality \(|L(t,T)z| \le \Vert L(t,T)\Vert_{F} |z|\) combined with Claim 3 of Lemma A.2.

The second term in (A.20) can be estimated similarly by

$$\begin{aligned} &\Big\vert \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big(L(t,s)^{-T}z\big)_{d} f^{(i)}(u)\big(s,y-L(t,s)z\big)\,dz\Big\vert \\ &\quad=\Big\vert \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \big(L(t,s)^{-T}z\big)_{d} \Big(f^{(i)}(u)\big(s,y-L(t,s)z\big)-f^{(i)}(u)(s,y)\Big)\,dz\Big\vert \\ &\quad\leq[f^{(i)}(u)]_{\alpha} \int_{\mathbb{R}^{D}} \frac{e^{-\frac{1}{2}|z|^{2}}}{(2\pi)^{D/2}} \Vert L(t,s)^{-T}\Vert_{F}|z| |L(t,s)z|^{\alpha}\,dz\\ &\quad\leq c [f^{(i)}(u)]_{\alpha} (s-t)^{\frac{\alpha-1}{2}}. \end{aligned}$$

By integrating \(s\) over \([t,T]\), we produce the overall estimate of (A.20) as

$$ \vert\partial_{y_{d}}u^{(i)}(t,y)-\partial_{y_{d}}g^{(i)}(y) \vert \leq C (T-t)^{(1+\alpha)/2}. $$
(A.21)

We also have

$$\begin{aligned} \vert\partial_{y_{d}}g^{(i)}(y)-\partial_{y_{d}}\tilde{g}^{(i)}(y) \vert &= \vert\partial_{y_{d}}g^{(i)}(y)-\partial_{y_{d}}g^{(i)}(0)-\partial _{yy_{d}}g^{(i)}(0)^{T}y \vert \\ &= \vert\partial_{yy_{d}}g^{(i)}(s)^{T}y -\partial _{yy_{d}}g^{(i)}(0)^{T}y \vert \\ &\leq\vert\partial_{yy_{d}}g^{(i)}(s)-\partial_{yy_{d}}g^{(i)}(0) \vert |y| \\ &\leq[\partial_{yy} g]_{\alpha}\vert y \vert^{1+\alpha} . \end{aligned}$$
(A.22)

Here the first equality follows from the definition of \(\tilde {g}^{(i)}\). The second equality is produced by the mean value theorem for a point \(s = s(y)\) on the line segment connecting \(y\) and 0. Finally, we claim that there exists a constant \(C\) such that

$$\begin{aligned} \vert\partial_{y_{d}}\tilde{g}^{(i)}(y) - \partial_{y_{d}}\tilde {u}^{(i)} (t,y) \vert &\leq C (T-t) (1+\vert y \vert) . \end{aligned}$$
(A.23)

To see this, we first note that

$$\begin{aligned} \vert\partial_{y_{d}}\tilde{g}^{(i)}(y) - \partial_{y_{d}}\tilde {u}^{(i)} (t,y) \vert = &\Big| \partial_{y_{d}}g^{(i)}(0)+\partial_{yy_{d}} g^{(i)}(0)^{T} y - \beta^{(i)}(T-t)_{d} \\ & {} -\Big(\big(\gamma^{(i)}(T-t) +\gamma^{(i)}(T-t)^{T}\big)y\Big)_{d}\Big|. \end{aligned}$$

Since \(\beta^{(i)}(0)_{d} = \partial_{y_{d}}g^{(i)}(0)\), the mean value theorem gives an \(s \in[0,T-t]\) with

$$\begin{aligned} \vert\partial_{y_{d}}g^{(i)}(0) - \beta^{(i)}(T-t)_{d} \vert = \vert (\beta ^{(i)})'(s)_{d}(T-t) \vert\le C(T-t), \end{aligned}$$

because the derivative \((\beta^{(i)})'\) is bounded on \([0,T]\) (the constant \(C\) does not depend on \(T\) as long as \(T< T_{0}^{\mathrm{Riccati}}\)). The estimate involving \(\gamma^{(i)}\) is similar and (A.23) follows.

By combining the estimates (A.21)–(A.23), we get

$$\begin{aligned} &\vert\partial_{y_{d}}u^{(i)}(t,y)-\partial_{y_{d}}\tilde{u}^{(i)}(t,y) \vert \\ &\quad\leq\vert\partial_{y_{d}}u^{(i)}(t,y)-\partial_{y_{d}}g^{(i)}(y) \vert + \vert\partial_{y_{d}}g^{(i)}(y) -\partial_{y_{d}}\tilde{g}^{(i)}(y) \vert \\ &\quad\quad{}+\vert\partial_{y_{d}}\tilde{g}^{(i)}(y)- \partial _{y_{d}}\tilde{u}^{(i)}(t,y) \vert \\ &\quad\leq C \big((T-t)^{\frac{1+\alpha}{2}} + |y|^{1+\alpha} + (T-t)(1+\vert y \vert)\big) . \end{aligned}$$
(A.24)

Finally, by taking expectations via (A.24), we obtain

$$\begin{aligned} &\mathbb{E}[ \vert\partial_{y_{d}}u^{(i)}(t,Y_{t})-\partial_{y_{d}}\tilde {u}^{(i)}(t,Y_{t}) \vert] \\ &\quad\leq\int_{\mathbb{R}^{D}} \varGamma(0,t,y) \vert\partial _{y_{d}}u^{(i)}(t,y)-\partial_{y_{d}}\tilde{u}^{(i)}(t,y) \vert \,dy \\ &\quad\leq C\int_{\mathbb{R}^{D}} \varGamma(0,t,y) \big(T^{\frac{1+\alpha }{2}} + \vert y \vert^{1+\alpha} + T(1+\vert y \vert)\big)\,dy\\ &\quad\leq CT^{\frac{1+\alpha}{2}}. \end{aligned}$$

The last inequality holds because we are considering \(T\in(0,1]\) and because

$$\int_{\mathbb{R}^{D}} \varGamma(0,t,y) \vert y \vert^{1+\alpha} \,dy \leq c\int_{\mathbb{R} ^{D}}\frac{1}{t^{D/2}} e^{-\frac{|y|^{2}}{\overline{\delta}t}}\vert y \vert^{1+\alpha} \,dy \leq c T^{\frac{1+\alpha}{2}} $$

for \(t\in[0,T]\). This estimate follows from the definition (A.3) of \(\varGamma\) and the bounds provided in Lemma A.2. □
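The Taylor-remainder mechanism behind (A.22) is elementary and can be observed directly. A minimal sketch (a sample smooth \(g\) with \(\alpha=1\), i.e., a Lipschitz second derivative; not from the paper):

```python
# Check of the bound (A.22) with alpha = 1: the gradient of g minus the
# gradient of its second-order Taylor polynomial at 0 is bounded by
# [g'']_1 * |y|^2, where [g'']_1 is a Lipschitz constant of g''.
import numpy as np

def dg(y):  return -np.sin(y) + 0.2*y    # g(y) = cos(y) + 0.1*y^2
def d2g(y): return -np.cos(y) + 0.2

lip_d2g = 1.0                            # |g'''| = |sin| <= 1

y = np.linspace(-2.0, 2.0, 401)
dg_taylor = dg(0.0) + d2g(0.0)*y         # gradient of the Taylor model
print(np.all(np.abs(dg(y) - dg_taylor) <= lip_d2g*np.abs(y)**2 + 1e-12))
```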


Cite this article

Choi, J.H., Larsen, K. Taylor approximation of incomplete Radner equilibrium models. Finance Stoch 19, 653–679 (2015). https://doi.org/10.1007/s00780-015-0268-9

