
Finance and Stochastics, Volume 21, Issue 1, pp 263–284

Market completion with derivative securities

Daniel C. Schwarz

Open Access

Abstract

Let \(S^{F}\) be a ℙ-martingale representing the price of a primitive asset in an incomplete market framework. We present easily verifiable conditions on the model coefficients which guarantee the completeness of the market in which, in addition to the primitive asset, one may also trade a derivative contract \(S^{B}\). Both \(S^{F}\) and \(S^{B}\) are defined in terms of the solution \(X\) to a two-dimensional stochastic differential equation: \(S^{F}_{t} = f(X_{t})\) and \(S^{B}_{t}:=\mathbb{E}[g(X_{1}) | \mathcal{F}_{t}]\). From a purely mathematical point of view, we prove that every local martingale under ℙ can be represented as a stochastic integral with respect to the ℙ-martingale \(S :=(S^{F}, S^{B})\). Notably, in contrast to recent results on the endogenous completeness of equilibrium markets, our conditions allow the Jacobian matrix of \((f,g)\) to be singular everywhere on \(\mathbb{R}^{2}\). Hence they cover as a special case the prominent example of a stochastic volatility model being completed with a European call (or put) option.

Keywords

Completeness · Derivatives · Integral representation · Diffusion · Martingales · Parabolic equations · Analytic functions · Jacobian determinant

Mathematics Subject Classification (2010)

60G44 · 60H05 · 91G20 · 35K15 · 35K90

JEL Classification

G10 

1 Introduction

Let \((\Omega ,\mathcal{F},\mathbb{P})\) be a probability space, consider a fixed time horizon equal to one and let \(\mathbf{F} = (\mathcal{F} _{t})_{t\in [0,1]}\) be a filtration satisfying the usual conditions with \(\mathcal{F}_{0}\) being ℙ-trivial and \(\mathcal{F}_{1} = \mathcal{F}\). Let \(S=(S^{j}_{t})\) be a \(d\)-dimensional stochastic process describing the evolution of the discounted prices of liquidly traded securities in a financial market and with the property that \(S\) is a (vector) martingale under the measure ℙ. The model is said to be complete if any contingent claim payoff can be obtained as the terminal value of a self-financing trading strategy. The second fundamental theorem of asset pricing (cf. [8]) allows us to restate the completeness property in purely mathematical terms as follows: every martingale \(M=(M_{t})\) admits an integral representation with respect to \(S\), that is,
$$ M_{t} = M_{0} + \int_{0}^{t} H_{u}\, \mathrm{d}S_{u},\quad t\in [0,1], $$
(1.1)
for some predictable \(S\)-integrable process \(H=(H^{j}_{t})\). The second fundamental theorem of asset pricing also asserts that the above statements are equivalent to ℙ being the unique martingale measure for \(S\) in the class of equivalent measures.

The process \(S\) may for example describe the prices of stocks or option contracts, which nowadays are often traded as liquidly as their underlyings. Depending on the application one has in mind, the construction of \(S\) differs significantly. In general there are three possibilities to consider. Given its initial value, \(S\) may be defined in a forward form, in terms of its predictable characteristics under the measure ℙ. In this case, the verification of the completeness property is straightforward. For example, if \(S\) is a driftless diffusion process under the measure ℙ with volatility matrix process \(\sigma =(\sigma_{t})\), then the market is complete if and only if \(\sigma \) has full rank \(\mathrm{d}\mathbb{P} \times \mathrm{d}t\) almost surely (cf. [10, Theorem 1.6.6]). Alternatively, \(S\) can be defined in a backward form, as the conditional expectation under ℙ of its given terminal value. Finally, some components of \(S\) may be defined in a forward form and others in a backward form leading to a forward–backward setup.
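In the two-dimensional diffusion case, the full-rank condition on \(\sigma\) amounts to the non-vanishing of its determinant at \(\mathrm{d}\mathbb{P}\times \mathrm{d}t\) almost every point. The following minimal sketch of this pointwise check is purely illustrative; the matrices and the tolerance are our own assumptions, not taken from the paper.

```python
# Illustrative sketch: for d = 2, "sigma has full rank" at a given (omega, t)
# reduces to det(sigma) != 0 for the realised 2x2 volatility matrix.
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def is_complete_at(sigma_t, tol=1e-12):
    """Pointwise full-rank check for a 2x2 volatility matrix (assumed data)."""
    return abs(det2(sigma_t)) > tol

# A rank-1 matrix: the pointwise completeness condition fails here.
assert not is_complete_at([[1.0, 0.0], [2.0, 0.0]])
# A non-degenerate matrix: the pointwise condition holds.
assert is_complete_at([[1.0, 0.2], [0.0, 0.5]])
```

The check is pointwise; in the continuous-time model it must hold \(\mathrm{d}\mathbb{P}\times\mathrm{d}t\) almost surely.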

In the present paper, we assume the last setup above and focus on the case of two dimensions, that is, \(d=2\). In particular, let \(S^{F} = (S ^{F}_{t})\) and \(S^{B} = (S^{B}_{t})\) be scalar-valued martingales under ℙ, such that
$$ S = \begin{pmatrix} S^{F} \\ S^{B} \end{pmatrix} . $$
One may view the forward component as the discounted price of a primitive asset and the backward component as that of a derivative security. That is, given a ℙ-Brownian motion \(W = (W^{j}_{t})_{j=1,2}\), a stochastic process \(\sigma^{F} = ( \sigma^{F,j}_{t})_{j=1,2}\) and an \(\mathcal{F}_{1}\)-measurable random variable \(\psi \), the processes \(S^{F}\) and \(S^{B}\) are defined by
$$\begin{aligned} S^{F}_{t} &= S^{F}_{0} + \int_{0}^{t} \sigma^{F}_{u}\, \mathrm{d}W_{u}, \\ S^{B}_{t} &:= \mathbb{E}[\psi \,|\,\mathcal{F}_{t}], \quad \text{for } t \in [0,1]. \end{aligned}$$
We are looking for easily verifiable conditions on \(\sigma^{F}\) and \(\psi \) guaranteeing the integral representation property of all ℙ-martingales with respect to \(S\), and hence the completeness of the market in which, in addition to the primitive asset \(S^{F}\), the derivative contract \(S^{B}\) can also be traded.

In principle, the proof of our main result, Theorem 2.2 below, generalizes to the \(d\)-dimensional case. The reason we present the two-dimensional case only is twofold: first, the structural conditions on the coefficients \(\sigma^{F}\) and \(\psi \) become very complex in higher dimensions; second, using our current methods, an extension to higher dimensions would require additional regularity of \(\psi \) and in particular exclude the payoff functions of call and put options, which are only once weakly differentiable.

For our analysis, we assume that \(\sigma^{F}\) and \(\psi \) are specified in terms of a solution \(X\) to a two-dimensional stochastic differential equation with drift vector \(b=b(t,x)\) and volatility matrix \(\sigma = \sigma (t,x)\). With respect to the space variable, our conditions are quite classical: \(b=b(t,\cdot )\) is once continuously differentiable and \(\sigma = \sigma (t,\cdot )\) is twice continuously differentiable and possesses a bounded inverse. Further, the functions themselves and their derivatives are bounded. With respect to time, our conditions are quite exacting: \(b=b(\cdot ,x)\) and \(\sigma = \sigma ( \cdot ,x)\) have to be real analytic on \((0,1)\).

Our results extend and rigorously prove ideas on the completion of markets with derivative securities first formulated in [16] and [4].

The paper [16] is concerned with the specific case of stochastic volatility models. The main result of that paper requires the derivative payoff function to be a convex function of the stock price only and, unless given by the special case of a European call or put option, to be twice continuously differentiable. Perhaps most limiting from the point of view of applicability, the volatility risk premium must be such that the drift coefficient of the volatility process under the equivalent martingale measure does not depend on the stock price. Moreover, the correlation between the asset price and its (stochastic) volatility process, as well as the volatility of the volatility process, must not depend on the stock price.

In [4], the setup is not restricted to the two-dimensional case. However, the key conditions in that paper are not placed on model primitives, but on the conditional expectation \(\mathbb{E}[(S^{F}_{1},\psi )|\mathcal{F}_{t}] = (v^{1},v^{2})(t,X _{t})\). In particular, \(v=(v^{1},v^{2})\) is assumed to be (jointly) real analytic in the time and space variables, and in the main theorem of the paper, the Jacobian matrix (with respect to \(x\)) of \(v=v(t,x)\) is assumed to be nonzero on some open subset of \((0,1) \times \mathbb{R}^{2}\).

Our work is intimately related to recent results on the integral representation of martingales, which were motivated by the problem of the endogenous completeness of continuous-time Radner equilibria in financial economics (cf. [11, 9, 15, 1]). In both cases, there exist securities in the market whose prices are constructed in backward form, as conditional expectations (of a stream of dividend payments in the case of Radner equilibria, or of the terminal payoff of a derivative security in our setting), and the goal is to establish conditions for the completeness of the financial market under consideration. The differences in the financial setup are twofold: first, in the classical setting of a Radner equilibrium, all security prices are defined in backward form, whereas the problem of market completion with derivative securities requires some security prices to be defined in forward form, leading to a forward–backward setup; second, in an equilibrium setting, the martingale measure for the price process \(S\) is determined endogenously in terms of the utility functions of individual agents, whereas in our setting the measure ℙ is given exogenously. The distinguishing feature of the present paper concerns an important technical assumption placed on the Jacobian matrix of the vector of terminal payoffs. If \(S^{F}_{1} = f(X_{1})\) and \(\psi = g(X_{1})\), the aforementioned results require the Jacobian matrix
$$ \begin{pmatrix} f_{x^{1}} & f_{x^{2}} \\ g_{x^{1}} & g_{x^{2}} \end{pmatrix} (x), \quad x\in \mathbb{R}^{2}, $$
to have full rank at least on some open subset of \(\mathbb{R}^{2}\). This condition is not satisfied even in the most pivotal example of the completion of a stochastic volatility model with a European call or put option, where \(f(x^{1},x^{2}) = f(x^{1})\) and \(g(x^{1},x^{2}) = g(x ^{1})\) and hence the Jacobian matrix is singular everywhere on \(\mathbb{R}^{2}\). We replace this requirement with a novel and weaker condition, involving aside from \(f\) and \(g\) also the coefficients of the state process \(b\) and \(\sigma \), which is satisfied in the aforementioned example of a typical stochastic volatility model being completed with a European call option (see Sect. 6).

Due to the differences in the financial setup described above, we do not include any equilibrium-related examples in this paper. In principle, however, our structural condition on the model primitives could be easily applied in an equilibrium setting, where it would replace the assumption that the Jacobian matrix of terminal dividend payments has full rank, and therefore could be used to establish the endogenous completeness of Radner equilibria.

At first sight, it may appear that the most restrictive condition, limiting the applicability of our result, is the boundedness assumption on the coefficients of the diffusion \(X\). This assumption stems from the theory of elliptic and parabolic partial differential equations, which plays an essential part in our proofs. However, we demonstrate in Sect. 6 how we can still accommodate popular models from financial mathematics such as geometric Brownian motion or mean-reverting processes by means of suitable changes of variables.

Notation and basic concepts

Let \(\mathbf{X}\) be a Banach space with norm \(\|\cdot \|\). In the sequel, we frequently use maps \(h:[0,1]\to \mathbf{X}\) which are Hölder-continuous on \([0,1]\), that is, there exist constants \(N>0\) and \(\delta >0\) such that
$$ \|h(u)-h(t)\| \leq N|u-t|^{\delta }, \quad u,t\in [0,1], $$
and analytic on \((0,1)\), that is, for every \(u\in (0,1)\), there exist \(\epsilon (u)>0\) and a family \(\{A_{n}(u)\}\) of elements in \(\mathbf{X}\) such that
$$ h(t) = \sum_{n=0}^{\infty } A_{n}(u)(t-u)^{n}, \quad t\in (0,1), |t-u|< \epsilon (u). $$
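A simple scalar example with \(\mathbf{X} = \mathbb{R}\) (our illustration, not taken from the text): the map \(h(t)=\sqrt{t}\) is Hölder-continuous on \([0,1]\) with exponent \(\delta = 1/2\), since
$$ |h(u)-h(t)| = |\sqrt{u}-\sqrt{t}| \leq |u-t|^{1/2}, \quad u,t\in [0,1], $$
and it is analytic on \((0,1)\), the power series of \(\sqrt{t}\) about \(u\) converging for \(|t-u|<\epsilon (u)=u\); it is, however, not analytic at \(t=0\). This illustrates why analyticity is only required on the open interval.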
For multi-indices \(\alpha = (\alpha_{1},\ldots ,\alpha_{d})\) of nonnegative integers, we use the notation convention \(|\alpha |:= \sum_{i=1}^{d} \alpha_{i}\) and
$$ D^{\alpha } :=\frac{\partial^{|\alpha |}}{\partial x^{\alpha_{1}}_{1} \cdots \partial x^{\alpha_{d}}_{d}}. $$
Let \(U\subset \mathbb{R}^{d}\). Throughout the text, the following spaces will be used:

\(\mathbf{L}^{p}_{\mathrm{loc}}(U)\) denotes the Lebesgue space of locally \(p\)-integrable, real-valued functions \(h\) on \(U\): for every bounded, open subset \(V\) of \(U\), \(h\in \mathbf{L}^{p}(V)\); \(\mathbf{L}^{p}_{ \mathrm{loc}}:=\mathbf{L}^{p}_{\mathrm{loc}}(\mathbb{R}^{2})\).

\(\mathbf{L}^{p}(U)\) (for \(p\geq 1\)) is the Lebesgue space of Lebesgue-measurable, real-valued functions \(h\) on \(U\) with the norm \(\|h\|_{\mathbf{L}^{p}(U)}:=(\int_{U} |h|^{p}\, \mathrm{d}x )^{1/p}\); \(\mathbf{L}^{p} :=\mathbf{L}^{p}(\mathbb{R}^{2})\).

\(\mathbf{L}^{\infty }(U)\) is the Lebesgue space of essentially bounded, real-valued functions \(h\) on \(U\) with the norm \(\|h\|_{\mathbf{L}^{ \infty }(U)}:=\text{ess} \sup_{U}|h|\); \(\mathbf{L}^{\infty } := \mathbf{L}^{\infty }(\mathbb{R}^{2})\).

\(\mathbf{C}(U)\) is the Banach space of all bounded and continuous real-valued functions \(h\) on \(U\) with the norm \(\|h\|_{\mathbf{C}(U)}:= \sup_{U}|h|\); \(\mathbf{C}:=\mathbf{C}(\mathbb{R}^{2})\).

\(\mathbf{C}^{k}(U)\) is the Banach space of all \(k\)-times continuously differentiable, real-valued functions \(h\) on \(U\) with the norm
$$ \| h\|_{\mathbf{C}^{k}(U)} = \|h\|_{\mathbf{C}(U)} + \sum_{1\leq |\alpha |\leq k}\|D^{\alpha } h\|_{\mathbf{C}(U)}; $$
\(\mathbf{C}^{k} :=\mathbf{C}^{k}(\mathbb{R}^{2})\).
Recall that a locally integrable function \(h\) on \(U\) is weakly differentiable if for every index \(j=1,\ldots ,d\), there exists a locally integrable function \(g^{j}\) such that the identity
$$ \int_{U} g^{j}(x)\varphi (x)\, \mathrm{d}x = -\int_{U} h(x) \frac{ \partial \varphi }{\partial x^{j}}(x)\, \mathrm{d}x $$
holds for every function \(\varphi \) belonging to \(\mathbf{C}^{\infty } _{0}(U)\), the space of infinitely many times differentiable functions with compact support in \(U\). In this case we define \(h_{x^{j}}:=g^{j}\). Weak derivatives of higher orders are defined recursively.
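As a numerical illustration of the defining identity (one-dimensional for simplicity, an assumption made here purely for exposition): \(h(x)=|x|\) is weakly differentiable with weak derivative \(g(x)=\operatorname{sign}(x)\), and the integration-by-parts identity can be checked against a smooth bump function with compact support.

```python
import math

# Sketch: verify int g*phi dx = -int h*phi' dx for h(x) = |x|,
# g(x) = sign(x), with a C^infty test function supported in (-0.5, 1).
c, r = 0.25, 0.75   # centre and radius of the (assumed) bump function

def phi(x):
    u = (x - c) / r
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

def dphi(x):        # analytic derivative of the bump function
    u = (x - c) / r
    if abs(u) >= 1.0:
        return 0.0
    return phi(x) * (-2.0 * u) / ((1.0 - u * u) ** 2 * r)

n, a, b = 170001, -0.6, 1.1
dx = (b - a) / (n - 1)
xs = [a + i * dx for i in range(n)]
lhs = sum(math.copysign(1.0, x) * phi(x) for x in xs) * dx   # int g*phi dx
rhs = -sum(abs(x) * dphi(x) for x in xs) * dx                # -int h*phi' dx
assert abs(lhs - rhs) < 1e-4
```

The test function is deliberately not centred at the origin, so that both integrals are nonzero and the identity is checked nontrivially.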

As is common, for \(p\geq 1\), we denote by \(p'\) the conjugate exponent of \(p\), defined by \(p' :=p/(p-1)\) for \(1< p<\infty \), \(p':=\infty \) if \(p=1\), and \(p':=1\) if \(p=\infty \).

With these definitions in mind we define the following spaces:

\(\mathbf{W}^{m}_{p}(U)\) (for \(m\in \{0,1,\ldots \}\) and \(p\geq 1\)) is the Banach space of \(m\)-times weakly differentiable functions \(h\) with the norm
$$ \| h \|_{\mathbf{W}^{m}_{p}(U)} :=\|h\|_{\mathbf{L}^{p}(U)} + \sum_{1\leq |\alpha |\leq m} \| D^{\alpha } h\|_{\mathbf{L}^{p}(U)}; $$
the case \(m=0\) recovers the classical Lebesgue spaces \(\mathbf{L}^{p}(U)\). \(\mathbf{W}^{m}_{p}:=\mathbf{W}^{m}_{p}( \mathbb{R}^{2})\).

\(\mathbf{W}^{m}_{p,0}(U)\) (for \(m\in \{0,1,\ldots \}\) and \(p\geq 1\)) is the Banach space obtained by taking the closure of \(\mathbf{C}^{ \infty }_{0}(U)\) in the space \(\mathbf{W}^{m}_{p}(U)\).

\(\mathbf{W}^{-m}_{p}(U)\) (for \(m\in \{0,1,\ldots \}\) and \(p \geq 1\)) is the Banach space of all distributions \(h\) of the form
$$ h = \sum_{0\leq |\alpha |\leq m} (-1)^{|\alpha |}\langle D^{\alpha } \cdot , u_{\alpha }\rangle , $$
(1.2)
where \(\langle \cdot ,\cdot \rangle \) denotes the inner product in \(\mathbf{L}^{2}\) and \(u_{\alpha }\in \mathbf{L}^{p}(U)\), with the norm
$$ \|h\|_{\mathbf{W}^{-m}_{p}(U)} :=\min \bigg\{ \sum_{0\leq |\alpha | \leq m} \| u_{\alpha }\|_{\mathbf{L}^{p}(U)} : (u_{\alpha }) \text{ satisfies (1.2)}\bigg\} . $$
For \(T\subset \mathbb{R}\), we also define \(\mathbf{W}^{r,m+2r}_{p}(T \times U)\) (for \(r,m\in \{0,1,\ldots \}\) and \(p\geq 1\)) as the Banach space of functions \(h=h(t,x)\), \(r\)-times weakly differentiable in \(t\) and \((m+2r)\)-times weakly differentiable in \(x\) with the norm
$$ \| h \|_{\mathbf{W}^{r,m+2r}_{p}(T\times U)} := \sum_{\substack{|\alpha | + 2\rho \leq m+2r \\ \rho \leq r}} \| D^{\alpha } \partial_{t}^{\rho } h\|_{\mathbf{L}^{p}(T \times U)}. $$

Our notation is in agreement with standard notation from linear algebra. Given two vectors \(x\), \(y\) in \(\mathbb{R}^{d}\), \(xy\) denotes the scalar product and \(|x|:=\sqrt{xx}\). Given a matrix \(M\in \mathbb{R}^{m \times n}\) with \(m\) rows and \(n\) columns, \(Mx\) denotes its product with the column vector \(x\), \(M^{\star }\) its transpose and \(\|M\|_{F} :=\sqrt{ \mathrm{tr}(MM^{\star })}\). For an \(n\times n\) matrix \(M\), we denote the determinant of \(M\) either by \(|M|\) or by \(\det {M}\). Let \(\ell = (\ell _{1},\ldots ,\ell_{k})\) denote a multi-index complying with the condition \(1\leq \ell_{1}<\cdots <\ell_{k}\leq d\). Given \(n\times n\) matrices \(M\), \(C^{1},\ldots ,C^{k}\), we write \(M(\ell ; C^{1},\ldots ,C ^{k})\) for the matrix that is obtained from \(M\) by replacing the \(\ell_{p}\)th column of \(M\) by the \(\ell_{p}\)th column of \(C^{p}\) for \(p=1,\ldots ,k\), while keeping the remaining columns unchanged; if \(k>n\), \(M(\ell ; C^{1},\ldots ,C^{k}):=0\). Let \(A\) be an operator on a Banach space \(\mathbf{X}\) and \(M\) an \(n\times n\) matrix such that \(m^{ij}\) is in the domain of \(A\), \(i,j=1,\ldots ,n\). We write \(AM\) for the entrywise application of the operator \(A\); so \(AM :=(Am^{ij})_{i,j=1, \ldots ,n}\).
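The column-replacement notation \(M(\ell ; C^{1},\ldots ,C^{k})\) can be made concrete with a short sketch (pure Python, our own illustration; the text uses 1-based column indices, which we preserve):

```python
# Sketch of M(ell; C^1, ..., C^k): replace the ell_p-th column of M by the
# ell_p-th column of C^p, keeping the remaining columns unchanged.
def replace_columns(M, ell, Cs):
    n = len(M)
    if len(ell) > n:
        # the convention M(ell; C^1, ..., C^k) := 0 when k > n, rendered
        # here (an assumption of this sketch) as the zero matrix
        return [[0] * n for _ in range(n)]
    R = [row[:] for row in M]            # copy of M
    for p, col in enumerate(ell):
        j = col - 1                      # 1-based indices in the text
        for i in range(n):
            R[i][j] = Cs[p][i][j]
    return R

M = [[1, 2], [3, 4]]
C = [[10, 20], [30, 40]]
# Replace column 2 of M by column 2 of C:
assert replace_columns(M, (2,), (C,)) == [[1, 20], [3, 40]]
```

This operation is what appears inside the determinant expansions used later for the functions \(A\), \(B\) and \(C\).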

For a suitably regular function \(h=h(t,x):T\times \mathbb{R}^{d} \to \mathbb{R}^{n}\), we denote by \(J[h]=J[h](t,x)\) the Jacobian matrix function of the vector-valued function \(h(t,\cdot )\), i.e.,
$$ J[h](t,x) := \begin{pmatrix} \nabla_{x} h^{1} \\ \vdots \\ \nabla_{x} h^{n} \end{pmatrix} (t,x), \quad (t,x)\in T\times \mathbb{R}^{d}, $$
where \(\nabla_{x}h\) is the gradient vector of \(h(t,\cdot )\), that is, \(\nabla_{x}h :=(\partial_{x^{1}}h,\ldots ,\partial_{x^{d}}h)\). Similarly, for a suitably regular function \(h=h(t,x):T\times \mathbb{R}^{d}\to \mathbb{R}\), we denote by \(H[h]=H[h](t,x)\) the Hessian matrix function of the scalar-valued function \(h(t,\cdot )\), that is, \(H[h](t,x) :=J[\nabla_{x} h^{\star }](t,x)\), for \((t,x) \in T\times \mathbb{R}^{d}\).

Throughout the text, \(N>0\) denotes a constant whose value may vary from line to line.

2 Main result: forward–backward martingale representation

Let \(b = b(t,x) : [0,1]\times \mathbb{R}^{2} \to \mathbb{R} ^{2}\) and \(\sigma = \sigma (t,x) : [0,1]\times \mathbb{R}^{2} \to \mathbb{R}^{2\times 2}\) be measurable functions, which for all \(i,j=1,2\) satisfy the following assumption:
  1. (A1)
    The maps \(t\mapsto b^{j}(t,\cdot )\) and \(t\mapsto \sigma^{ij}(t, \cdot )\) from \([0,1]\) to \(\mathbf{C}\) are Hölder-continuous and their restriction to \((0,1)\) is analytic. The map \(t\mapsto \sigma ^{ij}(t,\cdot )\) is continuous from \([0,1]\) to \(\mathbf{C}^{2}\) and the map \(t\mapsto b^{j}(t,\cdot )\) is continuous from \([0,1]\) to \(\mathbf{C}^{1}\). The matrix \(\sigma \) is invertible and there exists a constant \(N>0\) such that
    $$ \|\sigma^{-1}(t,x)\|_{F}\leq N, \quad (t,x) \in [0,1]\times \mathbb{R}^{2}. $$
    (2.1)
     

Remark 2.1

Note that (2.1) is equivalent to the uniform ellipticity of the covariance matrix function \(a:=\sigma \sigma^{ \star }\), i.e.,
$$ ya(t,x)y = \|\sigma^{\star }(t,x) y\|_{F}^{2} \geq \frac{1}{N^{2}}|y|^{2}, \quad y\in \mathbb{R}^{2},\ (t,x)\in [0,1]\times \mathbb{R}^{2}. $$
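The identity \(ya(t,x)y = \|\sigma^{\star}(t,x)y\|^{2}\) underlying Remark 2.1 is easily verified numerically; the matrix below is our own illustrative choice, not a model from the paper.

```python
# Quick numerical check of y a y = |sigma^T y|^2 for a = sigma sigma^T,
# with an illustrative invertible 2x2 matrix.
sigma = [[1.0, 0.3], [0.0, 0.8]]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = mat_mul(sigma, transpose(sigma))   # covariance matrix a = sigma sigma^T

for y in [(1.0, 0.0), (0.5, -2.0), (3.0, 1.5)]:
    yay = sum(y[i] * a[i][j] * y[j] for i in range(2) for j in range(2))
    z = [sigma[0][0] * y[0] + sigma[1][0] * y[1],   # z = sigma^T y
         sigma[0][1] * y[0] + sigma[1][1] * y[1]]
    assert abs(yay - (z[0] ** 2 + z[1] ** 2)) < 1e-12
```

Since \(\sigma\) here has determinant \(0.8\neq 0\), the quadratic form is strictly positive for \(y\neq 0\), matching the uniform ellipticity bound.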
Let \(X_{0}\in \mathbb{R}^{2}\). The assumptions on \(b\) and \(\sigma \) in (A1) imply that given a complete, filtered probability space \((\Omega ,\mathcal{F}_{1},\mathbf{F}=(\mathcal{F} _{t})_{t\in [0,1]},\mathbb{P})\) on which is defined a Brownian motion \(W\) with values in \(\mathbb{R}^{2}\), there exists a unique stochastic process \(X\), also taking values in \(\mathbb{R}^{2}\), such that
$$ X_{t} = X_{0} + \int_{0}^{t} b(u,X_{u})\, \mathrm{d}u + \int_{0}^{t} \sigma (u,X_{u})\, \mathrm{d}W_{u}, \quad t \in [0,1], $$
(cf. [7, Theorem 5.2.2]). Here the filtration \(\mathbf{F}\) is assumed to be the augmentation of the Brownian filtration, that is,
$$ \mathcal{F}_{t} :=\sigma (\mathcal{F}^{W}_{t} \cup \mathcal{N}), \quad t\in [0,1], $$
where \(\mathcal{F}^{W}_{t}\) denotes the \(\sigma \)-field generated by \((W_{u})_{u\in [0,t]}\) and \(\mathcal{N}\) the collection of all ℙ-nullsets.
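The state process \(X\) can be approximated by an Euler–Maruyama discretisation of the stochastic differential equation above. The sketch below is illustrative only; the coefficients \(b\) and \(\sigma\) are placeholder functions with bounded coefficients chosen by us, not the paper's model primitives.

```python
import math, random

# Euler-Maruyama sketch for the two-dimensional SDE
#   dX = b(t, X) dt + sigma(t, X) dW  on [0, 1].
def simulate_X(x0, b, sigma, n_steps=1000, seed=0):
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    x, t = list(x0), 0.0
    for _ in range(n_steps):
        dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(2)]
        bx, sx = b(t, x), sigma(t, x)
        x = [x[j] + bx[j] * dt + sx[j][0] * dw[0] + sx[j][1] * dw[1]
             for j in range(2)]
        t += dt
    return x

# Placeholder coefficients (assumed): bounded, smooth drift and constant,
# invertible volatility, loosely in the spirit of assumption (A1).
b = lambda t, x: [-x[0] / (1 + x[0] ** 2), -x[1] / (1 + x[1] ** 2)]
sigma = lambda t, x: [[1.0, 0.0], [0.2, 0.9]]

x1 = simulate_X([0.0, 0.0], b, sigma)
assert len(x1) == 2 and all(math.isfinite(c) for c in x1)
```

Such a path simulation is what one would use to approximate the terminal value \(X_{1}\) entering the payoff \(\psi\) by Monte Carlo.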
Let the measurable function \(r:[0,1]\times \mathbb{R}^{2}\to \mathbb{R}\) satisfy the following:
  1. (A2)
    The map \(t\mapsto r(t,\cdot )\) is Hölder-continuous as a map from \([0,1]\) to \(\mathbf{C}\), continuous as a map from \([0,1]\) to \(\mathbf{C}^{1}\), analytic as a map from \((0,1)\) to \(\mathbf{C}\). The function \(r\) is nonnegative, i.e.,
    $$ r(t,x)\geq 0, \quad (t,x)\in [0,1]\times \mathbb{R}^{2}. $$
     
Let the measurable function \(f=f(t,x):[0,1]\times \mathbb{R}^{2} \to \mathbb{R}\) be three times weakly differentiable with respect to \(x\) and assume that there exists a constant \(N>0\) such that for \(j,k,\ell =1,2\), we have:
  1. (A3)

    The map \(t\mapsto e^{-N|\cdot |}\partial_{x^{j}x^{k}}f(t, \cdot )\) from \((0,1)\) to \(\mathbf{L}^{\infty }\) is analytic, the map \(t\mapsto e^{-N|\cdot |}\partial_{x^{j}}f(t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{\infty }\) is continuously differentiable, and the map \(t\mapsto e^{-N|\cdot |}\partial_{x^{j}x^{k}x^{\ell }}f(t, \cdot )\) from \([0,1]\) to \(\mathbf{L}^{\infty }\) is continuous.

     
Recall that \(a :=\sigma \sigma^{\star }\) is the covariance function of \(X\). We denote by \(\mathcal{L}^{X}(t)\), \(t\in [0,1]\), the infinitesimal generator of the process \(X\), i.e.,
$$\begin{aligned} \mathcal{L}^{X}(t) &:=\frac{1}{2}\sum_{j,k=1}^{2} a^{jk}(t,x)\frac{ \partial^{2}}{\partial x^{j} \partial x^{k}} + \sum_{j=1}^{2} b^{j}(t,x)\frac{ \partial }{\partial x^{j}}, \end{aligned}$$
and define the functions \(A=A(t,x)\), \(B=B(t,x)\) and \(C = C(t,x)\) on \([0,1]\times \mathbb{R}^{2}\) by
$$ \begin{aligned} A^{jk} &:=|J[f,a^{jk}]|-2(-1)^{j}(H[f]a)^{(3-j)k},\\ B^{j} &:=|J[f,b ^{j}]| -(-1)^{j}\big(\partial_{t} + \mathcal{L}^{X}(t) - r\big) \partial_{x^{(3-j)}}f,\\ C &:=|J[f,r]|, \end{aligned} $$
for \(j,k=1,2\).
For suitably regular functions \(v=v(x)\), \(\varphi = \varphi (x)\) on \(\mathbb{R}^{2}\), for a bounded, open set \(K\) in \(\mathbb{R}^{2}\) and for \(t\in [0,1]\), we define the pairing
$$\begin{aligned} \mathcal{B}_{K}[v,\varphi ;t] &:=\int_{K} \Biggl(\frac{1}{2}\sum_{j,k=1} ^{2}A^{jk}(t,x)\frac{\partial v}{\partial x^{j}}\frac{\partial \varphi }{\partial x^{k}} \\ & \quad\ {}- \sum_{j=1}^{2}\bigg(B^{j} - \frac{1}{2} \sum_{k=1}^{2}\frac{ \partial A^{jk}}{\partial x^{k}}\bigg)(t,x)\frac{\partial v}{\partial x^{j}}\varphi +C(t,x)v\varphi \Biggr) \,\mathrm{d}x. \end{aligned}$$
Let the measurable function \(g = g(x) :\mathbb{R}^{2}\to \mathbb{R}\) be once weakly differentiable and assume that there exists a constant \(N>0\) with:
  1. (A4)
    Either the Jacobian matrix \(J[f,g](1,\cdot )\) has full rank almost everywhere on \(\mathbb{R}^{2}\), or for every bounded, open set \(K\) in \(\mathbb{R}^{2}\), there exists a function \(\varphi =\varphi (x)\) belonging to \(\mathbf{W}^{1}_{p,0}(K)\) for some \(p\geq 1\) such that \(\mathcal{B}_{K}[g,\varphi ;1]\neq 0\) and
    $$ \biggl| \frac{\partial g}{\partial x^{j}}(x)\biggr| \leq e^{N(1+|x|)}, \quad x\in \mathbb{R}^{2},\ j=1,2. $$
     
Given the above definitions, we define the scalar-valued random variable \(\psi \) by
$$ \psi :=g(X_{1})e^{-\int_{0}^{1}r(t,X_{t})\, \mathrm{d}t}. $$

The main result of the paper is

Theorem 2.2

(Forward–backward martingale representation)

Suppose that (A1)–(A4) hold. Then the solution \((S^{F},S^{B},Z)\) to the forward–backward stochastic differential equation
$$ \left\{ \begin{aligned} S^{F}_{t} &= S^{F}_{0} + \int_{0}^{t} e^{-\int_{0}^{u} r(s,X_{s})\, \mathrm{d}s}(\nabla_{x} f\sigma )(u,X_{u})\, \mathrm{d}W_{u},\\ S^{B} _{t} &= e^{-\int_{0}^{1} r(u,X_{u})\, \mathrm{d}u}g(X_{1}) - \int_{t} ^{1} e^{-\int_{0}^{u} r(s,X_{s})\, \mathrm{d}s}Z_{u}\, \mathrm{d}W_{u} \end{aligned} \right. $$
(2.2)
is well defined. Moreover, every martingale \(M\) under ℙ is a stochastic integral with respect to the two-dimensional ℙ-martingale \(S = (S^{F},S^{B})\), that is, (1.1) holds and the market model is complete under ℙ.

Remark 2.3

From a purely theoretical point of view, Theorem 2.2 asserts that under (A1)–(A4), the volatility process of the forward–backward stochastic differential equation (2.2) has full rank \(\mathrm{d}\mathbb{P} \times \mathrm{d}t\) almost surely.

Remark 2.4

If the function \(g=g(x)\) has slightly better regularity, we may interpret the structural condition stated in (A4) in a classical sense. To illustrate this, we define the linear differential operator
$$ \mathcal{Q}(t) :=\frac{1}{2}\sum_{j,k=1}^{2}A^{jk}(t,x)\frac{\partial ^{2} }{\partial x^{j}\partial x^{k}} + \sum_{j=1}^{2} B^{j}(t,x) \frac{ \partial }{\partial x^{j}} - C(t,x), \quad t\in [0,1], $$
and assume that \(g=g(x)\) is twice weakly differentiable. Then the existence, for every bounded, open set \(K\) in \(\mathbb{R}^{2}\), of a function \(\varphi \) with \(\mathcal{B}_{K}[g,\varphi ;1]\neq 0\) is equivalent to the assumption that \(\mathcal{Q}(1)g \neq 0\) almost everywhere on \(\mathbb{R}^{2}\).

The proof of Theorem 2.2 is given in Sect. 5 and relies on specific smoothness and integrability properties of the solution to a parabolic equation, which we obtain in Sect. 3 and on the invertibility of a Jacobian matrix, which we study in Sect. 4.

3 Regularity of the solution to the associated parabolic equation

For \((t,x)\in [0,1]\times \mathbb{R}^{2}\), consider an elliptic operator
$$ \mathcal{G}(t) :=\sum_{j,k=1}^{2} a^{jk}(t,x)\frac{\partial^{2}}{ \partial x^{j} \partial x^{k}} + \sum_{j=1}^{2} b^{j}(t,x)\frac{ \partial }{\partial x^{j}} + c(t,x), $$
(3.1)
where the coefficients \(a^{jk}, b^{j}, c : [0,1]\times \mathbb{R}^{2} \to \mathbb{R}\) are measurable functions and satisfy:
  1. (B1)
    The maps \(t\mapsto a^{jk}(t,\cdot )\), \(t\mapsto b^{j}(t,\cdot )\), \(t\mapsto c(t,\cdot )\) from \([0,1]\) to \(\mathbf{C}\) are Hölder-continuous and their restriction to \((0,1)\) is analytic. The map \(t\mapsto a^{jk}(t,\cdot )\) is continuous from \([0,1]\) to \(\mathbf{C}^{2}\) and the maps \(t\mapsto b^{j}(t,\cdot )\), \(t\mapsto c(t, \cdot )\) are continuous from \([0,1]\) to \(\mathbf{C}^{1}\). The matrix \(a\) is symmetric, \(a^{ij} = a^{ji}\), and uniformly elliptic, i.e., there exists \(N>0\) such that
    $$ ya(t,x)y\geq \frac{1}{N^{2}}|y|^{2},\quad (t,x) \in [0,1]\times \mathbb{R}^{2}, y\in \mathbb{R}^{2}, $$
    and the function \(c\) is nonpositive, i.e.,
    $$ c(t,x) \leq 0, \quad (t,x)\in [0,1]\times \mathbb{R}^{2}. $$
     
Let \(g=g(x) : \mathbb{R}^{2} \to \mathbb{R}\) be a measurable function such that for some \(p>1\), we have:
  1. (B2)

    The function \(g\) belongs to \(\mathbf{W}^{1}_{p}\).

     

Theorem 3.1

Suppose that conditions (B1) and (B2) hold. Then there exists a unique measurable function \(v=v(t,x)\) on \([0,1]\times \mathbb{R}^{2}\) such that
  1. 1.

    \(t\mapsto v(t,\cdot )\) is a continuous map from \([0,1]\) to \(\mathbf{W}^{1}_{p}\);

     
  2. 2.

    \(t\mapsto v(t,\cdot )\) is an analytic map from \((0,1)\) to \(\mathbf{W} ^{2}_{p}\);

     
  3. 3.

    \(t\mapsto v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{3}_{p}\);

     
  4. 4.

    \(t\mapsto \partial_{t} v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{1}_{p}\);

     
and such that \(v=v(t,x)\) solves the homogeneous Cauchy problem
$$\begin{aligned} \left( \frac{\partial }{\partial t} + \mathcal{G}(t)\right) v &= 0, \quad t\in [0,1), \end{aligned}$$
(3.2)
$$\begin{aligned} v(1,\cdot ) &= g. \end{aligned}$$
(3.3)

Proof

By assumption (B1), we know that for each \(t\in [0,1]\) and \(j,k=1,2\), the function \(a^{jk}(t,\cdot )\) is in \(\mathbf{C}^{2}\). In particular, the first-order partial derivatives of \(a^{jk}\) with respect to \(x\) are bounded and therefore the matrix \(a\) is uniformly continuous with respect to \(x\). Under the assumptions (B1) and (B2), the assertions of items 1 and 2 are immediately obtained upon making the time change \(t\to 1-t\) in [11, Theorem 3.1].

In addition, Theorem 3.1 in [11] tells us that \(t\mapsto v(t,\cdot )\) is a continuously differentiable map from \([0,1)\) to \(\mathbf{L}^{p}\) and a continuous map from \([0,1)\) to \(\mathbf{W}^{2}_{p}\), which implies that \(v=v(t,x)\) belongs to \(\mathbf{W}^{1,2}_{p}([0,1)\times \mathbb{R}^{2})\). Therefore, given the symmetry and uniform ellipticity of the matrix function \(a=a(t,x)\), the uniform continuity of \(a(t,\cdot )\), the fact that each function \(a^{jk}(t,\cdot )\), \(b^{j}(t,\cdot )\), \(c(t,\cdot )\) belongs to \(\mathbf{C}^{1}\) and the nonnegativity of \(c=c(t,x)\), we may use [14, Corollary 5.2.4] to deduce that \(v=v(t,x)\) in fact belongs to \(\mathbf{W}^{1,3}_{p}([0,1)\times \mathbb{R}^{2})\). The regularity in items 3 and 4 follows immediately. □

In the next section, we need the following corollary of Theorem 3.1, where instead of (B2), we assume that the measurable function \(g=g(x)\) is once weakly differentiable and has the following property:
  1. (B3)
    There exists a constant \(N\geq 0\) such that
    $$ e^{-N|\cdot |}\frac{\partial g}{\partial x^{j}} (\cdot ) \in \mathbf{L}^{\infty },\quad j=1,2. $$
     
Fix a function \(\phi = \phi (x) : \mathbb{R}^{2} \to \mathbb{R}\) which satisfies
$$ \phi \in \mathbf{C}^{\infty }(\mathbb{R}^{2}) \text{ and } \phi (x) = |x| \text{ when } |x|\geq 1. $$
(3.4)
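One admissible choice of such a function (a construction of our own, not specified in the text): let \(\eta \in \mathbf{C}^{\infty }(\mathbb{R})\) satisfy \(\eta \equiv 0\) on \((-\infty ,1/4]\), \(\eta \equiv 1\) on \([3/4,\infty )\) and \(0\leq \eta \leq 1\), and set
$$ \phi (x) := \big(1-\eta (|x|)\big)\tfrac{1}{2} + \eta (|x|)\,|x|, \quad x \in \mathbb{R}^{2}. $$
Then \(\phi \) is smooth (near the origin it is constant, and away from the origin \(x\mapsto |x|\) is smooth), and \(\phi (x)=|x|\) whenever \(|x|\geq 3/4\), so in particular whenever \(|x|\geq 1\), as required by (3.4).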

Corollary 3.2

Suppose that conditions (B1) and (B3) hold. Let \(\phi =\phi (x)\) satisfy condition (3.4). Then there exist a unique continuous function \(v=v(t,x)\) on \([0,1]\times \mathbb{R}^{2}\) and a constant \(N\geq 0\) such that for every \(p\geq 1\),
  1. 1.

    \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is a continuous map from \([0,1]\) to \(\mathbf{W}^{1}_{p}\);

     
  2. 2.

    \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is an analytic map from \((0,1)\) to \(\mathbf{W}^{2}_{p}\);

     
  3. 3.

    \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{3}_{p}\);

     
  4. 4.

    \(t\mapsto e^{-N\phi (\cdot )}\partial_{t}v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{1}_{p}\);

     
and such that \(v=v(t,x)\) solves the Cauchy problem (3.2) and (3.3).

Proof

From Assumption (B3), we deduce the existence of a constant \(M>0\) such that
$$ \biggl| \frac{\partial g}{\partial x^{i}}(x)\biggr| \leq Me^{M|x|}, \quad x\in \mathbb{R}^{2}, $$
and therefore, integrating this bound along the segment from the origin to \(x\) and enlarging \(M\) if necessary so that \(|g(0)|\leq M\), such that
$$ \left| g(x)\right| \leq |x| Me^{M|x|} + M, \quad x\in \mathbb{R}^{2}. $$
One easily verifies now that for \(N>M\) and \(\phi =\phi (x)\) satisfying (3.4), we have \(\|e^{-N\phi } g\|_{ \mathbf{W}^{1}_{p}}<\infty \) for every \(p\geq 1\) and hence that
$$ e^{-N\phi } g \in \mathbf{W}^{1}_{p},\quad p\geq 1. $$
(3.5)
Hereafter we choose the constant \(N\geq 0\) from (B3) to also satisfy \(N>M\).
Let \(C\geq 0\) be a constant and define the functions \(\tilde{b}^{j} = \tilde{b}^{j}(t,x)\) and \(\tilde{c}=\tilde{c}(t,x)\) such that for \(t\in [0,1]\) and \(u\in \mathbf{C}^{\infty }((0,1)\times \mathbb{R} ^{2})\),
$$ \left( \frac{\partial }{\partial t} + \tilde{\mathcal{G}}(t)\right) (e ^{-N\phi + Ct}u) = e^{-N\phi +Ct}\left( \frac{\partial }{\partial t}+ \mathcal{G}(t)\right) u, $$
where
$$ \tilde{\mathcal{G}}(t) :=\sum_{j,k=1}^{2} a^{jk}(t,x)\frac{\partial ^{2}}{\partial x^{j} \partial x^{k}} + \sum_{j=1}^{2} \tilde{b}^{j}(t,x)\frac{ \partial }{\partial x^{j}} + \tilde{c}(t,x). $$
Given the properties of the function \(\phi \) in (3.4), for \(C\) large enough, the coefficients \(\tilde{b}^{j}\) and \(\tilde{c}\) satisfy the same conditions as \(b^{j}\) and \(c\) in (B1). Since it follows from (3.5) that also \(e^{-N\phi +C}g\) belongs to \(\mathbf{W}^{1}_{p}\) for every \(p\geq 1\), we deduce from Theorem 3.1 the existence of a measurable function \(\tilde{v} = \tilde{v}(t,x)\) which, for every \(p>1\), complies with items 1–4 of Theorem 3.1 and solves the Cauchy problem
$$\begin{aligned} \frac{\partial \tilde{v}}{\partial t} + \tilde{\mathcal{G}}(t) \tilde{v} &= 0, \quad t\in [0,1), \end{aligned}$$
(3.6)
$$\begin{aligned} \tilde{v}(1,\cdot ) &= e^{-N\phi +C}g. \end{aligned}$$
(3.7)
For \(p>2\), by Sobolev’s embedding theorem, the continuity of the map \(t\mapsto \tilde{v}(t,\cdot )\) in \(\mathbf{W}^{1}_{p}\) implies its continuity in \(\mathbf{C}\). It follows that the function \(\tilde{v}= \tilde{v}(t,x)\) is continuous on \([0,1]\times \mathbb{R}^{2}\).

Defining \(v:=e^{N\phi -Ct}\tilde{v}\), we observe that \(\tilde{v}\) solves (3.6) and (3.7) if and only if \(v\) solves the Cauchy problem (3.2) and (3.3). For \(p>1\), the regularity of \(\tilde{v} = e^{-N\phi +Ct}v\) implies items 1–4 in the corollary. The proof is completed by noting that the case \(p=1\) follows trivially from the case \(p> 1\) by taking the constant \(N\) slightly larger. □

For \((t,x)\in [0,1]\times \mathbb{R}^{2}\), define
$$ v_{j}(t,x) :=\frac{\partial v}{\partial x^{j}}(t,x),\quad j = 1,2, $$
(3.8)
and consider the second-order differential operator
$$ \mathcal{G}_{\ell }(t) :=\sum_{j,k=1}^{2} \frac{\partial a^{jk}}{ \partial x^{\ell }}(t,x)\frac{\partial^{2}}{\partial x^{j} \partial x ^{k}} + \sum_{j=1}^{2} \frac{\partial b^{j}}{\partial x^{\ell }}(t,x)\frac{ \partial }{\partial x^{j}} + \frac{\partial c}{\partial x^{\ell }}(t,x), \quad \ell =1,2. $$
(3.9)
Then we obtain the following corollary, which will be needed in the next section.

Corollary 3.3

Suppose that conditions (B1) and (B3) hold. Let \(v=v(t,x)\) be the function generated by Corollary 3.2 and let \(v_{j}\) be defined as in (3.8). Then \(v _{j} = v_{j}(t,x)\) solves the nonhomogeneous partial differential equation
$$ \frac{\partial v_{j}}{\partial t} + \mathcal{G}(t)v_{j} + \mathcal{G} _{j}(t)v = 0, \quad t\in (0,1). $$
(3.10)

Proof

From Corollary 3.2, we know that the function \(v=v(t,x)\) is three times weakly differentiable with respect to \(x\) and that the derivative with respect to \(t\) of the same function is once weakly differentiable with respect to \(x\). Given condition (B1), we also know that the coefficients of the operator \(\mathcal{G}\) are once continuously differentiable with respect to \(x\). Hence we may differentiate the parabolic partial differential equation (3.2) with respect to \(x^{j}\), \(j=1,2\), which shows that \(v_{j} = v_{j}(t,x)\) satisfies (3.10). □

4 Invertibility of the Jacobian matrix

Let \(a=a(t,x)\), \(b=b(t,x)\), \(c=c(t,x)\) and \(g=g(x)\) be the coefficients from Sect. 3. Let the measurable function \(f=f(t,x): [0,1] \times \mathbb{R}^{2}\to \mathbb{R}\) be three times weakly differentiable with respect to \(x\) and assume that there exists a constant \(N\geq 0\) such that, for \(j,k,\ell =1,2\), it holds that:
(B4) The map \(t\mapsto e^{-N|\cdot |}\partial_{x^{j}x^{k}}f(t,\cdot )\) from \((0,1)\) to \(\mathbf{L}^{\infty }\) is analytic, the map \(t \mapsto e^{-N|\cdot |}\partial_{x^{j}}f(t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{\infty }\) is continuously differentiable, and the map \(t \mapsto e^{-N|\cdot |}\partial_{x^{j}x^{k}x^{\ell }}f(t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{\infty }\) is continuous.
We define the functions \(A=A(t,x)\), \(B=B(t,x)\) and \(C = C(t,x)\) on \([0,1]\times \mathbb{R}^{2}\) by
$$ \begin{aligned} A^{jk} &:=|J[f,a^{jk}]|-2(-1)^{j}(H[f]a)^{(3-j)k},\\ B^{j} &:=|J[f,b ^{j}]| -(-1)^{j}\big(\partial_{t} + \mathcal{G}(t)\big) \partial_{x^{(3-j)}}f,\\ C &:=|J[f,c]|, \end{aligned} $$
for \(j,k=1,2\).
For suitably regular functions \(v,\varphi :\mathbb{R}^{2}\to \mathbb{R}\), for an open, bounded set \(K\) in \(\mathbb{R}^{2}\) and for \(t\in [0,1]\), we define the pairing
$$\begin{aligned} \mathcal{A}_{K}[v,\varphi ;t] &:=\int_{K} \Biggl(\sum_{j,k=1}^{2}A^{jk}(t,x)\frac{ \partial v}{\partial x^{j}}\frac{\partial \varphi }{\partial x^{k}} \\ & \quad\ {}- \sum_{j=1}^{2}\bigg(B^{j} - \sum_{k=1}^{2}\frac{\partial A^{jk}}{\partial x^{k}}\bigg)(t,x)\frac{\partial v}{\partial x^{j}} \varphi - C(t,x)v\varphi \Biggr)\, \mathrm{d}x. \end{aligned}$$
We assume that the following assumption is satisfied:
(B5) Either the Jacobian matrix \(J[f,g](1,\cdot )\) has full rank almost everywhere on \(\mathbb{R}^{2}\), or for every open, bounded set \(K\) in \(\mathbb{R}^{2}\), there exists a test function \(\varphi =\varphi (x)\) belonging to \(\mathbf{W}^{1}_{p,0}(K)\) for some \(p\geq 1\) such that \(\mathcal{A}_{K}[g,\varphi ;1]\neq 0\).

The following theorem is the main result of this section and will eventually allow us to prove the martingale representation stated in Theorem 2.2.

Theorem 4.1

Suppose conditions (B1) and (B3)–(B5) are in place. Let \(v= v(t,x)\) be the function which is furnished by Corollary 3.2. Then the Jacobian matrix function \(J[f,v] = J[f,v](t,x)\) has full rank almost everywhere with respect to Lebesgue measure on \([0,1]\times \mathbb{R}^{2}\).

Before we can prove Theorem 4.1, we first need to establish several lemmas below.

Let \(\mathbf{X}\) and \(\mathbf{Y}\) be Banach spaces, let \(\mathbf{E}\) be an open subset of \(\mathbf{X}\) and consider a map \(h: \mathbf{E}\to \mathbf{Y}\). If it exists, we denote by \(D^{k}h(x)\) the \(k\)th Fréchet derivative of \(h\) at the point \(x\in \mathbf{E}\); as is well known, this constitutes a \(k\)-linear map on the \(k\)-fold product \(\mathbf{X}\times \cdots \times \mathbf{X}\). Accordingly, for \(x^{1},\ldots ,x^{k}\in \mathbf{X}\), we denote by \(D^{k}h(x)(x^{1},\ldots ,x^{k})\) the \(k\)th Fréchet differential of \(h\) at \(x\) in the directions \(x^{1},\ldots ,x^{k}\); below we drop the parentheses around the base point and write, for instance, \(D\det M(C)\) for the first differential of the determinant at \(M\) in the direction \(C\).

Lemma 4.2

Given matrices \(M,C,C^{1},C^{2}\in \mathbb{R}^{2\times 2}\), the first and second order Fréchet differentials of the determinant map at \(M\) are given by
$$\begin{aligned} D \det M(C) &= \sum_{\ell =1}^{2}\det M(\ell ;C), \\ D^{2} \det M(C^{1},C^{2}) &= \sum_{\ell =1}^{2}\det M(1,2;C^{\ell },C ^{3-\ell }), \end{aligned}$$
where \(\det M(\ell ;C)\) denotes the determinant of the matrix obtained from \(M\) by replacing its \(\ell \)th row with the \(\ell \)th row of \(C\), and \(\det M(1,2;C^{\ell },C^{3-\ell })\) the determinant obtained by replacing the first row of \(M\) with the first row of \(C^{\ell }\) and the second row with the second row of \(C^{3-\ell }\).

Proof

The expressions are special cases of equations (4) and (6) in [3]. □
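These formulas are straightforward to verify numerically in the \(2\times 2\) case. The following sketch (illustrative only; \(\det M(\ell ;C)\) is implemented as row replacement) compares both differentials with difference quotients of \(t\mapsto \det (M+tC)\); since this is a quadratic polynomial in \(t\) for \(2\times 2\) matrices, central differences with step \(h=1\) are exact up to rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
M, C = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
C1, C2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

def d_det(M, C):
    # D det M(C): sum over l of det(M with row l replaced by row l of C).
    total = 0.0
    for l in range(2):
        A = M.copy()
        A[l] = C[l]
        total += np.linalg.det(A)
    return total

def d2_det(M, C1, C2):
    # D^2 det M(C1, C2): row 1 taken from C^l, row 2 from C^{3-l}, summed over l.
    total = 0.0
    for P, Q in ((C1, C2), (C2, C1)):
        A = M.copy()
        A[0], A[1] = P[0], Q[1]
        total += np.linalg.det(A)
    return total

h = 1.0  # central differences are exact for quadratic polynomials
det = np.linalg.det
fd1 = (det(M + h * C) - det(M - h * C)) / (2 * h)
f = lambda t1, t2: det(M + t1 * C1 + t2 * C2)
fd2 = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h * h)

assert abs(d_det(M, C) - fd1) < 1e-10
assert abs(d2_det(M, C1, C2) - fd2) < 1e-10
```

The second check exploits that the mixed central difference of \(\det (M+t_{1}C^{1}+t_{2}C^{2})\) isolates the bilinear cross term exactly.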

Define the linear partial differential operator
$$ \mathcal{P}(t) :=\sum_{j,k=1}^{2} A^{jk}(t,x)\frac{\partial^{2}}{ \partial x^{j} \partial x^{k}} + \sum_{j=1}^{2} B^{j}(t,x) \frac{ \partial }{\partial x^{j}} + C(t,x), \quad t\in [0,1]. $$

Lemma 4.3

Let \(f =f(t,x),v =v(t,x) : [0,1]\times \mathbb{R}^{2} \to \mathbb{R}\) be measurable functions which on \((0,1)\times \mathbb{R}^{2}\) are once weakly differentiable with respect to \(t\) and three times weakly differentiable with respect to \(x\), and whose time derivatives are once weakly differentiable with respect to \(x\), and let \(\mathcal{G}(t)\) and \(\mathcal{G}_{j}(t)\) be the operators defined in (3.1) and (3.9), respectively. Define \(f_{j} :=\partial_{x^{j}}f\), \(v_{j} :=\partial_{x^{j}}v\), \(j=1,2\), and assume that \(v_{j}\) satisfies the partial differential equation
$$ \frac{\partial v_{j}}{\partial t} + \mathcal{G}(t)v_{j} + \mathcal{G} _{j}(t)v = 0, \quad t\in (0,1). $$
(4.1)
Then the determinant function \(w = w(t,x)\) defined on \([0,1]\times \mathbb{R}^{2}\) by \(w :=|J[f,v]|\) satisfies the nonhomogeneous partial differential equation
$$ \frac{\partial w}{\partial t} + \big(\mathcal{G}(t)+c\big) w = - \mathcal{P}(t)v. $$
(4.2)

Proof

Given our differentiability hypothesis on \(f=f(t,x)\) and \(v=v(t,x)\), we may differentiate the determinant function \(w=w(t,x)\) with respect to \(t\). Let us abbreviate throughout the proof of this lemma \(J:=J[f,v]\). A simple application of the chain rule from Fréchet differential calculus (cf. [2, Chapter X.4]) and the fact that \(v_{j}\) satisfies the partial differential equation (4.1) yields
$$ \frac{\partial w}{\partial t} = D \det J \biggl( -\mathcal{G}(t)J - \begin{pmatrix} 0 \\ \nabla_{x}\mathcal{G}(t) \end{pmatrix} v +\Big(\frac{\partial }{\partial t}+\mathcal{G}(t)\Big) \begin{pmatrix} \nabla_{x} f \\ 0 \end{pmatrix} \biggr) ,\quad t\in (0,1). $$
Direct computation of \(\mathcal{G}(t)w\), combined with the identity \(2c \det J = c\, D \det J(J)\) and the linearity of the Fréchet derivative, shows that we may replace the term \(-D\det J (\mathcal{G}(t)J)\) above with
$$ \sum_{j,k=1}^{2} a^{jk} D^{2} \det J \left( \frac{\partial J}{\partial x^{j}}, \frac{\partial J}{\partial x^{k}} \right) - \big(\mathcal{G}(t) + c\big) w, \quad t\in (0,1). $$
By the explicit formulae for the first and second order Fréchet derivative of the determinant map derived in Lemma 4.2 and the symmetry of the matrix function \(a\), we obtain after some computations that
$$ \frac{\partial w}{\partial t} + \big(\mathcal{G}(t)+c\big) w = 2 \sum_{j,k=1}^{2} a^{jk} |J[f_{j},v_{k}]| + \begin{vmatrix} \big(\partial_{t}+\mathcal{G}(t)\big)\nabla_{x} f \\ \nabla_{x} v \end{vmatrix} - \begin{vmatrix} \nabla_{x} f \\ \big(\nabla_{x} \mathcal{G}(t)\big) v \end{vmatrix} . $$
Collecting the coefficients of \(\partial^{2}_{x^{j}x^{k}}v\), \(\partial_{x^{j}}v\) and \(v\) yields the result. □

Lemma 4.4

Let \(\gamma^{j},\eta : [0,1]\times \mathbb{R}^{2} \to \mathbb{R}\), \(j=1,2\), be measurable functions such that for \(p>1\), the maps \(t\mapsto \gamma^{j}(t,\cdot ),t\mapsto \eta (t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{p}_{\mathrm{loc}}\) are continuous. Let \(K\) be an open, bounded set in \(\mathbb{R}^{2}\) and \(\varphi =\varphi (x)\) a test function belonging to \(\mathbf{W}^{1}_{p',0}(K)\). Then for each \(t\in [0,1]\), the pairing
$$ \tilde{\mathcal{A}}_{K}(\varphi ;t) :=\int_{K}\Biggl( \sum_{j=1}^{2} \gamma^{j}(t,x) \frac{\partial \varphi }{\partial x^{j}} + \eta (t,x) \varphi \Biggr)\, \mathrm{d}x $$
is a bounded, linear functional on \(\mathbf{W}^{1}_{p',0}(K)\). Moreover, the map \(t\mapsto \tilde{\mathcal{A}}_{K}(\cdot ;t)\) is continuous as a map from \([0,1]\) to \(\mathbf{W}^{-1}_{p}(K)\).

Proof

By the triangle inequality and the Hölder inequality, for each \(t\in [0,1]\),
$$ \begin{aligned} | \tilde{\mathcal{A}}_{K}(\varphi ;t)| &\leq \int_{K} \Biggl(\sum_{j=1} ^{2} \left| \gamma^{j}(t,x) \frac{\partial \varphi }{\partial x^{j}}\right| + |\eta (t,x)\varphi | \Biggr)\,\mathrm{d}x\\ &\leq \| \varphi \|_{\mathbf{W}^{1}_{p'}(K)}\Biggl(\sum_{j=1}^{2} \|\gamma^{j}(t, \cdot ) \|_{\mathbf{L}^{p}(K)} + \|\eta (t,\cdot )\|_{\mathbf{L}^{p}(K)} \Biggr), \end{aligned} $$
which implies the boundedness of the linear functional \(\tilde{\mathcal{A}}_{K}(\cdot ;t)\).
To prove the continuity of the map \(t\mapsto \tilde{\mathcal{A}}_{K}( \cdot ;t)\) from \([0,1]\) to \(\mathbf{W}^{-1}_{p}(K)\), observe that for each \(t\in [0,1]\), \(\tilde{\mathcal{A}}_{K}(\cdot ;t)\) is in the dual space of \(\mathbf{W}^{1}_{p',0}(K)\). We recall that the dual space of \(\mathbf{W}^{1}_{p',0}(K)\) is isometrically isomorphic to \(\mathbf{W} ^{-1}_{p}(K)\). It follows that
$$\begin{aligned} &\| \tilde{\mathcal{A}}_{K}(\cdot ;t) - \tilde{\mathcal{A}}_{K}( \cdot ;u)\|_{\mathbf{W}^{-1}_{p}(K)} \\ &\quad \leq \sum_{j=1}^{2} \|\gamma^{j}(t,\cdot ) - \gamma^{j}(u,\cdot ) \|_{\mathbf{L}^{p}(K)} + \|\eta (t,\cdot ) - \eta (u,\cdot )\|_{ \mathbf{L}^{p}(K)}, \end{aligned}$$
which implies the desired continuity of the map \(t\mapsto \tilde{\mathcal{A}}_{K}(\cdot ;t)\) by the continuity of the maps \(t\mapsto \gamma^{j}(t,\cdot ),t\mapsto \eta (t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{p}_{\mathrm{loc}}\). □

Proof of Theorem 4.1

We define
$$ w(t,x) :=|J[f,v]|(t,x), \quad (t,x)\in [0,1]\times \mathbb{R}^{2}. $$
The claim of the theorem is true if and only if the set
$$ G :=\{(t,x)\in [0,1]\times \mathbb{R}^{2} : w(t,x) = 0\} $$
has Lebesgue measure zero on \([0,1]\times \mathbb{R}^{2}\). This is equivalent to the set
$$ H :=\bigg\{ x\in \mathbb{R}^{2} : \int_{0}^{1} 1_{G}(t,x)\, \mathrm{d}t > 0\bigg\} $$
having Lebesgue measure zero on \(\mathbb{R}^{2}\).
From Corollary 3.2 and condition (B4), we deduce that for every \(p\geq 1\), the map \(t\mapsto e^{-N\phi (\cdot )}w(t,\cdot )\) is analytic as a map from \((0,1)\) to \(\mathbf{W}^{1}_{p}\). Moreover, by Sobolev’s embedding theorem, for \(p>2\), it is also analytic as a map from \((0,1)\) to \(\mathbf{C}\). Suppose for a contradiction that
$$ \int_{\mathbb{R}^{2}} 1_{H}(x)\, \mathrm{d}x >0. $$
From the analyticity of \(t\mapsto w(t,\cdot )\), it follows that if \(x\in H\), then \(w(t,x)=0\) for all \(t\in (0,1)\) and therefore that
$$ \lim_{t\uparrow 1}w(t,x) = 0, \quad x\in H. $$
We first prove the claim of the theorem assuming that \(J[f,g](1,x)\) has full rank almost everywhere on \(\mathbb{R}^{2}\). From Corollary 3.2 and (B4), we know that for every \(p\geq 1\), the map \(t\mapsto e^{-N\phi (\cdot )}w(t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{p}\) is continuous. It follows that the map \(t\mapsto w(t,\cdot )\) from \([0,1]\) to \(\mathbf{L}^{p}_{\mathrm{loc}}\) is continuous and hence that for all open, bounded sets \(K\) in \(\mathbb{R}^{2}\),
$$ \|w(t,\cdot ) - w(1,\cdot )\|_{\mathbf{L}^{p}(K)} \to 0, \quad t\uparrow 1. $$
We deduce that \(w(1,x) = |J[f,v]|(1,x) = 0\) for almost every \(x\) belonging to \(H\). Now recall that by (B5), the matrix function \(J[f,v](1,\cdot ) = J[f,g](1,\cdot )\) has full rank almost everywhere on \(\mathbb{R}^{2}\); since \(H\) was assumed to have positive Lebesgue measure, this yields the desired contradiction.

Let us now assume that for every open, bounded set \(K\) in \(\mathbb{R} ^{2}\) there exists a test function \(\varphi = \varphi (x)\) belonging to \(\mathbf{W}^{1}_{p',0}(K)\) such that \(\mathcal{A}_{K}[g,\varphi ;1] \neq 0\). From Corollary 3.2 and (B4), we know that the functions \(f=f(t,x)\) and \(v=v(t,x)\) satisfy the differentiability hypothesis of Lemma 4.3, and from Corollary 3.3 that \(v_{j}\) satisfies the partial differential equation (4.1). It follows from (4.2) that if \(w(t,x) = 0\) for all \((t,x)\in (0,1) \times H\), then also \(\mathcal{P}(t)v=0\) for all \((t,x)\in (0,1) \times H\).

From Corollary 3.2, we know that for every \(p\geq 1\), \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is a continuous map from \([0,1]\) to \(\mathbf{W}^{1}_{p}\). In particular, for every \(p> 1\), \(t\mapsto v(t,\cdot )\) and \(t\mapsto \partial_{x^{j}}v(t,\cdot )\) are continuous maps from \([0,1]\) to \(\mathbf{L}^{p}_{\mathrm{loc}}\). Assumptions (B1) and (B4) imply that also \(t\mapsto A^{jk}(t,\cdot ), t\mapsto B^{j}(t,\cdot ), t\mapsto C(t, \cdot ), t\mapsto \partial_{x^{k}}A^{jk}(t,\cdot )\) are continuous maps from \([0,1]\) to \(\mathbf{L}^{p}_{\mathrm{loc}}\), for every \(p> 1\). It follows from Lemma 4.4 that for any open, bounded set \(K^{\prime}\subset H\), for any test function \(\varphi = \varphi (x)\) of class \(\mathbf{W}^{1}_{p^{\prime},0}(K^{\prime})\) and for any fixed \(t\in [0,1]\), the pairing \(\mathcal{A}_{K^{\prime}}[v,\cdot ;t]\) is a bounded, linear functional on \(\mathbf{W}^{1}_{p^{\prime},0}(K^{\prime})\). Actually, for \(t\in (0,1)\), \(\mathcal{A}_{K^{\prime}}[v,\varphi ;t] = 0\) is the weak formulation of the partial differential equation \(\mathcal{P}(t)v = 0\). It follows that \(\mathcal{A}_{K^{\prime}}[v,\varphi ;t] = 0\) for all \(t \in (0,1)\) and every \(\varphi =\varphi (x)\) belonging to \(\mathbf{W}^{1}_{p^{\prime},0}(K^{\prime})\) and therefore that
$$ \lim_{t\uparrow 1}\mathcal{A}_{K^{\prime}}[v,\varphi ;t] = 0, \quad \varphi \in \mathbf{W}^{1}_{p^{\prime},0}(K^{\prime}). $$
Also from Lemma 4.4, we know that the map \(t\mapsto \mathcal{A}_{K^{\prime}}[v,\cdot ;t]\) is continuous as a map from \([0,1]\) to \(\mathbf{W}^{-1}_{p}(K^{\prime})\) and therefore that
$$ \| \mathcal{A}_{K^{\prime}}[v,\cdot ;t] - \mathcal{A}_{K^{\prime}}[v,\cdot ;1] \|_{\mathbf{W}^{-1}_{p}(K^{\prime})} \longrightarrow 0, \quad t\uparrow 1. $$
It follows that \(\mathcal{A}_{K^{\prime}}[v,\varphi ;1] = \mathcal{A}_{K ^{\prime}}[g,\varphi ;1] =0\) for every \(\varphi \in \mathbf{W}^{1}_{p^{\prime},0}(K ^{\prime})\). But by (B5), for every open, bounded set \(K\) and some \(p>1\), there exists a test function \(\varphi \in \mathbf{W}^{1}_{p^{\prime},0}(K)\) such that \(\mathcal{A}_{K}[g, \varphi ;1] \neq 0\), which yields the desired contradiction. □

5 Proof of Theorem 2.2

From here onwards we adopt the notation introduced in Sect. 2 and assume that conditions (A1)–(A4) are in place.

We fix a function \(\phi =\phi (x)\) on \(\mathbb{R}^{2}\) satisfying (3.4) and recall that \(\mathcal{L}^{X}(t)\), \(t\in [0,1]\), is the infinitesimal generator of the process \(X\).

Lemma 5.1

There exist a unique continuous function \(v=v(t,x)\) on \([0,1]\times \mathbb{R}^{2}\) and a constant \(N\geq 0\) such that the following hold:
1. For every \(p\geq 1\),
   (a) \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is a continuous map from \([0,1]\) to \(\mathbf{W}^{1}_{p}\);
   (b) \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is an analytic map from \((0,1)\) to \(\mathbf{W}^{2}_{p}\);
   (c) \(t\mapsto e^{-N\phi (\cdot )}v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{3}_{p}\);
   (d) \(t\mapsto e^{-N\phi (\cdot )}\partial_{t}v(t,\cdot )\) is a \(p\)-integrable map from \([0,1)\) to \(\mathbf{W}^{1}_{p}\).
2. The function \(v=v(t,x)\) solves the homogeneous Cauchy problem
   $$\begin{aligned} \frac{\partial v}{\partial t} + (\mathcal{L}^{X}(t) -r)v &= 0, \quad t \in [0,1), \end{aligned}$$
   (5.1)
   $$\begin{aligned} v(1,\cdot ) &= g. \end{aligned}$$
   (5.2)
3. The Jacobian matrix function \(J[f,v]=J[f,v](t,x)\) has full rank almost everywhere with respect to Lebesgue measure on \([0,1]\times \mathbb{R}^{2}\).

Hereafter we denote by \(v=v(t,x)\) the function defined in Lemma 5.1.

Proof

Observe that (A1)–(A4) imply (B1) and (B3)–(B5) on the corresponding coefficients in Theorem 4.1. The assertions for \(v\) and \(J[f,v]\) now follow directly from Theorem 4.1. □

Lemma 5.2

The martingale
$$ S^{B}_{t} :=\mathbb{E}[\psi |\mathcal{F}_{t}], \quad t \in [0,1], $$
is well defined and has the representation
$$ S^{B}_{t} = v(t,X_{t})e^{-\int_{0}^{t} r(u,X_{u})\, \mathrm{d}u}. $$
(5.3)
Moreover, for \(t\in (0,1)\),
$$ \mathrm{d}S^{B}_{t} = e^{-\int_{0}^{t} r(u,X_{u})\, \mathrm{d}u}(\nabla _{x} v\sigma )(t,X_{t})\, \mathrm{d}W_{t}. $$
(5.4)

Proof

Define, for the moment, the process \(S^{B}\) by the right-hand side of (5.3). From the continuity of \(v\) on \([0,1]\times \mathbb{R}^{2}\), it then follows that \(S^{B}\) is a continuous process on \([0,1]\), and from the terminal condition (5.2) for \(v(1,\cdot )\) that \(S^{B}_{1}=\psi \). Hence, to complete the proof, it remains to show that the process given by (5.3) is a martingale under the measure ℙ.

From Lemma 5.1, we know that the map \(t\mapsto e^{-N \phi (\cdot )}v(t,\cdot )\) is analytic as a map from \((0,1)\) to \(\mathbf{W}^{2}_{p}\); in particular, it is continuously differentiable. This allows us to use a variant of the Itô formula due to Krylov (cf. [13, Sect. 2.10, Theorem 1]) and accounting for (5.1), we immediately obtain (5.4) from (5.3).

We have shown that \(S^{B}\) is a continuous local martingale. It only remains to verify that the process is of class (D) or has an integrable majorant. Recall that for every \(p\geq 1\), the map \(t\mapsto e ^{-N\phi (\cdot )}v(t,\cdot )\) is continuous from \([0,1]\) to \(\mathbf{W}^{1}_{p}\). It follows from Sobolev’s embedding theorem that for \(p>2\), the same map is continuous from \([0,1]\) to \(\mathbf{C}\). Therefore, after possibly enlarging the constant \(N\),
$$ |v(t,x)| \leq e^{N(1+|x|)}. $$
In particular, accounting for the growth properties of \(r=r(t,x)\),
$$ \sup_{t\in [0,1]}|S^{B}_{t}| \leq e^{N(1+\sup_{t\in [0,1]}|X_{t}|)}. $$
As \(\sup_{t\in [0,1]}|X_{t}|\) has all exponential moments, the martingale property for \(S^{B}\) follows. □
The proof of Theorem 2.2 is now completed easily. Equations (2.2) and (5.4) show that
$$ \left\{ \begin{aligned} \mathrm{d}S^{F}_{t} &= e^{-\int_{0}^{t} r(u,X_{u})\, \mathrm{d}u}( \nabla_{x} f\sigma )(t,X_{t})\, \mathrm{d}W_{t},\\ \mathrm{d}S^{B}_{t} &= e^{-\int_{0}^{t} r(u,X_{u})\, \mathrm{d}u}(\nabla_{x} v\sigma )(t,X _{t})\, \mathrm{d}W_{t}. \end{aligned} \right. $$
(5.5)
By the growth properties of \(r = r(t,x)\), \(f=f(t,x)\) and \(\sigma = \sigma (t,x)\) in (A1)–(A3), it is easily verified that also the continuous local martingale \(S^{F} = (S^{F} _{t})\) is a true martingale.
In view of (5.5), we obtain
$$ \mathrm{d}S_{t} = e^{-\int_{0}^{t} r(u,X_{u})\, \mathrm{d}u} ( J[f,v] \sigma )(t,X_{t})\, \mathrm{d}W_{t}, \quad t\in [0,1]. $$
We recall that by the Brownian integral representation property, every ℙ-martingale \(M\) is a stochastic integral with respect to \(W\), i.e.,
$$ \mathrm{d}M_{t} = \tilde{H}_{t}\, \mathrm{d}W_{t},\quad t\in [0,1], $$
for some progressively measurable, locally square-integrable process \(\tilde{H} = (\tilde{H}_{t})\). Hence, in order to deduce the integral representation property (1.1), it remains to show that the matrix process
$$ (J[f,v]\sigma )(t,X_{t}), \quad t\in [0,1], $$
(5.6)
has full rank on \(\Omega \times [0,1]\) almost surely under the product measure \(\mathrm{d}\mathbb{P} \times \mathrm{d}t\). From Lemma 5.1, we know that the matrix function \(J[f,v] = J[f,v](t,x)\) has full rank almost everywhere under Lebesgue measure on \([0,1]\times \mathbb{R}^{2}\). From the nonsingularity assumption in (A1), we know that also the matrix function \(\sigma = \sigma (t,x)\) has full rank almost everywhere under Lebesgue measure on \([0,1]\times \mathbb{R}^{2}\). The conclusion that (5.6) has full rank on \(\Omega \times [0,1]\) almost surely now follows easily from the fact that under (A1), the distribution of \(X_{t}\) has a density under Lebesgue measure on \(\mathbb{R}^{2}\); see [17, Theorem 9.1.9].
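To make the resulting representation explicit (a sketch using the notation above, not spelled out in the text): on the set of full \(\mathrm{d}\mathbb{P}\times \mathrm{d}t\)-measure where \((J[f,v]\sigma )(t,X_{t})\) is invertible, combining the last two displays gives \(\mathrm{d}M_{t} = H_{t}\, \mathrm{d}S_{t}\) with

```latex
H_{t} = e^{\int_{0}^{t} r(u,X_{u})\,\mathrm{d}u}\,
        \tilde{H}_{t}\,\bigl((J[f,v]\sigma )(t,X_{t})\bigr)^{-1},
```

which identifies the hedging strategy in the \(S=(S^{F},S^{B})\)-market.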

6 Example: a class of stochastic volatility models

In this section, we apply our main result in Theorem 2.2 to prove the completeness of a financial market in which one stock with price process \(P=(P_{t})\) and one call option with price process \(V=(V_{t})\) are traded. The processes \(P\) and \(V\) are defined by
$$ \begin{aligned} \mathrm{d}P_{t} &= rP_{t}\, \mathrm{d}t + \nu (Y_{t})P_{t}\, \mathrm{d}W ^{1}_{t},\\ \mathrm{d}Y_{t} &= \big(\alpha (m-Y_{t}) - \mu (P_{t},Y _{t})\big)\, \mathrm{d}t + \sigma (Y_{t})\, \mathrm{d}W_{t},\\ V_{t} &= e^{-r(1-t)}\mathbb{E}[(P_{1}-\Gamma )^{+}|\mathcal{F}_{t}], \end{aligned} $$
(6.1)
for constants \(\Gamma ,\alpha ,m,r\in \mathbb{R}\) with \(\Gamma >0\), \(r\geq 0\). In particular, this covers the class of stochastic volatility models introduced in [6, Eq. (2.7)].
The coefficients \(\nu ,\mu ,\sigma^{j}:\mathbb{R}\to \mathbb{R}\), \(j=1,2\), are assumed to satisfy the following condition:
(C1) There exist constants \(N,D,\rho ,\epsilon >0\) such that \(\nu (y)>N\) and \(\sigma^{j}(y)>N\) for all \(y\in \mathbb{R}\), the derivative \(\mathrm{d}\nu /\mathrm{d}y(y) \neq 0\) almost everywhere on ℝ, and the functions \(\nu \), \(\sigma^{j}\) and \(\mu (p, \cdot )\) are infinitely differentiable and satisfy
$$ \biggl| \frac{\partial^{k}\mu }{\partial y^{k}}(p,y)\biggr| + \biggl| \frac{ \partial^{k}\nu }{\partial y^{k}}(y)\biggr| + \biggl| \frac{\partial ^{k}\sigma^{j}}{\partial y^{k}}(y)\biggr| \leq \frac{Dk!}{(\rho + \epsilon |y|)^{k}}, \quad (p,y)\in \mathbb{R}\times \mathbb{R},\ k\geq 1. $$
The function \(\mu =\mu (p,y)\) has continuous first and second derivatives in \(p\) and \(y\), and \(y(e^{p})^{\ell } \partial_{y}^{k} \partial_{p}^{\ell }\mu \in \mathbf{L}^{\infty }\), \(\ell =0,1\), \(k=1,2\), \(\ell +k\leq 2\).

We are now ready to state the main result of this section.

Theorem 6.1

Suppose that condition (C1) is satisfied. Then the \((P,V)\)-market defined by (6.1) is complete.

Remark 6.2

We draw attention to the fact that the quite specific assumptions in (C1) on the space regularity of the coefficients of (6.1) are necessary solely because we allow \(P\) to evolve according to a geometric Brownian motion and \(Y\) to have mean-reverting dynamics, both cases in which the coefficients are unbounded. This can be seen easily from the proof of Theorem 6.1 below. In the absence of this particular choice of dynamics, the verification of the assumptions of Theorem 2.2 is much simpler.

Remark 6.3

Two specific examples of functions which satisfy the conditions on \(\nu \) and \(\sigma \) in (C1) are scaled and shifted versions of the \(\operatorname{arctan}\) and \(\operatorname{tanh}\) functions.
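To illustrate, the \((P,Y)\)-dynamics in (6.1) with such arctan/tanh-type coefficients can be simulated with a plain Euler–Maruyama scheme. All coefficient functions and parameter values below are hypothetical choices for illustration; they are not taken from the text, and the drift \(\mu \) is simply set to zero:

```python
import numpy as np

# Hypothetical coefficients in the spirit of Remark 6.3: bounded,
# bounded away from zero, with d(nu)/dy != 0 almost everywhere.
nu = lambda y: 0.3 + 0.1 * np.arctan(y)        # stock volatility
sigma1 = lambda y: 0.2 + 0.05 * np.tanh(y)     # vol-of-vol, W^1 component
sigma2 = lambda y: 0.2 + 0.05 * np.arctan(y)   # vol-of-vol, W^2 component
mu = lambda p, y: 0.0                          # volatility risk premium (assumed zero)

r, alpha, m = 0.02, 1.5, 0.1                   # rate, mean-reversion speed and level
n = 1000
dt = 1.0 / n
rng = np.random.default_rng(42)

P, Y = 100.0, 0.1
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(2)  # increments of (W^1, W^2)
    P_next = P + r * P * dt + nu(Y) * P * dW[0]
    Y = Y + (alpha * (m - Y) - mu(P, Y)) * dt + sigma1(Y) * dW[0] + sigma2(Y) * dW[1]
    P = P_next

assert np.isfinite(P) and np.isfinite(Y) and P > 0.0
```

The option leg \(V\) would then be obtained by a nested Monte Carlo estimate of the conditional expectation in (6.1); the point here is only the shape of the state dynamics.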

Proof of Theorem 6.1

Consider the stochastic processes
$$ \textstyle\begin{array}{rl@{\qquad}rl} X^{1}_{t} &:=\log P_{t}, & S^{F}_{t} &:=e^{-rt+X^{1}_{t}}, \\ X^{2}_{t}&:=e^{\alpha t}(Y_{t}-m), &S^{B}_{t} &:=\mathbb{E}[e ^{-r}(e^{X^{1}_{1}}-\Gamma )^{+}|\mathcal{F}_{t}]. \end{array} $$
By a simple application of Itô’s formula, we find that
$$ \begin{aligned} \mathrm{d}X^{1}_{t} &= \biggl( r-\frac{1}{2}\nu (m+e^{-\alpha t}X^{2} _{t})^{2}\biggr) \, \mathrm{d}t + \nu (m+e^{-\alpha t}X^{2}_{t})\, \mathrm{d}W^{1}_{t},\\ \mathrm{d}X^{2}_{t} &= -e^{\alpha t}\mu (e^{X ^{1}_{t}},m+e^{-\alpha t}X^{2}_{t})\, \mathrm{d}t + e^{\alpha t}\sigma (m+e^{-\alpha t}X^{2}_{t})\, \mathrm{d}W_{t}. \end{aligned} $$
We define the functions \(\tilde{\nu }(t,x^{2}):=\nu (m+e^{-\alpha t}x^{2})\), \(\tilde{\sigma }^{j}(t,x^{2}) :=\sigma^{j}(m+e^{-\alpha t}x^{2})\) and \(\tilde{\mu }(t,x^{1},x^{2}):=\mu (e^{x^{1}},m+e^{-\alpha t}x^{2})\). Now observe that by Lemma A.1 in Appendix A, the maps \(t\mapsto \tilde{\nu }(t,\cdot ), \tilde{\sigma }^{j}(t,\cdot )\) from \([0,1]\) to \(\mathbf{C}(\mathbb{R})\) and the map \(t\mapsto \tilde{\mu }(t,\cdot ,\cdot )\) from \([0,1]\) to \(\mathbf{C}\) are analytic. By computing the derivative with respect to \(t\) and using the bounds on the derivatives of \(\mu \), \(\nu \) and \(\sigma^{j}\) hypothesized in (C1), it is verified easily that the maps \(t\mapsto \tilde{\nu }(t,\cdot ), \tilde{\sigma }^{j}(t,\cdot )\) are continuous from \([0,1]\) to \(\mathbf{C}^{2}(\mathbb{R})\) and the map \(t\mapsto \tilde{\mu }(t,\cdot ,\cdot )\) is continuous from \([0,1]\) to \(\mathbf{C}^{1}\). Therefore, conditions (A1)–(A3) are satisfied.
It remains to verify the condition (A4). With the definitions \(f(x^{1}):=e^{-r + x^{1}}\), \(a^{11}(x^{2}):=(\tilde{\nu }(1,x ^{2}))^{2}\), \(a^{12}(x^{2}):=e^{\alpha }(\tilde{\nu }\tilde{\sigma } ^{1})(1,x^{2})\), \(b^{1}(x^{2}):=r-(1/2)(\tilde{\nu }(1,x^{2}))^{2}\), \(g(x^{1}) :=e^{-r}(e^{x^{1}}-\Gamma )^{+}\), the pairing \(\mathcal{B} _{K}\) becomes
$$\begin{aligned} \mathcal{B}_{K}[g,\varphi ;1] &= \frac{1}{2}\int_{K} \Biggl(\sum_{k=1} ^{2} \bigg(\frac{\mathrm{d}f}{\mathrm{d}x^{1}}\frac{\mathrm{d}a^{1k}}{ \mathrm{d}x^{2}}\frac{\partial \varphi }{\partial x^{k}} + \frac{ \partial }{\partial x^{k}}\Big(\frac{\mathrm{d}f}{\mathrm{d}x^{1}}\frac{ \mathrm{d}a^{1k}}{\mathrm{d}x^{2}}\Big)\varphi \bigg)\frac{\mathrm{d}g}{ \mathrm{d}x^{1}} \\ & \qquad\qquad\qquad{}-2 \frac{\mathrm{d}f}{\mathrm{d}x^{1}}\frac{\mathrm{d}b ^{1}}{\mathrm{d}x^{2}}\frac{\mathrm{d}g}{\mathrm{d}x^{1}}\varphi \Biggr)\, \mathrm{d}x \\ &\begin{aligned} =\frac{1}{2}e^{-r}\int_{K\cap \{x^{1}\geq \log \Gamma \}}\Biggl(&e ^{x^{1}}\operatorname{div}\biggl( \frac{\mathrm{d}f}{\mathrm{d}x^{1}}\frac{ \mathrm{d}a^{11}}{\mathrm{d}x^{2}}\varphi ,\frac{\mathrm{d}f}{ \mathrm{d}x^{1}}\frac{\mathrm{d}a^{12}}{\mathrm{d}x^{2}}\varphi \biggr) \\ &{} -2e ^{x^{1}} \frac{\mathrm{d}f}{\mathrm{d}x^{1}}\frac{\mathrm{d}b^{1}}{ \mathrm{d}x^{2}}\varphi \Biggr)\, \mathrm{d}x \end{aligned} \\ &=-\frac{1}{2}(e^{-r}\Gamma )^{2} \int_{\hat{K}} \frac{\mathrm{d}a ^{11}}{\mathrm{d}x^{2}}\varphi (\log \Gamma ,\cdot )\, \mathrm{d}x^{2}, \end{aligned}$$
where \(\hat{K}:=K\cap \{x^{1}=\log \Gamma \}\) and the last step follows by a variant of the divergence theorem and the fact that \(\mathrm{d}b ^{1}/\mathrm{d}x^{2} = -(1/2)\mathrm{d}a^{11}/\mathrm{d}x^{2}\). Since
$$ \frac{\mathrm{d}a^{11}}{\mathrm{d}x^{2}}(\cdot ) = 2\biggl( \tilde{\nu }\frac{\mathrm{d}\tilde{\nu }}{\mathrm{d}x^{2}}\biggr) (1, \cdot ) = 2e^{-\alpha }\biggl( \nu \frac{\mathrm{d}\nu }{\mathrm{d}y}\biggr) (m+e^{-\alpha }\cdot ), $$
it follows from (C1) that for every bounded, open set \(K\) in \(\mathbb{R}^{2}\), we can find a function \(\varphi =\varphi (x)\) in \(\mathbf{W}^{1}_{p,0}(K)\) for some \(p>1\) such that \(\mathcal{B} _{K}[g,\varphi ;1]\neq 0\). For example, we can choose an appropriately truncated, shifted and scaled version of the function \(\varphi (x) = -|x|\). The result now follows by Theorem 2.2. □

Footnotes

  1. By a “pairing”, we mean the duality pairing of the bounded linear functional given by \(\mathcal{B}_{K}[v,\cdot ;t]\) and the test function \(\varphi \). See for example [5, Appendix D.3(a)].


Acknowledgements

It is a pleasure to thank Dmitry Kramkov for introducing me to the topic of market completion with derivative securities and for interesting discussions. I would like to thank Léonard Monsaingeon and Peter Takác for discussions relating to the presented work and Johannes Ruf for comments on an early version of this paper.

References

  1. Anderson, R.M., Raimondo, R.C.: Equilibrium in continuous-time financial markets: endogenously dynamically complete markets. Econometrica 76, 841–907 (2008)
  2. Bhatia, R.: Matrix Analysis. Graduate Texts in Mathematics. Springer, Berlin (1997)
  3. Bhatia, R., Jain, T.: Higher order derivatives and perturbation bounds for determinants. Linear Algebra Appl. 431, 2102–2108 (2009)
  4. Davis, M., Obłój, J.: Market completion using options. In: Stettner, Ł. (ed.) Advances in Mathematics of Finance. Banach Center Publications, vol. 83, pp. 49–60 (2008)
  5. Evans, L.: Partial Differential Equations. Graduate Studies in Mathematics. Am. Math. Soc., Providence (2010)
  6. Fouque, J.-P., Papanicolaou, G., Sircar, R.: Derivatives in Financial Markets with Stochastic Volatility. Cambridge Univ. Press, Cambridge (2000)
  7. Friedman, A.: Stochastic Differential Equations and Applications, vol. 1. Academic Press, New York (1975)
  8. Harrison, J.M., Pliska, S.R.: A stochastic calculus model of continuous trading: complete markets. Stoch. Process. Appl. 15, 313–316 (1983)
  9. Hugonnier, J., Malamud, S., Trubowitz, E.: Endogenous completeness of diffusion driven equilibrium markets. Econometrica 80, 1249–1270 (2012)
  10. Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance. Springer, Berlin (1998)
  11. Kramkov, D., Predoiu, S.: Integral representation of martingales motivated by the problem of endogenous completeness in financial economics. Stoch. Process. Appl. 124, 81–100 (2014)
  12. Krantz, S.G., Parks, H.R.: A Primer of Real Analytic Functions. Birkhäuser Advanced Texts Basler Lehrbücher, Basel (2002)
  13. Krylov, N.V.: Controlled Diffusion Processes. Applications of Mathematics, vol. 14. Springer, Berlin (1980)
  14. Krylov, N.V.: Lectures on Elliptic and Parabolic Equations in Sobolev Spaces. Graduate Studies in Mathematics, vol. 96. Am. Math. Soc., Providence (2008)
  15. Riedel, F., Herzberg, F.: Existence of financial equilibria in continuous time with potentially complete markets. J. Math. Econ. 49, 398–404 (2013)
  16. Romano, M., Touzi, N.: Contingent claims and market completeness in a stochastic volatility model. Math. Finance 7, 399–412 (1997)
  17. Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Springer, Berlin (2006)

Copyright information

© The Author(s) 2016

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematics, University College London, London, UK
