Market Completion with Derivative Securities

Let $S^F$ be a $\mathbb{P}$-martingale representing the price of a primitive asset in an incomplete market. We present easily verifiable conditions on the model coefficients which guarantee the completeness of the market in which, in addition to the primitive asset, one may also trade a derivative contract $S^B$. Both $S^F$ and $S^B$ are defined in terms of the solution $X$ to a $2$-dimensional stochastic differential equation: $S^F_t = f(X_t)$ and $S^B_t:=\mathbb{E}[g(X_1) \mid \mathcal{F}_t]$. From a purely mathematical point of view we prove that every local martingale under $\mathbb{P}$ can be represented as a stochastic integral with respect to the $\mathbb{P}$-martingale $S := (S^F, S^B)$. Notably, in contrast to recent results on the endogenous completeness of equilibrium markets, our conditions allow the Jacobian matrix of $(f,g)$ to be singular everywhere on $\mathbf{R}^2$. Hence they cover, as a special case, the prominent example of a stochastic volatility model completed with a European call (or put) option.


Introduction
Let (Ω, F, P) be a probability space, consider a fixed time horizon equal to one and let F = (F_t)_{t∈[0,1]} be a filtration satisfying the usual conditions, with F_0 containing only Ω and the null sets of P and with F_1 = F. Let S = (S^j_t) be a d-dimensional stochastic process describing the evolution of the discounted prices of liquidly traded securities in a financial market, with the property that S is a (vector) martingale under the measure P. The model is said to be complete if any contingent claim payoff can be obtained as the terminal value of a self-financing trading strategy. The 2nd fundamental theorem of asset pricing (cf. [Harrison and Pliska, 1983]) allows us to restate the completeness property in purely mathematical terms as follows: every local martingale M = (M_t) admits an integral representation with respect to S, that is,
$$M_t = M_0 + \int_0^t H_u\,dS_u, \quad t \in [0,1], \tag{1}$$
for some predictable S-integrable process H = (H^j_t). The 2nd fundamental theorem of asset pricing also asserts that the above statements are equivalent to P being the unique martingale measure for S in the class of equivalent measures.
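The equivalence between replicability and uniqueness of the martingale measure has a transparent finite illustration. The sketch below uses a one-period binomial model with illustrative numbers (not part of the paper's setup): with two states and one risky asset plus cash, the martingale measure q is unique and every payoff is attained by a self-financing portfolio whose cost equals the martingale-measure expectation.

```python
import numpy as np

# One-period binomial illustration of the 2nd fundamental theorem:
# two states, one risky asset (plus cash), unique martingale measure q,
# and every payoff is replicable.  Illustrative numbers only.
S0, u, d = 1.0, 1.2, 0.8                      # initial price; terminal values S0*u, S0*d
q = (S0 - S0 * d) / (S0 * u - S0 * d)          # unique measure making S a martingale
assert 0 < q < 1

def replicate(payoff_up, payoff_down):
    """Solve for (cash b, shares h) with b + h*S1 = payoff in both states."""
    A = np.array([[1.0, S0 * u], [1.0, S0 * d]])
    b, h = np.linalg.solve(A, [payoff_up, payoff_down])
    return b, h

# Replicate a call with strike 1.0; the cost of the replicating portfolio
# equals the expectation of the payoff under q (completeness).
b, h = replicate(max(u - 1.0, 0.0), max(d - 1.0, 0.0))
cost = b + h * S0
price = q * max(u - 1.0, 0.0) + (1 - q) * max(d - 1.0, 0.0)
assert abs(cost - price) < 1e-9
```

The same logic scales to the continuous-time statement: uniqueness of the martingale measure corresponds to the integral representation property (1).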
The process S may for example describe the prices of stocks or option contracts, which nowadays are often traded as liquidly as their underlyings. Depending on the application one has in mind the construction of S differs significantly. In general there are three possibilities to consider. Given its initial value, S may be defined in a forward form, in terms of its predictable characteristics under the measure P. In this case the verification of the completeness property is straightforward. For example, if S is a driftless diffusion process under the measure P with volatility matrix-process σ = (σ t ), then the market is complete if and only if σ has full rank dP × dt almost surely (cf. [Karatzas and Shreve, 1998, Theorem 6.6]). Alternatively, S can be defined in a backward form, as the conditional expectation under P of its given terminal value. Finally, some components of S may be defined in a forward form and others in a backward form leading to a forward-backward setup.
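The forward-form completeness criterion just quoted is a pointwise rank condition on the volatility matrix, which can be checked mechanically. A minimal sketch (the numerical matrices are illustrative, not from the paper):

```python
import numpy as np

# Completeness check for a driftless diffusion dS_t = sigma_t dW_t:
# the market is complete iff sigma_t has full rank dP x dt almost surely.
# Illustrative constant volatility matrices only.
sigma_complete = np.array([[0.20, 0.05],
                           [0.00, 0.30]])    # two assets span both Brownian motions
sigma_incomplete = np.array([[0.20, 0.00],
                             [0.40, 0.00]])  # second Brownian motion not tradable

assert np.linalg.matrix_rank(sigma_complete) == 2     # complete
assert np.linalg.matrix_rank(sigma_incomplete) == 1   # incomplete
```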
In the present paper we assume the last setup above and focus on the case of two dimensions, that is, d = 2. In particular, let S^F = (S^F_t) and S^B = (S^B_t) be scalar-valued martingales under P. One may view the forward component as the discounted price of a primitive asset and the backward component as that of a derivative security. That is, given a process σ^F = (σ^{F,j}_t)_{j=1,2}, a P-Brownian motion W = (W^j_t)_{j=1,2} and a random variable ψ, the processes S^F and S^B are defined by
$$S^F_t = S^F_0 + \int_0^t \sigma^F_u\,dW_u \quad\text{and}\quad S^B_t = \mathbb{E}[\psi \mid \mathcal{F}_t], \quad t \in [0,1].$$
We are looking for easily verifiable conditions on σ^F and ψ guaranteeing the integral representation property of all P-martingales with respect to S and hence the completeness of the market in which, in addition to the primitive asset S^F, also the derivative contract S^B can be traded.
In principle the proof of our main result, Theorem 1 below, generalizes to the d-dimensional case. The reason we present only the two-dimensional case is twofold: first, the structural conditions on the coefficients σ^F and ψ become very complex in higher dimensions; second, using our current methods an extension to higher dimensions would require additional regularity of ψ and would, in particular, exclude the payoff functions of call and put options, which are only once weakly differentiable.
For our analysis we assume that σ^F and ψ are specified in terms of a solution X to a two-dimensional stochastic differential equation with drift vector b = b(t, x) and volatility matrix σ = σ(t, x). With respect to the space variable our conditions are quite classical: b = b(t, ·) is once continuously differentiable and σ = σ(t, ·) is twice continuously differentiable and possesses a bounded inverse; further, the functions themselves and their derivatives are bounded. With respect to time our conditions are quite exacting: b = b(·, x) and σ = σ(·, x) have to be real analytic on (0, 1).
Our results extend and rigorously prove ideas on the completion of markets with derivative securities, first formulated in [Romano and Touzi, 1997] and [Davis and Oblój, 2002].
The paper [Romano and Touzi, 1997] is concerned with the specific case of stochastic volatility models. The main result of that paper requires the derivative payoff function to be a convex function of the stock price only and, unless it is given by the special case of a European call or put option, to be twice continuously differentiable. Perhaps most limiting from the point of view of applicability, it is required that the volatility risk premium be such that the drift coefficient of the volatility process under the equivalent martingale measure does not depend on the stock price. Moreover, the correlation between the asset price and its (stochastic) volatility process, as well as the volatility of the volatility process, must not depend on the stock price.
In [Davis and Oblój, 2002] the setup is not restricted to the two-dimensional case. However, the key conditions in this paper are placed not on model primitives but on the conditional expectation $\mathbb{E}[(S^F_1, \psi)^\star \mid \mathcal{F}_t] = v(t, X_t)$. In particular, v is assumed to be (jointly) real analytic in the time and space variables and, in the main theorem of the paper, the Jacobian matrix (with respect to x) of v = v(t, x) is assumed to be nondegenerate on some open subset of (0, 1) × R^2.
Our work is intimately related to recent results on the integral representation of martingales, which were motivated by the problem of the endogenous completeness of continuous-time Radner equilibria in financial economics (cf. [Kramkov and Predoiu, 2014, Hugonnier et al., 2012, Riedel and Herzberg, 2013, Anderson and Raimondo, 2008]). The differences between these results and ours are twofold: first, in a Radner equilibrium setting S is specified purely in a backward form and the setup does not accommodate a forward component; second, and perhaps most important, if S^F_1 = f(X_1) and ψ = g(X_1) the aforementioned results require the Jacobian matrix of (f, g) to have full rank at least on some open subset of R^2. This condition is not satisfied even in the most pivotal example of the completion of a stochastic volatility model with a European call or put option, where the corresponding Jacobian matrix is singular everywhere on R^2. We replace this requirement with a novel condition involving, aside from f and g, also the coefficients b and σ of the state process, and which is satisfied in the aforementioned example of a typical stochastic volatility model being completed with a European call option (see Section 6).
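The everywhere-singular Jacobian in the call-option example can be checked directly. In the sketch below the price map f(x) = exp(x_1) and the call payoff g(x) = (exp(x_1) − K)^+ depend on the first state variable only (the exponential parametrization and the strike K are illustrative choices), so the second column of the Jacobian of (f, g) vanishes identically:

```python
import numpy as np

# In a stochastic volatility model the stock price S^F_1 = f(X_1) = exp(x1)
# and a call payoff psi = g(X_1) = (exp(x1) - K)^+ depend on x1 only, so the
# Jacobian of (f, g) has a zero second column and is singular at EVERY point.
K = 1.0

def jacobian(x1, x2):
    f_x = np.array([np.exp(x1), 0.0])                      # gradient of f
    g_x = np.array([np.exp(x1) * (np.exp(x1) > K), 0.0])   # weak gradient of g
    return np.array([f_x, g_x])

for x1, x2 in [(-1.0, 0.3), (0.5, -2.0), (2.0, 1.0)]:
    assert abs(np.linalg.det(jacobian(x1, x2))) < 1e-12    # singular everywhere
```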
At first sight it may appear that the most restrictive condition, limiting the applicability of our result, is the boundedness assumption on the coefficients of the diffusion X. This assumption stems from the theory of elliptic and parabolic partial differential equations, which plays an essential part in our proofs. However, we demonstrate in Section 6 how we can still accommodate popular models from financial mathematics such as geometric Brownian motion or mean-reverting processes by means of suitable changes of variables.
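The change-of-variables device for geometric Brownian motion can be sketched numerically: the coefficients of dP_t = P_t(μ dt + s dW_t) are unbounded, but X_t = log P_t has constant (hence bounded) drift μ − s²/2 and volatility s. The constants below are illustrative, and a generous tolerance accounts for the Euler discretization error:

```python
import numpy as np

# Geometric Brownian motion has unbounded coefficients, but its logarithm
# X_t = log(P_t) has constant drift mu - s**2/2 and constant volatility s.
# Compare Euler-Maruyama on P with exp of the exact log-price X.
rng = np.random.default_rng(0)                 # fixed seed for reproducibility
mu, s, P0, n = 0.05, 0.2, 1.0, 50_000
dt = 1.0 / n
dW = rng.normal(0.0, np.sqrt(dt), n)

X1 = np.log(P0) + (mu - 0.5 * s**2) * 1.0 + s * dW.sum()   # exact X at t = 1

P = P0
for dw in dW:                                  # Euler scheme on P itself
    P *= 1.0 + mu * dt + s * dw
assert abs(P - np.exp(X1)) < 0.05              # agreement up to discretization error
```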
Notation and basic concepts
Let X be a Banach space with norm ‖·‖. In the sequel we will frequently use maps h : [0, 1] → X which are Hölder continuous on [0, 1], that is, there exist constants N > 0 and δ > 0 such that
$$\|h(t) - h(s)\| \le N|t - s|^{\delta}, \quad s, t \in [0, 1],$$
and analytic on (0, 1), that is, for every u ∈ (0, 1) there exist ε(u) > 0 and a family {A_n(u)} of elements of X such that
$$h(t) = \sum_{n=0}^{\infty} A_n(u)(t - u)^n, \quad |t - u| < \varepsilon(u).$$
For multi-indices α = (α_1, . . . , α_d) of nonnegative integers we use the notational convention $|\alpha| = \alpha_1 + \dots + \alpha_d$.
Let U ⊂ R^d. Throughout the text the following spaces will be used:
C^k(U): the Banach space of all k-times continuously differentiable, real-valued functions h on U with the norm $\|h\|_{C^k(U)} = \sum_{|\alpha| \le k} \sup_{x \in U} |\partial^{\alpha} h(x)|$.
Recall that a locally integrable function h on U is weakly differentiable if for every index j = 1, . . . , d there exists a locally integrable function g_j such that the identity
$$\int_U h\,\varphi_{x_j}\,dx = -\int_U g_j\,\varphi\,dx$$
holds for every function φ belonging to C^∞_0(U), the space of infinitely differentiable functions with compact support in U. In this case we define $h_{x_j} \triangleq g_j$. Weak derivatives of higher orders are defined recursively.
As is common, for p ≥ 1 we denote by p′ the conjugate exponent of p, defined by 1/p + 1/p′ = 1. With these definitions in mind we define the following spaces, where ⟨·, ·⟩ denotes the inner product in L² and u_α ∈ L_p(U), each equipped with its natural norm.
For T ⊂ R we also define W^{r,m+2r}_p(T × U) (for r, m ∈ {0, 1, . . .} and p ≥ 1): the Banach space of functions h = h(t, x) that are r-times weakly differentiable in t and (m + 2r)-times weakly differentiable in x, equipped with the natural L_p-norm of these derivatives.
Our notation is in agreement with standard notation from linear algebra. Given two vectors x, y in R^d, xy denotes the scalar product and $|x| \triangleq \sqrt{xx}$. Given a matrix M ∈ R^{m×n} with m rows and n columns, Mx denotes its product with the column vector x, M^⋆ its transpose and $\|M\|_F \triangleq \sqrt{\operatorname{tr}(MM^{\star})}$. For an n × n matrix M we denote the determinant of M either by |M| or by det M.
Let l = (l_1, . . . , l_k) denote a multi-index complying with the condition 1 ≤ l_1 < . . . < l_k ≤ d. Given n × n matrices M, C_1, . . . , C_k, we write M(l; C_1, . . . , C_k) for the matrix that is obtained from M by replacing the l_p-th column of M by the l_p-th column of C_p, for p = 1, . . . , k, while keeping the remaining columns unchanged; if k > n, $M(l; C_1, \dots, C_k) \triangleq 0$.
Let A be an operator on a Banach space X and M an n × n matrix such that m_{ij} is in the domain of A, i, j = 1, . . . , n. We write AM for the entrywise application of the operator A; to wit, $(AM)_{ij} \triangleq A m_{ij}$.

For suitably regular functions f, g : R^2 → R we write J[f, g] for the Jacobian matrix of the map (f, g), that is, the 2 × 2 matrix whose rows are the gradients of f and g with respect to x.
Throughout the text N > 0 denotes a constant, the value of which may vary from line to line.
Remark 1. Note that (3) is equivalent to the uniform ellipticity of the covariance matrix-function $a \triangleq \sigma\sigma^{\star}$.
Let X_0 ∈ R^2. The assumptions on b and σ in (A1) imply that, given a complete, filtered probability space (Ω, F_1, F = (F_t)_{t∈[0,1]}, P) on which a Brownian motion W with values in R^2 is defined, there exists a unique stochastic process X, also taking values in R^2, such that
$$X_t = X_0 + \int_0^t b(u, X_u)\,du + \int_0^t \sigma(u, X_u)\,dW_u, \quad t \in [0, 1]$$
(cf. [Friedman, 1975, Theorem 2.2, Ch. 5, p. 104]). Here the filtration F is assumed to be the augmentation of the Brownian filtration, that is, $\mathcal{F}_t = \sigma(\mathcal{F}^W_t \cup \mathcal{N})$, where F^W_t denotes the σ-field generated by (W_u)_{u∈[0,t]} and N denotes the collection of all P-null sets.
Let the measurable function r : [0, 1] × R^2 → R satisfy the following:
(A2) The map t → r(t, ·) is Hölder continuous as a map of [0, 1] to C, continuous as a map of [0, 1] to C^1 and analytic as a map of (0, 1) to C. The function r is nonnegative: r ≥ 0.
Let the measurable function f = f(t, x) : [0, 1] × R^2 → R be three times weakly differentiable with respect to x and assume that there exists a constant N > 0 such that, for j, k, l = 1, 2, the growth and regularity bounds of condition (A3) hold. Recall that $a \triangleq \sigma\sigma^{\star}$ is the covariance function of X. We denote by L^X(t), t ∈ [0, 1], the infinitesimal generator of the process X:
$$L^X(t) = \frac{1}{2}\sum_{j,k=1}^{2} a_{jk}(t, \cdot)\,\partial^2_{x_j x_k} + \sum_{j=1}^{2} b_j(t, \cdot)\,\partial_{x_j}.$$
Let the measurable function g = g(x) : R^2 → R be once weakly differentiable and assume that there exists a constant N > 0 such that the growth bound of (A3) on the weak derivatives of g holds. Assume further:
(A4) Either the Jacobian matrix J[f, g](1, ·) has full rank almost everywhere on R^2 or, for every bounded, open set K in R^2, there exists a function ϕ = ϕ(x) belonging to W^1_{p,0}(K), for some p ≥ 1, such that B_K[g, ϕ; 1] = 0 and the accompanying nondegeneracy condition in (A4) holds.
Given the above definitions we define the scalar-valued random variable ψ in terms of g, r and the process X. The main result of the paper is the following.
Theorem 1 (Forward-Backward Martingale Representation). Suppose that (A1), (A2), (A3) and (A4) hold. Then the solution (S^F, S^B, Z) to the forward-backward stochastic differential equation (5) is well-defined. Moreover, every local martingale M under P is a stochastic integral with respect to the two-dimensional P-martingale S = (S^F_t, S^B_t), that is, (1) holds, and the market model is complete under P.
Remark 2. If the function g = g(x) has slightly better regularity we may interpret the structural condition stated in (A4) in a classical sense, by rewriting it in terms of an associated linear differential operator.
The proof of Theorem 1 is given in Section 5 and relies on specific smoothness and integrability properties of the solution to a parabolic equation, which we obtain in Section 3, and on the invertibility of a Jacobian matrix, which we study in Section 4.

Regularity of the Solution to the Associated Parabolic Equation
For (t, x) ∈ [0, 1] × R^2, consider an elliptic operator G(t), defined in (6), whose coefficients a_jk, b_j, c : [0, 1] × R^2 → R are measurable functions and satisfy:
(B1) The maps t → a_jk(t, ·), t → b_j(t, ·) and t → c(t, ·) of [0, 1] to C are Hölder continuous and their restrictions to (0, 1) are analytic. The map t → a_jk(t, ·) is continuous as a map of [0, 1] to C^2 and the maps t → b_j(t, ·), t → c(t, ·) are continuous as maps of [0, 1] to C^1. The matrix a is symmetric, a_jk = a_kj, and uniformly elliptic: there exists N > 0 such that
$$\sum_{j,k=1}^{2} a_{jk}(t, x)\lambda_j\lambda_k \ge \frac{1}{N}|\lambda|^2, \quad \lambda \in \mathbb{R}^2,$$
and the function c is nonpositive: c ≤ 0.
Let g = g(x) : R^2 → R be a measurable function such that for some p > 1:
(B2) the function g belongs to W^1_p.
Theorem 2. Suppose that conditions (B1) and (B2) hold. Then there exists a unique measurable function v = v(t, x) solving the Cauchy problem (7)–(8); in particular, t → v(t, ·) is an analytic map of (0, 1) to W^2_p and v(1, ·) = g.
Proof. By assumption (B1) we know that for each t ∈ [0, 1] and j, k = 1, 2, the function a jk (t, ·) is in C 2 . In particular, the first-order partial derivatives of a jk with respect to x are bounded and therefore the matrix a is uniformly continuous with respect to x. Under the assumptions (B1) and (B2) the assertions of items one and two are immediately obtained upon making the time change t → 1 − t in Theorem 3.1 in [Kramkov and Predoiu, 2014].
In addition Theorem 3.1 in [Kramkov and Predoiu, 2014] tells us that t → v(t, ·) is a continuously differentiable map of [0, 1) to L p and a continuous map of [0, 1) to W 2 p , which implies that v = v(t, x) belongs to W 1,2 p ([0, 1)×R 2 ). Therefore, given the symmetry and the uniform ellipticity of the matrix-function a = a(t, x), the uniform continuity of a(t, ·), the fact that each function a jk (t, ·), b j (t, ·), c(t, ·) belongs to C 1 and the nonnegativity of c = c(t, x), we may use Corollary 5.2.4 in [Krylov, 2008] to deduce that v = v(t, x) in fact belongs to W 1,3 p ([0, 1) × R 2 ). The regularity in items three and four follows immediately.
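The backward Cauchy problem underlying Theorem 2 can be illustrated with the simplest possible choice of coefficients. The sketch below takes a = 1, b = c = 0 in one space dimension (illustrative, not the operator of the paper): for v_t + ½ v_xx = 0 with terminal data g(x) = x², the exact solution is v(t, x) = x² + (1 − t), and an explicit finite-difference scheme marching backward from t = 1 reproduces it.

```python
import numpy as np

# Backward Cauchy problem v_t + 0.5 * v_xx = 0 on (0,1) x R, v(1, x) = g(x),
# with g(x) = x**2; exact solution v(t, x) = x**2 + (1 - t).
# Explicit scheme, stable since 0.5 * dt / dx**2 = 0.25 <= 0.5.
dx, dt = 0.1, 0.005
x = np.linspace(-25.0, 25.0, 501)
v = x**2                                   # terminal condition v(1, .)
for _ in range(200):                       # march from t = 1 down to t = 0
    v[1:-1] = v[1:-1] + 0.5 * dt * (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
center = len(x) // 2                       # grid point x = 0; the frozen boundary
assert abs(v[center] - 1.0) < 1e-6         # cannot reach it in 200 steps
```

The domain is wide enough that the (crudely frozen) boundary values cannot influence the center point within 200 time steps, so the check is exact up to floating-point error.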
In the next section we will require the following corollary of Theorem 2, where instead of (B2) we assume that the measurable function g = g(x) is once weakly differentiable and has the following property:
(B3) There exists a constant N ≥ 0 such that $|g_{x_j}(x)| \le Ne^{N|x|}$, j = 1, 2, for almost every x ∈ R^2.
Fix a function φ = φ(x) : R^2 → R which satisfies (9): φ ∈ C^∞(R^2) and φ(x) = |x| when |x| ≥ 1.
Proof. From Assumption (B3) we deduce the existence of a constant M > 0 such that $|g_{x_j}(x)| \le Me^{M|x|}$, j = 1, 2, and, therefore, such that $|g(x)| \le |x|Me^{M|x|} + M$, x ∈ R^2.
One now easily verifies that, for N > M and φ = φ(x) satisfying (9), $\|e^{-N\phi}g\|_{W^1_p} < \infty$ for every p ≥ 1 and hence that $e^{-N\phi}g \in W^1_p$, p ≥ 1.
Hereafter we choose the constant N ≥ 0 from (B3) to also satisfy N > M .
Let C ≥ 0 be a constant and define the functions $\tilde b_j = \tilde b_j(t, x)$ and $\tilde c = \tilde c(t, x)$ so that, for t ∈ [0, 1] and u ∈ C^∞((0, 1) × R^2), the displayed identity holds. Given the properties of the function φ asserted in (9), for C large enough the coefficients $\tilde b_j$ and $\tilde c$ satisfy the same conditions as b_j and c in (B1). Since it follows from (10) that also $e^{-N\phi+C}g$ belongs to W^1_p for every p ≥ 1, we deduce from Theorem 2 the existence of a measurable function $\tilde v = \tilde v(t, x)$ which, for every p > 1, complies with items one to four of Theorem 2 and solves the Cauchy problem (11)–(12) with terminal condition $\tilde v(1, \cdot) = e^{-N\phi+C}g$.
For p > 2, by Sobolev's embedding theorem, the continuity of the map t →ṽ(t, ·) in W 1 p implies its continuity in C. It follows that the functionṽ =ṽ(t, x) is continuous on [0, 1] × R 2 .
Defining $v \triangleq e^{N\phi - Ct}\tilde v$, we observe that $\tilde v$ solves (11) and (12) if and only if v solves the Cauchy problem (7) and (8). For p > 1, the regularity of $\tilde v = e^{-N\phi + Ct}v$ implies items one to four in the corollary. The proof is completed by noting that the case p = 1 follows trivially from the case p > 1 by taking the constant N slightly larger.
Define $v_j \triangleq \partial_{x_j} v$, j = 1, 2, as in (13), and consider the elliptic operator G_j(t) defined in (14). Then we obtain the following corollary, which will be needed in the next section.
Corollary 2. Suppose that conditions (B1) and (B3) hold. Let v = v(t, x) be the function generated by Corollary 1 and let v_j be defined as in (13). Then v_j = v_j(t, x) solves the nonhomogeneous partial differential equation (15).
Proof. From Corollary 1 we know that the function v = v(t, x) is three times weakly differentiable with respect to x and that its derivative with respect to t is once weakly differentiable with respect to x. Given condition (B1) we also know that the coefficients of the operator G are once continuously differentiable with respect to x. Hence we may differentiate the parabolic partial differential equation (7) with respect to x_j, j = 1, 2, which shows that v_j = v_j(t, x) satisfies (15).

Invertibility of the Jacobian Matrix
Let a = a(t, x), b = b(t, x), c = c(t, x) and g = g(x) be the coefficients from Section 3. Let the measurable function f = f(t, x) : [0, 1] × R^2 → R be three times weakly differentiable with respect to x and assume that there exists a constant N ≥ 0 such that, for j, k, l = 1, 2, the growth and regularity bounds of condition (B4) hold. We define the functions A = A(t, x), B = B(t, x) and C = C(t, x) on [0, 1] × R^2 for j, k = 1, 2.
For suitably regular functions v, ϕ : R^2 → R, for an open, bounded set K in R^2 and for t ∈ [0, 1], we define the pairing A_K[v, ϕ; t]. We assume that the following condition is satisfied:
(B5) Either the Jacobian matrix J[f, g](1, ·) has full rank almost everywhere on R^2 or, for every open, bounded set K in R^2, there exists a test function ϕ = ϕ(x) belonging to W^1_{p,0}(K), for some p ≥ 1, such that $A_K[g, \varphi; 1] \ne 0$.
The following theorem is the main result of this section and will eventually allow us to prove the martingale representation stated in Theorem 1. Before we can prove Theorem 3 we first need to establish several lemmas.
Let X and Y be Banach spaces, E an open subset of X and consider a map h : E → Y. If it exists, we denote by $D^k h_x$ the k-th Fréchet derivative of h at the point x ∈ E; as is well known, this constitutes a k-linear map on the k-fold product X × · · · × X. Accordingly, for x_1, . . . , x_k ∈ X we denote by $D^k h_x(x_1, \dots, x_k)$ the k-th Fréchet differential.
Lemma 1. Given matrices M, C, C_1, C_2 ∈ R^{2×2}, the first and second order Fréchet differentials of the determinant map at M are given by
$$D\det{}_M(C) = |M((1); C)| + |M((2); C)|, \qquad D^2\det{}_M(C_1, C_2) = |M((1,2); C_1, C_2)| + |M((1,2); C_2, C_1)|.$$
Proof. The expressions are special cases of equations (4) and (6) in [Bhatia and Jain, 2009].
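The column-replacement formula for the first Fréchet differential of the determinant in Lemma 1 can be checked numerically against a finite difference: since det(M + tC) is quadratic in t for 2 × 2 matrices, a central difference recovers the differential exactly up to floating-point error. A minimal sketch with random illustrative matrices:

```python
import numpy as np

# Check: D det_M(C) = |M with col 1 of C| + |M with col 2 of C|  (2x2 case),
# compared against the central difference of t -> det(M + t*C) at t = 0.
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2))
C = rng.normal(size=(2, 2))

def d_det(M, C):
    M1 = M.copy(); M1[:, 0] = C[:, 0]   # replace first column by that of C
    M2 = M.copy(); M2[:, 1] = C[:, 1]   # replace second column by that of C
    return np.linalg.det(M1) + np.linalg.det(M2)

h = 1e-6
fd = (np.linalg.det(M + h * C) - np.linalg.det(M - h * C)) / (2 * h)
assert abs(fd - d_det(M, C)) < 1e-6
```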
Define the linear partial differential operator P(t) as in the statement of Theorem 3.
Lemma 2. Let f = f(t, x), v = v(t, x) : [0, 1] × R^2 → R be measurable functions which on (0, 1) × R^2 are once weakly differentiable with respect to t, three times weakly differentiable with respect to x and once weakly differentiable with respect to t and x, and let G(t) and G_j(t) be the operators defined in (6) and (14), respectively. Define $v_j \triangleq \partial_{x_j} v$, j = 1, 2, and assume that v_j satisfies the partial differential equation (16). Then the determinant function w = w(t, x) defined on [0, 1] × R^2 by $w \triangleq |J[f, v]|$ satisfies the nonhomogeneous partial differential equation (17).
Proof. Given our differentiability hypothesis on f = f(t, x) and v = v(t, x) we may differentiate the determinant function w = w(t, x) with respect to t. Let us abbreviate, throughout the proof of this lemma, $J \triangleq J[f, v]$. A simple application of the chain rule from Fréchet differential calculus (cf. [Bhatia, 1997, Chapter X.4]) and the fact that v_j satisfies the partial differential equation (16) yields an expression for ∂w/∂t. The direct computation of G(t)w, together with the identity $2c\det J = c\,D\det{}_J(J)$ and the linearity of the Fréchet derivative, shows that we may replace the term $-D\det{}_J(G(t)J)$ above with
$$\sum_{j,k=1}^{2} a_{jk}\, D^2\det{}_J\Big(\frac{\partial J}{\partial x_j}, \frac{\partial J}{\partial x_k}\Big) - (G(t) + c)w, \quad t \in (0, 1).$$
By the explicit formulae for the first and second order Fréchet derivatives of the determinant map derived in Lemma 1 and the symmetry of the matrix-function a, we obtain, after some computations, an explicit expression; collecting the coefficients of $\partial^2_{x_j x_k} v$, $\partial_{x_j} v$ and v yields the result.
Lemma 3. Let γ_j, η : [0, 1] × R^2 → R, j = 1, 2, be measurable functions such that, for p > 1, the maps t → γ_j(t, ·), t → η(t, ·) of [0, 1] to L_{p,loc} are continuous. Let K be an open, bounded set in R^2 and ϕ = ϕ(x) a test function belonging to $W^1_{p',0}(K)$. Then, for each t ∈ [0, 1], the pairing $\tilde A_K(\varphi; t)$ is a bounded, linear functional on $W^1_{p',0}(K)$. Moreover, the map $t \mapsto \tilde A_K(\cdot; t)$ is continuous as a map of [0, 1] to $W^{-1}_p(K)$.
Proof. By the triangle inequality and the Hölder inequality we obtain, for each t ∈ [0, 1], an estimate which implies the boundedness of the linear functional $\tilde A_K(\cdot; t)$.
To prove the continuity of the map $t \mapsto \tilde A_K(\cdot; t)$ of [0, 1] to $W^{-1}_p(K)$, observe that, for each t ∈ [0, 1], $\tilde A_K(\cdot; t)$ is in the dual space of $W^1_{p',0}(K)$. We recall that the dual space of $W^1_{p',0}(K)$ is isometrically isomorphic to $W^{-1}_p(K)$. The desired continuity of the map $t \mapsto \tilde A_K(\cdot; t)$ then follows from the continuity of the maps t → γ_j(t, ·), t → η(t, ·) of [0, 1] to L_{p,loc}.
Proof. For the proof of Theorem 3 we define the determinant function $w \triangleq |J[f, v]|$. The claim of the theorem is true if and only if the zero set of w has Lebesgue measure zero on [0, 1] × R^2. This is equivalent to the set H of all x ∈ R^2 for which the section {t ∈ (0, 1) : w(t, x) = 0} has positive Lebesgue measure itself having Lebesgue measure zero on R^2.
From Corollary 1 and (B4) we deduce that, for every p ≥ 1, the map $t \mapsto e^{-N\phi(\cdot)}w(t, \cdot)$ is analytic as a map of (0, 1) to W^1_p. Moreover, by Sobolev's embedding theorems, for p > 2 it is also analytic as a map of (0, 1) to C. Suppose, for a contradiction, that H has positive Lebesgue measure. From the analyticity of t → w(t, ·) it follows that if x ∈ H then w(t, x) = 0 for all t ∈ (0, 1) and therefore that lim_{t↑1} w(t, x) = 0, x ∈ H.
We first prove the claim of the theorem assuming that J[f, g](1, x) has full rank almost everywhere on R^2. From Corollary 1 and (B4) we know that, for every p ≥ 1, the map $t \mapsto e^{-N\phi(\cdot)}w(t, \cdot)$ of [0, 1] to L_p is continuous. It follows that the map t → w(t, ·) of [0, 1] to L_{p,loc} is continuous and hence that, for all open, bounded sets K in R^2, w(t, ·) converges to w(1, ·) in L_p(K) as t ↑ 1. We deduce that w(1, ·) = |J[f, v]|(1, ·) = 0 almost everywhere on H. But by (B5) the matrix-function J[f, v](1, ·) = J[f, g](1, ·) has full rank almost everywhere on R^2, which contradicts the assumption that H has positive Lebesgue measure.
Let us now assume that for every open, bounded set K in R^2 there exists a test function ϕ = ϕ(x) belonging to $W^1_{p',0}(K)$ such that $A_K[g, \varphi; 1] \ne 0$. From Corollary 1 and (B4) we know that the functions f = f(t, x) and v = v(t, x) satisfy the differentiability hypothesis of Lemma 2, and from Corollary 2 that v_j satisfies the partial differential equation (16). It follows from (17) that if w(t, x) = 0 for all (t, x) ∈ (0, 1) × H, then also P(t)v = 0 for all (t, x) ∈ (0, 1) × H.

Proof of Theorem 1
From here onwards we adopt the notation introduced in Section 2 and assume that conditions (A1), (A2), (A3) and (A4) are in place.
We fix a function φ = φ(x) on R^2 satisfying (9) and recall that L^X(t), t ∈ [0, 1], is the infinitesimal generator of the process X.
Lemma 4. There exists a unique continuous function v = v(t, x) on [0, 1] × R^2 and a constant N ≥ 0 such that the following hold: 1. For every p ≥ 1, the map $t \mapsto e^{-N\phi}v(t, \cdot)$ enjoys the regularity properties of Theorem 2, and v(1, ·) = g.
Lemma 5. The martingale S^B is well-defined and has the representation (21). Moreover, for t ∈ (0, 1), the representation (22) holds.
Proof. Assume that the process S^B is actually defined by (21). From the continuity of v on [0, 1] × R^2 it follows that S^B is in fact a continuous process on [0, 1], and from the expression (20) for v(1, ·) that S^B_1 = ψ. Hence, to complete the proof it remains to show that S^B, given by (21), is a martingale under the measure P.
From Lemma 4 we know that the map t → e −N φ(·) v(t, ·) is analytic as a map of (0, 1) to W 2 p ; in particular, it is continuously differentiable. This allows us to use a variant of the Itô formula due to Krylov (cf. [Krylov, 1980, Section 2.10, Theorem 1]) and accounting for (19), we immediately obtain (22).
We have shown that S^B is a continuous local martingale. It only remains to verify the uniform integrability of the process. Recall that, for every p ≥ 1, the map $t \mapsto e^{-N\phi(\cdot)}v(t, \cdot)$ is continuous from [0, 1] to W^1_p. It follows from Sobolev's embedding theorem that, for p > 2, the same map is continuous from [0, 1] to C. Therefore, $|v(t, x)| \le e^{N(1+|x|)}$.
In particular, accounting for the growth properties of r = r(t, x), the process S^B is dominated by a fixed exponential function of $\sup_{t\in[0,1]}|X_t|$. As $\sup_{t\in[0,1]}|X_t|$ has all exponential moments, the martingale property of S^B follows.
The proof of Theorem 1 is now completed easily. Equations (5) and (22) show that S = (S^F, S^B) is a stochastic integral with respect to W. By the growth properties of r = r(t, x), f = f(t, x) and σ = σ(t, x) in (A1), (A2) and (A3) it is easily verified that also the continuous local martingale S^F = (S^F_t) is a true martingale. In view of (23) we obtain the desired representation of S. We recall that, by the Brownian integral representation property, every P-local martingale M is a stochastic integral with respect to W:
$$M_t = M_0 + \int_0^t \tilde H_u\,dW_u, \quad t \in [0, 1],$$
for some progressively measurable, square-integrable process $\tilde H = (\tilde H_t)$. Hence, in order to deduce the integral representation property (1) it remains to show that the matrix process of integrands relating S and W is invertible dP × dt-almost everywhere; this is precisely what Theorem 3 yields.

Example: A Class of Stochastic Volatility Models

In this section we apply our main result, Theorem 1, to prove the completeness of a financial market in which one stock with price process P = (P_t) and one call option with price process V = (V_t) are traded. The processes P and V are defined by the stochastic volatility dynamics (25), for constants Γ, α, m, r ∈ R with Γ > 0, r ≥ 0. In particular this covers the class of stochastic volatility models introduced in [Fouque et al., 2000, Equation (2.7), p. 43].
We are now ready to state the main result of this section.
Theorem 4. Suppose that condition (C1) is satisfied. Then the (P t , V t )-market defined by (25) is complete.
Remark 3. We draw attention to the fact that in (C1) the quite specific assumptions on the space regularity of the coefficients of (25) are necessary solely because we allow P to evolve according to a geometric Brownian motion and Y to have mean-reverting dynamics, both cases in which the coefficients are unbounded. This can be seen easily from the proof of Theorem 4 below. In the absence of this particular choice of dynamics the verification of the assumptions of Theorem 1 is much simpler.
Proof. Consider the stochastic processes X^1 and X^2 obtained from P and the volatility process by a suitable change of variables. By a simple application of Itô's formula we find that
$$dX^1_t = \Big(r - \tfrac{1}{2}\nu(m + e^{-\alpha t}X^2_t)^2\Big)\,dt + \nu(m + e^{-\alpha t}X^2_t)\,dW^1_t,$$
$$dX^2_t = -e^{\alpha t}\mu(e^{X^1_t}, m + e^{-\alpha t}X^2_t)\,dt + e^{\alpha t}\sigma(m + e^{-\alpha t}X^2_t)\,dW_t.$$
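The exponential change of variables used for the mean-reverting factor has a simple one-dimensional prototype that can be checked numerically. The sketch below uses a plain Ornstein–Uhlenbeck factor with illustrative constants (not the exact dynamics (25)): the map Z_t = e^{αt}(Y_t − m) removes the unbounded linear drift, leaving a driftless equation with bounded coefficients on [0, 1].

```python
import numpy as np

# OU factor dY_t = alpha*(m - Y_t) dt + beta dW_t has unbounded linear drift,
# but Z_t = exp(alpha*t)*(Y_t - m) satisfies dZ_t = exp(alpha*t)*beta dW_t.
# Euler check that the two descriptions agree pathwise at t = 1.
rng = np.random.default_rng(2)
alpha, m, beta, Y0, n = 1.5, 0.04, 0.3, 0.1, 20_000
dt = 1.0 / n
dW = rng.normal(0.0, np.sqrt(dt), n)
t = np.arange(n) * dt

Y = np.empty(n + 1); Y[0] = Y0
for k in range(n):                              # Euler on the OU equation
    Y[k + 1] = Y[k] + alpha * (m - Y[k]) * dt + beta * dW[k]

Z = (Y0 - m) + np.cumsum(np.exp(alpha * t) * beta * dW)   # Euler on dZ
Y_from_Z = m + np.exp(-alpha * 1.0) * Z[-1]                # invert the map at t = 1
assert abs(Y_from_Z - Y[-1]) < 1e-2
```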
Proof. To prove the analyticity assertion we have to show the existence of positive constants L, M > 0 such that, for every k,
$$\Big\|\frac{\partial^k h}{\partial t^k}(t, \cdot)\Big\| \le M\frac{k!}{L^k}. \tag{26}$$
By our differentiability hypothesis we may apply the formula of Faà di Bruno to obtain an expression for $\partial^k_t h$, where the sum is taken over all α_1, . . . , α_k such that α_1 + 2α_2 + . . . + kα_k = k. Using our estimates on the derivatives of g and f and Lemma 1.4.1 in [Krantz and Parks, 2002, p. 18] we estimate each term of this sum. Taking norms, the estimate (26) follows with M = D/(1 + εδ) and L = Rεδ/(1 + 2εδ).