Effective Approximation Methods for Constrained Utility Maximization with Drift Uncertainty

In this paper, we propose a novel and effective approximation method for finding the value function of general utility maximization problems with closed convex control constraints and partial information. Using the separation principle and the weak duality relation, we transform the stochastic maximum principle of the fully observable dual control problem into an equivalent error minimization stochastic control problem and find tight lower and upper bounds of the value function and its approximate value. Numerical examples show the accuracy and usefulness of the proposed method.


Introduction
There has been extensive research on utility maximization for continuous-time stochastic models, see Pham [19] for expositions. If all model parameters are known or can be observed, then one only needs to solve the optimization problem. However, if some model parameters are not observable, as in many financial applications, then one needs to extract information about the unknown parameters as well as to solve the optimization problem. Thanks to the separation principle and filtering theory, the unobservable model may first be transformed into an equivalent fully observable model, which is then solved using known optimization methods; see Björk et al. [4] for an excellent introduction to the topic.
For utility maximization with incomplete market information, the traded risky assets are usually assumed to have observable volatilities but unobservable growth rates, see Karatzas and Xue [13]. To uncover the unknown growth rate, one may compute its conditional expectation (the filter) given the updated market information (the filtration generated by the traded assets). It is in general difficult to compute the filter, as one needs to solve a stochastic partial differential equation (SPDE), but there are three important special cases in which the filtering equations can be expressed in finite-dimensional closed form. These filters are consequently easily implemented in practice and are found in a wide range of applications: the Kalman-Bucy filter for linear diffusions, the Wonham filter for finite-state Markov chains, and the Bayesian filter for random variables. Each of them has been widely studied in portfolio optimization, see, for example, Lakner [15] and Papanicolaou [18] for the linear diffusion model, Sass and Haussmann [21] and Eksi and Ku [7] for the continuous-time finite-state Markov chain model, and Ekstrom and Vaicenavicius [8] and Bismuth et al. [3] for the random variable model. All the aforementioned papers deal only with specific (power or logarithmic) utilities without control constraints.
To solve a stochastic optimal control problem, one may use the dynamic programming principle (DPP) to derive the HJB equation (a nonlinear partial differential equation (PDE)) for the value function in the Markovian case, or the convex duality and martingale representation for the optimal terminal state and the replicating control strategy in the convex case, or the stochastic maximum principle (SMP) to derive the fully coupled forward-backward stochastic differential equation (FBSDE) for the optimal state and adjoint processes; see Fleming and Soner [9], Karatzas and Shreve [12], and Yong and Zhou [23] for these methodologies. For utility maximization with closed convex control constraints, one may also use the dual control approach, see Li and Zheng [16], which is particularly effective when there is only one state variable for the wealth process and the control constraint set is a cone; then the dual HJB equation is a linear PDE and the dual value function has a Feynman-Kac representation, see Bian et al. [1].
It is considerably more difficult to solve utility maximization with partial information, even if the filtering equation has a finite-dimensional closed form. The key reason is that the model has at least two state variables, one for the wealth process and one for the correlated filter process. Both the primal HJB equation and the dual HJB equation are fully nonlinear PDEs, which is in sharp contrast with utility maximization with one state variable as in [1]. One may also view the model as having one state variable (wealth) satisfying a stochastic differential equation (SDE) with random coefficients (filters) and use the SMP to get the fully coupled nonlinear controlled FBSDE with the control satisfying the Hamiltonian condition, which is again highly difficult to solve. More discussions on portfolio optimization with partial information can be found in the literature; for example, Fouque et al. [10] perform perturbation analysis, Brennan [5] analyzes the effect of uncertainty about the mean return of the risky asset on investors' optimal strategies by comparing the myopic and "full information" allocations, and Bichuch and Guasoni [2] discuss the equilibrium price-dividend ratio and interest rate over time.
In this paper, for utility maximization with general utility functions, closed convex control constraints and partial information, instead of trying to find the value function and optimal control exactly, a highly difficult task as discussed above, we suggest a novel and effective computational method for finding tight lower and upper bounds of the value function. The idea is to transform the SMP of the equivalent fully observable dual control problem, which is difficult to solve as it is a system of constrained FBSDEs, into an equivalent form in forward controlled SDEs, and then further into an error minimization problem, which is relatively easy to solve as it is a combined scalar minimization and optimal stochastic control problem. It opens a way of finding a good approximate optimal solution, a feature not yet available in the literature for solving the constrained FBSDE from the SMP, and, thanks to the weak duality relation, tight lower and upper bounds for the value function and its approximate value.
The rest of the paper is organized as follows. In Sect. 2 we introduce general utility maximization with partial information and then use the separation principle and the innovation process to transform the problem into an equivalent fully observable problem. We also give three examples of finite-dimensional filters. In Sect. 3 we review three well known methods for solving the filtered utility maximization, including the primal and dual HJB equations, and SMP, and illustrate these methods with an example which has a closed-form solution. In Sect. 4 we propose an effective approximation method for finding the lower and upper bounds of the value function. In Sect. 5 we do some numerical tests for power utility and discuss the relevant information values. Section 6 concludes the paper. Appendix includes some equations and formulas used in the paper.

Model and Equivalent Filtered Problem
In this section we introduce the market models that will be employed and the optimal choice of investors with partial information under a closed convex constraint. As in the setup of Björk et al. [4], we consider the stochastic basis (Ω, F, F, P) for financial markets, where the filtration F = {F_t}_{0≤t≤T} satisfies the usual conditions. In what follows, we consider a market consisting of N + 1 securities. One of them is the risk-free bond account, whose price S_0(t) satisfies dS_0(t) = r(t)S_0(t)dt, and the others are risky securities with prices {S_n(t)}_{n=1}^N given by

dS_n(t) = S_n(t)[μ_n(t)dt + Σ_{m=1}^N σ_{nm}(t)dW_m(t)], n = 1, ..., N,    (1)

where {W(t), t ∈ [0, T]} is an R^N-valued standard Brownian motion, S(t) = (S_1(t), ..., S_N(t))^T, μ(t) = (μ_1(t), ..., μ_N(t))^T (a^T is the transpose of a) and σ(t) = (σ_{nm}(t))_{n,m=1}^N. Denote by F^S the filtration generated by the asset price processes S_1, ..., S_N. The interest rate {r(t)} and the volatility rates σ(t) are assumed to be uniformly bounded F^S_t-progressively measurable processes on Ω × [0, T]. We also assume that there exists k ∈ R_+ such that ξ^T σ(t)σ(t)^T ξ ≥ k|ξ|² for all ξ ∈ R^N and t ∈ [0, T]. This ensures that the matrices σ(t), σ(t)^T are invertible with uniformly bounded inverses, see Xu and Shreve [22]. The drift processes of the returns, μ(t), are assumed to be F-adapted.

Remark 1
Throughout the paper different information sets are assumed to be available for various market participants. The full information is given by the filtration F, while the observable information is given by the filtration F S , generated by the evolution of asset price processes S and we have F S ⊂ F. The completely observable case is obtained by assuming F = F S .

Remark 2
The assumption that r is F S -adapted (cf. [13]) implies that the interest rates can be known by observing the stock prices only, and F r ,S = F S .
Define a self-financing trading strategy as π = (π(t))_{t∈[0,T]}, an N-dimensional F^S-progressively measurable process, where π_i(t) denotes the fraction of wealth invested in stock i for i = 1, ..., N at time t ∈ [0, T]. The set of admissible portfolio strategies is given by

A := {π : π is F^S-progressively measurable and π(t) ∈ K for a.e. t ∈ [0, T], a.s.},

where K ⊆ R^N is a closed convex set containing 0. Some examples of K are discussed in Sass [20], which shows the generality of this assumption; common situations such as prohibited short selling and limited funds are included. Given any π ∈ A, the dynamics of the investor's total wealth X^π is given by

dX^π(t) = X^π(t)[(r(t) + π(t)^T(μ(t) − r(t)1))dt + π(t)^T σ(t)dW(t)], X^π(0) = x,    (2)

where x > 0 and 1 ∈ R^N has all unit entries. A pair (X^π, π) is said to be admissible if π ∈ A and X^π satisfies (2). The utility function U : (0, ∞) → R considered here is continuous, increasing and concave, with U(0) = 0. Define the value of the expected utility maximization problem as

V := sup_{π∈A} E[U(X^π(T))].    (3)

We assume that −∞ < V < +∞ to avoid trivialities. As the available information consists only of the securities' dynamics, we are facing a stochastic control problem with partial information. Any π* ∈ A satisfying E[U(X^{π*}(T))] = V is called an optimal control, and the corresponding X* = X^{π*} is the optimal state process. The above partially observable problem can be reduced to an equivalent problem under full information, as in [4]. Define the innovation process V̂ by

V̂(t) := W(t) + ∫_0^t σ(s)^{-1}(μ(s) − μ̂(s))ds,    (4)

where μ̂(t) := E[μ(t) | F^S_t] is the filter for μ(t). The following result holds.

Theorem 1 (Fujisaki et al. [11]) Assume that (σ(t)^{-1})_{t∈[0,T]} is uniformly bounded. Then V̂ is an F^S-adapted standard Brownian motion.

From the definition of V̂ in (4), the wealth process can be rewritten as

dX^π(t) = X^π(t)[(r(t) + π(t)^T(μ̂(t) − r(t)1))dt + π(t)^T σ(t)dV̂(t)], X^π(0) = x.    (5)

Under this transformation, the original partially observed problem has been transformed into an equivalent problem under full information.
After solving the reformulated completely observed problem, the partially observable case can be treated by embedding the filtering equations for the unobservable processes. As discussed in [4], for general hidden Markov models the infinite-dimensional state space of the Kolmogorov backward equation makes it impossible to give explicit solutions for the optimal control. We next give three examples of special but important filters: the Kalman-Bucy filter, the Wonham filter and the Bayesian filter.

Example 1 Linear stochastic differential equation

Suppose that μ(t) = H(t) in (1), where H satisfies the following linear SDE:

dH(t) = λ(H̄ − H(t))dt + σ_H dW_H(t),    (6)

where λ, σ_H ∈ R^{N×N}, H̄ ∈ R^N, and W_H is an R^N-valued standard Brownian motion, possibly correlated with W.
By the Kalman-Bucy filter, Ĥ(t) := E[H(t) | F^S_t] satisfies the SDE

dĤ(t) = λ(H̄ − Ĥ(t))dt + σ̂_R(t)dV̂(t),    (7)

where the filter gain σ̂_R(t) is determined by the conditional covariance Σ(t) := E[(H(t) − Ĥ(t))(H(t) − Ĥ(t))^T | F^S_t], which solves a deterministic matrix Riccati ODE (8). In specific cases, Ĥ(t) can be solved explicitly in terms of Σ(t), see Appendix A and [15]. Note that the above process H is not necessarily mean-reverting. If λ is a diagonal matrix with positive diagonal entries, then H is an N-dimensional mean-reverting Ornstein-Uhlenbeck (OU) process [15].
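As a concrete illustration, the scalar Kalman-Bucy filter can be simulated with a simple Euler scheme. This is a minimal sketch assuming a scalar mean-reverting OU drift with observation noise independent of the state noise (the general model also allows correlation); all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Scalar Kalman-Bucy filter for an OU drift, a minimal sketch.
# Hidden state:   dH = lam*(hbar - H) dt + sig_H dB
# Observation:    dY = H dt + sig dW   (B, W independent here)
# Filter:  dHhat = lam*(hbar - Hhat) dt + (P/sig**2)*(dY - Hhat dt)
#          dP/dt = -2*lam*P + sig_H**2 - P**2/sig**2
rng = np.random.default_rng(1)
lam, hbar, sig_H, sig = 2.0, 0.05, 0.1, 0.2
T, n = 1.0, 1000
h = T / n
H, Hhat, P = 0.1, 0.0, 0.04   # true state, filter mean, filter variance
for _ in range(n):
    dB, dW = rng.normal(0.0, np.sqrt(h), 2)
    dY = H * h + sig * dW                       # observation increment
    Hhat += lam * (hbar - Hhat) * h + (P / sig**2) * (dY - Hhat * h)
    P += (-2 * lam * P + sig_H**2 - P**2 / sig**2) * h
    H += lam * (hbar - H) * h + sig_H * dB      # propagate hidden state
print(round(Hhat, 4), round(P, 6))
```

The conditional variance P decays toward the positive root of the stationary Riccati equation, which is why freezing the filter at its equilibrium gain (as done in Appendix A of the paper) is a useful simplification.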

Example 2 Continuous-time finite state Markov chain process

Suppose that μ(t) is modulated by a continuous-time finite-state Markov chain. The vector of conditional state probabilities, called the Wonham filter, satisfies a finite-dimensional SDE driven by the innovation process V̂.

Example 3 Random variable

Suppose that μ(t) = B, where B is an unobservable random variable with prior law m. The prior law m represents the subjective beliefs of the investor about the likelihood of the different values that B might take. The volatility matrix σ is assumed to be constant. The knowledge of B is updated with new observable information. By [6], μ̂(t) := E[B | F^S_t], called the Bayesian filter, satisfies the following SDE:

dμ̂(t) = ψ(t, μ̂(t))dV̂(t),

where ψ is a matrix-valued function determined by m. In particular, if B ∼ N(b_0, Σ_0), a multivariate normal distribution with mean b_0 and covariance matrix Σ_0, then ψ(t, b) is independent of b, and the filtered process μ̂ is a Gaussian process measurable with respect to F^S, see [6] for details.
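In the scalar Gaussian case the Bayesian filter has a well-known closed form: for a constant unknown drift B with a normal prior, the posterior stays Gaussian, its precision grows linearly in time, and its mean is driven by the observation innovations. The following sketch simulates this; all parameter values are illustrative assumptions.

```python
import numpy as np

# Scalar Bayesian filter for a constant unknown drift B ~ N(b0, v0):
# the posterior variance is v(t) = 1/(1/v0 + t/sig**2) and the
# posterior mean follows dmu = (v(t)/sig**2)*(dY - mu dt).  This is
# the Kalman-Bucy filter with zero state dynamics.
rng = np.random.default_rng(6)
b0, v0, sig = 0.05, 0.04, 0.2
T, n = 5.0, 5000
h = T / n
B = rng.normal(b0, np.sqrt(v0))     # the hidden true drift
mu, t = b0, 0.0
for _ in range(n):
    v = 1.0 / (1.0 / v0 + t / sig**2)
    dY = B * h + sig * rng.normal(0.0, np.sqrt(h))
    mu += (v / sig**2) * (dY - mu * h)
    t += h
print(round(B, 4), round(mu, 4))    # posterior mean approaches B
```

Because the posterior variance decays like 1/t, the filter learns the constant drift slowly: halving the remaining uncertainty requires doubling the observation horizon.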
Using the innovation process V̂ in (4), we can transform a partially observed problem into a fully observed problem (3) with the wealth process X satisfying the SDE (5), where μ̂(t) is a filtered drift process. We assume from now on that μ̂(t) = μ(Ĥ(t)) for some deterministic function μ and that Ĥ satisfies an SDE of the form

dĤ(t) = b(t, Ĥ(t))dt + σ̂(t, Ĥ(t))dV̂(t).    (10)

For the Kalman and Bayesian filters we have μ(h) = h, and for the Wonham filter we have μ(h) = Mh with d = N. The corresponding value function is defined by, for 0 ≤ t ≤ T,

J(t, x, h) := sup_{π∈A} E[U(X^π(T)) | X^π(t) = x, Ĥ(t) = h].    (11)

Note that H(t) is a constant given F_t (full information) but a random variable given F^S_t (partial information). Such a distinction is important when we discuss the information value of H(t) and that of Ĥ(t), see Sect. 5.3.

Optimality Conditions
To solve the filtered utility maximization problem (11), we may use one of the following three methods: stochastic control, convex duality and stochastic maximum principle. We next give a brief discussion of these methods.

HJB Equation
After filtering, the stochastic control approach applies, and the value function satisfies the HJB equation

∂_t J + sup_{π∈K} L^π J = 0,    (12)

with the terminal condition J(T, x, h) = U(x), where L^π denotes the generator of (X, Ĥ) under the control π. Equation (12) is a nonlinear PDE with a control constraint, which is in general difficult to solve, even numerically. There is one important special case in which the nonlinear PDE (12) can be simplified into a semilinear PDE, and the solution then has a representation in terms of the solution of a BSDE. For the power utility case we have the following example.

Example 4 Suppose the utility function is U(x) = (1/β)x^β (power utility) with K = R^N and Ĥ satisfies SDE (7). Then we have the ansatz for J:

J(t, x, h) = (1/β)x^β exp(h^T A(t)h + B(t)^T h + C(t)),    (13)

and the optimal control π* is given in explicit feedback form (14), affine in h, where A(t) is an N × N symmetric matrix, B(t) an R^N-valued function and C(t) a scalar; the detailed ODEs for A, B, C are given in Appendix C. Since Ĥ(t) = h, π*(t) depends on the conditional expectation of H(t) given F^S_t, but not on the value of H(t) itself.

Dual HJB Equation
Define the dual function of U as

Ũ(y) := sup_{x>0} [U(x) − xy], y > 0.

Then Ũ is a continuous, decreasing and convex function on (0, ∞). The dual process is given by, for 0 ≤ t ≤ T,

dY(t) = −Y(t)[(r(t) + δ_K(v(t)))dt + (σ(t)^{-1}(μ̂(t) + v(t) − r(t)1))^T dV̂(t)], Y(0) = 1,

where δ_K(v) := sup_{π∈K}(−π^T v) is the support function of −K and v is the dual control process defined in the set

D := {v : v is F^S-progressively measurable with δ_K(v(t)) < ∞ for a.e. t ∈ [0, T]}.

The dual problem is

inf_{y>0, v∈D} {E[Ũ(yY^{(v)}(T))] + xy}.    (15)

Any minimizer (y*, v*) is called the optimal dual control and the corresponding Y^{(y*,v*)} the optimal dual process. For fixed y, the dual value function is defined by

J̃(t, y, h) := inf_{v∈D} E[Ũ(Y^{(v)}(T)) | Y^{(v)}(t) = y, Ĥ(t) = h].

Suppose K is a closed convex cone; then δ_K(v) = 0 for v ∈ K̃ and ∞ otherwise, where K̃ := {v : v^T π ≥ 0, ∀π ∈ K} is the positive polar cone of K. The dual value function J̃ then satisfies a dual HJB equation analogous to (12). Given the optimal dual control (y, v) from (15) and strong duality, the primal value function J(t, x, h) and the primal optimal control can be derived from the dual value function. The dual HJB equation is also a nonlinear PDE with a control constraint, for which explicit solutions are in general unavailable. Instead we focus on the following specific case.
Example 5 Assume the same setting as Example 4. Then K̃ = {0}, which gives the dual control v(t) = 0, and the dual value function J̃ can be computed explicitly. We have the ansatz for J̃:

J̃(t, y, h) = Ũ(y) exp(h^T Â(t)h + B̂(t)^T h + Ĉ(t)),    (16)

where Â(t) is an N × N symmetric matrix, B̂(t) an R^N vector and Ĉ(t) a scalar, and Â, B̂, Ĉ satisfy some ODEs, see Appendix D for these equations. Solving (15) at its minimum point y* and combining (13) with the duality relation (17), the optimal control is obtained in feedback form, which is exactly (14).

Stochastic Maximum Principle
For constrained utility maximization, one may also use the SMP. There is extensive literature on this; here we only cite the results from [16], where the reader can find more references and discussion. [16] gives the necessary and sufficient optimality conditions for both the primal and dual problems in terms of constrained controlled FBSDEs and characterizes the dynamic relations among the optimal control, the state process and the adjoint process. Under some regularity and integrability assumptions on the utility function and the stochastic processes, we have the following result.
Theorem 2 (cf. [16]) A feasible dual control (y, v) is optimal if and only if the dual process Y^{(y,v)} and the associated adjoint processes (P̂, Q̂) satisfy the constrained FBSDE conditions (18) and (19). The optimal control for the primal problem with initial wealth x_0 is then given in feedback form in terms of (P̂, Q̂).

We give an example to illustrate its use.

Example 6
Assume the same setting as Example 4; then ṽ(t) = 0. Solving the BSDE in (18) gives P̂ explicitly. By Theorem 3.10 of [16], the optimal strategy follows. Since φ(t) = (β/(1 − β))J̃(t, y, h) and J̃ has the ansatz (16), using Itô's formula and the Feynman-Kac formula we recover the optimal control. In general, it is difficult to obtain ϕ(t) explicitly, as it comes from the martingale representation theorem.

Effective Approximation Method
For general utilities with a closed convex constraint, one can write the HJB equation but cannot find an ansatz solution, even for power utility, due to the control constraint. For the same reason one cannot apply the martingale representation theorem to construct a control (a replicating portfolio which may violate the constraint), and therefore the standard martingale method cannot be used to solve the problem. The primal and dual value functions satisfy the following weak duality relation: for any x, y > 0 and feasible dual control v,

J(t, x, h) ≤ J̃(t, y, h) + xy.

The inequality shows that the dual formulation gives an upper bound for the primal value function. Instead of searching for the exact optimal controls, we explore tight lower and upper bounds of the value function for general cases. We show that this can be achieved with the dual FBSDE. Assume (y, v) is a feasible dual control. By Theorem 2, (y, v) is an optimal dual control if and only if (Y^{(y,v)}, P̂, Q̂) satisfies (18) and (19). The backward equation (19) can be rewritten as a forward SDE, and, also noting (10) for Ĥ, the dual FBSDE system (18) and (19) is then equivalent to the forward controlled SDE system (21) together with the vanishing of a weighted terminal matching error of the form E[w|P̂(T) + Ũ(yY(T))|² + ...], where w ∈ (0, 1) is a given constant. Here we have used the fact that δ_K is the support function of −K. Consider the optimal control problem (22) of minimizing this expected terminal error over (y, π, v). Note that (21) is a forward controlled SDE system with state variables Y, P̂, Ĥ and control variables π, v, and (22) is a standard control problem with an additional decision variable y > 0. If we can find (y, π, v) that makes the objective function zero, then we have solved (18) and (19). The key advantage of (22) over the dual FBSDE system (18) and (19) is that (22) is an optimal control problem to which known optimization techniques apply, in sharp contrast to the dual FBSDE system (18) and (19), which is a pure equation system whose solution is difficult to find.
In general, we may only be able to find (y, π, v) that makes the objective function close to zero, but not exactly zero; then (y, π, v) is not a solution to the dual FBSDE system (18) and (19), that is, not the optimal solution to the dual problem. However, (y, π, v) and (Y, P̂) still provide useful information about the value function; namely, we can get the lower and upper bounds

LB := E[U(X^π(T))] ≤ J(0, x, h) ≤ xy + E[Ũ(yY^{(v)}(T))] =: UB.    (23)

If the difference between LB and UB is small, we may approximate the value function J in (11) by the simple average (LB + UB)/2, with π a good approximate feasible control corresponding to the lower bound. This shows the usefulness of solving the control problem (22): one may find a good approximate solution with (22), which is essentially impossible if one tries to achieve the same with the dual FBSDE system (18) and (19). To find the approximate optimal solution of (22), we may proceed as follows: divide the interval [0, T] into n subintervals with grid points t_i = ih, i = 0, 1, ..., n, and step size h = T/n. On each interval [t_i, t_{i+1}), i = 0, 1, ..., n − 1, choose constant controls π_i and v_i that are F_{t_i}-measurable. Discretize (21) to get a discrete-time controlled system with Y_i denoting Y(t_i), etc.
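The weak-duality bounds above are straightforward to estimate by Monte Carlo. The following is a minimal sketch for power utility with one risky asset, constant coefficients, and no constraint, with the Merton ratio as the trial primal control and v = 0 as the dual control; all parameter values are illustrative assumptions, and in this unconstrained setting the two bounds should nearly coincide.

```python
import numpy as np

# Monte Carlo evaluation of the weak-duality bounds:
#   LB = E[U(X_pi(T))]                     for a feasible control pi,
#   UB = min_y { x0*y + E[Utilde(y*Y(T))] } for the dual control v = 0,
# with U(x) = x**beta / beta and Utilde its convex dual.
rng = np.random.default_rng(3)
r, mu, sigma, beta = 0.02, 0.08, 0.2, 0.5
x0, T, n, M = 1.0, 1.0, 100, 20000
h = T / n
theta = (mu - r) / sigma
pi = theta / (sigma * (1.0 - beta))        # Merton ratio trial control
dW = rng.normal(0.0, np.sqrt(h), (M, n))
X = np.full(M, x0)
Y = np.ones(M)
for i in range(n):
    X = X * (1.0 + (r + pi * (mu - r)) * h + pi * sigma * dW[:, i])
    Y = Y * (1.0 - r * h - theta * dW[:, i])
U = lambda x: x**beta / beta
Utilde = lambda y: (1.0 - beta) / beta * y ** (beta / (beta - 1.0))
LB = np.mean(U(X))
ys = np.linspace(0.1, 3.0, 300)
UB = min(x0 * y + np.mean(Utilde(y * Y)) for y in ys)
print(round(LB, 4), round(UB, 4))          # LB <= UB up to MC noise
```

The scalar minimization over y is one-dimensional and convex, so a grid search suffices here; in the constrained case the dual control v enters the dynamics of Y and the same structure carries over.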

Numerical Examples
In this section we use the above method to compute the lower and upper bounds in the Kalman filtering case. For simplicity, we assume the market has a riskless asset and one risky asset, r and σ are constants, and the utility function is the power utility U(x) = (1/β)x^β with 0 < β < 1. We consider two cases: K = R and K = R_+; the former gives K̃ = {0} and the latter K̃ = R_+. We need to solve the discrete-time control problem (27).

Unconstrained Case
The optimal value at time 0 is given by (13) and serves as the benchmark for the discrete-time control problem (27). Since K = R, π_i is F^S_{t_i}-measurable and Ĥ_i is exogenous, we have v_i = 0 for all i, and we consider controls π_i of the form π_i = a_i + b_i Ĥ_i, which incorporates the OU process Ĥ in the controls, where a_i, b_i are constants to be determined; denote a = (a_0, ..., a_{n−1})^T ∈ R^n and b = (b_0, ..., b_{n−1})^T ∈ R^n. We can now write out the discrete version of (21), together with the SDE for Ĥ, for i = 0, 1, ..., n − 1, and the discrete version of the minimization problem is given by (29). We still need to compute the expectation to get the function f, which can be achieved by taking the sample average. Specifically, for fixed y, a, b, generate n independent standard normal random variables, which generates a sample path of Y, P̂, Ĥ. We repeat this procedure M times and take the average of the M copies of |P̂_n + Ũ(yY_n)|², which gives an approximate value of f(y, a, b). The problem now is to find (y, a, b), with a total of 2n + 1 variables, such that the objective function f(y, a, b) is minimized. This is a finite-dimensional nonlinear minimization problem.
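The sample-average minimization can be sketched as follows. This is a simplified stand-in for the paper's objective: for power utility with one asset, the dual optimality reduces to the terminal matching X(T) = I(yY(T)) with I = (U')^{-1}, so we minimize the Monte Carlo average of the squared terminal mismatch over y and the Form I coefficients in π_i = a + bĤ_i. The filter Ĥ is simulated with a constant gain k, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sample-average approximation of the terminal matching error f and
# its minimization over (y, a, b); y is parametrized as exp(ly) to
# keep it positive.  The Brownian increments are fixed once, so f is
# a deterministic function of the parameters.
rng = np.random.default_rng(2)
r, sigma, beta = 0.02, 0.2, 0.5
x0, T, n, M = 1.0, 1.0, 50, 2000
lam, hbar, k = 2.0, 0.06, 0.05
h = T / n
dW = rng.normal(0.0, np.sqrt(h), (M, n))   # innovation increments

def f(params):
    ly, a, b = params
    y = np.exp(ly)
    X = np.full(M, x0)
    Y = np.ones(M)
    Hh = np.full(M, hbar)
    for i in range(n):
        theta = (Hh - r) / sigma           # filtered market price of risk
        pi = a + b * Hh                    # Form I feedback control
        X = X * (1 + (r + pi * (Hh - r)) * h + pi * sigma * dW[:, i])
        Y = Y * (1 - r * h - theta * dW[:, i])
        Hh = Hh + lam * (hbar - Hh) * h + k * dW[:, i]
    target = (y * Y) ** (1.0 / (beta - 1.0))
    return np.mean((X - target) ** 2)

res = minimize(f, x0=np.array([0.0, 0.5, 0.0]), method="Nelder-Mead")
print(np.exp(res.x[0]), res.x[1:], round(res.fun, 6))
```

A derivative-free method such as Nelder-Mead is convenient here because the Monte Carlo objective is cheap to evaluate but its gradient is not directly available.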
For the numerical results, we try two forms for the control π: π_i = a + bĤ_i with a, b constants (Form I), and π_i = a_i + bĤ_i with a_i, b constants (Form II). Tables 1 and 2 show that the two considered forms of controls give good approximate values MV, with relative errors of less than one percent compared with the benchmark values, although the relative errors between LB and UB are slightly bigger in comparison, which is expected when estimating bounds. Tables 1 and 2 also show that the optimal values are similar for different combinations of h_0 and σ_0, the two parameters of the initial distribution of H(0), which indicates that the optimal value is not very sensitive to the initial estimates of these parameters. The results show that, using the dual method, we can always give a range for the value function and generate tight lower and upper bounds. Additionally, the estimated controls for both the primal and dual problems can be derived explicitly. Moreover, the mean values are quite close to the benchmark results in most cases.

Constrained Case
Here we use the same parameter settings as in the unconstrained case.
By solving the above minimization problem (27), the optimal results for the two forms are given below (w = 0.5). In Form I, the optimal parameters are a = 0.04682, b = −0.1281, ã = −0.0019, b̃ = −0.1958, and y = 0.3172. In Form II, the optimal parameters are b = −0.1060, b̃ = −0.0572, y = 0.3196, and the estimates of a(t), ã(t) are given in Table 3. Under one sample path of Ĥ, the controls π, v estimated by the two forms are given in Table 3. The lower and upper bounds for the primal value function are obtained accordingly. The parameter w controls the weight put on the different objectives; to show the sensitivity of the results with respect to it, the results are listed in Table 4. In the following tables, the shorthand notations LB, UB and OB denote the lower bound, the upper bound and the estimated value of the corresponding objective function, respectively, and rel-diff(%) = (UB − LB)/LB × 100. The results illustrate that w = 0.9 gives the best relative difference and objective function value among all the choices.
In the considered OU case, we impose assumptions on the initial state of the hidden process, whose first and second moments (h_0, σ_0) are given in advance. Tables 5 and 6 give the results obtained by varying these initial assumptions. The tables show that the method always generates good bounds under different assumptions on the initial states. More specifically, in almost all cases Form I gives better estimates. Additionally, as σ_0 gets bigger, that is, as we become less confident in the initial estimate, the bounds become wider.

Remark 4
For simplicity, in the above numerical cases we only focus on the one-dimensional case, that is, only one risky asset is considered. Our method can be easily generalized to d-dimensional problems with polynomial growth of computation. In our numerical examples, if the constraint set is K = R^d (d > 1), then the corresponding controls are π(t) = a + bĤ(t) for some a ∈ R^d, b ∈ R^{d×d}, and the total number of parameters to be determined is d + d². If the constraint set is K = R^d_+, we may similarly choose π(t) = (a + bĤ(t))_+; the number of parameters is again d + d², not 2^d. Even if we use piecewise constant controls with n subintervals, the number of parameters is n(d + d²). Therefore, in our setting, the number of parameters grows polynomially with respect to the number of traded assets and subintervals, not exponentially. For fixed parameters a_i, b_j, the controls π(t) are determined once Ĥ(t) is known. In other words, there is no exponential explosion 2^d, as we do not need to check the possible combinations of the components π_1(t), ..., π_d(t) being positive or zero; they are determined naturally by a_i, b_j, Ĥ(t), and a_i, b_j can be found by a continuous-variable minimization in a finite-dimensional space. We emphasize that π(t) = (a + bĤ(t))_+ for K = R^d_+ is a feasible control, but NOT an optimal control for problem (27), which is in general difficult to find. There are many ways of choosing feasible controls; for example, we may also set π(t) = (a + bĤ(t) + Ĥ(t)^T cĤ(t))_+, where Ĥ(t)^T cĤ(t) ∈ R^d with ith component Ĥ(t)^T c_i Ĥ(t) and c_i ∈ R^{d×d} for i = 1, ..., d, and then determine a, b, c by solving a minimization problem with d + d² + d³ variables. The numerical examples for d = 1 show that the choice π(t) = (a + bĤ(t))_+ provides a good compromise: it is easy to compute while giving tight lower and upper bounds.
These control forms still provide lower and upper bounds for d > 1, but other forms may give tighter bounds. The best parametric form of feasible controls for the lower and upper bounds in the multidimensional case remains an open question.
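The parameter count and feasibility argument of Remark 4 can be made concrete in a few lines. The sketch below evaluates the projected feedback control for K = R^d_+: the componentwise positive part keeps the control feasible without enumerating the 2^d sign patterns, and only d + d² parameters would need to be optimized. Values are illustrative.

```python
import numpy as np

# Projected feedback control pi(t) = (a + b Hhat(t))_+ for K = R^d_+.
d = 3
rng = np.random.default_rng(4)
a = rng.normal(size=d)                 # d free parameters
b = rng.normal(size=(d, d))            # d**2 free parameters
Hhat = rng.normal(size=d)              # current filter value
pi = np.maximum(a + b @ Hhat, 0.0)     # feasible: pi >= 0 componentwise
print(pi, a.size + b.size)             # parameter count d + d**2 = 12
```

Which components of π are zero is decided automatically by the projection at each point of the filter path, so the minimization remains a smooth continuous-variable problem in (a, b).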

Remark 5
The dual FBSDE method is applicable to general constrained optimal portfolio selection problems, including general utilities and other filtering models of hidden processes. The CIR case given in [18] can be discussed similarly.

Information Value of Learning
We may call problem (3) subject to (5) the utility maximization with learning, in which the admissible control π is F^S-measurable. If μ and W can be observed and the admissible control π is F-measurable, we may call the corresponding problem (3) subject to (2) the utility maximization with full information. In other words, we focus on the following cases: (P1) we can fully observe the process H and use it to find the value function and optimal control; (P2) we cannot observe the process H and use the Kalman-Bucy filter to learn it. Intuitively, investors with full information gain more than those with partial information, as the full information investors know the market better. To gain some insight into the magnitude of the effect of the information sets, we assume U(x) = (1/β)x^β with 0 < β < 1 and N = 1, K = R. For the full information case (P1), the value function J_f(t, x, h) has an exponential-quadratic form analogous to (13), where A_f, B_f, C_f satisfy some ODEs, see Appendix E. Both H(t) and J_f(t, x, H(t)) are F_t-measurable but not F^S_t-measurable. On the other hand, for the partial information case (P2), both Ĥ(t) and the value function J(t, x, Ĥ(t)), see (13) and Appendix C, are F^S_t-measurable. We cannot directly compare J_f(t, x, H(t)) and J(t, x, Ĥ(t)), as the former is a random variable given F^S_t while the latter is a constant given F^S_t. However, we can compute the conditional expectation of J_f(t, x, H(t)) given F^S_t and then compare its value with J(t, x, Ĥ(t)):

E[J_f(t, x, H(t)) | F^S_t] ≥ J(t, x, Ĥ(t)).    (31)

The difference between the two sides is the so-called information premium, or the loss in utility due to partial information.
Papanicolaou [18, Proposition 3.15] shows that the information premium is always nonnegative. This holds from the average-value point of view; if we draw samples of J_f(t, x, H(t)) and J(t, x, Ĥ(t)), they do not necessarily satisfy that relationship pathwise. We next illustrate this point numerically with the optimal value at time 0 and some sample paths. The full information value function at time 0 is given by J_f(0, x_0, H(0)), where H(0) is an observed value under the full information setting and is a sample from the normal distribution with mean h_0 and variance σ_0.
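A toy calculation already shows why the premium is nonnegative on average. Assuming, as an illustration and not the paper's exact setting, that the drift is a constant random variable H(0) ~ N(h0, sig0²), the full-information Merton value is convex in the drift, so Jensen's inequality gives E[v(H(0))] ≥ v(h0); the Monte Carlo gap below is a crude proxy for the premium in (31). All parameter values are assumptions.

```python
import numpy as np

# Jensen illustration of the information premium: the full-information
# Merton value v(h) is convex in the drift h, so averaging v over the
# prior of H(0) dominates evaluating v at the prior mean h0.
r, sigma, beta, T, x0 = 0.02, 0.2, 0.5, 1.0, 1.0
h0, sig0, M = 0.08, 0.05, 200000
rng = np.random.default_rng(5)

def v(h):
    theta = (h - r) / sigma
    return x0**beta / beta * np.exp(
        beta * (r + theta**2 / (2 * (1 - beta))) * T)

H0 = rng.normal(h0, sig0, M)           # samples of the unknown drift
full = np.mean(v(H0))                  # average full-information value
partial_proxy = v(h0)                  # value at the prior mean
print(round(full, 4), round(partial_proxy, 4))
```

Note that v(h0) is only a proxy for the partial-information value, since it ignores the benefit of learning over [0, T]; nevertheless the ordering full ≥ proxy already mirrors the sign of the premium.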
For the no-information-with-learning case (P2), the value function at time 0 is given by J(0, x_0, h_0). Table 7 lists the numerical results of the information values at time 0 by varying h_0, where the full information column reports E[J_f(0, x_0, H(0))], calculated by sample averaging. We give four comparisons, under ρ = 0, ρ = 0.5 and σ_0 = 0.2, σ_0 = 0.4, separately. It is observed that the value for investors with full information is greater than that for investors with partial information, which verifies the comparison relation (31) numerically. Figures 1, 2, 3 and 4 plot sample paths of H and G. In Figs. 1 and 3, H is simulated using (6) and Ĥ is simulated using (7). In Figs. 2 and 4, the corresponding value processes G_f(t, H(t)) and G(t, Ĥ(t)) are plotted along these paths. The results indicate that although E[G_f(t, H(t)) | F^S_t] ≥ G(t, Ĥ(t)) on average, individual sample paths need not preserve this order.

Conclusions
In this paper we propose a novel and effective approximation method to find the value function for general utility maximization with closed convex control constraints and uncertain drift coefficients of the stock. Using the separation principle and the dual FBSDE, we transform the utility maximization problem with partial information into an equivalent, fully observable, error minimization stochastic control problem and, using the weak duality relation, find tight lower and upper bounds of the value function and its approximate value. There remain many open questions, for example: the convergence and error analysis of the discrete-time stochastic optimization problem (24) and (25) to its continuous-time counterpart (21) and (22); the theoretical estimation of the difference between the lower and upper bounds (23); and the best parametric form of feasible controls for lower and upper bounds in the multidimensional case. We leave these and other questions for future research.

Appendix

The function A satisfies a Riccati-type ODE with the terminal condition A(T) = 0.
The function B satisfies a linear ODE with the terminal condition B(T) = 0; once A is known, it can be easily solved.
The function C satisfies a linear ODE with the terminal condition C(T) = 0; once A and B are known, it can be easily solved. The above equations depend on Σ(t) of (8) and on σ̂_R(t), so we have to solve them numerically. For this purpose, we solve for Σ(t) first and then substitute it into the equations to derive the numerical results.
If we set the initial variance of the Kalman filter to its equilibrium value, that is, Σ(0) equal to the stationary solution of the Riccati equation (8), then Σ(t) and σ̂_R(t) are constant for all t ≥ 0, and in this case the Riccati equation for A(t) is solvable in closed form. The analytical values of A(t) and the estimates from the fourth-order Runge-Kutta method are given in Table 9. Similar results can be obtained for B(t). Table 9 shows that the two sets of results agree to the first four decimal places.
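The backward Runge-Kutta procedure can be sketched as follows. The specific equation A'(t) = 1 − A(t)² with A(T) = 0 is an illustrative stand-in, not the paper's Riccati equation, chosen because its closed form is A(t) = tanh(t − T), so the RK4 output can be checked against it in the spirit of Table 9.

```python
import numpy as np

# Fourth-order Runge-Kutta for a terminal-value ODE, integrated
# backward from t = T to t = 0 with a negative step size.
def rk4_backward(f, aT, T, n):
    h = -T / n                       # negative step: integrate T -> 0
    t, A = T, aT
    out = [(t, A)]
    for _ in range(n):
        k1 = f(t, A)
        k2 = f(t + h / 2, A + h * k1 / 2)
        k3 = f(t + h / 2, A + h * k2 / 2)
        k4 = f(t + h, A + h * k3)
        A += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        out.append((t, A))
    return out

T, n = 1.0, 100
path = rk4_backward(lambda t, A: 1 - A**2, 0.0, T, n)
t0, A0 = path[-1]
print(round(A0, 6), round(np.tanh(t0 - T), 6))  # both round to -0.761594
```

With step size T/n = 0.01 the global RK4 error is of order h⁴, far below the four-decimal agreement reported in Table 9.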
The function B̂ satisfies a linear ODE with the terminal condition B̂(T) = 0; once Â is known, it can be easily solved.
The function Ĉ satisfies a linear ODE with the terminal condition Ĉ(T) = 0; once Â and B̂ are known, it can be easily solved.