1 Introduction and formulation of the problem

Let T>0 be given and fixed. Denote by \(\mathbb {S}^{n}\) the totality of n×n symmetric matrices, and by \(\mathbb {S}^{n}_{+}\) its subset of all n×n nonnegative definite matrices. For an n×n matrix S, we write S≥0 to mean that \(S\in \mathbb {S}^{n}_{+}\), and S>0 to mean that S is positive definite. For a matrix-valued function \(R: [0,T]\to \mathbb {S}^{n}\), we write R≫0 to mean that R(t) is uniformly positive, i.e., there is a positive real number α such that R(t)≥αI for all t∈[0,T].

In this paper, we consider the following linear controlled stochastic differential equation (SDE)

$$\begin{array}{@{}rcl@{}} dX_{s}&=&\left[A_{s}X_{s}+B_{s}^{1}u_{s}^{1}+B_{s}^{2}u_{s}^{2}\right]ds \\ \quad&\quad+& \sum\limits_{j=1}^{d}\left[C_{s}^{j}X_{s}+D_{s}^{1j} u_{s}^{1}+D_{s}^{2j} u_{s}^{2}\right]{dW}_{s}^{j}, \quad s>0;\quad X_{0}=x_{0}, \end{array} $$

with the following quadratic cost functional

$$ J(u)\triangleq \frac{1}{2}\mathbb E\int_{0}^{T}\left[ \left\langle Q_{s}X_{s}, X_{s} \right\rangle + \left\langle R_{s}^{1}u_{s}^{1}, u_{s}^{1} \right\rangle + \left\langle R_{s}^{2}u_{s}^{2}, u_{s}^{2} \right\rangle \right]ds+\frac{1}{2}\mathbb{E} \left[ \left\langle G X_{T}, X_{T} \right\rangle \right]. $$

Here, \((W_{t})_{0 \le t \le T}=\left (W_{t}^{1},\cdots,W_{t}^{d}\right)_{0 \le t \le T}\) is a d-dimensional Brownian motion on a probability space \((\Omega, \mathcal {F}, \mathbb {P})\). Denote by \((\mathcal {F}_{t})\) the augmented natural filtration generated by (Wt). A,B1,B2,Cj,D1j and D2j are all bounded Borel measurable functions from [0,T] to \(\mathbb {R}^{n\times n}, \mathbb {R}^{n\times l_{1}}, \mathbb {R}^{n\times l_{2}}, \mathbb {R}^{n\times n}, \mathbb {R}^{n\times l_{1}}\), and \(\mathbb {R}^{n\times l_{2}}\), respectively. Q,R1, and R2 are nonnegative definite, and they are all essentially bounded measurable functions on [0,T] with values in \({\mathbb S}^{n}, {\mathbb S}^{l_{1}}\), and \({\mathbb S}^{l_{2}}\), respectively. In the first four sections, R1 and R2 are further assumed to be positive definite. \(G\in \mathbb S^{n}\) is positive semi-definite. A control u=(u1,u2) is a process \(u\in L^{2}_{\mathcal {F}}\left (0, \, T; \, \mathbb {R}^{l_{1}+l_{2}}\right)\), and \(X^{u}\in L^{2}_{\mathcal {F}}(\Omega ; \, C(0, \, T; \, \mathbb {R}^{n}))\) is the corresponding state process with initial value \(x_{0}\in \mathbb {R}^{n}\).

We will use the following notation. Let l>0 be an integer. For a given σ-field \(\mathcal {G}\subset \mathcal {F}\), \(L^{2}_{\mathcal {G}}\left (\Omega ; \, \mathbb {R}^{l}\right)\) is the set of random variables \(\xi : (\Omega, \mathcal {G}) \rightarrow \left (\mathbb {R}^{l}, {\mathcal {B}}\left (\mathbb {R}^{l}\right)\right)\) with \(\mathbb {E}\left [|\xi |^{2}\right ]<+\infty \). \(L^{\infty }_{\mathcal {G}}\left (\Omega ; \, \mathbb {R}^{l}\right)\) is the set of essentially bounded random variables \(\xi : (\Omega, {\mathcal {G}}) \rightarrow \left (\mathbb {R}^{l}, {\mathcal {B}}\left (\mathbb {R}^{l}\right)\right)\). \(L^{2}_{\mathcal {F}}\left (t, \, T; \, \mathbb {R}^{l}\right)\) is the set of \(\{\mathcal {F}_{s}\}_{s\in [t,T]}\)-adapted \(\mathbb {R}^{l}\)-valued processes f={fs : t≤s≤T} such that \(\mathbb {E}\left [\int _{t}^{T}\left |f_{s}\right |^{2}\, ds \right ]< \infty \), and is denoted by \(L^{2}\left (t, T; \mathbb {R}^{l}\right)\) if the underlying filtration is the trivial one. \(L^{\infty }_{\mathcal {F}}\left (t, \, T; \, \mathbb {R}^{l}\right)\) is the set of essentially bounded \(\{\mathcal {F}_{s}\}_{s\in [t,T]}\)-adapted \(\mathbb {R}^{l}\)-valued processes. \(L^{2}_{\mathcal {F}}\left (\Omega ; \, C\left (t, \, T; \, \mathbb {R}^{l}\right)\right)\) is the set of continuous \(\{\mathcal {F}_{s}\}_{s\in [t,T]}\)-adapted \(\mathbb {R}^{l}\)-valued processes f={fs : t≤s≤T} such that \(\mathbb {E}\left [\sup _{s\in [t,T]}\left |f_{s}\right |^{2}\,\right ] < \infty \). All vectors used in this paper are column vectors. For a matrix M, M′ is its transpose, and \(|M|=\sqrt {\sum _{i,j}m_{ij}^{2}}\) is the Frobenius norm. For a vector- or matrix-valued function K defined on the time interval [0,T] and differentiable at time t∈[0,T], K̇t or K̇(t) stands for the derivative at time t. Define

$$ B\!:=\!\left(B^{1},B^{2}\right), D:=\left(D^{1}, D^{2}\right), R:= \text{diag} \left(R^{1},R^{2}\right), \quad u:=\left(\left(u^{1}\right)^{\prime},\left(u^{2}\right)^{\prime}\right)^{\prime}; $$

and for a matrix K with suitable dimensions and \((t,x,u)\in [0,T]\times \mathbb {R}^{n}\times \mathbb {R}^{l_{1}+l_{2}}\),

$$\begin{aligned} &(C_{t}x+D_{t} u){dW}_{t}=\sum\limits_{j=1}^{d} \left(C_{t}^{j}x+D_{t}^{1j} u^{1}+D_{t}^{2j} u^{2}\right) {dW}_{t}^{j}; \quad C_{t}^{\prime}K\!:=\!\sum\limits_{j=1}^{d} \left(C_{t}^{j}\right)^{\prime} K^{j}; \\ &D^{\prime}_{t}{KD}_{t}\!:=\!\sum\limits_{j=1}^{d} \left(D_{t}^{j}\right)^{\prime}{KD}_{t}^{j}, C_{t}^{\prime}{KD}_{t}\!:=\!\sum\limits_{j=1}^{d} \left(C_{t}^{j}\right)^{\prime}{KD}_{t}^{j}, C_{t}^{\prime}{KC}_{t}\!:=\!\sum\limits_{j=1}^{d} \left(C_{t}^{j}\right)^{\prime}{KC}_{t}^{j}. \end{aligned} $$
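As a quick illustration of these contracted notations, the following NumPy sketch forms the sums over the Brownian components; all dimensions and matrices below are arbitrary stand-ins, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, l1, l2, d = 3, 2, 2, 2

# One matrix per Brownian component j = 1, ..., d (hypothetical data).
C = [rng.standard_normal((n, n)) for _ in range(d)]
D1 = [rng.standard_normal((n, l1)) for _ in range(d)]
D2 = [rng.standard_normal((n, l2)) for _ in range(d)]
K = rng.standard_normal((n, n))

# D_t = (D^1, D^2): stack the two control blocks column-wise, for each j.
D = [np.hstack([D1[j], D2[j]]) for j in range(d)]

# The contracted notations are sums over the Brownian components:
CKD = sum(C[j].T @ K @ D[j] for j in range(d))   # C' K D
DKD = sum(D[j].T @ K @ D[j] for j in range(d))   # D' K D
CKC = sum(C[j].T @ K @ C[j] for j in range(d))   # C' K C

assert CKD.shape == (n, l1 + l2)
assert DKD.shape == (l1 + l2, l1 + l2)
```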

If both u1 and u2 are adapted to the natural filtration of the underlying Brownian motion W (i.e., \(u^{i}\in U^{i}_{\text {ad}}= L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{l_{i}}\right)\) for i=1,2), it is well known that the optimal control exists and can be synthesized into the following feedback of the state:

$$ u_{t}=-\left(R_{t}+D_{t}^{\prime}K_{t}D_{t}\right)^{-1}\left(K_{t}B_{t}+C^{\prime}_{t}K_{t}D_{t}\right)^{\prime} X_{t}, \quad t\in [0,T]. $$

Here K solves the following Riccati differential equation:

$$\begin{array}{@{}rcl@{}} &&\dot{K}_{s}+A_{s}^{\prime}K_{s}+K_{s}A_{s}+C_{s}^{\prime}K_{s}C_{s}+Q_{s} \\ &-&\left(K_{s}B_{s}+C^{\prime}_{s}K_{s}D_{s}\right)\left(R_{s}+D_{s}^{\prime}K_{s}D_{s}\right)^{-1}\left(K_{s}B_{s}+C^{\prime}_{s}K_{s}D_{s}\right)^{\prime} \\ &=&0, s\in [0,T]; K_{T} = G. \end{array} $$

See Wonham (1968), Haussmann (1971), Bismut (1976, 1977), Peng (1992), and Tang (2003) for more details on the general Riccati equation arising from linear quadratic optimal stochastic control with both state- and control-dependent noises and deterministic coefficients.
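For readers who want to experiment, the Riccati equation above can be integrated backward in time, for instance by an explicit Euler sweep. The sketch below is only a numerical illustration with made-up time-invariant coefficients and a single noise (d=1); it is not code from the cited works.

```python
import numpy as np

# Backward Euler sweep for the Riccati ODE
#   K' + A'K + KA + C'KC + Q
#     - (KB + C'KD)(R + D'KD)^{-1}(KB + C'KD)' = 0,  K(T) = G,
# with illustrative time-invariant data and a single noise (d = 1).
n, l = 2, 1
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = 0.3 * np.eye(n)
D = np.array([[0.1], [0.2]])
Q, R, G = np.eye(n), np.eye(l), np.eye(n)

T, N = 1.0, 2000
h = T / N
K = G.copy()
for _ in range(N):                      # integrate from s = T down to s = 0
    S = B.T @ K + D.T @ K @ C           # equals (KB + C'KD)'
    dK = (A.T @ K + K @ A + C.T @ K @ C + Q
          - S.T @ np.linalg.solve(R + D.T @ K @ D, S))
    K = K + h * dK                      # one backward step

# K(0) should stay symmetric nonnegative definite.
K = 0.5 * (K + K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-8)
```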

In this paper, we consider the following situation: there are two controllers, called the deterministic controller and the random controller. The former can impose a deterministic action u1 only, i.e., \(u^{1}\in U^{1}_{\text {ad}}=L^{2}\left (0,T;\mathbb {R}^{l_{1}}\right)\), while the latter can impose a random action u2, more precisely \(u^{2}\in U^{2}_{\text {ad}}=L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{l_{2}}\right)\). First, we apply the conventional variational technique to characterize the optimal control via a system of fully coupled forward-backward stochastic differential equations (FBSDEs) of mean-field type. Then we solve the FBSDEs by means of two (decoupled) Riccati equations, and derive the respective optimal feedback laws for both the deterministic and the random controllers, using the solutions of both Riccati equations. Existence and uniqueness are established for both Riccati equations. The optimal state is shown to satisfy a linear stochastic differential equation (SDE) of mean-field type. Both the singular and the infinite-horizon cases are also addressed.

The rest of the paper is organized as follows. In Section 2, we give the necessary and sufficient condition of the mixed optimal controls via a system of FBSDEs. In Section 3, we synthesize the mixed optimal control into linear closed forms of the optimal state, derive two (decoupled) Riccati equations, study their solvability, and state our main result. In Section 4, we address some particular cases. In Section 5, we discuss singular linear quadratic control cases. Finally, in Section 6, we discuss the infinite-horizon case.

2 Necessary and sufficient condition of mixed optimal controls

The following necessary and sufficient condition can be proved in a straightforward way.

Theorem 1

Assume that u∗ is an optimal control, and X∗ is the corresponding solution. Then there is a unique pair of processes \(\left (p(\cdot), k:=(k^{j}(\cdot))_{j=1,\cdots, d}\right)\in L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{n}\right)\times \left (L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{n}\right)\right)^{d}\) (hereafter called the adjoint processes) satisfying the BSDE (6):

$$ \left\{ \begin{array}{rcl} dp(s)&= &-\left[A_{s}^{\prime}p(s)+C_{s}^{\prime} k(s)+Q_{s}X^{*}_{s}\right]\, ds +k^{\prime}(s){dW}_{s},\quad s\in[0,T], \\ p(T)&= &G X^{*}_{T}; \end{array}\right. $$

and the following optimality conditions hold true:

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left[\left(B_{s}^{1}\right)^{\prime}p(s)+\left(D_{s}^{1}\right)^{\prime}k(s)+R_{s}^{1}u^{1*}_{s}\right]&=0, \end{array} $$
$$\begin{array}{@{}rcl@{}} \left(B_{s}^{2}\right)^{\prime}p(s)+\left(D_{s}^{2}\right)^{\prime}k(s)+R_{s}^{2}u^{2*}_{s}&=0. \end{array} $$

These optimality conditions are also sufficient for (X∗,u∗) to be an optimal pair.

Proof


It is obvious that BSDE (6) has a unique solution \((p(\cdot), k(\cdot))\in L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{n}\right)\times \left (L^{2}_{\mathcal {F}}\left (0,T;\mathbb {R}^{n}\right)\right)^{d}\).

Using a convex perturbation, we obtain in a straightforward way the necessary conditions for the optimal control u∗:

$$ {}\mathbb{E}\int_{0}^{T}\left\langle \left(B_{s}^{i}\right)^{\prime}p(s)+\left(D_{s}^{i}\right)^{\prime}k(s)+R_{s}^{i}u^{i*}_{s}, u^{i}_{s}\right\rangle ds=0, \quad \forall u^{i}\in U_{\text{ad}}^{i}; \quad i=1,2. $$

Clearly, the preceding conditions are equivalent to (7) and (8). The sufficiency can be proved in a standard way. □

3 Synthesis of the mixed optimal control

3.1 Ansatz

Let u be an optimal control, X the corresponding state process, and (p,k) the unique solution to (6) such that both (7) and (8) are satisfied. Define

$$ \overline{X}:=\mathbb{E}[X], \quad \widetilde{X}:=X-\overline{X}; \quad \overline{u^{2}}:=\mathbb{E}\left[u^{2}\right], \quad \widetilde{u^{2}}:=u^{2}-\overline{u^{2}}. $$

We expect a feedback of the following form

$$ p(s)=P_{1}(s)\widetilde{X_{s}}+P_{2}(s)\overline{X_{s}} $$

for some n×n matrix-valued absolutely continuous functions P1(·) and P2(·) defined on the time interval [0,T]. Applying Itô's formula, we have

$$ \begin{aligned} dp(s)&=\Dot{P}_{1}(s)\widetilde{X}_{s}ds+P_{1}(s)\left[A_{s}\widetilde{X}_{s}+B^{2}_{s} \widetilde{u^{2}_{s}}\right]ds \\ &\quad+ P_{1}(s)\left[C_{s}X_{s}+D_{s} u_{s}\right]{dW}_{s}\\ &\quad+\Dot{P}_{2}(s)\overline{X_{s}}ds+P_{2}(s)\left[A_{s}\overline{X_{s}}+B_{s}^{1}u_{s}^{1}+B_{s}^{2}\overline{u_{s}^{2}}\right]\, ds. \end{aligned} $$

Comparing the diffusion terms in (12) with those of BSDE (6), we expect


$$ k(s)=P_{1}(s)\left(C_{s}X_{s}+D_{s} u_{s}\right). $$

Define for i=1,2,

$$\begin{array}{@{}rcl@{}} \Lambda_{i}(S)&\!:= R^{i}+\left(D^{i}\right)^{\prime}SD^{i}, \quad S\in \mathbb{S}^{n}; \end{array} $$
$$\begin{array}{@{}rcl@{}} \widehat\Lambda(S)&:= \Lambda_{1}(S)-\left(D^{1}\right)^{\prime}SD^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}, \quad S\in \mathbb{S}^{n}; \end{array} $$


$$\begin{array}{@{}rcl@{}} \Theta_{i}:=\left(B^{2}\right)^{\prime}P_{i}+\left(D^{2}\right)^{\prime}P_{1}C. \end{array} $$

Plugging Eqs. (11) and (13) into the optimality conditions (7) and (8), we obtain:

$$\begin{array}{@{}rcl@{}} \left(B_{s}^{1}\right)^{\prime}P_{2}(s)\overline{X_{s}}+\left(D_{s}^{1}\right)^{\prime}P_{1}(s)\left(C_{s}\overline{X_{s}}+D_{s}^{1} u^{1}_{s}+D_{s}^{2} \overline{u_{s}^{2}}\right)+R_{s}^{1}u^{1}_{s}&=0, \end{array} $$
$$\begin{array}{@{}rcl@{}} \left(B_{s}^{2}\right)^{\prime}\left[P_{1}(s)\widetilde{X_{s}}+P_{2}(s)\overline{X_{s}}\right]\quad\quad & \\ +\left(D_{s}^{2}\right)^{\prime}P_{1}(s)\left[C_{s}\left(\widetilde{X_{s}}+\overline{X_{s}}\right)+D_{s}^{1} u^{1}_{s} +D_{s}^{2} u_{s}^{2}\right]+R_{s}^{2}u^{2}_{s}&=0. \end{array} $$

From the last equality, we have

$$ u^{2}=-\Lambda_{2}^{-1}(P_{1})\left[\Theta_{1}\widetilde X+\Theta_{2} \overline X+\left(D^{2}\right)^{\prime}P_{1}D^{1}u^{1}\right] $$

and consequently

$$ \overline {u^{2}}=-\Lambda_{2}^{-1}(P_{1})\left[\Theta_{2} \overline X+\left(D^{2}\right)^{\prime}P_{1}D^{1}u^{1}\right]. $$

In view of (17), we have

$$\begin{array}{@{}rcl@{}} \left(B_{s}^{1}\right)^{\prime}P_{2}(s)\overline{X_{s}}+\left(D_{s}^{1}\right)^{\prime}P_{1}(s)C_{s}\overline{X_{s}}& \\ +\left(D_{s}^{1}\right)^{\prime}P_{1}(s)D_{s}^{2} \overline{u_{s}^{2}}+\Lambda_{1}\left(P_{1}(s)\right)u^{1}_{s}=&0 \end{array} $$

and therefore,

$$\begin{array}{@{}rcl@{}} \Lambda_{1}\left(P_{1}(s)\right)u^{1}_{s}+\left(B_{s}^{1}\right)^{\prime}P_{2}(s)\overline{X_{s}}+\left(D_{s}^{1}\right)^{\prime}P_{1}(s)C_{s}\overline{X_{s}}& \\ -\left(D_{s}^{1}\right)^{\prime}P_{1}(s)D_{s}^{2}\Lambda_{2}^{-1}(P_{1})\left[\Theta_{2}(s) \overline{X_{s}}+\left(D_{s}^{2}\right)^{\prime}P_{1}(s)D_{s}^{1}u^{1}_{s}\right]&=0 \end{array} $$

or equivalently

$$\begin{array}{@{}rcl@{}} \left[\Lambda_{1}\left(P_{1}\right)-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}\left(P_{1}\right)\left(D^{2}\right)^{\prime}P_{1}D^{1}\right] u^{1}& \\ =-\left[\left(B^{1}\right)^{\prime}P_{2}+\left(D^{1}\right)^{\prime}P_{1}C-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\Theta_{2}\right] \overline{X_{s}}.& \end{array} $$

We have

$$ u^{1}=M^{1}\overline X, \quad u^{2}=M^{2}\widetilde X+M^{3}\overline X $$

where


$$\begin{array}{@{}rcl@{}} M^{1}&:=-\left[\Lambda_{1}(P_{1})-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\left(D^{2}\right)^{\prime}P_{1}D^{1}\right]^{-1}\\ & \,\,\times \left[\left(B^{1}\right)^{\prime}P_{2}+\left(D^{1}\right)^{\prime}P_{1}C-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\Theta_{2}\right], \end{array} $$
$$\begin{array}{@{}rcl@{}} M^{2}&\!:=-\Lambda_{2}^{-1}(P_{1})\Theta_{1}, \end{array} $$
$$\begin{array}{@{}rcl@{}} M^{3}&\!:=-\Lambda_{2}^{-1}(P_{1})\left[\Theta_{2}+\left(D^{2}\right)^{\prime}P_{1}D^{1}M^{1}\right]. \end{array} $$
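The gains M1, M2, M3 are purely algebraic functions of (P1,P2). As a sanity check on the formulas, the following NumPy sketch assembles them for a single noise (d=1), with arbitrary positive semidefinite stand-ins for P1 and P2 (illustrative data only).

```python
import numpy as np

rng = np.random.default_rng(1)
n, l1, l2 = 3, 2, 2
B1, B2 = rng.standard_normal((n, l1)), rng.standard_normal((n, l2))
C = rng.standard_normal((n, n))
D1, D2 = rng.standard_normal((n, l1)), rng.standard_normal((n, l2))
R1, R2 = np.eye(l1), np.eye(l2)
X = rng.standard_normal((n, n)); P1 = X @ X.T        # PSD stand-in for P1
Y = rng.standard_normal((n, n)); P2 = Y @ Y.T        # PSD stand-in for P2

L2m = R2 + D2.T @ P1 @ D2                            # Lambda_2(P1)
Th1 = B2.T @ P1 + D2.T @ P1 @ C                      # Theta_1
Th2 = B2.T @ P2 + D2.T @ P1 @ C                      # Theta_2
cross = D1.T @ P1 @ D2                               # (D^1)' P1 D^2
Lhat = (R1 + D1.T @ P1 @ D1
        - cross @ np.linalg.solve(L2m, cross.T))     # hat-Lambda(P1)

M1 = -np.linalg.solve(Lhat, B1.T @ P2 + D1.T @ P1 @ C
                      - cross @ np.linalg.solve(L2m, Th2))
M2 = -np.linalg.solve(L2m, Th1)
M3 = -np.linalg.solve(L2m, Th2 + D2.T @ P1 @ D1 @ M1)
assert M1.shape == (l1, n) and M2.shape == (l2, n) and M3.shape == (l2, n)
```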

In view of (12) and (6), we have

$$\begin{array}{@{}rcl@{}} dp(s)&\!=\Dot{P}_{1}\widetilde{X}ds+P_{1}\left[A\widetilde{X}+B^{2} M^{2}\widetilde{X}\right]ds+ k^{\prime}{dW}_{s} \end{array} $$
$$\begin{array}{@{}rcl@{}} & +\Dot P_{2}\overline{X}ds+P_{2}\left[A\overline{X}+B^{1}M^{1}\overline X+B^{2}M^{3}\overline X\right]\, ds\\ &\,=-\left\{A_{s}^{\prime}\left(P_{1}(s)\widetilde{X_{s}}+P_{2}(s)\overline {X_{s}}\right)+\left(Q_{s}+C_{s}^{\prime}P_{1}(s)C_{s}\right)\left(\overline{X_{s}}+\widetilde{X_{s}}\right) \right.\\ & \left. +C_{s}^{\prime} P_{1}(s)\left[D_{s}^{1}M^{1}_{s}\overline{X_{s}}+D_{s}^{2} \left(M_{s}^{2}\widetilde {X_{s}}+M^{3}_{s}\overline{X_{s}}\right)\right] \right\}ds \\ &+ k^{\prime}(s){dW}_{s}. \end{array} $$

We expect the following system for (P1,P2):

$$\begin{array}{@{}rcl@{}} &\Dot{P}_{1}+P_{1}A+A^{\prime}P_{1}+C^{\prime}P_{1}C+Q\\ &-\left(P_{1}B^{2}+C^{\prime} P_{1}D^{2}\right)\Lambda_{2}^{-1}(P_{1})\left(P_{1}B^{2}+C^{\prime} P_{1}D^{2}\right)^{\prime}=0,\\ &P_{1}(T)=G \end{array} $$

and


$$\begin{array}{@{}rcl@{}} \Dot{P}_{2}+P_{2}A+A^{\prime}P_{2}+C^{\prime}P_{1}C+Q+C^{\prime}P_{1}D^{1}M^{1}+C^{\prime}P_{1}D^{2}M^{3}\\ +P_{2}B^{1}M^{1}+P_{2}B^{2}M^{3}=0, \quad \quad P_{2}(T)=G. \end{array} $$

The last equation can be rewritten into the following one:

$$\begin{array}{@{}rcl@{}} \Dot{P}_{2}+P_{2}\widetilde{A}(P_{1})+\widetilde{A}^{\prime}(P_{1})P_{2}+\widetilde{Q}(P_{1})-P_{2}\mathcal{N}(P_{1})P_{2}=0,\quad P_{2}(T)=G \end{array} $$

where for \(S\in \mathbb {S}^{n}_{+}\),

$${\begin{aligned} U(S)&:= S-SD^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}S,\\[-4pt] \widetilde{Q}(S)&:=Q+C^{\prime}U(S)C-C^{\prime}U(S)D^{1}\widehat{\Lambda}^{-1}(S)\left(D^{1}\right)^{\prime}U(S)C,\\[-4pt] \widetilde{A}(S)&:=A-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SC\\[-4pt] &-\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right]\widehat{\Lambda}^{-1}(S)\left(D^{1}\right)^{\prime}U(S)C,\\ \mathcal{N}(S)&:=B^{2}\Lambda_{2}^{-1}(S)\left(B^{2}\right)^{\prime}\\[-4pt] &+\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right]\widehat{\Lambda}^{-1}(S)\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right]^{\prime}. \end{aligned}} $$

We have the following representations for M1 and M3:

$$ {\begin{aligned} M^{1}&=-\widehat{\Lambda}^{-1}(P_{1})\left[\left(B^{1}\right)^{\prime}P_{2}+\left(D^{1}\right)^{\prime}U(P_{1})C-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\left(B^{2}\right)^{\prime}P_{2}\right],\\[-4pt] M^{3}&=-\Lambda_{2}^{-1}(P_{1})\left\{\left(B^{2}\right)^{\prime}P_{2}+\left(D^{2}\right)^{\prime}P_{1}C\right.\\[-4pt] &\quad -\left(D^{2}\right)^{\prime}P_{1}D^{1}\widehat{\Lambda}^{-1}(P_{1})\left[\left(B^{1}\right)^{\prime}P_{2}+\left(D^{1}\right)^{\prime}U(P_{1})C\right.\\[-4pt] &\quad \left.\left. -\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\left(B^{2}\right)^{\prime}P_{2}\right] \right\}. \end{aligned}} $$

Lemma 1

For \(S\in \mathbb {S}^{n}_{+}\), we have \(\widetilde {Q}(S)\ge 0\).

Proof


First, we show that U(S)≥0. In fact, we have \(\left (\text {setting}~ \widehat {D^{2}}:=S^{1/2}D^{2}\right)\)

$$\begin{array}{@{}rcl@{}} U(S)&=S-S^{1/2}\widehat{D^{2}}\left[R^{2}+\left(\widehat{D^{2}}\right)^{\prime}\widehat{D^{2}}\right]^{-1}\left(\widehat{D^{2}}\right)^{\prime}S^{1/2}\\[-4pt] &\ge S-S^{1/2}IS^{1/2}=0. \end{array} $$

Here we have used the following well-known matrix inequality:

$$ D(R+D^{\prime}FD)^{-1}D^{\prime}\le F^{-1} $$

for \(D\in \mathbb {R}^{n\times m}\), and positive definite matrices \(F\in \mathbb {S}^{n}\) and \(R\in \mathbb {S}^{m}\).
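A quick numerical spot-check of this inequality on random positive definite data (illustrative only): the gap F^{-1} - D(R+D'FD)^{-1}D' should be nonnegative definite in every trial.

```python
import numpy as np

# Spot-check of D (R + D'FD)^{-1} D' <= F^{-1} on random data;
# F and R are made positive definite by adding 0.1 I.
rng = np.random.default_rng(2)
n, m = 4, 3
for _ in range(100):
    D = rng.standard_normal((n, m))
    X = rng.standard_normal((n, n)); F = X @ X.T + 0.1 * np.eye(n)
    Y = rng.standard_normal((m, m)); R = Y @ Y.T + 0.1 * np.eye(m)
    gap = np.linalg.inv(F) - D @ np.linalg.solve(R + D.T @ F @ D, D.T)
    # The symmetrized gap must be (numerically) nonnegative definite.
    assert np.min(np.linalg.eigvalsh(0.5 * (gap + gap.T))) >= -1e-8
```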

Using again the inequality (35), we have \(\left (\text {setting}~ \widehat {D^{1}}:=[U(S)]^{1/2}D^{1}\right)\)

$$\begin{array}{@{}rcl@{}} \widetilde{Q}(S)&=&Q+C^{\prime}U(S)C\\[-4pt] &\,-\,&C^{\prime}U(S)D^{1}\!\left[\!R^{1}\,+\,\left(\!D^{1}\!\right)^{\prime}SD^{1}\,-\,\left(\!D^{1}\!\right)^{\prime}SD^{2}\Lambda_{2}^{-1}(S)\!\left(\!D^{2}\!\right)\!^{\prime}SD^{1}\!\right]^{-1}\!\left(\!D^{1}\!\right)\!^{\prime}U(\!S\!)C\\[-4pt] &=&Q+C^{\prime}U(S)C-C^{\prime}U(S)D^{1}\left[R^{1}+\left(D^{1}\right)^{\prime}U(S)D^{1}\right]^{-1}\left(D^{1}\right)^{\prime}U(S)C\\[-4pt] &=&Q\,+\,C^{\prime}U(S)C\,-\,C^{\prime}[U(S)]^{1/2}\widehat{D^{1}}\left[\!R^{1}\,+\,\left(\widehat{D^{1}}\right)^{\prime}\widehat{D^{1}}\right]^{-1}\left(\widehat {D^{1}}\right)^{\prime}[U(S)]^{1/2}C\\[-4pt] &\ge& Q+C^{\prime}U(S)C-C^{\prime}[U(S)]^{1/2}I [U(S)]^{1/2}C\ge 0. \end{array} $$

The proof is complete. □
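Lemma 1 itself can be sanity-checked numerically. The sketch below (single noise d=1, random illustrative coefficients, not data from the paper) verifies U(S)≥0 and Q̃(S)≥0 on random nonnegative definite matrices S.

```python
import numpy as np

rng = np.random.default_rng(3)
n, l1, l2 = 3, 2, 2
C = rng.standard_normal((n, n))
D1, D2 = rng.standard_normal((n, l1)), rng.standard_normal((n, l2))
Q = np.eye(n)
R1, R2 = np.eye(l1), np.eye(l2)
for _ in range(100):
    X = rng.standard_normal((n, n)); S = X @ X.T          # S in S^n_+
    # U(S) = S - S D2 Lambda_2(S)^{-1} D2' S
    U = S - S @ D2 @ np.linalg.solve(R2 + D2.T @ S @ D2, D2.T @ S)
    # hat-Lambda(S) = Lambda_1(S) - D1' S D2 Lambda_2(S)^{-1} D2' S D1
    Lhat = (R1 + D1.T @ S @ D1
            - D1.T @ S @ D2 @ np.linalg.solve(R2 + D2.T @ S @ D2,
                                              D2.T @ S @ D1))
    # Qtilde(S) = Q + C'U(S)C - C'U(S)D1 hat-Lambda(S)^{-1} D1'U(S)C
    Qt = Q + C.T @ U @ C - C.T @ U @ D1 @ np.linalg.solve(Lhat,
                                                          D1.T @ U @ C)
    assert np.min(np.linalg.eigvalsh(0.5 * (U + U.T))) >= -1e-6
    assert np.min(np.linalg.eigvalsh(0.5 * (Qt + Qt.T))) >= -1e-6
```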

3.2 Existence and uniqueness of optimal control

Theorem 2

Assume that R1≫0 and R2≫0. Then, there is a unique optimal control u∗, and Riccati Eqs. (30) and (32) have unique nonnegative solutions P1 and P2, respectively. The optimal control u∗=(u1∗,u2∗) has the following feedback form:

$$ u^{1*}=M^{1}\overline{X^{*}}, \quad u^{2*}=M^{2}\widetilde{X}^{*}+M^{3}\overline{X}^{*}=M^{2}X^{*}+\left(M^{3}-M^{2}\right)\overline{X}^{*} $$

where X∗ is the state process corresponding to the optimal control u∗, \(\overline {X_{t}^{*}}:=\mathbb {E}\left [X_{t}^{*}\right ]\), and \(\widetilde {X_{t}^{*}}:=X_{t}^{*}-\overline {X_{t}^{*}}\) for t∈[0,T]. The optimal feedback system is given by

$$\begin{array}{@{}rcl@{}} X_{t}&\!\!\!\!\!\!\!\!\!=x_{0}+\int_{0}^{t}\left[\left(A+B^{2}M^{2}\right)X_{s}+\left(B^{1}M^{1}-B^{2}M^{2}+B^{2}M^{3}\right)\overline X_{s}\right]\, ds\\ &\,\,+\int_{0}^{t}\left[\left(C\,+\,D^{2}M^{2}\right)X_{s}\,+\,\left(D^{1}M^{1}\,-\,D^{2}M^{2}+D^{2}M^{3}\right)\overline X_{s}\right]\, {dW}_{s}, \quad t\ge 0. \end{array} $$

It is a mean-field stochastic differential equation. The expected optimal state \(\overline {X_{t}^{*}}\) is governed by the following ordinary differential equation:

$$ \overline X_{t}=x_{0}+\int_{0}^{t}\left(A+B^{1}M^{1}+B^{2}M^{3}\right)\overline X_{s}\, ds, \quad t\ge 0; $$

and \(\widetilde {X_{t}^{*}}\) is governed by the following stochastic differential equation:

$$ \begin{aligned} \widetilde X_{t}&=\int_{0}^{t}\left(A+B^{2}M^{2}\right)\widetilde X_{s}\, ds \\ &\quad+\int_{0}^{t}\left[\left(C+D^{2}M^{2}\right)\widetilde X_{s}+\left(C+D^{1}M^{1}+D^{2}M^{3}\right)\overline X_{s}\right]\, {dW}_{s}, \quad t\ge 0. \end{aligned} $$

The optimal value is given by

$$ J(u^{*})=\frac{1}{2}\left\langle P_{2}(0)x_{0}, x_{0}\right\rangle. $$

Proof


Since R1 and R2 are uniformly positive, the cost functional J(u) is strictly convex in u and thus admits a unique minimizer; hence the optimal control u∗ is unique. Define

$$ \widehat{u}^{1}:=M^{1}\overline {X^{*}}, \quad \widehat{u}^{2}:=M^{2}\widetilde{X^{*}}+M^{3}\overline{X^{*}}, \quad \widehat{u}:=\left(\widehat{u}^{1}, \widehat{u}^{2}\right) $$

and


$$\begin{array}{@{}rcl@{}} \widehat{p}:=P_{1}\widetilde{X^{*}}+P_{2}\overline{X^{*}},\quad \widehat{k}:=P_{1}\left(CX^{*}+D u^{*}\right). \end{array} $$

We can check that the pair \(\left (\widehat {p}, \widehat {k}\right)\) is an adapted solution to BSDE (6), and that \(\left (\widehat {u}, \widehat {p}, \widehat {k}\right)\) satisfies the optimality conditions. Hence, \(\widehat {u}\) is the optimal control u∗.

The formula (41) follows from the computation of \(\left \langle p_{s}, X^{*}_{s} \right \rangle \) with Itô's formula. All other assertions are obvious. □
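A scalar worked example (n=l1=l2=d=1, illustrative coefficients) shows how the two decoupled Riccati equations can be integrated backward in practice; P2(0) then determines the optimal value via (41).

```python
# Scalar sketch: integrate Riccati Eqs. (30) and (32) backward by explicit
# Euler, using the scalar forms of Lambda_2, U, hat-Lambda, Qtilde, Atilde, N.
A, B1, B2, C, D1, D2 = -0.5, 1.0, 1.0, 0.2, 0.1, 0.3
Q, R1, R2, G, T = 1.0, 1.0, 1.0, 1.0, 1.0
N = 4000; h = T / N

P1, P2 = G, G
for _ in range(N):                          # from s = T down to s = 0
    L2 = R2 + D2**2 * P1                    # Lambda_2(P1)
    U = P1 - (P1 * D2)**2 / L2              # U(P1)
    Lh = R1 + D1**2 * U                     # hat-Lambda(P1), cf. Lemma 1 proof
    Qt = Q + C**2 * U - (C * U * D1)**2 / Lh            # Qtilde(P1)
    b = B1 - B2 * D2 * P1 * D1 / L2
    At = A - B2 * D2 * P1 * C / L2 - b * D1 * U * C / Lh  # Atilde(P1)
    Nc = B2**2 / L2 + b**2 / Lh                           # N(P1)
    dP1 = 2 * A * P1 + C**2 * P1 + Q - (P1 * B2 + C * P1 * D2)**2 / L2
    dP2 = 2 * At * P2 + Qt - Nc * P2**2
    P1, P2 = P1 + h * dP1, P2 + h * dP2

# Both solutions should remain positive on [0, T].
assert P1 > 0 and P2 > 0
```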

Remark 1

The uniqueness of the solution to the mean-field FBSDE (consisting of the four Eqs. (1) with (X,u)=(X∗,u∗) and (6)-(8)) can be obtained from the uniqueness of optimal controls. In fact, as u∗ is unique, the corresponding state process X∗ is unique, and thus the solution (p,k) to the adjoint equation is unique.

It can also be proved in a direct way. In fact, if (X,u,p,k) is an alternative solution to the FBSDE and satisfies the optimality condition, then by setting

$${\delta p}:=p-\left(P_{1}\widetilde{X}+P_{2}\overline{X}\right),\quad {\delta k}:=k-P_{1}(CX+D u), $$

and by putting

$$p={\delta p}+P_{1}\widetilde{X}+P_{2}\overline{X},\quad k={\delta k}+P_{1}(CX+D u) $$

into (7) and (8), we have δpT=0 and the following two equalities

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left\{\left(B^{1}\right)^{\prime}\left({\delta p}\,+\,P_{1}\widetilde{X}\,+\,P_{2}\overline{X}\right)+\left(D^{1}\right)^{\prime}\left[{\delta k}+P_{1}(CX+D u)\right]+R^{1}u^{1}\right\}&=0, \end{array} $$
$$\begin{array}{@{}rcl@{}} \left(B^{2}\right)^{\prime}\left({\delta p}+P_{1}\widetilde{X}+P_{2}\overline{X}\right)+\left(D^{2}\right)^{\prime}({\delta k}+P_{1}(CX+D u))+R^{2}u^{2}_{s}&=0. \end{array} $$

Therefore, we have

$$ \overline{u^{2}}=-\Lambda_{2}^{-1}(P_{1})\left[\left(B^{2}\right)^{\prime}\overline {\delta p}+\left(D^{2}\right)^{\prime}\overline {\delta k}+\Theta_{2} \overline X+\left(D^{2}\right)^{\prime}P_{1}D^{1}u^{1}\right]. $$

In view of (44) and (45), we have

$$ u^{1}=L^{1}\overline{\delta p}+L^{2}\overline{\delta k}+M^{1}\overline{X} $$

and


$$ u^{2}=L^{3}\delta p+L^{4}\delta k+L^{5}\overline{\delta p}+L^{6}\overline{\delta k}+M^{2}\widetilde{X}+M^{3}\overline{X} $$

where


$$\begin{aligned} L^{1}&:=-\widehat{\Lambda}^{-1}(P_{1})\left[\left(B^{1}\right)^{\prime}-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\left(B^{2}\right)^{\prime}\right],\\ L^{2}&:=-\widehat{\Lambda}^{-1}(P_{1})\left[\left(D^{1}\right)^{\prime}-\left(D^{1}\right)^{\prime}P_{1}D^{2}\Lambda_{2}^{-1}(P_{1})\left(D^{2}\right)^{\prime}\right],\\ L^{3}&:=-\Lambda_{2}^{-1}(P_{1})\left(B^{2}\right)^{\prime},\\ L^{4}&:=-\Lambda_{2}^{-1}(P_{1})\left(D^{2}\right)^{\prime},\\ L^{5}&:=-\Lambda_{2}^{-1}(P_{1})\left(D^{2}\right)^{\prime}P_{1}D^{1}L^{1},\\ L^{6}&:=-\Lambda_{2}^{-1}(P_{1})\left(D^{2}\right)^{\prime}P_{1}D^{1}L^{2}. \end{aligned} $$

Define a new function f as follows:

$$\begin{array}{@{}rcl@{}} &f(s, p, k, P, K)\\ &:=\left[A^{\prime}_{s}+P_{1}(s)B^{2}_{s}L^{3}_{s}+C^{\prime}_{s}P_{1}(s)D^{2}_{s}L^{3}_{s}\right]p+\left[C^{\prime}_{s}+P_{1}(s)B^{2}_{s}L^{4}_{s}+C^{\prime}_{s}P_{1}(s)D^{2}_{s}L^{4}_{s}\right]k\\ &+\left[C^{\prime}_{s}P_{1}(s)D^{1}_{s}L^{1}_{s}+P_{2}(s)B^{1}_{s}L^{1}_{s}+P_{2}(s)B^{2}_{s}L^{3}_{s}-P_{1}(s)B^{2}_{s}L^{3}_{s}+P_{2}(s)B^{2}_{s}L^{5}_{s}\right.\\ &+\left.C^{\prime}_{s}P_{1}(s)D^{2}_{s}L^{5}_{s}\right]P+\left[C^{\prime}_{s}P_{1}(s)D^{1}_{s}L^{2}_{s}+P_{2}(s)B^{1}_{s}L^{2}_{s}+P_{2}(s)B^{2}_{s}L^{4}_{s} -P_{1}(s)B^{2}_{s}L^{4}_{s}\right.\\ &+\left.P_{2}(s)B^{2}_{s}L^{6}_{s}+C^{\prime}_{s}P_{1}(s)D^{2}_{s}L^{6}_{s}\right]K. \end{array} $$

Then (δp,δk) satisfies the following linear homogeneous BSDE of mean-field type:

$$ d\delta p_{s}=-f\left(s, \delta p_{s},\delta k_{s}, \overline{\delta p}_{s}, \overline{\delta k}_{s}\right)\, ds+\delta k_{s}\, {dW}_{s}, \quad \delta p_{T}=0. $$

In view of (Buckdahn et al. (2009), Theorem 3.1), it admits a unique solution (δp,δk)=(0,0). Therefore, X=X∗ and u=u∗.

4 Particular cases

4.1 The classical optimal stochastic LQ case: B1=0 and D1=0.

In this case, let P1 be the unique nonnegative solution to Riccati Eq. (30). Then, P1 is also the solution of Riccati Eq. (32), and the optimal control reduces to the conventional feedback form.

4.2 The deterministic control of a linear stochastic system with quadratic cost: B2=0 and D2=0.

In this case, B=B1 and D=D1, and Riccati Eq. (30) takes the following form (we write R=R1 to simplify the exposition):

$$\Dot{P}_{1}+P_{1}A+A^{\prime}P_{1}+C^{\prime}P_{1}C+Q=0,\quad P_{1}(T)=G, $$

which is a linear Liapunov equation. Riccati Eq. (32) takes the following form:

$$\Dot{P}_{2}+P_{2}\widetilde A+{\widetilde A}^{\prime}P_{2}+\widetilde Q -P_{2}B\left(R+D^{\prime}P_{1}D\right)^{-1}B^{\prime}P_{2}=0, \quad P_{2}(T)=G $$

where


$$\widetilde{A}:=A-B\left(R+D^{\prime}P_{1}D\right)^{-1}D^{\prime}P_{1}C $$

and


$$\widetilde{Q}:=Q+C^{\prime}P_{1}C-C^{\prime}P_{1}D\left(R+D^{\prime}P_{1}D\right)^{-1}D^{\prime}P_{1}C. $$

The optimal control takes the following feedback form:

$$u^{*}=-\left(R+D^{\prime}P_{1}D\right)^{-1}\left(B^{\prime}P_{2}+D^{\prime}P_{1}C\right)\overline{X^{*}}. $$

5 Some solvable singular cases

In this section, we study the possibility of R1=0 or R2=0. We have

Theorem 3

Assume that R1≫0 and

$$ R^{2}\ge 0, \quad \left(D^{2}\right)^{\prime}D^{2}\gg 0, \quad G>0. $$

Then Riccati Eqs. (30) and (32) have unique nonnegative solutions P1≫0 and P2, respectively. The optimal control is unique and has the following feedback form:

$$ u^{1*}=M^{1}\overline{X^{*}}, \quad u^{2*}=M^{2}\widetilde{X^{*}}+M^{3}\overline{X^{*}}=M^{2}X^{*}+\left(M^{3}-M^{2}\right)\overline{X^{*}}. $$

The optimal feedback system and the optimal value take identical forms to those of Theorem 2.

Proof


In view of the conditions (49), the existence and uniqueness of the solution P1≫0 to Riccati Eq. (30) can be found in Kohlmann and Tang (2003), and those of the solution P2≥0 to Riccati Eq. (32) follow from the fact that \(\widehat \Lambda (P_{1})\gg 0\), which is a consequence of the condition R1≫0.

The other assertions can be proved in an identical manner as in the proof of Theorem 2. □

Theorem 4

Assume that R2≫0 and

$$ R^{1}\ge 0, \quad \left(D^{1}\right)^{\prime}D^{1}\gg 0, \quad \quad G>0. $$

Then Riccati Eqs. (30) and (32) have unique nonnegative solutions P1≫0 and P2, respectively. The optimal control is unique and has the following feedback form:

$$ u^{1*}=M^{1}\overline{X^{*}}, \quad u^{2*}=M^{2}\widetilde{X^{*}}+M^{3}\overline{X^{*}}=M^{2}X^{*}+\left(M^{3}-M^{2}\right)\overline {X^{*}}. $$

The optimal feedback system and the optimal value take identical forms to those of Theorem 2.

Proof


The existence and uniqueness of the solution P1 to Riccati Eq. (30) are well-known. In view of the condition G>0, we have P1≫0. We now prove the existence and uniqueness of the solution P2≥0 to Riccati Eq. (32).

In view of the well-known matrix inverse formula:

$$ \left(A+BD^{-1}C\right)^{-1}=A^{-1}-A^{-1}B\left(D+CA^{-1}B\right)^{-1}CA^{-1} $$

for \(B\in \mathbb {R}^{n\times m}, C\in \mathbb {R}^{m\times n}\) and invertible matrices \(A\in \mathbb {R}^{n\times n}, D\in \mathbb {R}^{m\times m}\) such that A+BD−1C and D+CA−1B are invertible, we have the following identity:

$$\begin{array}{@{}rcl@{}} \widehat{\Lambda}(P_{1})&=R^{1}+\left(D^{1}\right)^{\prime}\left\{P_{1}-P_{1}D^{2}\left[R^{2}+\left(D^{2}\right)^{\prime} P_{1}D^{2}\right]^{-1}\left(D^{2}\right)^{\prime}P_{1}\right\}D^{1}\\ &=R^{1}+\left(D^{1}\right)^{\prime}\left[P_{1}^{-1}+D^{2}\left(R^{2}\right)^{-1}\left(D^{2}\right)^{\prime}\right]^{-1}D^{1}. \end{array} $$

Noting the condition (D1)D1≫0, we have \(\widehat {\Lambda }(P_{1})\gg 0\).

The other assertions can be proved in an identical manner as in the proof of Theorem 2. □
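The identity for Λ̂(P1) used in this proof is easy to confirm numerically; the sketch below uses random well-conditioned data (illustrative only).

```python
import numpy as np

# Check: with R2 > 0 and P1 > 0,
#   P1 - P1 D2 (R2 + D2' P1 D2)^{-1} D2' P1 = (P1^{-1} + D2 R2^{-1} D2')^{-1},
# which is the matrix inverse (Sherman-Morrison-Woodbury) formula above.
rng = np.random.default_rng(4)
n, l2 = 4, 2
X = rng.standard_normal((n, n)); P1 = X @ X.T + 0.1 * np.eye(n)
Y = rng.standard_normal((l2, l2)); R2 = Y @ Y.T + 0.1 * np.eye(l2)
D2 = rng.standard_normal((n, l2))

lhs = P1 - P1 @ D2 @ np.linalg.solve(R2 + D2.T @ P1 @ D2, D2.T @ P1)
rhs = np.linalg.inv(np.linalg.inv(P1) + D2 @ np.linalg.solve(R2, D2.T))
assert np.allclose(lhs, rhs)
```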

6 The infinite-horizon case

In this section, we consider the time-invariant situation of all the coefficients A,B,C,D,Q and R in the linear controlled stochastic differential equation (SDE)

$$ {dX}_{s}=\left[{AX}_{s}+B^{1}u_{s}^{1}+B^{2}u_{s}^{2}\right]ds+ \left[{CX}_{s}+D^{1} u_{s}^{1}+D^{2} u_{s}^{2} \right]{dW}_{s}, \quad s>0; \quad X_{0}=x_{0}, $$

and the quadratic cost functional

$$ J(u) \triangleq \frac{1}{2}\mathbb{E}\int_{0}^{\infty}\left[ \left\langle {QX}_{s}, X_{s} \right\rangle + \left\langle R^{1}u_{s}^{1}, u_{s}^{1} \right\rangle + \left\langle R^{2}u_{s}^{2}, u_{s}^{2} \right\rangle \right]ds $$

for \(\left (u^{1}, u^{2}\right)\in L^{2}\left (0,\infty ; \mathbb {R}^{l_{1}}\right)\times \mathcal {L}^{2}_{\mathcal {F}}\left (0,\infty ; \mathbb {R}^{l_{2}}\right)\) and u:=((u1)′,(u2)′)′.

The admissible class of controls for the deterministic controller u1 is \(L^{2}\left (0,\infty ; \mathbb {R}^{l_{1}}\right)\) and for the random controller u2 is \(\mathcal {L}^{2}_{\mathcal {F}}\left (0,\infty ; \mathbb {R}^{l_{2}}\right)\). For simplicity of subsequent exposition, we assume that Q>0.

Assumption 1

There is \(K\in \mathbb {R}^{l_{2}\times n}\) such that the unique solution X to the following linear matrix stochastic differential equation

$$ {dX}_{s}=\left(A+B^{2}K\right)X_{s}\, ds+ \left(C+D^{2} K\right)X_{s}{dW}_{s}, \quad s>0;\quad X_{0}=I, $$

lies in \(L^{2}_{\mathcal {F}}(0,\infty ; \mathbb R^{n\times n})\). That is, our linear control system (55) is stabilizable using only control u2.

Remark 2

By applying Itô’s formula to |Xs|2 and taking the expectation, it is straightforward to see that if there exists \(K\in \mathbb R^{l_{2}\times n}\) such that

$$\left(A+B^{2}K\right)+\left(A+B^{2}K\right)^{\prime}+\left(C+D^{2} K\right)^{\prime}\left(C+D^{2} K\right)<0, $$

then Assumption 1 is satisfied. In particular, if

$$A+A^{\prime}+C^{\prime}C<0, $$

then Assumption 1 is satisfied.
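The sufficient condition of Remark 2 is a simple matrix inequality that can be checked by an eigenvalue computation; here is a small sketch with hypothetical coefficients and a hypothetical candidate gain K.

```python
import numpy as np

# Check M := (A + B2 K) + (A + B2 K)' + (C + D2 K)'(C + D2 K) < 0,
# the sufficient condition of Remark 2, for illustrative data.
A = np.array([[-2.0, 0.5], [0.0, -3.0]])
B2 = np.array([[1.0], [0.0]])
C = np.array([[0.2, 0.0], [0.0, 0.2]])
D2 = np.array([[0.1], [0.1]])
K = np.array([[-1.0, 0.0]])            # a candidate stabilizing gain

Acl, Ccl = A + B2 @ K, C + D2 @ K
M = Acl + Acl.T + Ccl.T @ Ccl
# M is symmetric; negative definiteness means all eigenvalues < 0.
assert np.max(np.linalg.eigvalsh(M)) < 0
```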

We have

Lemma 2

Assume that Q>0, that Assumption 1 holds, and that one of the following three sets of conditions holds true:

(i) R1>0 and R2>0; (ii) R1>0, R2≥0, (D2)′D2>0, and G>0; (iii) R1≥0, (D1)′D1>0, R2>0, and G>0.

Then, the algebraic Riccati equation

$$\begin{array}{@{}rcl@{}} &P_{1}A+A^{\prime}P_{1}+C^{\prime}P_{1}C+Q\\ &-\left(P_{1}B^{2}+C^{\prime} P_{1}D^{2}\right)\Lambda_{2}^{-1}(P_{1})\left(P_{1}B^{2}+C^{\prime} P_{1}D^{2}\right)^{\prime}=0 \end{array} $$

has a unique positive solution P1, and the algebraic Riccati equation

$$ P_{2}\widetilde{A}(P_{1})+\widetilde{A}^{\prime}(P_{1})P_{2}+\widetilde{Q}(P_{1})-P_{2}\mathcal{N}(P_{1})P_{2}=0 $$

has a positive solution P2. Here for \(S\in \mathbb {S}^{n}_{+}\),

$$\begin{array}{@{}rcl@{}} U(S)&\!:= S-SD^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}S; \\ \widetilde{Q}(S)&\!:=Q+C^{\prime}U(S)C-C^{\prime}U(S)D^{1}\widehat{\Lambda}^{-1}(S)\left(D^{1}\right)^{\prime}U(S)C;\\ \widetilde{A}(S)&\!:=A-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SC\\ &\!-\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right]\widehat{\Lambda}^{-1}(S)\left(D^{1}\right)^{\prime}U(S)C;\\ \mathcal{N}(S)&\!:=B^{2}\Lambda_{2}^{-1}(S)\left(B^{2}\right)^{\prime}\\ &\,+\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right] \widehat{\Lambda}^{-1}(S)\left[B^{1}-B^{2}\Lambda_{2}^{-1}(S)\left(D^{2}\right)^{\prime}SD^{1}\right]^{\prime}. \end{array} $$

Proof


The existence and uniqueness of the positive solution P1 to algebraic Riccati Eq. (58) are well-known; we refer to (Wu and Zhou (2001), Theorem 7.1, page 573). Now we prove the existence of a positive solution to algebraic Riccati Eq. (59). We use an approximation method based on finite-horizon Riccati equations.

For any T>0, let \(P_{1}^{T}\) and \(P_{2}^{T}\) be the unique solutions to Riccati Eqs. (30) and (32), with G=0. It is well-known that \(P_{1}^{T}\) converges to the constant matrix P1 as T→∞. We now show the convergence of \(P_{2}^{T}\). Firstly, \(P_{2}^{T}(t)\) is nondecreasing in T for any t≥0, due to the following representation formula: for \((t,x)\in [0,T]\times \mathbb R^{n},\)

$$ \left\langle P_{2}^{T}(t)x, x \right\rangle= \inf_{\substack{u^{1}\in L^{2}(t,T; \mathbb{R}^{l_{1}})\\ u^{2}\in \mathcal{L}^{2}_{\mathcal{F}}\left(t,T; \mathbb R^{l_{2}}\right)}}\frac{1}{2}\mathbb{E}^{t,x}\!\!\int_{t}^{T}\left[ \left\langle {QX}_{s}, X_{s} \right\rangle + \left\langle R^{1}u_{s}^{1}, u_{s}^{1} \right\rangle + \left\langle R^{2}u_{s}^{2}, u_{s}^{2} \right\rangle \right]ds, $$

whose proof is identical to that of the formula (41). From Assumption 1, it is straightforward to show that there is a constant Ct>0, independent of T, such that \(\left |P^{T}_{2}(t)\right |\le C_{t}\). Hence \(P^{T}_{2}(t)\) converges to a limit P2(t) as T→∞. Furthermore, since all the coefficients are time-invariant and \(\left (P_{1}^{T}(T), P_{2}^{T}(T)\right)=(0,0)\) for any T>0, we have

$$ \left(P_{1}^{T+s}(t+s), P_{2}^{T+s}(t+s)\right)=\left(P_{1}^{T}(t), P_{2}^{T}(t)\right). $$

Taking the limit T→∞ yields P2(t+s)=P2(t) for all s,t≥0. Therefore, P2 is a constant matrix.

Taking the limit T→∞ in the integral form of Riccati Eq. (32), we see that P2 solves the algebraic Riccati Eq. (59).
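The approximation argument can be visualized in the simplest special case C=0, D1=D2=0 with a single control, where the Riccati equation reduces to the classical LQR form A′P+PA+Q−PBR−1B′P=0. The sketch below (with made-up matrices A, B, Q, R, not taken from the paper) integrates the finite-horizon Riccati equation backward from the terminal condition P^T(T)=0 and checks that P^T(0) approaches the algebraic solution as the horizon T grows.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data (not from the paper): C = 0 and D = 0, so the Riccati
# equation is the classical LQR one and scipy can solve its algebraic form.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def P_T_at_0(T, steps=40000):
    """Finite-horizon solution P^T(0): integrate
    dP/dtau = A'P + P A + Q - P B R^{-1} B' P   (tau = T - t)
    forward in tau from the terminal condition P^T(T) = 0, by explicit Euler."""
    P = np.zeros((2, 2))
    dtau = T / steps
    for _ in range(steps):
        P = P + dtau * (A.T @ P + P @ A + Q
                        - P @ B @ np.linalg.solve(R, B.T @ P))
    return P

P_inf = solve_continuous_are(A, B, Q, R)      # algebraic Riccati solution
errs = [np.linalg.norm(P_T_at_0(T) - P_inf) for T in (1.0, 5.0, 40.0)]
# errs shrinks as the horizon grows: P^T(0) increases monotonically to P_inf.
```

This mirrors the proof: P^T(0) is nondecreasing in T, bounded, and its limit is a fixed point of the Riccati dynamics, i.e. a solution of the algebraic equation.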

Finally, in view of Q>0, we have \(P_{2}^{1}(0)>0\). Hence \(P_{2}\ge P_{2}^{1}(0)>0\). □

Theorem 5

Let Assumption 1 be satisfied. Assume that Q>0 and that one of the following three sets of conditions holds:

(i) R1>0 and R2>0; (ii) R1>0, R2≥0, (D2)′D2>0, and G>0; and (iii) R1≥0, (D1)′D1>0, R2>0, and G>0.

Let P1 be the unique positive solution to the algebraic Riccati Eq. (58), and let P2 be a positive solution to the algebraic Riccati Eq. (59). Let (u∗,X∗) be an optimal pair with u∗=(u1∗,u2∗). Then the optimal control is unique and has the following feedback form:

$$ u^{1*}=M^{1}\overline{X^{*}}, \quad u^{2*}=M^{2}\widetilde{X^{*}}+M^{3}\overline{X^{*}}=M^{2}X^{*}+\left(M^{3}-M^{2}\right)\overline {X^{*}} $$

where \(\overline {X_{t}^{*}}:=\mathbb {E}\left [X^{*}_{t}\right ]\) and \(\widetilde {X_{t}^{*}}:=X_{t}^{*}-\overline {X_{t}^{*}}\) for t≥0. The optimal feedback system is given by

$$\begin{array}{@{}rcl@{}} X_{t}&=x_{0}+\int_{0}^{t}\left[\left(A+B^{2}M^{2}\right)X_{s}+\left(B^{1}M^{1}-B^{2}M^{2}+B^{2}M^{3}\right)\overline{X}_{s}\right]\, ds\\ &+\int_{0}^{t}\left[\left(C\,+\,D^{2}M^{2}\right)X_{s}\,+\,\left(D^{1}M^{1}\,-\,D^{2}M^{2}\,+\,D^{2}M^{3}\right)\overline{X}_{s}\right]\, {dW}_{s}, \quad t\ge 0. \end{array} $$

It is a mean-field stochastic differential equation. The expected optimal state \(\overline {X_{t}^{*}}\) is governed by the following ordinary differential equation:

$$ \overline{X}_{t}=x_{0}+\int_{0}^{t}\left(A+B^{1}M^{1}+B^{2}M^{3}\right)\overline{X}_{s}\, ds, \quad t\ge 0; $$

and \(\widetilde {X_{t}^{*}}\) is governed by the following stochastic differential equation:

$$\begin{array}{@{}rcl@{}} \widetilde{X}_{t}&=\int_{0}^{t}\left(A+B^{2}M^{2}\right)\widetilde{X}_{s}\, ds \\ &+\int_{0}^{t}\left[\left(C+D^{2}M^{2}\right)\widetilde{X}_{s}+\left(C+D^{1}M^{1}+D^{2}M^{3}\right)\overline{X}_{s}\right]\, {dW}_{s}, \quad t\ge 0. \end{array} $$

The optimal value is given by

$$ J\left(u^{*}\right)=\langle P_{2}\, x_{0}, x_{0}\rangle, $$

which implies the uniqueness of the positive solution to Algebraic Riccati Eq. (59).
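The decomposition of the optimal state into the mean X̄∗ (an ODE) and the fluctuation X̃∗ (an SDE) can be sanity-checked by an Euler–Maruyama simulation: the Monte-Carlo average of the simulated feedback state should track the mean ODE. The scalar coefficients and gains M1, M2, M3 below are made up for illustration only; they are not the optimal gains of the theorem.

```python
import numpy as np

# Scalar illustration (n = d = l1 = l2 = 1) with made-up coefficients and
# made-up gains M1, M2, M3 (NOT the optimal gains from the theorem).
A, B1, B2 = -1.0, 1.0, 1.0
C, D1, D2 = 0.2, 0.1, 0.1
M1, M2, M3 = -0.5, -0.5, -0.5
x0, T, steps, n_paths = 1.0, 1.0, 200, 50_000
dt = T / steps
rng = np.random.default_rng(1)

# Closed-loop coefficients of the mean-field SDE for X.
a_x = A + B2 * M2                    # multiplies X_s in the drift
a_m = B1 * M1 - B2 * M2 + B2 * M3    # multiplies E[X_s] in the drift
c_x = C + D2 * M2                    # multiplies X_s in the diffusion
c_m = D1 * M1 - D2 * M2 + D2 * M3    # multiplies E[X_s] in the diffusion
a_mean = A + B1 * M1 + B2 * M3       # drives the ODE for the mean

X = np.full(n_paths, x0)             # Monte-Carlo paths of X
m = x0                               # ODE solution for E[X] (Euler scheme)
for _ in range(steps):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    X = X + (a_x * X + a_m * m) * dt + (c_x * X + c_m * m) * dW
    m = m + a_mean * m * dt

# X.mean() and m should agree up to Monte-Carlo error.
```

Note that a_mean = a_x + a_m, which is exactly why the mean equation closes on itself: taking expectations in the SDE kills the diffusion term and turns the drift into the ODE for X̄.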


The uniqueness of the optimal control is an immediate consequence of the strict convexity of the cost functional in both control variables u1 and u2. We now show that u∗ is optimal.

Note that the constant positive matrix P2 solves the Riccati differential Eq. (32) with G:=P2. Define for T>0, \(\left (u^{1}, u^{2}\right)\in L^{2}\left (0,T; \mathbb {R}^{l_{1}}\right)\times \mathcal {L}^{2}_{\mathcal {F}}\left (0,T; \mathbb {R}^{l_{2}}\right)\), and u:=(u1,u2),

$$J^{T}(u):=\frac{1}{2}\mathbb{E} \int_{0}^{T}\left[ \left\langle {QX}_{s}, X_{s} \right\rangle + \left\langle R^{1}u_{s}^{1}, u_{s}^{1} \right\rangle + \left\langle R^{2}u_{s}^{2}, u_{s}^{2} \right\rangle \right]ds. $$

For any admissible pair (u1,u2), from Theorem 2, we have

$$\mathbb{E} \left\langle P_{2}\, X_{T}^{u}, X_{T}^{u}\right\rangle +J^{T}(u)\ge \langle P_{2}\, x_{0}, x_{0}\rangle. $$

Therefore, letting T→∞, we have J(u)≥〈P2 x0,x0〉.

On the other hand, define \(p_{s}:=P_{1}\widetilde {X_{s}^{*}}+P_{2}\overline {X_{s}^{*}}\) for s≥0. Applying Itô’s formula to the inner product 〈ps,Xs∗〉, we have

$$ \mathbb{E}\left[\left\langle P_{1} \widetilde{X_{T}^{*}}+P_{2}\overline{X_{T}^{*}}, X^{*}_{T}\right\rangle\right]+ J^{T}\left(u^{*}|_{[0,T]}\right)=\langle P_{2}\, x_{0}, x_{0}\rangle, \quad \forall \, T>0. $$


Since

$$\mathbb{E}\left[\left\langle P_{1}\widetilde{X_{T}^{*}}+P_{2}\overline{X_{T}^{*}}, X^{*}_{T}\right\rangle\right] =\mathbb{E}\left[\left\langle P_{1}\widetilde{X_{T}^{*}}, \widetilde{X_{T}^{*}}\right\rangle\right] +\mathbb{E}\left[\left\langle P_{2}\overline{X_{T}^{*}}, \overline{X}^{*}_{T}\right\rangle\right]\ge 0, $$

we have JT(u∗|[0,T])≤〈P2 x0,x0〉 for any T>0; thus X∗ is stable and u∗ is admissible.

Passing to the limit T→∞ in (67), and noting that the first term on the left-hand side vanishes by the stability of X∗, we have

$$ J\left(u^{*}\right)=\left\langle P_{2}\, x_{0}, x_{0}\right\rangle. $$

The proof is complete. □