1 Introduction

This paper is concerned with indefinite discrete-time mean-field LQ (MF-LQ) optimal control problems over an infinite horizon. The system is governed by the following stochastic difference equation (SDE):

$$ \textstyle\begin{cases} x_{k+1}=(Ax_{k}+\bar{A}\mathbb{E}x_{k}+Bu_{k}+\bar{B}\mathbb{E}u_{k}) +(Cx_{k}+ \bar{C}\mathbb{E}x_{k}+Du_{k}+\bar{D}\mathbb{E}u_{k})w_{k},\\ \quad k=0,1, \ldots , \\ x_{0}=\xi , \end{cases} $$
(1.1)

where \(x_{k}\in \mathbb{R}^{n}\) and \(u_{k}\in \mathbb{R}^{m}\) are the state and control processes, respectively. \(\mathbb{E}\) is the expectation operator and \(\{w_{k}\}_{k\geq 0}\) is a martingale difference sequence, defined on a complete filtered probability space \((\varOmega ,\mathfrak{F},\{\mathfrak{F}_{k}\}_{k\geq 0},P)\), in the sense that \(\mathbb{E}[w_{k}|\mathfrak{F}_{k}]=0\), \(\mathbb{E}[w_{k}^{2}|\mathfrak{F}_{k}]=\sigma ^{2}\), where \(\mathfrak{F}_{k}\) is the σ-field generated by \(\{\xi ,w_{0},\ldots ,w_{k-1}\}\) and \(\mathfrak{F}_{0}=\{\emptyset ,\varOmega \}\). The initial value ξ and the noises \(w_{k}\) are assumed to be mutually independent. \(A, \bar{A}, C, \bar{C} \in \mathbb{R}^{n\times n}\) and \(B, \bar{B}, D, \bar{D} \in \mathbb{R}^{n\times m}\) are given deterministic matrices. System (1.1) can also be regarded as a mean-field SDE (MF-SDE). Mean-field theory has been developed to investigate collective behaviors arising from individuals’ mutual interactions in various physical and sociological dynamical systems. The problem studied here combines mean-field theory with LQ stochastic optimal control (see [1, 2]).
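Since \(\{w_{k}\}\) is a martingale difference sequence, taking expectations in (1.1) yields the deterministic mean recursion \(\mathbb{E}x_{k+1}=(A+\bar{A})\mathbb{E}x_{k}+(B+\bar{B})\mathbb{E}u_{k}\). The following scalar Monte Carlo sketch checks this recursion against sample averages for \(u_{k}\equiv 0\); all coefficient values are our own illustrative choices, not taken from the analysis below.

```python
import random

# Illustrative scalar instance of system (1.1) with u_k = 0 (hypothetical values):
# x_{k+1} = (A x_k + Abar*E x_k) + (C x_k + Cbar*E x_k) w_k,
# where the w_k are i.i.d. with mean 0 and variance sigma^2.
A, Abar, C, Cbar, sigma = 0.5, 0.2, 0.3, 0.1, 1.0
random.seed(0)

M, K = 50_000, 10            # number of sample paths, number of time steps
paths = [1.0] * M            # x_0 = xi = 1 (deterministic initial value)
mean_exact = 1.0             # E x_0
for k in range(K):
    # The empirical mean stands in for E x_k (the mean-field coupling).
    Ex = sum(paths) / M
    paths = [(A * x + Abar * Ex) + (C * x + Cbar * Ex) * random.gauss(0.0, sigma)
             for x in paths]
    # Taking expectations in the dynamics kills the noise term and gives:
    mean_exact = (A + Abar) * mean_exact

empirical = sum(paths) / M
print(empirical, mean_exact)   # the two means should be close
```

The sample average tracks the exact mean \((A+\bar{A})^{k}\mathbb{E}x_{0}\) up to Monte Carlo error, illustrating how the mean-field terms enter only through \(\mathbb{E}x_{k}\).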

Lately, mean-field problems have found many constructive and significant applications in various fields such as mathematical finance, statistical mechanics, and game theory (see [3]), and especially in stochastic optimal control (see [4]). Representative works on mean-field optimal control include, to name a few, Li and Liu [5] and Ma and Liu [6–8]. It is noteworthy that MF-LQ optimal control problems have received considerable attention. Regarding continuous-time cases, Yong [9] studied LQ optimal control problems for mean-field stochastic differential equations by a variational method and a decoupling technique; the same author in [10] systematically investigated the open-loop and closed-loop equilibrium solutions of time-inconsistent MF-LQ optimal control problems. Subsequently, Huang et al. [11] generalized the results of Yong [9] to the infinite horizon.

Nevertheless, discrete-time optimal control problems are more relevant to biomedicine, engineering, economics, operations research, the optimization of complex technological processes, etc. Recently, Elliott et al. [12] formulated the finite horizon discrete-time MF-LQ optimal control problem as an operator stochastic LQ optimal control problem. Later, the same authors in [13] discussed the infinite horizon case. Ni et al. [14] considered the indefinite mean-field stochastic LQ optimal control problem with finite horizon. Moreover, Song and Liu [15] derived the necessary and sufficient solvability condition of the finite horizon MF-LQ optimal control problem. In particular, it is worth mentioning that Zhang et al. [16] presented the necessary and sufficient stabilization conditions of the MF-LQ optimal control problem subject to system (1.1). However, the stabilization results in [16] mainly rely on a restrictive condition, namely

$$\begin{aligned} Q\geq 0, \qquad Q+\bar{Q}\geq 0, \qquad R\geq 0, \qquad R+\bar{R}\geq 0. \end{aligned}$$

Indeed, this is a critical condition in the study of MF-LQ optimal control problems. It is, therefore, natural to ask whether similar results can be derived if \(Q\), \(\bar{Q}\), \(R\), \(\bar{R}\) are merely assumed to be symmetric, a question of particular mathematical interest.

Inspired by the above arguments, in this paper we consider the following cost functional subject to system (1.1):

$$ \begin{aligned}[b] J(\xi ,u)&=\mathbb{E}\sum_{k=0}^{\infty } \bigl[x_{k}'Qx_{k} +( \mathbb{E}x_{k})' \bar{Q}\mathbb{E}x_{k}+u_{k}'Ru_{k} \\ &\quad {}+(\mathbb{E}u_{k})'\bar{R} \mathbb{E}u_{k}+2x_{k}'Gu_{k}+2(\mathbb{E}x_{k})' \bar{G}\mathbb{E}u_{k}\bigr]. \end{aligned} $$
(1.2)

Here, the cost functional contains explicit cross terms between the state and control processes, namely \(G\neq 0\) and \(\bar{G}\neq 0\). More importantly, the weighting matrices \(Q\), \(\bar{Q}\), \(R\), \(\bar{R}\) are only assumed to be symmetric, which is distinctly different from [16].

To the best of our knowledge, studies on the necessary and sufficient stabilization conditions of discrete-time MF-LQ stochastic optimal control problems, especially with indefinite weighting matrices, are fairly scarce in the literature. Besides, the stabilization properties have not been investigated systematically. In this paper, we design the LQ optimal controller by means of the GAREs (see [17]), and we obtain the existence of the maximal solution to the original GAREs by introducing an auxiliary one. Then, we show that the stabilizing solution is the maximal solution, which is employed to present the optimal value. Furthermore, under the assumption of exact observability (see [18, 19]), we show that the mean-field system is \(L^{2}\)-stabilizable if and only if the GAREs have a solution, which is also the maximal solution. Finally, we discuss the solvability of the GAREs by the SDP method (see [20–23]), and we establish the relations among the GAREs, the SDP, and the MF-LQ optimal control problems.

The remainder of the paper is organized as follows. The next section gives the problem formulation and preliminaries. Section 3 is devoted to studying the GAREs. In Sect. 4, we discuss the solvability and stabilization for the MF-LQ optimal control problems. Section 5 establishes the relations among the GAREs, the SDP, and the MF-LQ optimal control problems. A couple of numerical examples are given in Sect. 6 to illustrate our main results. Section 7 gives some concluding remarks.

Most notation adopted in this paper is standard. \(A>0\)/\(A\geq 0\) means that A is strictly positive definite/positive semi-definite. \(A'\) denotes the transpose of a matrix or vector A. \(B^{-1}\) stands for the inverse of a real matrix B. \(\dim (A)\)/\(\mathbf{R}(A)\)/\(\mathbf{Ker}(A)\) denotes the dimension/rank/kernel of A. \(\mathbb{T}^{n}\) represents the set of \(n \times n\) symmetric matrices. Denote by \(\mathcal{X}\) the space of all \(\mathbb{R}^{n}\)-valued square-integrable random variables. Let \(\mathbb{N}=\{0,1,\ldots ,N\}\), \(\tilde{\mathbb{N}}_{l}=\{l,l+1,\ldots \}\), \(\tilde{\mathbb{N}}_{0}=\tilde{\mathbb{N}}=\{0,1,2,\ldots \}\).

2 Problem formulation and preliminaries

In this paper, we study infinite horizon MF-LQ optimal control problems. To make these problems meaningful, the infinite horizon solution is also required to guarantee closed-loop stability, which is an interesting difference from the finite horizon case. We first introduce the admissible control set

$$ \mathscr{U}_{\infty }= \Biggl\{ u\Big|u_{k}\in \mathbb{R}^{m}, u_{k} \text{ is } \mathfrak{F}_{k}\text{-measurable}, \mathbb{E}\sum _{k=0}^{ \infty } \vert u_{k} \vert ^{2}< \infty \text{ and }J(\xi ,u)< \infty \Biggr\} . $$

For simplicity, system (1.1) is denoted by \([A, \bar{A}, B, \bar{B}; C, \bar{C}, D, \bar{D}]\). In addition, \([A, \bar{A}; C, \bar{C}]\) denotes \([A, \bar{A}, 0, 0; C, \bar{C}, 0, 0]\), and \([A; C]\) denotes \([A, 0, 0, 0; C, 0, 0, 0]\).

The infinite horizon MF-LQ optimal control problem to be solved can be stated as follows.

Problem \(\mathcal {A}\)

For any \(\xi \in \mathcal{X}\), find \(u^{\ast } \in \mathscr{U}_{\infty }\) such that

$$\begin{aligned} J\bigl(\xi , u^{\ast }\bigr)=\inf_{u\in \mathscr{U}_{\infty }} J(\xi , u) \equiv V(\xi ), \end{aligned}$$

where \(u^{\ast }\) is called an optimal control, and \(V(\cdot )\) is called the value function of Problem \(\mathcal{A}\).

Definition 2.1

For a matrix \(A\in \mathbb{R}^{n\times m}\), the Moore–Penrose inverse of A is defined to be the unique matrix \(A^{\dagger }\in \mathbb{R}^{m\times n}\) such that

  1. (i)

    \(AA^{\dagger }A=A\), \(A^{\dagger }AA^{\dagger }=A^{\dagger }\);

  2. (ii)

    \((AA^{\dagger })'=AA^{\dagger }\), \((A^{\dagger }A)'=A^{\dagger }A\).
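The four Penrose conditions above can be verified directly on a concrete rank-deficient matrix. The sketch below uses a hand-picked \(2\times 2\) example of our own (for a symmetric rank-one matrix \(A=vv'\), the pseudoinverse is \(A/|v|^{4}\)); the helper functions are minimal stand-ins for a linear-algebra library.

```python
# A minimal check of the four Penrose conditions from Definition 2.1 on a
# hand-picked rank-deficient matrix (the example matrix is ours, not the paper's).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def close(X, Y, tol=1e-12):
    return all(abs(a - b) < tol for r1, r2 in zip(X, Y) for a, b in zip(r1, r2))

A = [[1.0, 2.0], [2.0, 4.0]]                     # rank 1, so A is not invertible
A_dag = [[a / 25.0 for a in row] for row in A]   # A = vv' with v = (1,2): A^† = A/|v|^4

# (i)  A A^† A = A  and  A^† A A^† = A^†
assert close(matmul(matmul(A, A_dag), A), A)
assert close(matmul(matmul(A_dag, A), A_dag), A_dag)
# (ii) A A^† and A^† A are symmetric
assert close(matmul(A, A_dag), transpose(matmul(A, A_dag)))
assert close(matmul(A_dag, A), transpose(matmul(A_dag, A)))
print("all four Penrose conditions hold")
```

The same conditions are what the regularity requirements \(\rho ^{(i)}[\rho ^{(i)}]^{\dagger }M^{(i)}=M^{(i)}\) in the GAREs below rely on.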

Definition 2.2

System \([A, \bar{A}, B, \bar{B}; C, \bar{C}, D, \bar{D}]\) is called \(L^{2}\)-asymptotically stable if, for any \(\xi \in \mathcal{X}\), \(\lim_{k\to \infty } \mathbb{E}|x_{k}|^{2}=0\).

Definition 2.3

([24])

System \([A, \bar{A}, B, \bar{B}; C, \bar{C}, D, \bar{D}]\) is called closed-loop \(L^{2}\)-stabilizable if there exists a pair \((\mathcal{K}, \bar{\mathcal{K}}) \in \mathbb{R}^{m\times n}\times \mathbb{R}^{m\times n}\) such that, for any \(\xi \in \mathcal{X}\), the closed-loop system

$$\begin{aligned} \textstyle\begin{cases} x_{k+1}=[(A+B\mathcal{K})x_{k}+(\bar{A}+(B+\bar{B})\bar{\mathcal{K}}+ \bar{B}\mathcal{K})\mathbb{E}x_{k}] \\ \hphantom{x_{k+1}=}{}+[(C+D\mathcal{K})x_{k} +(\bar{C}+(D+\bar{D})\bar{ \mathcal{K}}+\bar{D}\mathcal{K})\mathbb{E}x_{k}]w_{k}, \quad k=0,1,2, \ldots , \\ x_{0}=\xi , \end{cases}\displaystyle \end{aligned}$$

is \(L^{2}\)-asymptotically stable. In this case, \(u_{k}=\mathcal{K}x_{k}+\bar{\mathcal{K}}\mathbb{E}x_{k} \) (\(k\in \tilde{\mathbb{N}}\)) is called the closed-loop \(L^{2}\)-stabilizer.
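Whether a given pair \((\mathcal{K},\bar{\mathcal{K}})\) is a closed-loop \(L^{2}\)-stabilizer can be tested without simulation: taking expectations of the closed-loop system gives a deterministic recursion for \(\mathbb{E}x_{k}\), and squaring gives one for the variance, so \(\mathbb{E}|x_{k}|^{2}\) is computable exactly. The scalar sketch below (all coefficient and gain values are hypothetical) carries this out.

```python
# Scalar illustration of Definition 2.3: under u_k = K x_k + Kbar*E x_k, the
# first and second moments of the closed-loop state obey deterministic
# recursions, so L^2 stability can be checked directly.
# All coefficient values below are hypothetical.

A, Abar, B, Bbar = 1.2, 0.1, 1.0, 0.0   # open loop: A + Abar = 1.3 (unstable mean)
C, Cbar, D, Dbar = 0.2, 0.0, 0.0, 0.0
sigma = 1.0
K, Kbar = -0.9, 0.0                     # candidate closed-loop stabilizer

a  = A + B * K                          # drift gain acting on x_k - E x_k
ab = (A + Abar) + (B + Bbar) * (K + Kbar)   # gain of the mean recursion
c  = C + D * K                          # diffusion gain on x_k
cb = Cbar + (D + Dbar) * Kbar + Dbar * K    # diffusion gain on E x_k

m, v = 1.0, 0.0                         # E x_0 = 1, Var x_0 = 0
second_moments = []
for _ in range(60):
    second_moments.append(v + m * m)    # E|x_k|^2 = Var x_k + (E x_k)^2
    # Mean: m_{k+1} = ab*m_k.  Variance: split x_{k+1} into mean and
    # martingale parts and use E[w_k^2 | F_k] = sigma^2:
    m, v = ab * m, (a * a + sigma**2 * c * c) * v + sigma**2 * (c + cb)**2 * m * m

print(second_moments[0], second_moments[-1])  # E|x_k|^2 decays toward 0
```

Here \(|ab|<1\) and \(a^{2}+\sigma ^{2}c^{2}<1\), so both moments contract and \(\lim_{k\to\infty}\mathbb{E}|x_{k}|^{2}=0\), i.e. the chosen feedback is a closed-loop \(L^{2}\)-stabilizer for this instance.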

Definition 2.4

Consider the uncontrolled mean-field system

$$\begin{aligned} \textstyle\begin{cases} x_{k+1}=(Ax_{k}+\bar{A}\mathbb{E}x_{k})+(Cx_{k}+\bar{C}\mathbb{E}x_{k}) w_{k}, \\ Y_{k}=\mathbb{Q}^{1/2}\mathbb{X}_{k}, \end{cases}\displaystyle \end{aligned}$$
(2.1)

where

$$ \mathbb{Q}= \begin{pmatrix} Q & 0 \\ 0 & Q+\bar{Q} \end{pmatrix} \quad \text{and}\quad \mathbb{X}_{k}= \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ \mathbb{E}x_{k} \end{pmatrix}. $$

System (2.1) (or \((A,\bar{A},C,\bar{C},\mathbb{Q}^{1/2})\), for short) is said to be exactly observable if, for any \(N\geq 0\),

$$ Y_{k}=0, \quad 0\leq k \leq N\quad \Rightarrow\quad x_{0}=0. $$

Lemma 2.1

(Schur’s lemma)

Let matrices \(M=M'\), \(R=R'>0\), and \(N\) be given with appropriate dimensions. Then the following statements are equivalent:

  1. (i)

    \(M-NR^{-1}N'\geq \) (resp. >) 0.

  2. (ii)

    \(\begin{pmatrix} M & N \\ N' & R \end{pmatrix}\geq \) (resp. >) 0.

  3. (iii)

    \(\begin{pmatrix} R & N' \\ N & M \end{pmatrix}\geq \) (resp. >) 0.
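In the scalar case the equivalence in Schur's lemma can be checked numerically: for a symmetric \(2\times 2\) matrix, positive semidefiniteness amounts to nonnegative diagonal entries and nonnegative determinant. The randomized sketch below (our own sanity check, not part of the paper's argument) compares statements (i)–(iii) on many scalar instances.

```python
import random

# Numerical sanity check of Schur's lemma in the scalar case: for m = m',
# r = r' > 0 and n, statement (i) m - n r^{-1} n >= 0 is equivalent to
# positive semidefiniteness of the 2x2 blocks in (ii) and (iii).

def psd2(a, b, d, tol=1e-12):
    """[[a, b], [b, d]] >= 0  iff  a >= 0, d >= 0 and a*d - b^2 >= 0."""
    return a >= -tol and d >= -tol and a * d - b * b >= -tol

random.seed(1)
for _ in range(10_000):
    m = random.uniform(-2, 2)
    n = random.uniform(-2, 2)
    r = random.uniform(0.1, 2)          # r > 0 is essential here
    i   = m - n * n / r >= -1e-12       # (i)
    ii  = psd2(m, n, r)                 # (ii): [[m, n], [n, r]]
    iii = psd2(r, n, m)                 # (iii): [[r, n], [n, m]]
    assert i == ii == iii
print("Schur equivalence verified on 10000 random scalar instances")
```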

In what follows, we make two assumptions.

(A1):

\([A, \bar{A}, B, \bar{B}; C, \bar{C}, D, \bar{D}]\) is closed-loop \(L^{2}\)-stabilizable.

(A2):

\((A, \bar{A}, C, \bar{C}, \mathbb{Q}^{1/2})\) is exactly observable.

We establish the following maximum principle, which is the basis for deriving the main results. Define

$$\begin{aligned} J_{N} =& \mathbb{E}\sum_{k=0}^{N} \bigl[x_{k}'Qx_{k} +( \mathbb{E}x_{k})' \bar{Q}\mathbb{E}x_{k}+u_{k}'Ru_{k} +(\mathbb{E}u_{k})'\bar{R} \mathbb{E}u_{k}+2x_{k}'Gu_{k} \\ &{}+2(\mathbb{E}x_{k})'\bar{G}\mathbb{E}u_{k} \bigr]+\mathbb{E}\bigl(x_{N+1}'P_{N+1}x_{N+1} \bigr) +(\mathbb{E}x_{N+1})'\bar{P}_{N+1} \mathbb{E}x_{N+1}, \end{aligned}$$
(2.2)

where \(P_{N+1}, \bar{P}_{N+1} \in \mathbb{T}^{n}\). The corresponding admissible control set is given as

$$ \mathscr{U}_{N}= \Biggl\{ (u_{0}, \ldots , u_{N})\Big|u_{k}\in \mathbb{R}^{m}, u_{k} \text{ is } \mathfrak{F}_{k}\text{-measurable}, \mathbb{E}\sum_{k=0}^{N} \vert u_{k} \vert ^{2}< \infty \text{ and }J_{N}< \infty \Biggr\} . $$

Proposition 2.1

(Maximum principle)

The general maximum principle for minimizing (2.2) is presented as

$$\begin{aligned} \begin{aligned}[b] 0&=\mathbb{E}\left \{ Ru_{k}+\bar{R}\mathbb{E}u_{k}+G'x_{k}+ \bar{G}' \mathbb{E}x_{k}+ \begin{pmatrix} B+w_{k}D \\ 0 \end{pmatrix}'\beta _{k}\right . \\ &\quad {}\left . +\mathbb{E} \left [ \begin{pmatrix} \bar{B}+w_{k}\bar{D} \\ B+\bar{B} \end{pmatrix}'\beta _{k} \right ] \Big|\mathfrak{F}_{k} \right \} , \end{aligned} \end{aligned}$$
(2.3)

where \(\beta _{k}\) satisfies

$$ \beta _{k-1}=\mathbb{E}\left \{ \begin{pmatrix} Qx_{k}+\bar{Q}\mathbb{E}x_{k}+Gu_{k}+\bar{G}\mathbb{E}u_{k} \\ 0 \end{pmatrix} + \begin{pmatrix} A+w_{k}C & \bar{A}+w_{k}\bar{C} \\ 0 & A+\bar{A} \end{pmatrix}' \beta _{k}\Big|\mathfrak{F}_{k} \right \} , $$
(2.4)

with the terminal condition

$$ \beta _{N}= \begin{pmatrix} P_{N+1} & \bar{P}_{N+1} \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_{N+1} \\ \mathbb{E}x_{N+1} \end{pmatrix}. $$
(2.5)

Proof

Denote

$$\begin{aligned}& x_{k+1}:= g^{k}(x_{k},u_{k}, \mathbb{E}x_{k},\mathbb{E}u_{k},w_{k}), \end{aligned}$$
(2.6)
$$\begin{aligned}& \mathbb{E}x_{k+1}=\mathbb{E}\bigl[g^{k}(x_{k},u_{k}, \mathbb{E}x_{k}, \mathbb{E}u_{k},w_{k})\bigr]:= h^{k}(x_{k},u_{k}, \mathbb{E}x_{k}, \mathbb{E}u_{k}), \end{aligned}$$
(2.7)
$$\begin{aligned}& J_{N}:=\mathbb{E} \Biggl\{ \psi (x_{N+1}, \mathbb{E}x_{N+1})+ \sum_{k=0}^{N}S^{k}(x_{k},u_{k}, \mathbb{E}x_{k},\mathbb{E}u_{k}) \Biggr\} . \end{aligned}$$
(2.8)

For any \(u_{k}\), \(\delta u_{k} \in \mathscr{U}_{N}\), \(\varepsilon \in (0,1)\), we have \(u_{k}^{\varepsilon }=u_{k}+\varepsilon \delta u_{k} \in \mathscr{U}_{N}\). By (2.8), we get that

$$\begin{aligned} \delta J_{N} =&\mathbb{E}\sum_{k=0}^{N} \bigl[S_{x_{k}}^{k}\delta x_{k}+S_{ \mathbb{E}x_{k}}^{k} \delta \mathbb{E}x_{k}+ S_{u_{k}}^{k} \varepsilon \delta u_{k}+S_{\mathbb{E}u_{k}}^{k} \varepsilon \delta \mathbb{E}u_{k}\bigr] \\ &{}+\mathbb{E}\{\psi _{x_{N+1}}\delta x_{N+1}+\psi _{\mathbb{E}x_{N+1}} \delta \mathbb{E}x_{N+1}\}+O\bigl(\varepsilon ^{2}\bigr) \\ =&\mathbb{E} \Biggl\{ \sum_{k=1}^{N} \bigl[S_{x_{k}}^{k}+\mathbb{E}S_{ \mathbb{E}x_{k}}^{k} \bigr]\delta x_{k} +\sum_{k=0}^{N} \bigl[S_{u_{k}}^{k}+ \mathbb{E}S_{\mathbb{E}u_{k}}^{k} \bigr]\varepsilon \delta u_{k} \\ &{}+[\psi _{x_{N+1}}+\mathbb{E}\psi _{\mathbb{E}x_{N+1}}]\delta x_{N+1} \Biggr\} +O\bigl(\varepsilon ^{2}\bigr). \end{aligned}$$

Combining (2.6) with (2.7), with \(\delta x_{k}=x_{k}^{\varepsilon }-x_{k}\), we obtain

$$\begin{aligned} \begin{pmatrix} \delta x_{k+1} \\ \delta \mathbb{E}x_{k+1} \end{pmatrix}= \begin{pmatrix} g_{x_{k}}^{k} & g_{\mathbb{E}x_{k}}^{k} \\ h_{x_{k}}^{k} & h_{\mathbb{E}x_{k}}^{k} \end{pmatrix} \begin{pmatrix} \delta x_{k} \\ \delta \mathbb{E}x_{k} \end{pmatrix}+ \begin{pmatrix} g_{u_{k}}^{k} & g_{\mathbb{E}u_{k}}^{k} \\ h_{u_{k}}^{k} & h_{\mathbb{E}u_{k}}^{k} \end{pmatrix} \begin{pmatrix} \varepsilon \delta u_{k} \\ \varepsilon \delta \mathbb{E}u_{k} \end{pmatrix}. \end{aligned}$$

Since \(\delta x_{0}=\delta \mathbb{E}x_{0}=0\), it follows that

$$\begin{aligned} \begin{aligned} \delta x_{k+1} &= g_{x_{k}}^{k}\delta x_{k}+g_{u_{k}}^{k} \varepsilon \delta u_{k} +g_{\mathbb{E}x_{k}}^{k}\delta \mathbb{E}x_{k}+g_{ \mathbb{E}u_{k}}^{k}\varepsilon \delta \mathbb{E}u_{k} \\ &= \tilde{G}_{x}(k,0) \begin{pmatrix} \delta x_{0} \\ \delta \mathbb{E}x_{0} \end{pmatrix} +\sum _{l=0}^{k}\tilde{G}_{x}(k,l+1) \begin{pmatrix} g_{u_{l}}^{l} & g_{\mathbb{E}u_{l}}^{l} \\ h_{u_{l}}^{l} & h_{\mathbb{E}u_{l}}^{l} \end{pmatrix} \begin{pmatrix} \varepsilon \delta u_{l} \\ \varepsilon \delta \mathbb{E}u_{l} \end{pmatrix} \\ &= \sum_{l=0}^{k} \tilde{G}_{x}(k,l+1) \begin{pmatrix} g_{u_{l}}^{l} \\ h_{u_{l}}^{l} \end{pmatrix} \varepsilon \delta u_{l}+\sum_{l=0}^{k} \tilde{G}_{x}(k,l+1) \begin{pmatrix} g_{\mathbb{E}u_{l}}^{l} \\ h_{\mathbb{E}u_{l}}^{l} \end{pmatrix} \varepsilon \delta \mathbb{E}u_{l}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned}& g_{x_{k}}^{k}=\frac{\partial g^{k}}{\partial x_{k}}, \qquad g_{u_{k}}^{k}= \frac{\partial g^{k}}{\partial u_{k}}, \qquad g_{\mathbb{E}x_{k}}^{k}= \frac{\partial g^{k}}{\partial \mathbb{E}x_{k}}, \qquad g_{\mathbb{E}u_{k}}^{k}= \frac{\partial g^{k}}{\partial \mathbb{E}u_{k}}, \\& h_{x_{k}}^{k}= \frac{\partial h^{k}}{\partial x_{k}}, \qquad h_{u_{k}}^{k}= \frac{\partial h^{k}}{\partial u_{k}},\qquad h_{\mathbb{E}x_{k}}^{k}= \frac{\partial h^{k}}{\partial \mathbb{E}x_{k}}, \qquad h_{\mathbb{E}u_{k}}^{k}= \frac{\partial h^{k}}{\partial \mathbb{E}u_{k}}, \\& S_{x_{k}}^{k}= \frac{\partial S^{k}}{\partial x_{k}}, \qquad S_{\mathbb{E}x_{k}}^{k}= \frac{\partial S^{k}}{\partial \mathbb{E}x_{k}}, \qquad S_{u_{k}}^{k}= \frac{\partial S^{k}}{\partial u_{k}}, \qquad S_{\mathbb{E}u_{k}}^{k}= \frac{\partial S^{k}}{\partial \mathbb{E}u_{k}}, \\& \psi _{x_{N+1}}= \frac{\partial \psi (x_{N+1},\mathbb{E}x_{N+1})}{\partial x_{N+1}}, \qquad \psi _{\mathbb{E}x_{N+1}}= \frac{\partial \psi (x_{N+1},\mathbb{E}x_{N+1})}{\partial \mathbb{E}x_{N+1}}, \\& \tilde{G}_{x}(k,k+1)= \begin{pmatrix} I_{n} & 0 \end{pmatrix}, \qquad \tilde{G}_{x}(k,l)= \begin{pmatrix} g_{x_{k}}^{k} & g_{\mathbb{E}x_{k}}^{k} \end{pmatrix} \tilde{g}_{x_{k-1}}^{k-1}\cdots \tilde{g}_{x_{l}}^{l}, \\& \tilde{g}_{x_{l}}^{l}= \begin{pmatrix} g_{x_{l}}^{l} & g_{\mathbb{E}x_{l}}^{l} \\ h_{x_{l}}^{l} & h_{\mathbb{E}x_{l}}^{l} \end{pmatrix}, \quad l=0,\ldots ,k, k=0,\ldots ,N. \end{aligned}$$

Consequently,

$$\begin{aligned} \delta J_{N} =&\mathbb{E} \Biggl\{ \mathcal{G}(N+1,N)\varepsilon \delta u_{N} +\sum_{l=0}^{N-1} \mathcal{G}(l+1,N)\varepsilon \delta u_{l} \Biggr\} +O\bigl( \varepsilon ^{2}\bigr) \\ =&\mathbb{E} \Biggl\{ \mathbb{E}\bigl[\mathcal{G}(N+1,N)|\mathfrak{F}_{N} \bigr] \varepsilon \delta u_{N} +\sum_{l=0}^{N-1} \mathbb{E}\bigl[\mathcal{G}(l+1,N)| \mathfrak{F}_{l}\bigr]\varepsilon \delta u_{l} \Biggr\} +O\bigl(\varepsilon ^{2}\bigr), \end{aligned}$$

where

$$\begin{aligned} \mathcal{G}(N+1,N) =&[\psi _{x_{N+1}}+\mathbb{E}\psi _{\mathbb{E}x_{N+1}}]g_{u_{N}}^{N}+ \mathbb{E}\bigl[(\psi _{x_{N+1}} +\mathbb{E}\psi _{\mathbb{E}x_{N+1}})g_{ \mathbb{E}u_{N}}^{N} \bigr] +S_{u_{N}}^{N}+\mathbb{E}S_{\mathbb{E}u_{N}}^{N}, \\ \mathcal{G}(l+1,N) =&[\psi _{x_{N+1}}+\mathbb{E}\psi _{\mathbb{E}x_{N+1}}] \tilde{G}_{x}(N,l+1) \begin{pmatrix} g_{u_{l}}^{l} \\ h_{u_{l}}^{l} \end{pmatrix}+S_{u_{l}}^{l}+ \mathbb{E}S_{\mathbb{E}u_{l}}^{l} \\ &{}+\mathbb{E}\left \{ (\psi _{x_{N+1}}+\mathbb{E}\psi _{\mathbb{E}x_{N+1}}) \tilde{G}_{x}(N,l+1) \begin{pmatrix} g_{\mathbb{E}u_{l}}^{l} \\ h_{\mathbb{E}u_{l}}^{l} \end{pmatrix} \right \} \\ &{}+\sum_{k=l+1}^{N} \bigl(S_{x_{k}}^{k}+\mathbb{E}S_{\mathbb{E}x_{k}}^{k} \bigr) \tilde{G}_{x}(k-1,l+1) \begin{pmatrix} g_{u_{l}}^{l} \\ h_{u_{l}}^{l} \end{pmatrix} \\ &{}+\mathbb{E}\left \{ \sum_{k=l+1}^{N} \bigl(S_{x_{k}}^{k}+\mathbb{E}S_{ \mathbb{E}x_{k}}^{k} \bigr)\tilde{G}_{x}(k-1,l+1) \begin{pmatrix} g_{\mathbb{E}u_{l}}^{l} \\ h_{\mathbb{E}u_{l}}^{l} \end{pmatrix} \right \} . \end{aligned}$$

Hence, the maximum principle can be written as

$$\begin{aligned}& 0=\mathbb{E}\bigl[\mathcal{G}(N+1,N)|\mathfrak{F}_{N}\bigr], \quad \text{a.s.}, \end{aligned}$$
(2.9)
$$\begin{aligned}& 0=\mathbb{E}\bigl[\mathcal{G}(l+1,N)|\mathfrak{F}_{l}\bigr], \quad l=0, \ldots , N-1, \text{ a.s}. \end{aligned}$$
(2.10)

Furthermore, we shall show that (2.3)–(2.5) are equivalent to (2.9)–(2.10). Note that, in the notation of (2.6)–(2.8), conditions (2.3)–(2.5) can be rewritten as

$$\begin{aligned}& 0= \mathbb{E}\left \{ \bigl(S_{u_{k}}^{k} \bigr)'+ \mathbb{E}\bigl(S_{\mathbb{E}u_{k}}^{k} \bigr)'+ \begin{pmatrix} g_{u_{k}}^{k} \\ h_{u_{k}}^{k} \end{pmatrix}' \beta _{k}+\mathbb{E} \left [ \begin{pmatrix} g_{\mathbb{E}u_{k}}^{k} \\ h_{\mathbb{E}u_{k}}^{k} \end{pmatrix}' \beta _{k} \right ]\Big|\mathfrak{F}_{k}\right \} , \end{aligned}$$
(2.11)
$$\begin{aligned}& \beta _{k-1}=\mathbb{E}\left \{ \begin{pmatrix} I_{n} \\ 0 \end{pmatrix}\bigl(S_{x_{k}}^{k}+ \mathbb{E}S_{\mathbb{E}x_{k}}^{k}\bigr)' +\bigl( \tilde{g}_{x_{k}}^{k}\bigr)' \beta _{k}|\mathfrak{F}_{k}\right \} , \quad k=0, \ldots ,N, \end{aligned}$$
(2.12)
$$\begin{aligned}& \beta _{N}= \begin{pmatrix} \psi _{x_{N+1}}'+\mathbb{E}\psi _{\mathbb{E}x_{N+1}}' \\ 0 \end{pmatrix}. \end{aligned}$$
(2.13)

Indeed, substituting (2.13) into (2.11) and taking \(k=N\), we have

$$ \begin{aligned} &\mathbb{E} \bigl\{ \bigl(S_{u_{N}}^{N}\bigr)'+ \mathbb{E}\bigl(S_{\mathbb{E}u_{N}}^{N}\bigr)' + \bigl(g_{u_{N}}^{N}\bigr)'( \psi _{x_{N+1}} +\mathbb{E}\psi _{\mathbb{E}x_{N+1}})'\\ &\quad {}+\mathbb{E} \bigl[\bigl(g_{ \mathbb{E}u_{N}}^{N}\bigr)'(\psi _{x_{N+1}} +\mathbb{E}\psi _{\mathbb{E}x_{N+1}})'\bigr] | \mathfrak{F}_{N} \bigr\} =0, \end{aligned} $$

which is exactly (2.9). Furthermore, by (2.12)–(2.13), we derive

$$\begin{aligned} \beta _{k-1}=\mathbb{E} \Biggl\{ \sum_{j=k}^{N} \tilde{G}_{x}'(j-1,k) \bigl(S_{x_{j}}^{j} +\mathbb{E}S_{\mathbb{E}x_{j}}^{j}\bigr)'+ \tilde{G}_{x}'(N,k) (\psi _{x_{N+1}} + \mathbb{E}\psi _{\mathbb{E}x_{N+1}})'\Big|\mathfrak{F}_{k} \Biggr\} . \end{aligned}$$
(2.14)

Combining (2.3) with (2.14), it follows that

$$\begin{aligned} 0 =& \mathbb{E} \left \{\bigl(S_{u_{k}}^{k}+ \mathbb{E}S_{\mathbb{E}u_{k}}^{k}\bigr)'+ \sum _{j=k+1}^{N} \begin{pmatrix} g_{u_{k}}^{k} \\ h_{u_{k}}^{k} \end{pmatrix}' \bigl[\tilde{G}_{x}'(j-1,k+1) \bigl(S_{x_{j}}^{j}+ \mathbb{E}S_{ \mathbb{E}x_{j}}^{j}\bigr)'\bigr]\right . \\ &{}\left .+ \begin{pmatrix} g_{u_{k}}^{k} \\ h_{u_{k}}^{k} \end{pmatrix}' \bigl[ \tilde{G}_{x}'(N,k+1) (\psi _{x_{N+1}}+ \mathbb{E}\psi _{ \mathbb{E}x_{N+1}})'\bigr] \right \} \\ &{}+\mathbb{E}\left \{ \sum_{j=k+1}^{N} \begin{pmatrix} g_{\mathbb{E}u_{k}}^{k} \\ h_{\mathbb{E}u_{k}}^{k} \end{pmatrix}' \bigl[ \tilde{G}_{x}'(j-1,k+1) \bigl(S_{x_{j}}^{j}+ \mathbb{E}S_{ \mathbb{E}x_{j}}^{j}\bigr)'\bigr]\right \} \\ &{}+\mathbb{E}\left \{ \begin{pmatrix} g_{\mathbb{E}u_{k}}^{k} \\ h_{\mathbb{E}u_{k}}^{k} \end{pmatrix}' \bigl[\tilde{G}_{x}'(N,k+1) (\psi _{x_{N+1}}+ \mathbb{E}\psi _{ \mathbb{E}x_{N+1}})'\bigr] \Big|\mathfrak{F}_{k} \right \} , \quad k=0,\ldots ,N, \end{aligned}$$

which is exactly (2.10). The proof is completed. □

Remark 2.1

In most previous works, the maximum principle for MF-LQ optimal control problems was based on mean-field backward stochastic difference (or differential) equations (see [25, 26]). In contrast, Proposition 2.1 provides a convenient computational method and can be reduced to the standard stochastic LQ case.

3 GAREs

In this section, we present several results about the GAREs which play a key role in deriving our main results. Now, we introduce the following GAREs:

$$\begin{aligned} \textstyle\begin{cases} P_{k}=Q+A'P_{k+1}A+\sigma ^{2}C'P_{k+1}C -[M_{k}^{(1)}]^{\prime }[\rho _{k}^{(1)}]^{ \dagger }M_{k}^{(1)}, \\ \bar{P}_{k}=\bar{Q}+\sigma ^{2}C'P_{k+1}\bar{C}+\sigma ^{2}\bar{C}'P_{k+1}C +\sigma ^{2}\bar{C}'P_{k+1}\bar{C}+A'P_{k+1}\bar{A}+\bar{A}'P_{k+1}A \\ \hphantom{\bar{P}_{k}=}{}+ \bar{A}'P_{k+1}\bar{A}+(A+\bar{A})'\bar{P}_{k+1}(A +\bar{A})+[M_{k}^{(1)}]'[\rho _{k}^{(1)}]^{ \dagger }M_{k}^{(1)} -[M_{k}^{(2)}]'[\rho _{k}^{(2)}]^{\dagger }M_{k}^{(2)}, \\ \rho _{k}^{(i)}\geq 0, \qquad \rho _{k}^{(i)}[\rho _{k}^{(i)}]^{\dagger }M_{k}^{(i)}=M_{k}^{(i)}, \quad i=1,2, \end{cases}\displaystyle \end{aligned}$$
(3.1)

where

$$\begin{aligned} \textstyle\begin{cases} \rho _{k}^{(1)}=B'P_{k+1}B+\sigma ^{2}D'P_{k+1}D+R, \\ M_{k}^{(1)}=B'P_{k+1}A+\sigma ^{2}D'P_{k+1}C+G', \\ \rho _{k}^{(2)}=(B+\bar{B})'(P_{k+1}+\bar{P}_{k+1})(B+\bar{B}) + \sigma ^{2}(D+\bar{D})'P_{k+1}(D+\bar{D})+R+\bar{R}, \\ M_{k}^{(2)}=(B+\bar{B})'(P_{k+1}+\bar{P}_{k+1})(A+\bar{A}) +\sigma ^{2}(D+ \bar{D})'P_{k+1}(C+\bar{C})+G'+\bar{G}'. \end{cases}\displaystyle \end{aligned}$$
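In the scalar case (\(n=m=1\)), every matrix above is a number and the Moore–Penrose inverse reduces to \(x^{\dagger }=1/x\) for \(x\neq 0\) and \(0^{\dagger }=0\), so recursion (3.1) can be iterated backwards directly. The sketch below uses purely hypothetical positive definite weights and zero terminal values; it checks the regularity conditions at each step and lets \((P_{k},\bar{P}_{k})\) settle to a stationary pair as the horizon grows.

```python
# Scalar sketch of the backward GARE recursion (3.1); all coefficient values
# are hypothetical.  With n = m = 1 the Moore-Penrose inverse is 1/x (or 0
# when x = 0), and iterating long enough approximates the stationary GAREs.

A, Abar, B, Bbar = 1.1, 0.1, 1.0, 0.0
C, Cbar, D, Dbar = 0.3, 0.1, 0.1, 0.0
Q, Qbar, R, Rbar = 1.0, 0.5, 1.0, 0.0
G, Gbar, sigma   = 0.0, 0.0, 1.0

def pinv(x):
    """Scalar Moore-Penrose inverse."""
    return 0.0 if x == 0.0 else 1.0 / x

P, Pbar = 0.0, 0.0                       # terminal values P_{N+1} = Pbar_{N+1} = 0
for _ in range(200):                     # iterate k = N, N-1, ...
    rho1 = B * P * B + sigma**2 * D * P * D + R
    M1   = B * P * A + sigma**2 * D * P * C + G
    rho2 = ((B + Bbar) * (P + Pbar) * (B + Bbar)
            + sigma**2 * (D + Dbar) * P * (D + Dbar) + R + Rbar)
    M2   = ((B + Bbar) * (P + Pbar) * (A + Abar)
            + sigma**2 * (D + Dbar) * P * (C + Cbar) + G + Gbar)
    assert rho1 >= 0 and rho2 >= 0       # regularity conditions in (3.1)
    P_new = Q + A * P * A + sigma**2 * C * P * C - M1 * pinv(rho1) * M1
    Pbar_new = (Qbar + sigma**2 * (C * P * Cbar + Cbar * P * C + Cbar * P * Cbar)
                + A * P * Abar + Abar * P * A + Abar * P * Abar
                + (A + Abar) * Pbar * (A + Abar)
                + M1 * pinv(rho1) * M1 - M2 * pinv(rho2) * M2)
    P, Pbar = P_new, Pbar_new

print(P, Pbar)   # approximate stationary pair (P, Pbar)
```

For these data the iterates are monotone and converge quickly, which is the behavior established in general below under \(\boldsymbol{\varGamma }\neq \emptyset \).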

Definition 3.1

GAREs (3.1) are said to be solvable if \(\rho _{k}^{(i)}[\rho _{k}^{(i)}]^{\dagger }M_{k}^{(i)}=M_{k}^{(i)}\), \(i=1,2\), are satisfied for \(k\in \mathbb{N}\).

Motivated by Ait Rami et al. [27], we denote

$$\begin{aligned}& \boldsymbol{\varGamma }:=\left \{ (P_{\star },\bar{P}_{\star })= \bigl(P'_{\star }, \bar{P}'_{\star } \bigr)\Big| \begin{pmatrix} W^{(i)} & (S^{(i)})' \\ S^{(i)} & \lambda ^{(i)} \end{pmatrix}\geq 0, i=1,2\right \} , \\& \bar{\boldsymbol{\varGamma }}:=\left \{ (P_{\star },\bar{P}_{\star })= \bigl(P'_{ \star },\bar{P}'_{\star } \bigr)\Big| \begin{pmatrix} W^{(i)} & (S^{(i)})' \\ S^{(i)} & \lambda ^{(i)} \end{pmatrix}\geq 0, \mathbf{Ker}\bigl(\lambda ^{(i)}\bigr)\subseteq \Delta ^{(i)}, i=1,2 \right \} . \end{aligned}$$

Here, \(\Delta ^{(1)}=\mathbf{Ker}B\cap \mathbf{Ker}D\), \(\Delta ^{(2)}=\mathbf{Ker}(B+\bar{B})\cap \mathbf{Ker}(D+\bar{D})\), and

$$\begin{aligned} \textstyle\begin{cases} W^{(1)}=Q+A'P_{\star }A+\sigma ^{2}C'P_{\star }C-P_{\star }, \\ \lambda ^{(1)}=R+B'P_{\star }B+\sigma ^{2}D'P_{\star }D, \\ S^{(1)}=B'P_{\star }A+\sigma ^{2}D'P_{\star }C+G', \\ W^{(2)}=Q+\bar{Q}+\sigma ^{2}(C+\bar{C})'P_{\star }(C+\bar{C})+ (A+\bar{A})'(P_{\star }+\bar{P}_{\star })(A+\bar{A}) -P_{ \star }-\bar{P}_{\star }, \\ \lambda ^{(2)}=(B+\bar{B})'(P_{\star }+\bar{P}_{\star })(B+\bar{B})+ \sigma ^{2}(D+\bar{D})'P_{\star }(D+\bar{D})+R+\bar{R}, \\ S^{(2)}=(B+\bar{B})'(P_{\star }+\bar{P}_{\star })(A+\bar{A}) +\sigma ^{2}(D+ \bar{D})'P_{\star }(C+\bar{C})+G'+\bar{G}'. \end{cases}\displaystyle \end{aligned}$$

Theorem 3.1

If \(\boldsymbol{\varGamma }\neq \emptyset \), then for any terminal value \((P_{N+1},\bar{P}_{N+1})=(\hat{P},\check{P})\in \boldsymbol{\varGamma }\), GAREs (3.1) are solvable, and their solutions converge to a solution of the following stationary GAREs:

$$\begin{aligned} \textstyle\begin{cases} P=Q+A'PA+\sigma ^{2}C'PC-[M^{(1)}]'[\rho ^{(1)}]^{\dagger }M^{(1)}, \\ \bar{P}=\bar{Q}+A'P\bar{A}+\bar{A}'PA+\bar{A}'P\bar{A} +\sigma ^{2}C'P \bar{C}+\sigma ^{2}\bar{C}'PC+\sigma ^{2}\bar{C}'P\bar{C} \\ \hphantom{\bar{P}=}{}+(A+\bar{A})'\bar{P}(A+\bar{A}) +[M^{(1)}]'[\rho ^{(1)}]^{ \dagger }M^{(1)} -[M^{(2)}]'[\rho ^{(2)}]^{\dagger }M^{(2)}, \\ \rho ^{(i)}\geq 0, \qquad \rho ^{(i)}[\rho ^{(i)}]^{\dagger }M^{(i)}=M^{(i)}, \quad i=1,2, \end{cases}\displaystyle \end{aligned}$$
(3.2)

where \(\rho ^{(i)}\), \(M^{(i)}\) are given by

$$\begin{aligned} \textstyle\begin{cases} \rho ^{(1)}=B'PB+\sigma ^{2}D'PD+R, \\ M^{(1)}=B'PA+\sigma ^{2}D'PC+G', \\ \rho ^{(2)}=(B+\bar{B})'(P+\bar{P})(B+\bar{B}) +\sigma ^{2}(D+\bar{D})'P(D+ \bar{D})+R+\bar{R}, \\ M^{(2)}=(B+\bar{B})'(P+\bar{P})(A+\bar{A}) +\sigma ^{2}(D+\bar{D})'P(C+ \bar{C})+G'+\bar{G}'. \end{cases}\displaystyle \end{aligned}$$

Proof

For any \((\hat{P},\check{P})\in \boldsymbol{\varGamma }\), define the new GAREs (NGAREs)

$$\begin{aligned} \textstyle\begin{cases} T_{k}=A'T_{k+1}A+\sigma ^{2}C'T_{k+1}C -[H_{k}^{(1)}]'[\alpha _{k}^{(1)}]^{ \dagger }H_{k}^{(1)}+W^{(1)}, \\ \bar{T}_{k}=\sigma ^{2}C'T_{k+1}\bar{C}+\sigma ^{2}\bar{C}'T_{k+1}C +\sigma ^{2}\bar{C}'T_{k+1}\bar{C}+A'T_{k+1}\bar{A} \\ \hphantom{\bar{T}_{k}=}{}+\bar{A}'T_{k+1}A + \bar{A}'T_{k+1}\bar{A}+(A+\bar{A})'\bar{T}_{k+1}(A +\bar{A}) \\ \hphantom{\bar{T}_{k}=}{}+[H_{k}^{(1)}]'[\alpha _{k}^{(1)}]^{ \dagger }H_{k}^{(1)} -[H_{k}^{(2)}]'[\alpha _{k}^{(2)}]^{\dagger }H_{k}^{(2)}+W^{(2)}-W^{(1)}, \\ \alpha _{k}^{(i)}\geq 0, \qquad \alpha _{k}^{(i)}[\alpha _{k}^{(i)}]^{ \dagger }H_{k}^{(i)}=H_{k}^{(i)}, \quad i=1,2, \end{cases}\displaystyle \end{aligned}$$
(3.3)

where \(T_{N+1}=\bar{T}_{N+1}=0\) and

$$\begin{aligned} \textstyle\begin{cases} \alpha _{k}^{(1)}=B'T_{k+1}B+\sigma ^{2}D'T_{k+1}D+\lambda ^{(1)}, \\ H_{k}^{(1)}=B'T_{k+1}A+\sigma ^{2}D'T_{k+1}C+S^{(1)}, \\ \alpha _{k}^{(2)}=(B+\bar{B})'(T_{k+1}+\bar{T}_{k+1})(B+\bar{B}) + \sigma ^{2}(D+\bar{D})'T_{k+1}(D+\bar{D})+\lambda ^{(2)}, \\ H_{k}^{(2)}=(B+\bar{B})'(T_{k+1}+\bar{T}_{k+1})(A+\bar{A}) +\sigma ^{2}(D+ \bar{D})'T_{k+1}(C+\bar{C})+S^{(2)}. \end{cases}\displaystyle \end{aligned}$$
(3.4)

The corresponding new cost functional is given by

$$\begin{aligned} \bar{J}_{T}(l,\xi ;u) :=&\mathbb{E}\sum _{k=l}^{N} \left \{ \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix} \right . \\ &{}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix} \right \} \\ \geq& 0. \end{aligned}$$
(3.5)

To make the dependence on the time horizon N explicit in the finite horizon MF-LQ problem, we write \(T_{k}\), \(\bar{T}_{k}\) in (3.3) as \(T_{k}(N)\), \(\bar{T}_{k}(N)\). Set

$$\begin{aligned} V(l,\xi )=\mathbb{E}\bigl[x_{l}'T_{l}(N)x_{l} \bigr] +(\mathbb{E}x_{l})'\bar{T}_{l}(N) \mathbb{E}x_{l}, \end{aligned}$$
(3.6)

where \(x_{l}=\xi \). According to (3.5), for \(l_{1}< l_{2}\), we get

$$\begin{aligned} T_{l_{1}}(N)\geq T_{l_{2}}(N), \qquad T_{l_{1}}(N)+\bar{T}_{l_{1}}(N)\geq T_{l_{2}}(N)+ \bar{T}_{l_{2}}(N). \end{aligned}$$
(3.7)

By the time-invariance of the coefficient matrices, we have \(T_{l}(N)=T_{0}(N-l)\), \(\bar{T}_{l}(N)=\bar{T}_{0}(N-l)\). Combining (3.5) with (3.6), for any \(x_{0}=\xi \in \mathcal{X}\) and any stabilizing controller \(u_{k}=\mathcal{L}x_{k}+\bar{\mathcal{L}}\mathbb{E}x_{k}\), it follows that

$$\begin{aligned} \begin{aligned}[b] &\mathbb{E}\bigl[x_{0}'T_{0}(N-l)x_{0} \bigr] +(\mathbb{E}x_{0})'\bar{T}_{0}(N-l) \mathbb{E}x_{0} \\ &\quad \leq \mathbb{E}\sum_{k=0}^{N-l} \left \{ \begin{pmatrix} \mathbb{E}x_{k} \\ (\mathcal{L}+\bar{\mathcal{L}})\mathbb{E}x_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ (\mathcal{L}+\bar{\mathcal{L}})\mathbb{E}x_{k} \end{pmatrix} \right . \\ &\qquad {}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ \mathcal{L}(x_{k}-\mathbb{E}x_{k}) \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ \mathcal{L}(x_{k}-\mathbb{E}x_{k}) \end{pmatrix} \right \}\\ &\quad \leq c\mathbb{E}\sum_{k=0}^{N-l} \vert x_{k} \vert ^{2}, \end{aligned} \end{aligned}$$
(3.8)

where \(c>0\) is a constant. Taking a deterministic \(x_{0}\in \mathbb{R}^{n}\), we claim that, for any N and l,

$$\begin{aligned} x_{0}'\bigl[T_{0}(N-l)+ \bar{T}_{0}(N-l)\bigr]x_{0}< \infty . \end{aligned}$$
(3.9)

Meanwhile, let \(x_{0}=\varphi \delta \) with \(\varphi \in \mathbb{R}^{n}\) and \(P(\delta =-1)=P(\delta =1)=\frac{1}{2}\). By virtue of (3.8), we obtain

$$\begin{aligned} \mathbb{E}\bigl[(x_{0}-\mathbb{E}x_{0})'T_{l}(N) (x_{0}-\mathbb{E}x_{0})\bigr]= \varphi 'T_{0}(N-l)\varphi < \infty . \end{aligned}$$
(3.10)

From (3.7) and (3.9)–(3.10), we get

$$\begin{aligned} \lim_{l\to -\infty }T_{l}(N)=\lim_{N-l\to \infty }T_{0}(N-l)=C_{1}, \qquad \lim_{l\to -\infty }\bar{T}_{l}(N)=\lim _{N-l\to \infty }\bar{T}_{0}(N-l)= \bar{C}_{1}. \end{aligned}$$

Here, \(C_{1}\), \(\bar{C}_{1}\) are bounded. Letting \(l\rightarrow -\infty \) and setting \(P_{k}=T_{k}+\hat{P}\), \(\bar{P}_{k}=\bar{T}_{k}+\check{P}\), we see that \(P_{k}\), \(\bar{P}_{k}\) are monotone in k and that \((P_{k},\bar{P}_{k})\) converges to a pair \((P,\bar{P})\) satisfying GAREs (3.2). □

Remark 3.1

In view of the regularity conditions in (3.3), we have

$$\begin{aligned}& \bigl[H^{(1)}_{k}\bigr]'\bigl[\alpha ^{(1)}_{k}\bigr]^{\dagger }H^{(1)}_{k} =-\bigl[H^{(1)}_{k}\bigr]' \mathcal{L}_{k}-\mathcal{L}_{k}'H^{(1)}_{k}- \mathcal{L}_{k}'\alpha ^{(1)}_{k} \mathcal{L}_{k}, \\& \bigl[H^{(2)}_{k}\bigr]'\bigl[\alpha ^{(2)}_{k}\bigr]^{\dagger }H^{(2)}_{k}=- \bigl[H^{(2)}_{k}\bigr]'( \mathcal{L}_{k}+\bar{\mathcal{L}}_{k}) -( \mathcal{L}_{k}+\bar{ \mathcal{L}}_{k})'H^{(2)}_{k} -(\mathcal{L}_{k}+\bar{\mathcal{L}}_{k})' \alpha ^{(2)}_{k}(\mathcal{L}_{k}+\bar{ \mathcal{L}}_{k}), \end{aligned}$$

where \(\mathcal{L}_{k}\), \(\bar{\mathcal{L}}_{k}\) satisfy

$$\begin{aligned} \mathcal{L}_{k}=-\bigl[\alpha ^{(1)}_{k} \bigr]^{\dagger }H^{(1)}_{k}, \qquad \bar{ \mathcal{L}}_{k}=-\bigl[\alpha ^{(2)}_{k} \bigr]^{\dagger }H^{(2)}_{k}+\bigl[\alpha ^{(1)}_{k}\bigr]^{ \dagger }H^{(1)}_{k}. \end{aligned}$$
(3.11)

Besides,

$$\begin{aligned} \begin{aligned} &T_{k}=\mathcal{Q}_{k}+\mathcal{A}_{k}'T_{k+1} \mathcal{A}_{k}+\sigma ^{2} \mathcal{C}_{k}'T_{k+1} \mathcal{C}_{k}, \\ &T_{k}+\bar{T}_{k}=\bar{\mathcal{Q}}_{k}+\bar{ \mathcal{A}}_{k}'(T_{k+1}+\bar{T}_{k+1}) \bar{ \mathcal{A}}_{k} +\sigma ^{2}\bar{\mathcal{C}}_{k}'T_{k+1} \bar{ \mathcal{C}}_{k}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned}& \mathcal{A}_{k}=A+B\mathcal{L}_{k}, \qquad \bar{\mathcal{A}}_{k}=A+ \bar{A}+(B+\bar{B}) ( \mathcal{L}_{k}+\bar{\mathcal{L}}_{k}), \\& \mathcal{C}_{k}=C+D\mathcal{L}_{k}, \qquad \bar{\mathcal{C}}_{k}=C+ \bar{C}+(D+\bar{D}) ( \mathcal{L}_{k}+\bar{\mathcal{L}}_{k}), \\& \mathcal{Q}_{k}=W^{(1)}+\mathcal{L}_{k}' \lambda ^{(1)}\mathcal{L}_{k}+\bigl[S^{(1)} \bigr]' \mathcal{L}_{k}+\mathcal{L}'_{k}S^{(1)} = \begin{pmatrix} I & \mathcal{L}'_{k} \end{pmatrix} \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} I \\ \mathcal{L}_{k} \end{pmatrix}\geq 0, \\& \begin{aligned} \bar{\mathcal{Q}}_{k}&=W^{(2)}+(\mathcal{L}_{k}+ \bar{\mathcal{L}}_{k})' \lambda ^{(2)}( \mathcal{L}_{k}+\bar{\mathcal{L}}_{k}) + \bigl[S^{(2)}\bigr]'( \mathcal{L}_{k}+\bar{ \mathcal{L}}_{k})+(\mathcal{L}_{k}+\bar{ \mathcal{L}}_{k})'S^{(2)} \\ &= \begin{pmatrix} I & \mathcal{L}'_{k}+\bar{\mathcal{L}}'_{k} \end{pmatrix} \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} I \\ \mathcal{L}_{k}+\bar{\mathcal{L}}_{k} \end{pmatrix}\geq 0. \end{aligned} \end{aligned}$$

Proposition 3.1

Suppose \(\boldsymbol{\varGamma }\neq \emptyset \). For NGAREs (3.3) with the terminal conditions \(T_{N+1}(N)=\bar{T}_{N+1}(N)=0\), assume that \(\alpha _{k}^{(i)}[\alpha _{k}^{(i)}]^{\dagger }H_{k}^{(i)}=H_{k}^{(i)}\), \(i=1,2\). Then the optimal controller is given by \(u_{k}^{\star }=\mathcal{L}_{k}x_{k}+\bar{\mathcal{L}}_{k}\mathbb{E}x_{k}\), where \(\mathcal{L}_{k}\), \(\bar{\mathcal{L}}_{k}\) are given by (3.11), and the optimal value is

$$\begin{aligned} \bar{J}_{T}^{\ast }(0,\xi ;u)=\mathbb{E} \bigl[x_{0}'T_{0}(N)x_{0} \bigr]+( \mathbb{E}x_{0})'\bar{T}_{0}(N) \mathbb{E}x_{0}. \end{aligned}$$

Proof

Denote

$$\begin{aligned} \bar{J}(k) :=&\mathbb{E}\sum_{j=k}^{N} \bigl[x_{j}'W^{(1)}x_{j} +( \mathbb{E}x_{j})'\bigl(W^{(2)}-W^{(1)} \bigr)\mathbb{E}x_{j}+u_{j}'\lambda ^{(1)}u_{j} +(\mathbb{E}u_{j})' \bigl(\lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{j} \\ &{}+2x_{j}'\bigl(S^{(1)} \bigr)'u_{j}+2(\mathbb{E}x_{j})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{j} \bigr], \end{aligned}$$

Let \(k=N\). By \(\boldsymbol{\varGamma }\neq \emptyset \), it follows that

$$\begin{aligned} \bar{J}(N) =&\mathbb{E}\left \{ \begin{pmatrix} \mathbb{E}x_{N} \\ \mathbb{E}u_{N} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{N} \\ \mathbb{E}u_{N} \end{pmatrix} \right . \\ &{}+\left . \begin{pmatrix} x_{N}-\mathbb{E}x_{N} \\ u_{N}-\mathbb{E}u_{N} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} x_{N}-\mathbb{E}x_{N} \\ u_{N}-\mathbb{E}u_{N} \end{pmatrix} \right \} \\ =&\mathbb{E}\bigl[(u_{N}-\mathbb{E}u_{N})' \alpha _{N}^{(1)}(u_{N}- \mathbb{E}u_{N})\bigr] +(\mathbb{E}u_{N})' \alpha _{N}^{(2)}\mathbb{E}u_{N} \geq 0. \end{aligned}$$
(3.12)

Indeed, if \(\mathbb{E}u_{N}=0\) and \(u_{N}\neq 0\), then \(\bar{J}(N)=\mathbb{E}[u_{N}'\alpha _{N}^{(1)}u_{N}]>0\), and hence \(\alpha _{N}^{(1)}>0\). Similarly, if \(u_{N}=\mathbb{E}u_{N}\neq 0\), we have \(\alpha _{N}^{(2)}>0\). Using the maximum principle in Proposition 2.1, it follows that

$$\begin{aligned}& \mathbb{E} \left \{ \begin{pmatrix} x_{k} \\ \mathbb{E}x_{k} \end{pmatrix}' \beta _{k-1} - \begin{pmatrix} x_{k+1} \\ \mathbb{E}x_{k+1} \end{pmatrix}' \beta _{k} \right \} \\& \quad =\mathbb{E} \left \{ \begin{pmatrix} x_{k} \\ \mathbb{E}x_{k} \end{pmatrix}' \begin{pmatrix} W^{(1)}x_{k}+(W^{(2)}-W^{(1)})\mathbb{E}x_{k}+(S^{(1)})'u_{k}+(S^{(2)}-S^{(1)})' \mathbb{E}u_{k} \\ 0 \end{pmatrix} \right . \\& \qquad {}-\left . \begin{pmatrix} x_{k} \\ \mathbb{E}x_{k} \end{pmatrix}' \begin{pmatrix} A+\omega _{k}C & \bar{A}+\omega _{k}\bar{C} \\ 0 & A+\bar{A} \end{pmatrix}'\beta _{k}\right . \\& \qquad \left .{} + \begin{pmatrix} x_{k} \\ \mathbb{E}x_{k} \end{pmatrix}'\mathbb{E} \left [ \begin{pmatrix} A+\omega _{k}C & \bar{A}+\omega _{k}\bar{C} \\ 0 & A+\bar{A} \end{pmatrix}'\beta _{k}\Big| \mathfrak{F}_{k} \right ]\right . \\& \qquad \left .{}-u_{k}' \begin{pmatrix} B+\omega _{k}D \\ 0 \end{pmatrix}' \beta _{k} -u_{k}'\mathbb{E} \left [ \begin{pmatrix} \bar{B}+\omega _{k}\bar{D} \\ B+\bar{B} \end{pmatrix}'\beta _{k} \right ] \right \} \\& \quad =\mathbb{E} \bigl[x_{k}'W^{(1)}x_{k}+( \mathbb{E}x_{k})'\bigl(W^{(2)}-W^{(1)} \bigr) \mathbb{E}x_{k}+u_{k}'\lambda ^{(1)}u_{k} +(\mathbb{E}u_{k})' \bigl( \lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{k} \\& \qquad {}+2x_{k}'\bigl(S^{(1)} \bigr)'u_{k}+2(\mathbb{E}x_{k})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{k} \bigr]. \end{aligned}$$

Summing both sides of the above equation from \(k=l+1\) to \(k=N\), we obtain

$$\begin{aligned}& \mathbb{E} \left [ \begin{pmatrix} x_{l+1} \\ \mathbb{E}x_{l+1} \end{pmatrix}'\beta _{l}-x_{N+1}'T_{N+1}x_{N+1} -(\mathbb{E}x_{N+1})'\bar{T}_{N+1} \mathbb{E}x_{N+1} \right ] \\& \quad =\mathbb{E}\sum_{k=l+1}^{N} \bigl[ x_{k}'W^{(1)}x_{k}+( \mathbb{E}x_{k})'\bigl(W^{(2)}-W^{(1)} \bigr) \mathbb{E}x_{k} +u_{k}'\lambda ^{(1)}u_{k}+(\mathbb{E}u_{k})' \bigl( \lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{k} \\& \qquad {}+2x_{k}'\bigl(S^{(1)} \bigr)'u_{k}+2(\mathbb{E}x_{k})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{k} \bigr]. \end{aligned}$$

Furthermore,

$$\begin{aligned} \bar{J}(l) =&\mathbb{E}\sum_{k=l+1}^{N} \bigl[x_{k}'W^{(1)}x_{k}+( \mathbb{E}x_{k})'\bigl(W^{(2)}-W^{(1)} \bigr)\mathbb{E}x_{k} +u_{k}'\lambda ^{(1)}u_{k}+( \mathbb{E}u_{k})' \bigl(\lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{k} \\ &{}+2x_{k}'\bigl(S^{(1)} \bigr)'u_{k}+2(\mathbb{E}x_{k})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{k} \bigr]+ \mathbb{E} \bigl[x_{l}'W^{(1)}x_{l}+( \mathbb{E}x_{l})'\bigl(W^{(2)}-W^{(1)} \bigr) \mathbb{E}x_{l} \\ &{}+2x_{l}'\bigl(S^{(1)} \bigr)'u_{l}+2(\mathbb{E}x_{l})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{l} +u_{l}'\lambda ^{(1)}u_{l} +(\mathbb{E}u_{l})' \bigl( \lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{l} \bigr] \\ =&\mathbb{E} \left [x_{l}'W^{(1)}x_{l}+( \mathbb{E}x_{l})'\bigl(W^{(2)}-W^{(1)}\bigr)\mathbb{E}x_{l} +u_{l}' \lambda ^{(1)}u_{l}+(\mathbb{E}u_{l})' \bigl( \lambda ^{(2)}-\lambda ^{(1)}\bigr) \mathbb{E}u_{l}\vphantom{\begin{pmatrix} x_{l+1} \\ \mathbb{E}x_{l+1} \end{pmatrix}} \right . \\ &{}\left .+2x_{l}'\bigl(S^{(1)} \bigr)'u_{l}+2(\mathbb{E}x_{l})' \bigl(S^{(2)}-S^{(1)}\bigr)' \mathbb{E}u_{l}+ \begin{pmatrix} x_{l+1} \\ \mathbb{E}x_{l+1} \end{pmatrix}' \beta _{l} \right ] \\ =&\mathbb{E}\bigl[x_{l}'T_{l}(N)x_{l} \bigr]+(\mathbb{E}x_{l})'\bar{T}_{l}(N) \mathbb{E}x_{l} +\bigl[\mathbb{E}u_{l}-( \mathcal{L}_{l}+\bar{\mathcal{L}}_{l}) \mathbb{E}x_{l}\bigr]'\alpha _{l}^{(2)} \bigl[\mathbb{E}u_{l}-(\mathcal{L}_{l}+ \bar{ \mathcal{L}}_{l})\mathbb{E}x_{l}\bigr] \\ &{}+\mathbb{E} \bigl\{ \bigl[u_{l}-\mathbb{E}u_{l}- \mathcal{L}_{l}(x_{l}- \mathbb{E}x_{l}) \bigr]'\alpha _{l}^{(1)} \bigl[u_{l}-\mathbb{E}u_{l}- \mathcal{L}_{l}(x_{l}- \mathbb{E}x_{l})\bigr] \bigr\} . \end{aligned}$$

Now we prove \(\alpha _{l}^{(1)}>0\) and \(\alpha _{l}^{(2)}>0\). Letting \(x_{l}=0\), we have

$$\begin{aligned} \bar{J}(l)=\mathbb{E} \bigl[(u_{l}-\mathbb{E}u_{l})' \alpha _{l}^{(1)}(u_{l}- \mathbb{E}u_{l}) +(\mathbb{E}u_{l})' \alpha _{l}^{(2)}\mathbb{E}u_{l} \bigr], \end{aligned}$$

then for any \(u_{l}\neq 0\), \(\bar{J}(l)>0\). Following the discussion of (3.12), this implies that \(\alpha _{l}^{(1)}>0\) and \(\alpha _{l}^{(2)}>0\). In particular, \(\alpha _{k}^{(1)}\geq 0\), \(\alpha _{k}^{(2)}\geq 0\) for all \(k\geq 0\). Furthermore,

$$\begin{aligned} \begin{gathered} \bar{J}_{T}(0,\xi ;u)\\ \quad =\mathbb{E}\sum _{k=0}^{N} \bigl\{ \bigl[u_{k}- \mathbb{E}u_{k}-\mathcal{L}_{k}(x_{k} - \mathbb{E}x_{k})\bigr]'\alpha _{k}^{(1)} \bigl[u_{k}- \mathbb{E}u_{k}-\mathcal{L}_{k}(x_{k}- \mathbb{E}x_{k})\bigr] \bigr\} + \mathbb{E}\bigl[x_{0}'T_{0}(N)x_{0} \bigr] \\ \qquad {}+\mathbb{E}\sum_{k=0}^{N}\bigl[ \mathbb{E}u_{k}-(\mathcal{L}_{k}+\bar{ \mathcal{L}}_{k})\mathbb{E}x_{k}\bigr]' \alpha _{k}^{(2)}\bigl[\mathbb{E}u_{k}-( \mathcal{L}_{k}+\bar{\mathcal{L}}_{k}) \mathbb{E}x_{k}\bigr] +(\mathbb{E}x_{0})' \bar{T}_{0}(N)\mathbb{E}x_{0}. \end{gathered} \end{aligned}$$

Therefore, \(\bar{J}_{T}^{\ast }(0,\xi ;u)=\mathbb{E}[x_{0}'T_{0}(N)x_{0}]+( \mathbb{E}x_{0})'\bar{T}_{0}(N)\mathbb{E}x_{0}\). The proof is completed. □

Generally speaking, the uniqueness of the solution to the GAREs is not guaranteed. Next, we shall focus on the properties of the maximal solution and its relation with the stabilizing solution.

Definition 3.2

A solution of GAREs (3.2) is called the maximal solution, denoted by \((P^{\ast }, \bar{P}^{\ast })\), if for any solution \((P, \bar{P})\) of GAREs (3.2), \(P^{\ast } \geq P\), \(\bar{P}^{\ast } \geq \bar{P}\) hold.

By Definition 3.2, it is clear that the maximal solution must be unique if it exists.
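Numerically, comparisons such as \(P^{\ast }\geq P\) are in the semidefinite (Loewner) order, i.e., a smallest-eigenvalue test on the difference; a minimal sketch (function name ours):

```python
import numpy as np

def loewner_geq(X, Y, tol=1e-9):
    """True if X - Y is positive semidefinite (X >= Y in the Loewner order)."""
    Z = X - Y
    Z = 0.5 * (Z + Z.T)                     # symmetrize against round-off
    return bool(np.linalg.eigvalsh(Z).min() >= -tol)
```

A candidate maximal solution must satisfy `loewner_geq(P_star, P)` for every other solution P.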

Definition 3.3

([13])

A solution \((P, \bar{P})\) of GAREs (3.2) is called a stabilizing solution if the closed-loop control

$$\begin{aligned} u_{k}=-\bigl[\rho ^{(2)}\bigr]^{\dagger }M^{(2)} \mathbb{E}x_{k}-\bigl[\rho ^{(1)}\bigr]^{ \dagger }M^{(1)}(x_{k}- \mathbb{E}x_{k}), \quad k\in \tilde{\mathbb{N}}, \end{aligned}$$

stabilizes the mean-field system (1.1).
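Whether a given feedback pair \((\mathcal{K}, \bar{\mathcal{K}})\) stabilizes (1.1) in the \(L^{2}\) sense can be checked deterministically: the mean \(\mu _{k}=\mathbb{E}x_{k}\) and the centered second moment \(Y_{k}\) obey closed recursions, since \(w_{k}\) is a martingale difference independent of \(x_{k}\). A sketch (names hypothetical):

```python
import numpy as np

def l2_moments(A, Ab, B, Bb, C, Cb, D, Db, K, Kb, sigma, mu0, Y0, steps):
    """Propagate mu_k = E x_k and Y_k = Cov(x_k) under u_k = K x_k + Kb E x_k.
    The loop is L2-stabilizing when E(x_k' x_k) = tr(Y_k) + mu_k' mu_k -> 0."""
    Acl = A + B @ K                              # centered dynamics x - Ex
    Ccl = C + D @ K
    Abar = A + Ab + (B + Bb) @ (K + Kb)          # mean dynamics E x
    Cbar = C + Cb + (D + Db) @ (K + Kb)
    mu, Y = mu0, Y0
    norms = []
    for _ in range(steps):
        # cross terms vanish: E[x_k - E x_k] = 0 and w_k is independent of x_k
        Y = Acl @ Y @ Acl.T + sigma**2 * (Ccl @ Y @ Ccl.T
                                          + Cbar @ np.outer(mu, mu) @ Cbar.T)
        mu = Abar @ mu
        norms.append(float(np.trace(Y) + mu @ mu))
    return norms
```

With a stabilizing pair the returned sequence decays to zero; this is a verification aid, not a substitute for the spectral criteria used in the text.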

Now, we introduce a compact form of GAREs (3.2). Let \(\tilde{P}=\operatorname{diag} \{ P+\bar{P}, P \} \), then

$$\begin{aligned} \textstyle\begin{cases} \tilde{P}=\tilde{Q}+A_{1}'\tilde{P}A_{1}+C_{1}'\tilde{P}C_{1} +C_{2}' \tilde{P}C_{2}-M'\rho ^{\dagger }M, \\ \rho \geq 0, \qquad \rho \rho ^{\dagger }M-M=0, \end{cases}\displaystyle \end{aligned}$$
(3.13)

where

$$\begin{aligned} \textstyle\begin{cases} M=B_{1}'\tilde{P}A_{1}+\sigma ^{2}D_{1}'\tilde{P}C_{1} +\sigma ^{2}D_{2}' \tilde{P}C_{2}+\tilde{G}', \\ \rho =\tilde{R}+B_{1}'\tilde{P}B_{1}+\sigma ^{2}D_{1}'\tilde{P}D_{1} + \sigma ^{2}D_{2}'\tilde{P}D_{2}, \end{cases}\displaystyle \end{aligned}$$

with

$$\begin{aligned}& \tilde{R}= \begin{pmatrix} R+\bar{R} & 0 \\ 0 & R \end{pmatrix}, \qquad \tilde{Q}= \begin{pmatrix} Q+\bar{Q} & 0 \\ 0 & Q \end{pmatrix}, \qquad \tilde{G}= \begin{pmatrix} G+\bar{G} & 0 \\ 0 & G \end{pmatrix}, \\& A_{1}= \begin{pmatrix} A+\bar{A} & 0 \\ 0 & A \end{pmatrix}, \qquad B_{1}= \begin{pmatrix} B+\bar{B} & 0 \\ 0 & B \end{pmatrix}, \qquad C_{1}= \begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}, \\& C_{2}= \begin{pmatrix} 0 & 0 \\ C+\bar{C} & 0 \end{pmatrix}, \qquad D_{1}= \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix}, \qquad D_{2}= \begin{pmatrix} 0 & 0 \\ D+\bar{D} & 0 \end{pmatrix}. \end{aligned}$$
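Assembling the compact data of (3.13) is mechanical block stacking; the sketch below (names hypothetical) also evaluates M and ρ for a candidate \(\tilde{P}\):

```python
import numpy as np

def compact_blocks(A, Ab, B, Bb, C, Cb, D, Db, Q, Qb, R, Rb, G, Gb):
    """Assemble the compact coefficient matrices of GARE (3.13)."""
    n, m = B.shape
    Zn, Zm, Znm = np.zeros((n, n)), np.zeros((m, m)), np.zeros((n, m))
    return {
        "A1": np.block([[A + Ab, Zn], [Zn, A]]),
        "B1": np.block([[B + Bb, Znm], [Znm, B]]),
        "C1": np.block([[Zn, Zn], [Zn, C]]),
        "C2": np.block([[Zn, Zn], [C + Cb, Zn]]),
        "D1": np.block([[Znm, Znm], [Znm, D]]),
        "D2": np.block([[Znm, Znm], [D + Db, Znm]]),
        "Qt": np.block([[Q + Qb, Zn], [Zn, Q]]),
        "Rt": np.block([[R + Rb, Zm], [Zm, R]]),
        "Gt": np.block([[G + Gb, Znm], [Znm, G]]),
    }

def gare_M_rho(Pt, b, sigma):
    """M and rho of (3.13) for a candidate Ptilde."""
    M = (b["B1"].T @ Pt @ b["A1"] + sigma**2 * b["D1"].T @ Pt @ b["C1"]
         + sigma**2 * b["D2"].T @ Pt @ b["C2"] + b["Gt"].T)
    rho = (b["Rt"] + b["B1"].T @ Pt @ b["B1"]
           + sigma**2 * b["D1"].T @ Pt @ b["D1"]
           + sigma**2 * b["D2"].T @ Pt @ b["D2"])
    return M, rho
```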

By considering GARE (3.13), we can immediately show the existence of the maximal solution to GAREs (3.2).

Proposition 3.2

Suppose that GAREs (3.2) and GARE (3.13) admit maximal solutions and stabilizing solutions. Then the following statements hold.

(i) \(\tilde{P}^{\ast }=\operatorname{diag} \{ P^{\ast }+{\bar{P}}^{\ast }, {P}^{\ast } \} \) is the maximal solution to GARE (3.13) if and only if \(({P}^{\ast }, {\bar{P}}^{\ast })\) is the maximal solution to GAREs (3.2).

(ii) \(\tilde{P}=\operatorname{diag} \{ P+\bar{P}, P \} \) is a stabilizing solution to GARE (3.13) if and only if \((P, \bar{P})\) is the stabilizing solution to GAREs (3.2).

Proof

(i) It is clear.

(ii) Seeing that

$$\begin{aligned} \rho ^{\dagger }M= \begin{pmatrix} [\rho ^{(2)}]^{\dagger }M^{(2)} & 0 \\ 0 & [\rho ^{(1)}]^{\dagger }M^{(1)} \end{pmatrix} \end{aligned}$$

and resorting to the method in [13, Lemma 5.1], \(\{\nu _{k}=-\rho ^{\dagger }M y_{k}, k\in \tilde{\mathbb{N}}\}\) is a stabilizing control of the following system:

$$\begin{aligned} \textstyle\begin{cases} y_{k+1}=(A_{1}y_{k}+B_{1}\nu _{k})+(C_{1}y_{k}+D_{1}\nu _{k})\varphi _{k} +(C_{2}y_{k}+D_{2}\nu _{k})\theta _{k}, \quad k\in \tilde{\mathbb{N}}, \\ y_{0}=\tau , \end{cases}\displaystyle \end{aligned}$$

where \(\varphi =\{\varphi _{k}, k\in \tilde{\mathbb{N}}\}\) and \(\theta =\{\theta _{k}, k\in \tilde{\mathbb{N}}\}\) are two mutually independent martingale difference sequences if and only if \(\{u_{k}=-[\rho ^{(2)}]^{\dagger }M^{(2)}\mathbb{E}x_{k}-[\rho ^{(1)}]^{ \dagger }M^{(1)}(x_{k}-\mathbb{E}x_{k}), k\in \tilde{\mathbb{N}}\}\) is a stabilizing control of (1.1). Thus, the conclusion holds. □

Proposition 3.3

If \(\boldsymbol{\varGamma }\neq \emptyset \) and (A1) are satisfied, then a stabilizing solution to GARE (3.13) is also the maximal solution.

Proof

Let \(z=( z_{1}' \ z_{2}' \ z_{3}' \ z_{4}' )'\in \mathbb{R}^{2(n+m)}\) with \(z_{1}, z_{2}\in \mathbb{R}^{n}\) and \(z_{3}, z_{4}\in \mathbb{R}^{m}\); then we get that

$$\begin{aligned}& z' \begin{pmatrix} \tilde{Q}+A_{1}'\tilde{P}_{\star }A_{1}+C_{1}'\tilde{P}_{\star }C_{1} +C_{2}' \tilde{P}_{\star }C_{2} & M' \\ M & \rho \end{pmatrix}z \\& \quad =z_{1}'\bigl[M^{(2)} \bigr]'\bigl[\rho ^{(2)}\bigr]^{\dagger }M^{(2)}z_{1} +z_{1}'\bigl[M^{(2)}\bigr]'z_{3} +z_{3}'M^{(2)}z_{1}+z_{3}' \bigl[\rho ^{(2)}\bigr]z_{3} \\& \qquad {}+z_{2}'\bigl[M^{(1)} \bigr]'\bigl[\rho ^{(1)}\bigr]^{\dagger }M^{(1)}z_{2} +z_{2}'\bigl[M^{(1)}\bigr]'z_{4}+z_{4}'M^{(1)}z_{2} +z_{4}'\bigl[\rho ^{(1)} \bigr]z_{4} \\& \quad = \begin{pmatrix} z_{1} \\ z_{3} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} z_{1} \\ z_{3} \end{pmatrix} + \begin{pmatrix} z_{2} \\ z_{4} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} z_{2} \\ z_{4} \end{pmatrix}, \end{aligned}$$

thus, it immediately yields that

$$ \boldsymbol{\varGamma }= \left \{(P_{\star },\bar{P}_{\star })= \bigl(P'_{\star }, \bar{P}'_{\star } \bigr)\Big| \begin{pmatrix} \tilde{Q}+A_{1}'\tilde{P}_{\star }A_{1}+C_{1}'\tilde{P}_{\star }C_{1} +C_{2}' \tilde{P}_{\star }C_{2} & M' \\ M & \rho \end{pmatrix}\geq 0 \right \}. $$

Then the conclusion follows by the methods in [28, Theorem 5.3.1] of Damm (2004) and in [29] of Abou-Kandil et al. (2003). Since the argument is standard, we omit the details due to space limitations. □
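For a candidate \((P_{\star }, \bar{P}_{\star })\), membership in Γ is thus a single semidefiniteness test on the displayed block matrix; a sketch (the argument `Qblock` stands for the upper-left block \(\tilde{Q}+A_{1}'\tilde{P}_{\star }A_{1}+C_{1}'\tilde{P}_{\star }C_{1}+C_{2}'\tilde{P}_{\star }C_{2}\), names ours):

```python
import numpy as np

def in_gamma(Qblock, M, rho, tol=1e-9):
    """PSD test for the block matrix [[Qblock, M'], [M, rho]] defining Gamma."""
    K = np.block([[Qblock, M.T], [M, rho]])
    K = 0.5 * (K + K.T)                     # symmetrize against round-off
    return bool(np.linalg.eigvalsh(K).min() >= -tol)
```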

Corollary 3.1

If \(\boldsymbol{\varGamma }\neq \emptyset \) and (A1) are satisfied, then the following statements hold.

(i) GAREs (3.2) admit at most one stabilizing solution.

(ii) If there is \((P_{\star }, P_{\star }+\bar{P}_{\star })\) such that

$$ \begin{pmatrix} W^{(i)} & (S^{(i)})' \\ S^{(i)} & \lambda ^{(i)} \end{pmatrix}>0, \quad i=1,2, $$

then GAREs (3.2) admit a stabilizing solution.

Proof

(i) Combining Propositions 3.2 and 3.3 and noticing the uniqueness of the maximal solution, we immediately get the result.

(ii) The proof follows the method in [30, Theorem 4.3] of Ait Rami et al. (2001); owing to space limitations, we omit it. □

4 MF-LQ optimal control

Now, we return to the infinite horizon stochastic MF-LQ optimal control problems.

Theorem 4.1

If \(\boldsymbol{\varGamma } \neq \emptyset \) and (A1) are satisfied, then Problem \(\mathcal{A}\) has an optimal control

$$\begin{aligned} u_{k}=-\bigl[\bigl(\rho ^{(2)} \bigr)^{\ast }\bigr]^{\dagger }\bigl[M^{(2)} \bigr]^{\ast }\mathbb{E}x_{k}-\bigl[\bigl( \rho ^{(1)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(1)}\bigr]^{\ast } (x_{k}- \mathbb{E}x_{k}), \quad k\in \tilde{\mathbb{N}}, \end{aligned}$$
(4.1)

and the optimal value is given by

$$\begin{aligned} V(x_{0})=\mathbb{E}\bigl(x_{0}'P^{\ast }x_{0} \bigr) +(\mathbb{E}x_{0})'\bar{P}^{ \ast } \mathbb{E}x_{0}. \end{aligned}$$
(4.2)

Here, \((P^{\ast }, \bar{P}^{\ast })\) is the maximal solution to GAREs (3.2), and \((\rho ^{(i)})^{\ast }\), \((M^{(i)})^{\ast }\) (\(i=1,2\)) are as in Theorem 3.1 with \((P, \bar{P})\) replaced by \((P^{\ast }, \bar{P}^{\ast })\).

Proof

Completing the squares, we obtain

$$\begin{aligned} J(\xi ,u) =&\mathbb{E}\sum_{k=0}^{\infty } \bigl\{ (x_{k}-\mathbb{E}x_{k})' \bigl\{ Q+A'P^{\ast }A +\sigma ^{2}C'P^{\ast }C -\bigl[\bigl(M^{(1)}\bigr)^{\ast }\bigr]' \bigl[\bigl( \rho ^{(1)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(1)}\bigr]^{\ast }-P^{\ast } \bigr\} \\ &{}\times (x_{k}-\mathbb{E}x_{k}) + \bigl\{ u_{k}-\mathbb{E}u_{k}+\bigl[\bigl(\rho ^{(1)}\bigr)^{ \ast }\bigr]^{\dagger } \bigl[M^{(1)}\bigr]^{\ast }(x_{k}- \mathbb{E}x_{k}) \bigr\} ' \bigl( \rho ^{(1)}\bigr)^{\ast } \\ &{}\times \bigl\{ u_{k}-\mathbb{E}u_{k} +\bigl[\bigl( \rho ^{(1)}\bigr)^{\ast }\bigr]^{ \dagger } \bigl[M^{(1)}\bigr]^{\ast }(x_{k}- \mathbb{E}x_{k}) \bigr\} \\ &{} + \bigl\{ \mathbb{E}u_{k} + \bigl[\bigl(\rho ^{(2)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(2)}\bigr]^{\ast } \mathbb{E}x_{k} \bigr\} '\bigl(\rho ^{(2)}\bigr)^{\ast } \\ &{}\times \bigl\{ \mathbb{E}u_{k} +\bigl[\bigl(\rho ^{(2)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(2)}\bigr]^{ \ast }\mathbb{E}x_{k} \bigr\} +(\mathbb{E}x_{k})' \bigl\{ Q+\bar{Q}+ \sigma ^{2}(C+\bar{C})'P^{\ast }(C+\bar{C}) \\ &{}+(A+\bar{A})'\bigl(P^{\ast }+\bar{P}^{\ast } \bigr) (A+\bar{A})-P^{\ast }-\bar{P}^{ \ast } -\bigl[ \bigl(M^{(2)}\bigr)^{\ast }\bigr]'\bigl[\bigl( \rho ^{(2)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(2)}\bigr]^{ \ast } \bigr\} \mathbb{E}x_{k} \bigr\} \\ &{}+\mathbb{E}\bigl[(x_{0}-\mathbb{E}x_{0})'P^{\ast }(x_{0}- \mathbb{E}x_{0})\bigr] +(\mathbb{E}x_{0})' \bigl(P^{\ast }+\bar{P}^{\ast }\bigr)\mathbb{E}x_{0} \\ &{}-\lim_{N\to \infty }\mathbb{E}\bigl[(x_{N}- \mathbb{E}x_{N})'P^{\ast }(x_{N}- \mathbb{E}x_{N})\bigr] -\lim_{N\to \infty }( \mathbb{E}x_{N})'\bigl(P^{\ast }+ \bar{P}^{\ast }\bigr)\mathbb{E}x_{N} \\ =&\mathbb{E}\bigl[(x_{0}-\mathbb{E}x_{0})'P^{\ast }(x_{0}- \mathbb{E}x_{0})\bigr] +(\mathbb{E}x_{0})' \bigl(P^{\ast }+\bar{P}^{\ast }\bigr)\mathbb{E}x_{0} \\ &{}+ \mathbb{E}\sum_{k=0}^{\infty } \bigl\{ u_{k}-\mathbb{E}u_{k}+\bigl[\bigl(\rho ^{(1)}\bigr)^{\ast } \bigr]^{\dagger }\bigl[M^{(1)}\bigr]^{\ast 
}(x_{k}- \mathbb{E}x_{k}) \bigr\} '\bigl(\rho ^{(1)} \bigr)^{\ast }\\ &{}\times \bigl\{ u_{k}-\mathbb{E}u_{k}+ \bigl[\bigl( \rho ^{(1)}\bigr)^{\ast }\bigr]^{\dagger } \bigl[M^{(1)}\bigr]^{\ast }(x_{k}- \mathbb{E}x_{k}) \bigr\} \\ &{}+\mathbb{E}\sum_{k=0}^{\infty } \bigl\{ \bigl\{ \mathbb{E}u_{k} +\bigl[\bigl( \rho ^{(2)} \bigr)^{\ast }\bigr]^{\dagger }\bigl[M^{(2)} \bigr]^{\ast }\mathbb{E}x_{k} \bigr\} '\bigl( \rho ^{(2)}\bigr)^{\ast } \bigl\{ \mathbb{E}u_{k}+ \bigl[\bigl(\rho ^{(2)}\bigr)^{ \ast }\bigr]^{\dagger } \bigl[M^{(2)}\bigr]^{\ast }\mathbb{E}x_{k} \bigr\} \bigr\} . \end{aligned}$$

Since \((\rho ^{(1)})^{\ast }\geq 0\) and \((\rho ^{(2)})^{\ast }\geq 0\), we can immediately get that \(J(\xi ,u)\geq \mathbb{E}(x_{0}'P^{\ast }x_{0})+(\mathbb{E}x_{0})' \bar{P}^{\ast }\mathbb{E}x_{0}\). Thus,

$$\begin{aligned} V(x_{0})=\inf_{u\in \mathscr{U}_{\infty }}J(\xi ,u) \geq \mathbb{E}\bigl(x_{0}'P^{ \ast }x_{0} \bigr)+(\mathbb{E}x_{0})'\bar{P}^{\ast } \mathbb{E}x_{0}. \end{aligned}$$
(4.3)

Next, we shall show \(V(x_{0})=\inf_{u\in \mathscr{U}_{\infty }}J(\xi ,u)\leq \mathbb{E}(x_{0}'P^{ \ast }x_{0})+(\mathbb{E}x_{0})'\bar{P}^{\ast }\mathbb{E}x_{0}\) by the standard perturbation method. For a positive decreasing sequence \(\{\varepsilon _{i}, i=0, 1, 2,\ldots \}\), we consider the GAREs with Q, R replaced by \(Q+\varepsilon _{i}I\), \(R+\varepsilon _{i}I\), respectively. The corresponding stabilizing solution to these GAREs is denoted by \((P_{\varepsilon _{i}}, \bar{P}_{\varepsilon _{i}})\), which is also the maximal solution. Indeed, \(P_{\varepsilon _{0}}\geq \cdots \geq P_{\varepsilon _{i}}\geq P_{ \varepsilon _{i+1}}\geq \cdots \geq P^{\ast }\), \(\bar{P}_{\varepsilon _{0}} \geq \cdots \geq \bar{P}_{\varepsilon _{i}}\geq \bar{P}_{\varepsilon _{i+1}} \geq \cdots \geq \bar{P}^{\ast }\). Moreover, \(\lim_{i\to \infty }P_{ \varepsilon _{i}}=P^{\ast }\), \(\lim_{i\to \infty }\bar{P}_{\varepsilon _{i}}= \bar{P}^{\ast }\). Set \(u_{k}^{i}=-[\rho _{i}^{(2)}]^{\dagger }M_{i}^{(2)}\mathbb{E}x_{k} -[ \rho _{i}^{(1)}]^{\dagger }M_{i}^{(1)}(x_{k}-\mathbb{E}x_{k})\); then

$$\begin{aligned} \begin{aligned} V(x_{0})&\leq \mathbb{E}\sum_{k=0}^{\infty } \bigl[x_{k}'(Q+\varepsilon _{i}I)x_{k}+( \mathbb{E}x_{k})'\bar{Q}\mathbb{E}x_{k}+2x_{k}'Gu_{k} \\ &\quad {}+2(\mathbb{E}x_{k})'\bar{G}\mathbb{E}u_{k} +u_{k}'(R+\varepsilon _{i}I)u_{k}+( \mathbb{E}u_{k})'\bar{R}\mathbb{E}u_{k} \bigr] \\ &=\mathbb{E}\bigl[(x_{0}-\mathbb{E}x_{0})'P_{\varepsilon _{i}}(x_{0}- \mathbb{E}x_{0})\bigr] +(\mathbb{E}x_{0})'(P_{\varepsilon _{i}}+ \bar{P}_{ \varepsilon _{i}})\mathbb{E}x_{0}. \end{aligned} \end{aligned}$$

Letting \(i\to \infty \), we have

$$\begin{aligned} V(x_{0}) \leq \mathbb{E}\bigl(x_{0}'P^{\ast }x_{0} \bigr)+(\mathbb{E}x_{0})'\bar{P}^{ \ast } \mathbb{E}x_{0}. \end{aligned}$$
(4.4)

Combining (4.3) with (4.4), we have \(V(x_{0})=\mathbb{E}(x_{0}'P^{\ast }x_{0})+(\mathbb{E}x_{0})'\bar{P}^{ \ast }\mathbb{E}x_{0}\). Meanwhile, the optimal controller \(u_{k}\) satisfies

$$\begin{aligned} \textstyle\begin{cases} u_{k}-\mathbb{E}u_{k}+[(\rho ^{(1)})^{\ast }]^{\dagger }(M^{(1)})^{ \ast }(x_{k}-\mathbb{E}x_{k})=0, \\ \mathbb{E}u_{k}+[(\rho ^{(2)})^{\ast }]^{\dagger }(M^{(2)})^{\ast } \mathbb{E}x_{k}=0. \end{cases}\displaystyle \end{aligned}$$

Thus, \(u_{k}=-[(\rho ^{(2)})^{\ast }]^{\dagger }[M^{(2)}]^{\ast }\mathbb{E}x_{k} -[(\rho ^{(1)})^{\ast }]^{\dagger }[M^{(1)}]^{\ast }(x_{k}-\mathbb{E}x_{k})\). □
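For numerical illustration, the maximal solution can often be approximated by value-iterating the compact GARE (3.13) from zero, after which (4.2) gives the optimal value. This is only a sketch that presumes convergence of the iteration (expected under the stabilizability-type assumptions above); names are hypothetical:

```python
import numpy as np

def solve_gare(A1, B1, C1, C2, D1, D2, Qt, Rt, Gt, sigma=1.0, iters=2000):
    """Iterate the compact GARE (3.13) as a fixed-point map from Ptilde = 0."""
    Pt = np.zeros_like(Qt)
    for _ in range(iters):
        M = (B1.T @ Pt @ A1 + sigma**2 * D1.T @ Pt @ C1
             + sigma**2 * D2.T @ Pt @ C2 + Gt.T)
        rho = (Rt + B1.T @ Pt @ B1 + sigma**2 * D1.T @ Pt @ D1
               + sigma**2 * D2.T @ Pt @ D2)
        Pt = (Qt + A1.T @ Pt @ A1 + C1.T @ Pt @ C1 + C2.T @ Pt @ C2
              - M.T @ np.linalg.pinv(rho) @ M)
    return Pt

def optimal_value(Pt, x0_mean, x0_cov):
    """V(x0) of (4.2): P is the lower-right block, P + Pbar the upper-left."""
    n = Pt.shape[0] // 2
    P, Pb = Pt[n:, n:], Pt[:n, :n] - Pt[n:, n:]
    # E(x0' P x0) = tr(P Cov(x0)) + (E x0)' P (E x0)
    return float(np.trace(P @ x0_cov) + x0_mean @ P @ x0_mean
                 + x0_mean @ Pb @ x0_mean)
```

Using `pinv` keeps the update valid for singular ρ, matching the indefinite setting of the theorem.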

Remark 4.1

In Theorem 4.1, we show that the optimal value of Problem \(\mathcal{A}\) can be presented by virtue of the maximal solution to GAREs (3.2).

A classical problem in optimal control theory is the so-called LQ stabilization problem. Now, we consider the stabilization of the indefinite discrete-time MF-LQ control problem.

Problem \(\mathcal {B}\)

Find an \(\mathfrak{F}_{k}\)-measurable \(u_{k}\) that minimizes the cost functional (1.2) and, simultaneously, stabilizes the mean-field system (1.1).

Definition 4.1

NGAREs (3.3) are said to have a positive semi-definite (definite) solution if they admit \(T\geq 0\), \(T+\bar{T}\geq 0\) (\(T>0\), \(T+\bar{T}> 0\)) satisfying NGAREs (3.3).

Theorem 4.2

If \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \) and (A2) are satisfied, then system (1.1) is \(L^{2}\)-stabilizable if and only if there exists a solution to GAREs (3.2), which is also the maximal solution. In this case, the optimal stabilizing control and the optimal value are given by (4.1) and (4.2), respectively.

Proof

Necessity: Under (A2) and \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \), assume that system (1.1) is \(L^{2}\)-stabilizable; we shall show that GAREs (3.2) have a solution \((P,\bar{P})\), which is also the maximal solution. Noting Remark 3.1 and \(T_{N+1}(N)=\bar{T}_{N+1}(N)=0\), by induction we immediately get that \(T_{k}(N)\geq 0\), \(\bar{T}_{k}(N)\geq 0\) for \(0\leq k\leq N\).

Step 1. We first show that \(\alpha _{k}^{(i)}[\alpha _{k}^{(i)}]^{\dagger }H_{k}^{(i)}=H_{k}^{(i)}\), \(i=1,2\). Since \(\alpha _{k}^{(1)}\geq 0\), we obtain that

$$\begin{aligned} {}[\alpha _{k}^{(1)}] ^{\dagger }= \begin{pmatrix} U_{k}^{1} & U_{k}^{2} \end{pmatrix} \begin{pmatrix} V_{k}^{-1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} U_{k}^{1} & U_{k}^{2} \end{pmatrix}', \end{aligned}$$

in which \(( U_{k}^{1} \ U_{k}^{2} )\) is an orthogonal matrix, \(V_{k}>0\) with \(\dim (V_{k})= \operatorname{rank}(\alpha _{k}^{(1)})\), and the columns of the matrix \(U_{k}^{2}\) form a basis of \(\mathbf{Ker}(\alpha _{k}^{(1)})\). Combining \(\lambda ^{(1)}\geq 0\) with (3.4), we derive \(\mathbf{Ker}(\alpha _{k}^{(1)}) \subseteq \mathbf{Ker}(\lambda ^{(1)})\). Moreover, in view of \(\alpha _{k}^{(1)}U_{k}^{2}(U_{k}^{2})'=0\), we have \(\lambda ^{(1)}U_{k}^{2}(U_{k}^{2})'=0\). We can easily calculate

$$\begin{aligned}& \bigl[A'T_{k+1}(N)B+\sigma ^{2}C'T_{k+1}(N)D+ \bigl(S^{(1)}\bigr)' \bigr] \bigl[I- \alpha _{k}^{(1)}\bigl(\alpha _{k}^{(1)} \bigr)^{\dagger } \bigr] \\& \quad = \bigl[A'\bigl(T_{k+1}(N)+P_{\star } \bigr)B+\sigma ^{2}C'\bigl(T_{k+1}(N)+P_{\star } \bigr)D \bigr]U_{k}^{2}\bigl(U_{k}^{2} \bigr)'. \end{aligned}$$

Meanwhile, since \(\mathbf{Ker}(\Delta ^{(1)})\subseteq (\mathbf{Ker}B\cap \mathbf{Ker}D)\), we deduce that \(\alpha _{k}^{(1)}[\alpha _{k}^{(1)}]^{\dagger }H_{k}^{(1)}=H_{k}^{(1)}\). Similarly, \(\alpha _{k}^{(2)}[\alpha _{k}^{(2)}]^{\dagger }H_{k}^{(2)}=H_{k}^{(2)}\) follows.
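The factorization of \([\alpha _{k}^{(1)}]^{\dagger }\) and the range condition \(\alpha \alpha ^{\dagger }H=H\) used in Step 1 are easy to reproduce numerically; a sketch (names ours):

```python
import numpy as np

def psd_pinv(alpha, tol=1e-10):
    """Moore-Penrose pseudo-inverse of a PSD matrix via its eigendecomposition,
    mirroring the orthogonal factorization used in Step 1."""
    vals, U = np.linalg.eigh(alpha)
    inv = np.array([1.0 / v if v > tol else 0.0 for v in vals])
    return U @ np.diag(inv) @ U.T

def regular(alpha, H, tol=1e-8):
    """Range (regularity) condition: alpha @ pinv(alpha) @ H == H."""
    return bool(np.allclose(alpha @ psd_pinv(alpha) @ H, H, atol=tol))
```

The condition fails exactly when some row-space component of H falls into the kernel of α, which is the obstruction ruled out in Step 1.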

Step 2. We shall prove that \(T_{k}(N)\), \(\bar{T}_{k}(N)\) are convergent. Using Proposition 3.1, the optimal controller and the optimal cost value of (3.5) subject to (1.1) are, respectively, given by

$$\begin{aligned} \begin{gathered} u_{k}^{\star }=-\bigl[\alpha _{k}^{(1)}\bigr]^{\dagger }H_{k}^{(1)}x_{k}+ \bigl\{ \bigl[\alpha _{k}^{(1)}\bigr]^{\dagger }H_{k}^{(1)} -\bigl[\alpha _{k}^{(2)}\bigr]^{\dagger }H_{k}^{(2)} \bigr\} \mathbb{E}x_{k}, \\ \bar{J}_{T}^{\ast }(0,\xi ;u)=\mathbb{E} \bigl[x_{0}'T_{0}(N)x_{0} \bigr]+( \mathbb{E}x_{0})'\bar{T}_{0}(N) \mathbb{E}x_{0}. \end{gathered} \end{aligned}$$
(4.5)

Accordingly, for any N, we have

$$ \mathbb{E}\bigl[x_{0}'T_{0}(N)x_{0} \bigr]+(\mathbb{E}x_{0})'\bar{T}_{0}(N) \mathbb{E}x_{0}\leq \mathbb{E}\bigl[x_{0}'T_{0}(N+1)x_{0} \bigr]+(\mathbb{E}x_{0})' \bar{T}_{0}(N+1) \mathbb{E}x_{0}. $$

Let \(x_{0}\neq 0\), \(\mathbb{E}x_{0}=0\); then \(T_{0}(N)\leq T_{0}(N+1)\). Similarly, letting \(x_{0}\neq 0\), \(\mathbb{E}x_{0}=x_{0}\), we have \(T_{0}(N)+\bar{T}_{0}(N)\leq T_{0}(N+1)+\bar{T}_{0}(N+1)\). Namely, \(T_{0}(N)\) and \(T_{0}(N)+\bar{T}_{0}(N)\) are nondecreasing with respect to N.
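The monotonicity of \(T_{0}(N)\) can be observed numerically; the scalar sketch below is a special case with no mean-field coupling, and all parameter values are illustrative only:

```python
import numpy as np

def T0(N, A=0.8, B=1.0, C=0.2, D=0.0, Q=1.0, R=1.0, sigma=1.0):
    """Scalar T_0(N): backward Riccati recursion with zero terminal weight."""
    T = 0.0
    for _ in range(N + 1):
        alpha = R + B * T * B + sigma**2 * D * T * D
        H = B * T * A + sigma**2 * D * T * C
        T = Q + A * T * A + sigma**2 * C * T * C - H * H / alpha
    return T

vals = [T0(N) for N in range(10)]
```

In this stabilizable example `vals` is nondecreasing and bounded, matching the two properties established in the proof.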

Next, we show that \(T_{0}(N)\), \(T_{0}(N)+\bar{T}_{0}(N)\) are bounded. Since system (1.1) is \(L^{2}\)-stabilizable, there exists \(u_{k}=\mathcal{L}x_{k}+\bar{\mathcal{L}}\mathbb{E}x_{k}\) such that system (1.1) satisfies \(\lim_{k\to +\infty }\mathbb{E}(x_{k}'x_{k})=0\). Notice that

$$\begin{aligned} (\mathbb{E}x_{k})' \mathbb{E}x_{k} +\mathbb{E}\bigl[(x_{k}- \mathbb{E}x_{k})'(x_{k}- \mathbb{E}x_{k})\bigr]=\mathbb{E}\bigl(x_{k}'x_{k} \bigr), \end{aligned}$$
(4.6)

thus, \(\lim_{k\to +\infty }(\mathbb{E}x_{k})'\mathbb{E}x_{k}=0\). Resorting to [30, Lemma 4.1], we obtain \(\mathbb{E}\sum_{k=0}^{\infty }(x_{k}'x_{k})< +\infty \); in particular, there exists a constant \(c>0\) such that \(\mathbb{E}\sum_{k=0}^{\infty }(x_{k}'x_{k})\leq c\,\mathbb{E}(x_{0}'x_{0})\). Using (4.6), we get \(\sum_{k=0}^{\infty }(\mathbb{E}x_{k})'\mathbb{E}x_{k}< +\infty \). On the other hand, noticing \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \), there exists a constant b such that

$$ \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix}\leq bI, \qquad \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix}\leq bI. $$

Hence, we claim that

$$\begin{aligned} \bar{J}_{T}^{\ast }(0,\xi ;u) \leq& \bar{J}\\ :=&\mathbb{E} \sum_{k=0}^{ \infty } \left [ \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix} \right . \\ &{}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix} \right ] \\ \leq &\mathbb{E}\sum_{k=0}^{\infty } \left [ \begin{pmatrix} \mathbb{E}x_{k} \\ (\mathcal{L}+\bar{\mathcal{L}})\mathbb{E}x_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ (\mathcal{L}+\bar{\mathcal{L}})\mathbb{E}x_{k} \end{pmatrix} \right . \\ &{}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ \mathcal{L}(x_{k}-\mathbb{E}x_{k}) \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ \mathcal{L}(x_{k}-\mathbb{E}x_{k}) \end{pmatrix} \right ]\\ \leq& bc \mathbb{E}\bigl(x_{0}'x_{0}\bigr). \end{aligned}$$

By (4.5), we have

$$ \mathbb{E}\bigl[x_{0}'T_{0}(N)x_{0} \bigr]+(\mathbb{E}x_{0})'\bar{T}_{0}(N) \mathbb{E}x_{0}\leq b c \mathbb{E}\bigl(x_{0}'x_{0} \bigr). $$

Letting \(\mathbb{E}x_{0}=0\), we have \(0\leq T_{0}(N)\leq b c I\). Similarly, letting \(x_{0}=\mathbb{E}x_{0}\), we get \(0\leq T_{0}(N)+\bar{T}_{0}(N)\leq b c I\). Consequently, \(T_{0}(N)\) and \(T_{0}(N)+\bar{T}_{0}(N)\) are bounded; being also monotone, they are convergent. By time-invariance, we derive

$$ \lim_{N\rightarrow \infty }T_{k}(N)=\lim_{N \rightarrow \infty }T_{0}(N-k)=T, \qquad \lim_{N\rightarrow \infty }\bar{T}_{k}(N)=\lim _{N\rightarrow \infty }\bar{T}_{0}(N-k)= \bar{T}. $$

In this case, \(\lim_{N\rightarrow \infty }\alpha _{k}^{(i)}=\alpha ^{(i)}\), \(\lim_{N \rightarrow \infty }H_{k}^{(i)}=H^{(i)}\), \(i=1,2\). Consequently, we deduce that \((T,\bar{T})\) is a solution to the following NGAREs:

$$\begin{aligned} \textstyle\begin{cases} T=A'TA+\sigma ^{2}C'TC-[H^{(1)}]'[\alpha ^{(1)}]^{\dagger }H^{(1)}+W^{(1)}, \\ \bar{T}=\sigma ^{2}C'T\bar{C}+\sigma ^{2}\bar{C}'TC +\sigma ^{2}\bar{C}'T \bar{C}+A'T\bar{A}+\bar{A}'TA+\bar{A}'T\bar{A} \\ \hphantom{\bar{T}=}{}+(A+\bar{A})'\bar{T}(A +\bar{A})+[H^{(1)}]'[\alpha ^{(1)}]^{ \dagger }H^{(1)} -[H^{(2)}]'[\alpha ^{(2)}]^{\dagger }H^{(2)}+W^{(2)}-W^{(1)}, \\ \alpha ^{(i)}\geq 0, \qquad \alpha ^{(i)}[\alpha ^{(i)}]^{\dagger }H^{(i)}=H^{(i)}, \quad i=1,2, \end{cases}\displaystyle \end{aligned}$$
(4.7)

where

$$\begin{aligned} \textstyle\begin{cases} \alpha ^{(1)}=B'TB+\sigma ^{2}D'TD+\rho ^{(1)}, \\ H^{(1)}=B'TA+\sigma ^{2}D'TC+M^{(1)}, \\ \alpha ^{(2)}=(B+\bar{B})'(T+\bar{T})(B+\bar{B}) +\sigma ^{2}(D+\bar{D})'T(D+ \bar{D})+\rho ^{(2)}, \\ H^{(2)}=(B+\bar{B})'(T+\bar{T})(A+\bar{A}) +\sigma ^{2}(D+\bar{D})'T(C+ \bar{C})+M^{(2)}. \end{cases}\displaystyle \end{aligned}$$
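A residual check for NGAREs (4.7) is straightforward to code. Since the \(\bar{T}\)-equation arises as the difference of the recursions for \(T+\bar{T}\) and for T, the positive correction term is written as \([H^{(1)}]'[\alpha ^{(1)}]^{\dagger }H^{(1)}\) below; names are hypothetical:

```python
import numpy as np

def ngare_residuals(T, Tb, A, Ab, B, Bb, C, Cb, D, Db,
                    W1, W2, rho1, rho2, M1, M2, sigma=1.0):
    """Residuals of the two NGARE equations (4.7); both vanish at a solution."""
    pinv = np.linalg.pinv
    a1 = B.T @ T @ B + sigma**2 * D.T @ T @ D + rho1
    H1 = B.T @ T @ A + sigma**2 * D.T @ T @ C + M1
    a2 = ((B + Bb).T @ (T + Tb) @ (B + Bb)
          + sigma**2 * (D + Db).T @ T @ (D + Db) + rho2)
    H2 = ((B + Bb).T @ (T + Tb) @ (A + Ab)
          + sigma**2 * (D + Db).T @ T @ (C + Cb) + M2)
    r1 = A.T @ T @ A + sigma**2 * C.T @ T @ C - H1.T @ pinv(a1) @ H1 + W1 - T
    r2 = (sigma**2 * (C.T @ T @ Cb + Cb.T @ T @ C + Cb.T @ T @ Cb)
          + A.T @ T @ Ab + Ab.T @ T @ A + Ab.T @ T @ Ab
          + (A + Ab).T @ Tb @ (A + Ab)
          + H1.T @ pinv(a1) @ H1 - H2.T @ pinv(a2) @ H2
          + W2 - W1 - Tb)
    return r1, r2
```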

Step 3. We shall prove \(T>0\), \(T+\bar{T}>0\). In view of \(T_{k}(N)\geq 0\), \(\bar{T}_{k}(N)\geq 0\), we get that \(T\geq 0\), \(\bar{T}\geq 0\). Suppose, to the contrary, that the claim does not hold. Seeing that \(\mathbb{E}(x_{0}'x_{0})=\mathbb{E}(\mathbb{X}_{0}'\mathbb{X}_{0})\), there is \(\mathbb{X}_{0}\neq 0\) such that \(\mathbb{E}(\mathbb{X}_{0}'\tilde{T}\mathbb{X}_{0})=0\) with \(\tilde{T}=\operatorname{diag}(T,T+\bar{T})\). Define the Lyapunov function candidate as

$$ V_{T}(k,x_{k})=\mathbb{E}\bigl[(x_{k}- \mathbb{E}x_{k})'T(x_{k}- \mathbb{E}x_{k})\bigr] +(\mathbb{E}x_{k})'(T+ \bar{T})\mathbb{E}x_{k}\geq 0. $$

Using (4.7), it follows that

$$\begin{aligned}& V_{T}(k,x_{k})-V_{T}(k+1,x_{k+1}) \\& \quad = \mathbb{E} \bigl[(x_{k}-\mathbb{E}x_{k})' \bigl(W^{(1)}+\mathcal{L}' \lambda ^{(1)} \mathcal{L}+\bigl[S^{(1)}\bigr]'\mathcal{L}+\mathcal{L}'S^{(1)}\bigr) (x_{k}-\mathbb{E}x_{k})\\& \qquad {}+( \mathbb{E}x_{k})'\bigl[W^{(2)}+( \mathcal{L} +\bar{\mathcal{L}})'\lambda ^{(2)} ( \mathcal{L}+\bar{\mathcal{L}})+\bigl[S^{(2)}\bigr]'(\mathcal{L}+\bar{\mathcal{L}}) +(\mathcal{L}+\bar{\mathcal{L}})'S^{(2)}\bigr] \mathbb{E}x_{k} \bigr] \\& \quad = \mathbb{E} \left [\mathbb{X}_{k}' \begin{pmatrix} \mathcal{Q} & 0 \\ 0 & \bar{\mathcal{Q}} \end{pmatrix} \mathbb{X}_{k} \right ], \end{aligned}$$

where

$$\begin{aligned} \textstyle\begin{cases} \mathcal{Q}=W^{(1)}+\mathcal{L}'\lambda ^{(1)}\mathcal{L}+[S^{(1)}]' \mathcal{L}+\mathcal{L}'S^{(1)}, \\ \bar{\mathcal{Q}}=W^{(2)}+(\mathcal{L}+\bar{\mathcal{L}})'\lambda ^{(2)}( \mathcal{L}+\bar{\mathcal{L}}) +[S^{(2)}]'(\mathcal{L}+\bar{ \mathcal{L}})+(\mathcal{L}+\bar{\mathcal{L}})'S^{(2)}. \end{cases}\displaystyle \end{aligned}$$

According to Remark 3.1, we have \(\mathcal{Q}\geq 0\), \(\bar{\mathcal{Q}}\geq 0\); further, it follows that

$$\begin{aligned} V_{T}(k,x_{k})-V_{T}(k+1,x_{k+1})= \mathbb{E} \left [\mathbb{X}_{k}' \begin{pmatrix} \mathcal{Q} & 0 \\ 0 & \bar{\mathcal{Q}} \end{pmatrix} \mathbb{X}_{k} \right ]:=\mathbb{E}\bigl( \mathbb{X}_{k}'\tilde{ \mathcal{Q}} \mathbb{X}_{k}\bigr) \geq 0. \end{aligned}$$
(4.8)

Summing both sides of (4.8) from \(k=0\) to \(k=N\), we obtain

$$\begin{aligned} 0\leq \mathbb{E}\sum_{k=0}^{N}\bigl( \mathbb{X}_{k}'\tilde{ \mathcal{Q}} \mathbb{X}_{k}\bigr) =\mathbb{E}\bigl(\mathbb{X}_{0}' \tilde{T} \mathbb{X}_{0}\bigr) -\mathbb{E}\bigl(\mathbb{X}_{N+1}' \tilde{T}\mathbb{X}_{N+1}\bigr)=- \mathbb{E}\bigl(\mathbb{X}_{N+1}' \tilde{T}\mathbb{X}_{N+1}\bigr)\leq 0. \end{aligned}$$

Consequently,

$$\begin{aligned} \tilde{\mathcal{Q}}^{1/2}\mathbb{X}_{k}=0, \quad k\geq 0. \end{aligned}$$
(4.9)

Resorting to the method of [31, Theorem 4, Proposition 1], we get that the exact observability of system \((A, \bar{A}, C, \bar{C}, \tilde{\mathcal{Q}}^{1/2})\) is equivalent to the exact observability of the following system:

$$\begin{aligned} \textstyle\begin{cases} \mathbb{X}_{k+1}=\mathbb{A}\mathbb{X}_{k} +\mathbb{C} \mathbb{X}_{k}w_{k}, \\ \mathbb{Y}_{k}=\tilde{\mathcal{Q}}^{1/2}\mathbb{X}_{k}. \end{cases}\displaystyle \end{aligned}$$

Here,

$$ \begin{gathered} \mathbb{A}= \begin{pmatrix} A+B\mathcal{L}& 0 \\ 0 & A+\bar{A}+(B+\bar{B})(\mathcal{L}+\bar{\mathcal{L}}) \end{pmatrix}, \\ \mathbb{C}= \begin{pmatrix} C+D\mathcal{L} & C+\bar{C}+(D+\bar{D})( \mathcal{L}+\bar{\mathcal{L}}) \\ 0 & 0 \end{pmatrix}. \end{gathered} $$

Thus, by (4.9) and the exact observability, we get \(\mathbb{X}_{0}=0\), which is a contradiction. In summary, \(T>0\) and \(T+\bar{T}>0\) hold.

Step 4. From Theorem 3.1, we see \(P_{k}(N)=T_{k}(N)+\hat{P}\), \(\bar{P}_{k}(N)=\bar{T}_{k}(N)+\check{P}\). Recalling the convergence of \(P_{k}(N)\), \(\bar{P}_{k}(N)\), we get \(P=T+\hat{P}\), \(\bar{P}=\bar{T}+\check{P}\). Besides, combining the arbitrariness of \((\hat{P}, \check{P})\) with \(T, \bar{T}>0\), we derive \(P\geq \hat{P}\), \(\bar{P}\geq \check{P}\); namely, \((P,\bar{P})\) is the maximal solution to GAREs (3.2).

Sufficiency. Under (A2) and \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \), assume that GAREs (3.2) have a solution; we shall prove that system (1.1) is \(L^{2}\)-stabilizable. Following the proof of the necessity, we claim that if GAREs (3.2) have a solution \((P,\bar{P})\), then NGAREs (4.7) have a positive definite solution \((T,\bar{T})\). In addition, \(P=T+\hat{P}\), \(\bar{P}=\bar{T}+\check{P}\). Noticing \(\mathcal{K}=\mathcal{L}\), \(\bar{\mathcal{K}}=\bar{\mathcal{L}}\), the stabilization of system (1.1) with \(u_{k}=\mathcal{K}x_{k}+\bar{\mathcal{K}}\mathbb{E}x_{k}\) is equivalent to the stabilization of system (1.1) with \(u_{k}=\mathcal{L}x_{k}+\bar{\mathcal{L}}\mathbb{E}x_{k}\). Together with Remark 3.1, (4.8) can be reformulated as

$$\begin{aligned} \begin{aligned} V_{T}(k,x_{k})-V_{T}(k+1,x_{k+1}) &=\mathbb{E} \left \{ \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix} \right . \\ &\quad {}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix} \right \}\\ &\geq 0, \end{aligned} \end{aligned}$$

which implies that \(V_{T}(k,x_{k})\) is decreasing with respect to k. Along with \(V_{T}(k,x_{k})\geq 0\), we deduce that \(V_{T}(k,x_{k})\) is convergent. Summing both sides of the above equation from m to \(m+N\) and taking the limit, we get

$$\begin{aligned} 0 =&\lim_{m\rightarrow \infty }\mathbb{E}\sum_{k=m}^{m+N} \left [ \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(1)} & (S^{(1)})' \\ S^{(1)} & \lambda ^{(1)} \end{pmatrix} \begin{pmatrix} \mathbb{E}x_{k} \\ \mathbb{E}u_{k} \end{pmatrix} \right . \\ &{}+\left . \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix}' \begin{pmatrix} W^{(2)} & (S^{(2)})' \\ S^{(2)} & \lambda ^{(2)} \end{pmatrix} \begin{pmatrix} x_{k}-\mathbb{E}x_{k} \\ u_{k}-\mathbb{E}u_{k} \end{pmatrix} \right ]. \end{aligned}$$

By a time shift, this yields

$$\begin{aligned} 0 \geq &\lim_{m\rightarrow \infty }\mathbb{E}\bigl[x_{m}'T_{m}(m+N)x_{m} \bigr] + \lim_{m\rightarrow \infty }\bigl[(\mathbb{E}x_{m})' \bar{T}_{m}(m+N) \mathbb{E}x_{m}\bigr] \\ =&\lim_{m\rightarrow \infty }\mathbb{E}\bigl[(x_{m}- \mathbb{E}x_{m})'T_{0}(N) (x_{m}- \mathbb{E}x_{m})\bigr] +\lim_{m\rightarrow \infty } \bigl[(\mathbb{E}x_{m})'\bigl(T_{0}(N)+ \bar{T}_{0}(N)\bigr)\mathbb{E}x_{m}\bigr]\geq 0. \end{aligned}$$

We further obtain

$$\begin{aligned} \lim_{m\to +\infty } \mathbb{E}\bigl[(x_{m}- \mathbb{E}x_{m})'(x_{m}- \mathbb{E}x_{m})\bigr]=0, \qquad \lim_{m\to +\infty } ( \mathbb{E}x_{m})' \mathbb{E}x_{m}=0, \end{aligned}$$

hence, \(\lim_{m\to +\infty } \mathbb{E}(x_{m}'x_{m})=0\). Namely, system (1.1) is \(L^{2}\)-stabilizable. Using Theorem 4.1, the optimal controller and optimal value can be designed as (4.1)–(4.2), respectively. □

Remark 4.2

Our results extend and improve those in [16]. Besides, Theorem 4.2 makes it clear that the solvability of GAREs (3.2) with indefinite weighting matrices is equivalent to the solvability of NGAREs (4.7) with positive semi-definite weighting matrices. Simultaneously, it indicates that stabilization problems with indefinite weighting matrices can be reduced to the positive semi-definite case. In this sense, these conclusions provide fresh ideas for studying indefinite MF-LQ optimal control problems, especially their stabilization problems.

Remark 4.3

Theorem 4.2 presents a necessary and sufficient stabilization condition for the indefinite MF-LQ optimal control problem, whereas in most previous works stabilization was a precondition for the indefinite control problem. In other words, those works only discussed the existence of a stabilizing solution to the GAREs under the assumption of stabilizability.

5 Characterizing MF-LQ problem via SDP

In this section, we present some results with respect to the SDP problem. Meanwhile, we establish the relations among the GAREs, the SDP, and the MF-LQ optimal control problems.

Definition 5.1

Let a vector \(a=(a_{1}, a_{2}, \ldots , a_{m})'\in \mathbb{R}^{m}\) and matrices \(F_{0}, F_{1}, \ldots , F_{m} \in \mathbb{T}^{n}\) be given. The following optimization problem:

$$\begin{aligned}& \begin{gathered} \min\quad a'x, \\ \text{subject to}\quad F(x)=F_{0}+\sum_{i=1}^{m}x_{i}F_{i} \geq 0, \end{gathered} \end{aligned}$$
(5.1)

is called an SDP. Besides, the dual problem of SDP (5.1) is defined as

$$\begin{aligned}& \max\quad {-}\mathbf{Tr}(F_{0}Z), \\& \text{subject to}\quad Z \in \mathbb{T}^{n},\qquad \mathbf{Tr}(ZF_{i})=a_{i}, \quad i=1,2, \ldots ,m, Z\geq 0. \end{aligned}$$
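As a sanity check on this primal-dual pair, note the weak-duality inequality \(a'x\geq -\mathbf{Tr}(F_{0}Z)\) for any primal-feasible x and dual-feasible Z, since \(a'x+\mathbf{Tr}(F_{0}Z)=\mathbf{Tr}(F(x)Z)\geq 0\). A minimal numerical sketch in Python/NumPy on a toy instance of our own (the data \(a\), \(F_{0}\), \(F_{1}\) below are illustrative, not from the paper):

```python
import numpy as np

# Toy SDP instance (our own illustration): n = 2, m = 1.
# Primal: min a'x  s.t.  F(x) = F0 + x1*F1 = diag(1+x1, 1-x1) >= 0.
a  = np.array([1.0])
F0 = np.eye(2)
F1 = np.diag([1.0, -1.0])

x_feas = np.array([-1.0])            # primal feasible: F(x) = diag(0, 2) >= 0
Fx = F0 + x_feas[0] * F1
assert np.all(np.linalg.eigvalsh(Fx) >= -1e-12)

# Dual: max -Tr(F0 Z)  s.t.  Tr(Z F1) = a1, Z >= 0.
Z_feas = np.diag([1.0, 0.0])         # dual feasible: Tr(Z F1) = 1 = a1
assert abs(np.trace(Z_feas @ F1) - a[0]) < 1e-12

primal_val = a @ x_feas                # a'x = -1
dual_val   = -np.trace(F0 @ Z_feas)    # -Tr(F0 Z) = -1
# Weak duality: a'x + Tr(F0 Z) = Tr(F(x) Z) >= 0.
assert primal_val >= dual_val - 1e-12  # here equality: this pair is optimal
```

For this instance the primal and dual values coincide at −1, so the chosen pair is in fact optimal for both problems.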

Specifically, we consider the following SDP problem:

$$\begin{aligned}& \begin{gathered} \max\quad \mathbf{Tr}(P)+\mathbf{Tr}(\bar{P}), \\ \text{subject to}\quad (P, \bar{P})\in \boldsymbol{\varGamma }. \end{gathered} \end{aligned}$$
(5.2)

Theorem 5.1

SDP problem (5.2) admits a unique optimal solution, which is also the maximal solution to GAREs (3.2).

Proof

Let \((P_{\ast },\bar{P}_{\ast })\) be an optimal solution to SDP problem (5.2). In order to show that it is indeed a maximal solution, denote

$$\begin{aligned} \textstyle\begin{cases} \mathcal{K}_{1}=-(R+B'P_{\ast }B+\sigma ^{2}D'P_{\ast }D)^{\dagger } (B'P_{ \ast }A+\sigma ^{2}D'P_{\ast }C+G^{\prime }), \\ \mathcal{K}_{2}=-[R+\bar{R}+(B+\bar{B})'(P_{\ast }+\bar{P}_{\ast })(B+ \bar{B}) +\sigma ^{2}(D+\bar{D})'P_{\ast }(D+\bar{D})]^{\dagger } \\ \hphantom{\mathcal{K}_{2}=}{}\times [(B+\bar{B})'(P_{\ast }+\bar{P}_{\ast })(A+\bar{A}) + \sigma ^{2}(D+\bar{D})'P_{\ast }(C+\bar{C})+G'+\bar{G}']. \end{cases}\displaystyle \end{aligned}$$

By a simple calculation, we have

$$\begin{aligned} \textstyle\begin{cases} (A+B\mathcal{K}_{1})'P_{\ast }(A+B\mathcal{K}_{1}) +\sigma ^{2}(C+D \mathcal{K}_{1})'P_{\ast }(C+D\mathcal{K}_{1})+\mathcal{K}_{1}'G' +G \mathcal{K}_{1} \\ \quad =P_{\ast }-Q-\mathcal{K}_{1}'R\mathcal{K}_{1}, \\ (G'+\bar{G}')\mathcal{K}_{2} +\mathcal{K}_{2}'(G+\bar{G}) +\sigma ^{2}(C+ \bar{C}+D\mathcal{K}_{2}+\bar{D}\mathcal{K}_{2})'P_{\ast } (C+\bar{C}+D \mathcal{K}_{2}+\bar{D}\mathcal{K}_{2}) \\ \qquad {}+(A+\bar{A}+B\mathcal{K}_{2}+\bar{B}\mathcal{K}_{2})'(P_{\ast }+\bar{P}_{ \ast }) (\bar{A}+A+B\mathcal{K}_{2}+\bar{B}\mathcal{K}_{2})\\ \quad =P_{\ast }+ \bar{P}_{\ast }-Q-\bar{Q}-\mathcal{K}_{2}'(R+\bar{R})\mathcal{K}_{2}. \end{cases}\displaystyle \end{aligned}$$

On the other hand, \(u_{\ast }=\mathcal{K}_{1}x_{k}+(\mathcal{K}_{2}-\mathcal{K}_{1}) \mathbb{E}x_{k}\) is a stabilizing control. By the proof of [11, Theorem 6.7], \((P_{\ast },\bar{P}_{\ast })\) is an upper bound of the set Γ; in other words, \((P_{\ast },\bar{P}_{\ast })\) is the maximal solution. Furthermore, the uniqueness of the solution to SDP (5.2) follows from the maximality. The proof is completed. □
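The gains \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) above are explicit pseudoinverse formulas, so they are easy to evaluate numerically. A sketch in Python/NumPy, using the data of Example 6.1 below together with \(\sigma ^{2}=1\) (our assumption, since Example 6.1 does not list \(\sigma ^{2}\)); with these inputs it reproduces, up to rounding, the gains \(\mathcal{K}^{\ast }\) and \(\bar{\mathcal{K}}^{\ast }=\mathcal{K}_{2}-\mathcal{K}_{1}\) reported there:

```python
import numpy as np

# Data of Example 6.1 (sigma^2 = 1 is our assumption).
A  = np.array([[1.0, 0.5], [0.0, 0.8]]);  Ab = np.array([[0.5, 0.6], [0.0, 0.8]])
B  = np.array([[1.0, 1.0], [0.0, 1.0]]);  Bb = np.eye(2)
C  = np.array([[1.0, 0.6], [0.0, 1.0]]);  Cb = np.array([[1.0, 0.0], [0.4, 1.0]])
D  = np.array([[1.0, 1.0], [0.0, 1.0]]);  Db = np.array([[1.0, 0.0], [0.5, 0.6]])
G  = np.eye(2); Gb = np.eye(2); R = np.eye(2); Rb = np.eye(2)
P  = 2.0 * np.eye(2); Pb = 3.0 * np.eye(2); s2 = 1.0

# K1 = -(R + B'PB + s2 D'PD)^dagger (B'PA + s2 D'PC + G')
K1 = -np.linalg.pinv(R + B.T @ P @ B + s2 * D.T @ P @ D) \
     @ (B.T @ P @ A + s2 * D.T @ P @ C + G.T)

# K2 uses the barred sums A+Ab, ..., and P + Pb.
As, Bs, Cs, Ds = A + Ab, B + Bb, C + Cb, D + Db
K2 = -np.linalg.pinv(R + Rb + Bs.T @ (P + Pb) @ Bs + s2 * Ds.T @ P @ Ds) \
     @ (Bs.T @ (P + Pb) @ As + s2 * Ds.T @ P @ Cs + G.T + Gb.T)

Kbar = K2 - K1   # feedback u_k = K1 x_k + (K2 - K1) E x_k
```

With these data, `K1` agrees with \(\mathcal{K}^{\ast }\) and `Kbar` with \(\bar{\mathcal{K}}^{\ast }\) of Example 6.1 to the printed precision.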

Corollary 5.1

The following statements are equivalent: (i) \(\boldsymbol{\varGamma }\neq \emptyset \); (ii) There is a solution to GAREs (3.2).

Moreover, when either (i) or (ii) holds, GAREs (3.2) have a maximal solution \((P^{\ast },\bar{P}^{\ast })\), which is the unique optimal solution to SDP problem (5.2).

Corollary 5.2

Let (A1) and \(\boldsymbol{\varGamma } \neq \emptyset \) hold; then Problem \(\mathcal{B}\) admits an optimal control \(u_{k}=\mathcal{K}^{\ast }x_{k}+\bar{\mathcal{K}}^{\ast }\mathbb{E}x_{k}\), \(k \in \tilde{\mathbb{N}}\), and the optimal value is given by \(V(x_{0})=\mathbb{E}(x_{0}'P^{\ast }x_{0})+(\mathbb{E}x_{0})'\bar{P}^{\ast }\mathbb{E}x_{0}\). Here, \((P^{\ast }, \bar{P}^{\ast })\) is the maximal solution to GAREs (3.2), which is the unique optimal solution to SDP problem (5.2).

6 Numerical results

In this section, we give some numerical examples to illustrate our main results.

Example 6.1

Consider system (1.1) and cost functional (1.2) with

$$\begin{aligned}& A= \begin{pmatrix} 1.0 & 0.5 \\ 0.0 & 0.8 \end{pmatrix}, \qquad \bar{A}= \begin{pmatrix} 0.5 & 0.6 \\ 0.0 & 0.8 \end{pmatrix}, \qquad B= \begin{pmatrix} 1.0 & 1.0 \\ 0.0 & 1.0 \end{pmatrix}, \\& \bar{B}= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}; \qquad C= \begin{pmatrix} 1.0 & 0.6 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{C}= \begin{pmatrix} 1.0 & 0.0 \\ 0.4 & 1.0 \end{pmatrix}, \\& D= \begin{pmatrix} 1.0 & 1.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{D}= \begin{pmatrix} 1.0 & 0.0 \\ 0.5 & 0.6 \end{pmatrix}; \qquad G= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}, \\& \bar{G}= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad R= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{R}= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}; \\& Q= \begin{pmatrix} 3.0000 & 0.0000 \\ 0.0000 & 2.8476 \end{pmatrix}, \qquad \bar{Q}= \begin{pmatrix} 3.5845 & 0.3870 \\ 0.3870 & 2.9894 \end{pmatrix}. \end{aligned}$$

Solving SDP problem (5.2) via Matlab, we obtain

$$ P^{\ast }= \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad \bar{P}^{\ast }= \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}. $$

Furthermore, we have

$$\begin{aligned}& W^{(1)}= \begin{pmatrix} 5 & 2.2 \\ 2.2 & 5.3476 \end{pmatrix}, \qquad \lambda ^{(1)}= \begin{pmatrix} 5 & 4 \\ 4 & 9 \end{pmatrix}, \qquad S^{(1)}= \begin{pmatrix} 5 & 2.2 \\ 4 & 6.8 \end{pmatrix}, \\& W^{(2)}= \begin{pmatrix} 21.1545 & 12.637 \\ 12.637 & 28.407 \end{pmatrix}, \qquad \lambda ^{(2)}= \begin{pmatrix} 30.5 & 15.6 \\ 15.6 & 34.12 \end{pmatrix}, \qquad S^{(2)}= \begin{pmatrix} 25.4 & 15.4 \\ 12.78 & 31.1 \end{pmatrix}. \end{aligned}$$

It is clear that \(\mathbf{Ker}(\lambda ^{(1)})\subseteq \mathbf{Ker}B\cap \mathbf{Ker}D\), \(\mathbf{Ker}(\lambda ^{(2)})\subseteq \mathbf{Ker}(B+\bar{B})\cap \mathbf{Ker}(D+\bar{D})\). Using Lemma 2.1, it is easy to see that

$$ \begin{pmatrix} W^{(i)} & (S^{(i)})' \\ S^{(i)} & \lambda ^{(i)} \end{pmatrix}\geq 0, \quad i=1,2. $$

In other words, \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \). Thus, by Theorem 4.2, we deduce that system (1.1) is \(L^{2}\)-stabilizable, and the optimal controller is designed by

$$\begin{aligned} u_{k}=\mathcal{K}^{\ast }x_{k}+ \bar{\mathcal{K}}^{\ast }\mathbb{E}x_{k}, \quad k\in \tilde{ \mathbb{N}}, \end{aligned}$$
(6.1)

where

$$ \mathcal{K}^{\ast }= \begin{pmatrix} -1.0000 & 0.2552 \\ 0.000 & -0.8690 \end{pmatrix}, \qquad \bar{\mathcal{K}}^{\ast }= \begin{pmatrix} 0.1631 & -0.3057 \\ 0.0081 & -0.0194 \end{pmatrix}. $$

A curve of \(\mathbb{E}|x_{k}|^{2}\) under control (6.1) is shown in Fig. 1. As expected, the curve is convergent.
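The curve in Fig. 1 can be reproduced without Monte Carlo sampling: under \(u_{k}=\mathcal{K}^{\ast }x_{k}+\bar{\mathcal{K}}^{\ast }\mathbb{E}x_{k}\), the mean \(m_{k}=\mathbb{E}x_{k}\) and the deviation covariance \(Y_{k}=\mathbb{E}[(x_{k}-m_{k})(x_{k}-m_{k})']\) of system (1.1) obey closed deterministic recursions, and \(\mathbb{E}|x_{k}|^{2}=\operatorname{tr}(Y_{k})+m_{k}'m_{k}\). A sketch in Python/NumPy, taking \(\sigma ^{2}=1\) as an assumption (the martingale-difference noise enters only through \(\sigma ^{2}\)):

```python
import numpy as np

# Example 6.1 data and the stabilizing gains (sigma^2 = 1 assumed).
A  = np.array([[1.0, 0.5], [0.0, 0.8]]);  Ab = np.array([[0.5, 0.6], [0.0, 0.8]])
B  = np.array([[1.0, 1.0], [0.0, 1.0]]);  Bb = np.eye(2)
C  = np.array([[1.0, 0.6], [0.0, 1.0]]);  Cb = np.array([[1.0, 0.0], [0.4, 1.0]])
D  = np.array([[1.0, 1.0], [0.0, 1.0]]);  Db = np.array([[1.0, 0.0], [0.5, 0.6]])
K  = np.array([[-1.0, 0.2552], [0.0, -0.8690]])
Kb = np.array([[0.1631, -0.3057], [0.0081, -0.0194]])
s2 = 1.0

# Mean:      m_{k+1} = (A+Ab + (B+Bb)(K+Kb)) m_k.
# Deviation: z_{k+1} = F z_k + (Gm z_k + H m_k) w_k with E[z_k] = 0, so the
# cross terms vanish and Y_{k+1} = F Y F' + s2*(Gm Y Gm' + H m m' H').
M  = A + Ab + (B + Bb) @ (K + Kb)
F  = A + B @ K
Gm = C + D @ K
H  = C + Cb + (D + Db) @ (K + Kb)

m = np.array([-5.0, 20.0])     # deterministic x_0 as in Fig. 1, so Y_0 = 0
Y = np.zeros((2, 2))
traj = []
for _ in range(50):
    traj.append(np.trace(Y) + m @ m)   # E|x_k|^2
    Y = F @ Y @ F.T + s2 * (Gm @ Y @ Gm.T + np.outer(H @ m, H @ m))
    m = M @ m
```

Starting from \(\mathbb{E}|x_{0}|^{2}=425\), the sequence decays rapidly to zero, consistent with the convergent curve in Fig. 1.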

Figure 1

Curve of \(\mathbb{E}|x_{k}|^{2}\) with initial state \(x_{0}=[-5\ 20]^{\prime }\)

Example 6.2

Consider system (1.1) and cost functional (1.2) with

$$\begin{aligned}& A= \begin{pmatrix} 1.0 & 2 \\ 0.0 & 2.5 \end{pmatrix}, \qquad \bar{A}= \begin{pmatrix} 2.0 & 0.5 \\ 0.0 & 0.5 \end{pmatrix}, \qquad B= \begin{pmatrix} 1.0 & 1.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{B}= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}; \\& C= \begin{pmatrix} 1.0 & 0.5 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{C}= \begin{pmatrix} 1.0 & 0.0 \\ 0.5 & 1.0 \end{pmatrix}, \qquad D= \begin{pmatrix} 2.0 & 1.0 \\ 0.0 & 2.0 \end{pmatrix}, \qquad \bar{D}= \begin{pmatrix} 1.0 & 0.0 \\ 0.5 & 2.0 \end{pmatrix}; \\& G= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad \bar{G}= \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{pmatrix}, \qquad R= \begin{pmatrix} -1.0 & 0.0 \\ 0.0 & -1.0 \end{pmatrix}, \\& \bar{R}= \begin{pmatrix} -1.0 & 0.0 \\ 0.0 & -1.0 \end{pmatrix};\qquad Q= \begin{pmatrix} 3.2667 & -1.0333 \\ -1.0333 & 0.5667 \end{pmatrix}, \qquad \bar{Q}= \begin{pmatrix} 6.3067 & -2.3170 \\ -2.3170 & -3.1531 \end{pmatrix}. \end{aligned}$$

By virtue of SDP theory, we have

$$ P^{\ast }= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \bar{P}^{\ast }= \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix}. $$

Furthermore,

$$\begin{aligned}& W^{(1)}= \begin{pmatrix} 1 & 2.5 \\ 2.5 & 10.5 \end{pmatrix}, \qquad \lambda ^{(1)}= \begin{pmatrix} 4 & 3 \\ 3 & 6 \end{pmatrix}, \qquad S^{(1)}= \begin{pmatrix} 4 & 3 \\ 2 & 8 \end{pmatrix}, \\& W^{(2)}= \begin{pmatrix} 53.8234 & 36.1497 \\ 36.1497 & 41.6636 \end{pmatrix}, \qquad \lambda ^{(2)}= \begin{pmatrix} 27.25 & 15 \\ 15 & 40 \end{pmatrix}, \qquad S^{(2)}= \begin{pmatrix} 38.25 & 27.5 \\ 19 & 73 \end{pmatrix}. \end{aligned}$$

Similar to Example 6.1, it is easy to verify \(\bar{\boldsymbol{\varGamma }}=\emptyset \). In this case,

$$ \mathcal{K}^{\ast }= \begin{pmatrix} -1.2000 & 0.4000 \\ 0.2667 & -1.5333 \end{pmatrix}, \qquad \bar{\mathcal{K}}^{\ast }= \begin{pmatrix} -0.2393 & -0.7526 \\ -0.2019 & 0.3406 \end{pmatrix}. $$

According to Theorem 4.2, we know that system (1.1) is not \(L^{2}\)-stabilizable. A curve of \(\mathbb{E}|x_{k}|^{2}\) under control (6.1) is shown in Fig. 2. As expected, the curve is not convergent.
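The non-convergence in Fig. 2 can also be checked spectrally: under \(u_{k}=\mathcal{K}^{\ast }x_{k}+\bar{\mathcal{K}}^{\ast }\mathbb{E}x_{k}\), the deviation covariance evolves through the positive linear map \(Y\mapsto FYF'+\sigma ^{2}GYG'\) with \(F=A+B\mathcal{K}^{\ast }\), \(G=C+D\mathcal{K}^{\ast }\), whose matrix representation under vectorization is \(F\otimes F+\sigma ^{2}G\otimes G\); a spectral radius of at least one rules out \(L^{2}\)-stability. A sketch in Python/NumPy, again assuming \(\sigma ^{2}=1\):

```python
import numpy as np

# Example 6.2 data with the gain K*, and sigma^2 = 1 (our assumption).
A = np.array([[1.0, 2.0], [0.0, 2.5]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.5], [0.0, 1.0]])
D = np.array([[2.0, 1.0], [0.0, 2.0]])
K = np.array([[-1.2000, 0.4000], [0.2667, -1.5333]])
s2 = 1.0

F = A + B @ K          # closed-loop deviation drift
G = C + D @ K          # closed-loop deviation diffusion
# Matrix representation of Y -> F Y F' + s2 * G Y G' via vec(FYF') = (F⊗F)vec(Y).
L = np.kron(F, F) + s2 * np.kron(G, G)
rho = max(abs(np.linalg.eigvals(L)))   # spectral radius; > 1 here
```

With these data `rho` exceeds one (already \(\rho (F)>1\)), so the second moments diverge, matching Fig. 2.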

Figure 2

Curve of \(\mathbb{E}|x_{k}|^{2}\) with initial state \(x_{0}=[15\ 0.1]^{\prime }\)

To further illustrate Theorem 4.2, we give two more examples.

Example 6.3

Consider system (1.1) and cost functional (1.2) with

$$\begin{aligned}& A=1.2, \qquad \bar{A}=0.3, \qquad B=0.5, \qquad \bar{B}=0.1, \qquad C=1, \qquad \bar{C}=0.5, \\& D=0.8, \qquad \bar{D}=0.2;\qquad Q=2, \qquad \bar{Q}=1, \qquad R=1, \qquad \bar{R}=1, \\& G=1, \qquad \bar{G}=1, \qquad \sigma ^{2}=1, \qquad x_{0} \sim N(0, 1). \end{aligned}$$

Likewise, by virtue of the SDP theory, we deduce that \(P^{\ast }=1.5625\), \(\bar{P}^{\ast }=0.7722\). Then

$$ \begin{gathered} W^{(1)}=4.25, \qquad \lambda ^{(1)}=2.390625, \qquad S^{(1)}=3.1875, \\ W^{(2)}=9.434,\qquad\lambda ^{(2)}=8.815575, \qquad S^{(2)}=6.44498. \end{gathered} $$

Since \(\bar{\boldsymbol{\varGamma }}\neq \emptyset \), by Theorem 4.2, there exists a unique optimal controller that stabilizes system (1.1) and minimizes cost functional (1.2); it is given by \(u_{k}=-1.3333x_{k}-0.1305\mathbb{E}x_{k}\), \(k\geq 0\). Under this optimal controller, the simulation of the system state and the curve of \(\mathbb{E}|x_{k}|^{2}\) are shown in Fig. 3. As expected, the curve is convergent.
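In the scalar case, the stability claim can be verified by two elementary contraction conditions coming from the standard mean and variance recursions for (1.1) under mean-field feedback: the mean obeys \(m_{k+1}=(A+\bar{A}+(B+\bar{B})(\mathcal{K}+\bar{\mathcal{K}}))m_{k}\), and the deviation variance obeys \(Y_{k+1}=((A+B\mathcal{K})^{2}+\sigma ^{2}(C+D\mathcal{K})^{2})Y_{k}+\sigma ^{2}(\cdot )m_{k}^{2}\), so both coefficients must have modulus less than one. A quick check for Example 6.3:

```python
# Example 6.3 (scalar): closed-loop stability check.
A, Ab, B, Bb = 1.2, 0.3, 0.5, 0.1
C, Cb, D, Db = 1.0, 0.5, 0.8, 0.2
K, Kb, s2 = -1.3333, -0.1305, 1.0

mean_coef = A + Ab + (B + Bb) * (K + Kb)               # ~0.62, |.| < 1
var_coef  = (A + B * K) ** 2 + s2 * (C + D * K) ** 2   # ~0.289 < 1
```

Both coefficients are strictly inside the unit interval, so \(\mathbb{E}x_{k}\to 0\) and the deviation variance contracts, consistent with the convergent curve in Fig. 3.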

Figure 3

Curve of \(\mathbb{E}|x_{k}|^{2}\) with initial state \(x_{0}\sim N(0,1)\)

Example 6.4

Consider system (1.1) and cost functional (1.2) with

$$\begin{aligned}& A=2, \qquad \bar{A}=1,\qquad B=2, \qquad \bar{B}=1, \qquad C=1, \qquad \bar{C}=1, \\& D=-1,\qquad \bar{D}=1;\qquad Q=2, \qquad \bar{Q}=2, \qquad R=1, \qquad \bar{R}=1, \\& G=1, \qquad \bar{G}=1, \qquad \sigma ^{2}=1, \qquad x_{0} \sim N(0, 1). \end{aligned}$$

By GAREs (3.2), we get that the equation for P has two distinct negative roots, \(P=-0.1604\) and \(P=-0.5670\), while \(\bar{P}\) has a negative root \(\bar{P}=-0.1548\). Then we have the following two sets of solutions:

$$\begin{aligned}& W^{(1)}_{1}=1.3584, \qquad \lambda ^{(1)}_{1}=0.198, \qquad S^{(1)}_{1}=0.6792, \\& W^{(2)}_{1}=1.249, \qquad \lambda ^{(2)}_{1}=-0.8368, \qquad S^{(2)}_{1}=-0.8368; \\& W^{(1)}_{2}=-0.835, \qquad \lambda ^{(1)}_{2}=-1.835, \qquad S^{(1)}_{2}=-0.701, \\& W^{(2)}_{2}=-4.0424, \qquad \lambda ^{(2)}_{2}=-4.4962, \qquad S^{(2)}_{2}=-4.4962. \end{aligned}$$

In both cases, \(\bar{\boldsymbol{\varGamma }}=\emptyset \). Furthermore, when \(P=-0.1604\), we get \(\mathcal{K}=-0.1027\), \(\bar{\mathcal{K}}=-0.5975\); when \(P=-0.5670\), we get \(\mathcal{K}=-1.2863\), \(\bar{\mathcal{K}}=-19.0322\). Thus, the controllers are \(u_{k}=-0.1027x_{k}-0.5975\mathbb{E}x_{k}\) and \(u_{k}=-1.2863x_{k}-19.0322 \mathbb{E}x_{k}\), respectively. Simulation results for the curve of \(\mathbb{E}|x_{k}|^{2}\) under the corresponding controllers are shown in Fig. 4 and Fig. 5, respectively. As expected, the curves are not convergent.
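For Example 6.4, the same scalar coefficients explain the divergence: for both candidate gains, the second-moment coefficient \((A+B\mathcal{K})^{2}+\sigma ^{2}(C+D\mathcal{K})^{2}\) of the deviation recursion exceeds one, and since \(x_{0}\sim N(0,1)\) has positive variance, \(\mathbb{E}|x_{k}|^{2}\) cannot converge. A quick check:

```python
# Example 6.4 (scalar): neither candidate gain is mean-square stabilizing.
A, B, C, D, s2 = 2.0, 2.0, 1.0, -1.0, 1.0

# Deviation second-moment coefficients for K = -0.1027 and K = -1.2863.
coefs = [(A + B * K) ** 2 + s2 * (C + D * K) ** 2 for K in (-0.1027, -1.2863)]
```

Both entries of `coefs` are greater than one (about 4.44 and 5.56), so the deviation variance grows geometrically under either controller, matching the divergent curves in Fig. 4 and Fig. 5.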

Figure 4

Curve of \(\mathbb{E}|x_{k}|^{2}\) with initial state \(x_{0}\sim N(0,1)\)

Figure 5

Curve of \(\mathbb{E}|x_{k}|^{2}\) with initial state \(x_{0}\sim N(0,1)\)

7 Concluding remarks

We have investigated the exact observability of linear stochastic time-invariant systems in this work. How to extend the various definitions to linear stochastic time-varying systems is a meaningful topic that merits further discussion. Compared with the time-invariant case, defining exact observability for time-varying stochastic systems is considerably more difficult and sophisticated. In addition, the corresponding necessary and sufficient stabilization conditions also deserve to be studied systematically. We therefore intend to investigate linear time-varying systems in depth in future work.