1 Introduction and main result

Let us consider the control problem

$$\begin{aligned} {\mathbf {x}}' = A_0(t)\,{\mathbf {x}}+ B_0(t)\,{\mathbf {u}}\end{aligned}$$
(1.1)

for \({\mathbf {x}}\in {\mathbb {R}}^n\) and \({\mathbf {u}}\in {\mathbb {R}}^m\), the quadratic form (or supply rate)

$$\begin{aligned} {\mathcal {Q}}(t,{\mathbf {x}},{\mathbf {u}}):=\frac{1}{2}\left( \langle \,{\mathbf {x}}, G_0(t)\,{\mathbf {x}}\,\rangle +2\,\langle \,{\mathbf {x}},g_0(t)\,{\mathbf {u}}\,\rangle +\langle \,{\mathbf {u}}, R_0(t)\,{\mathbf {u}}\,\rangle \right) , \end{aligned}$$
(1.2)

and a point \({\mathbf {x}}_0\in {\mathbb {R}}^n\). We represent by \({\mathcal {P}}_{{\mathbf {x}}_0}\) the set of pairs \(({\mathbf {x}},{\mathbf {u}}):[0,\infty )\rightarrow {\mathbb {R}}^n\times {\mathbb {R}}^m\) of measurable functions satisfying (1.1) with \({\mathbf {x}}(0)={\mathbf {x}}_0\), and consider the problem of minimizing the quadratic functional

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0}:{\mathcal {P}}_{{\mathbf {x}}_0}\rightarrow {\mathbb {R}}\cup \{\pm \infty \}, \qquad ({\mathbf {x}},{\mathbf {u}})\mapsto \int _0^{\infty } {\mathcal {Q}}(t,{\mathbf {x}}(t),{\mathbf {u}}(t))\,\mathrm{d}t. \end{aligned}$$
(1.3)

The functions \(A_0\), \(B_0\), \(G_0\), \(g_0\), and \(R_0\) are assumed to be bounded and uniformly continuous on \({\mathbb {R}}\), with values in the sets of real matrices of the appropriate dimensions; \(G_0\) and \(R_0\) are symmetric, with \(R_0(t)\ge \rho I_m\) for a common \(\rho >0\) and all \(t\in {\mathbb {R}}\); and \(\langle \,\cdot ,\cdot \,\rangle \) represents the Euclidean inner product in \({\mathbb {R}}^n\) or \({\mathbb {R}}^m\). A pair \(({\mathbf {x}},{\mathbf {u}})\in {\mathcal {P}}_{{\mathbf {x}}_0}\) is admissible for \({\mathcal {I}}_{{\mathbf {x}}_0}\) if \(({\mathbf {x}},{\mathbf {u}})\in L^2([0,\infty ),{\mathbb {R}}^n)\times L^2([0,\infty ),{\mathbb {R}}^m)\). That is, \({\mathbf {u}}:[0,\infty )\rightarrow {\mathbb {R}}^m\) belongs to \(L^2([0,\infty ),{\mathbb {R}}^m)\), \({\mathbf {x}}:[0,\infty )\rightarrow {\mathbb {R}}^n\) solves (1.1) for this control with \({\mathbf {x}}(0)={\mathbf {x}}_0\), and \({\mathbf {x}}\) belongs to \(L^2([0,\infty ),{\mathbb {R}}^n)\). In particular, \({\mathcal {I}}_{{\mathbf {x}}_0}({\mathbf {x}},{\mathbf {u}})\in {\mathbb {R}}\) if the pair \(({\mathbf {x}},{\mathbf {u}})\) is admissible.
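As a purely illustrative aside (not part of the argument), the following Python sketch evaluates the supply rate (1.2) and approximates the cost (1.3) on a truncated horizon \([0,T]\) for a state fed back through a linear gain; the constant coefficients and the gain K are hypothetical, chosen only to make the objects (1.1)–(1.3) concrete.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant coefficients (n = 2, m = 1), chosen only to make
# (1.1)-(1.3) concrete; R0 >= rho I_m with rho = 1.
A0 = lambda t: np.array([[0.0, 1.0], [-1.0, -0.5]])
B0 = lambda t: np.array([[0.0], [1.0]])
G0 = lambda t: np.eye(2)
g0 = lambda t: np.zeros((2, 1))
R0 = lambda t: np.array([[1.0]])

def Q(t, x, u):
    """Supply rate (1.2): (1/2)(<x, G0 x> + 2 <x, g0 u> + <u, R0 u>)."""
    return 0.5 * (x @ G0(t) @ x + 2 * x @ g0(t) @ u + u @ R0(t) @ u)

def truncated_cost(x0, K, T=50.0):
    """Integrate (1.1) with the (hypothetical) feedback u = K x and accumulate
    the running cost, approximating (1.3) on the finite horizon [0, T]."""
    def rhs(t, z):
        x, u = z[:-1], K @ z[:-1]
        return np.concatenate([A0(t) @ x + B0(t) @ u, [Q(t, x, u)]])
    sol = solve_ivp(rhs, (0.0, T), np.concatenate([x0, [0.0]]), rtol=1e-8)
    return sol.y[-1, -1]

print(truncated_cost(np.array([1.0, 0.0]), K=np.array([[-1.0, -1.0]])))
```

For an admissible pair, letting T grow gives a numerical approximation of \({\mathcal {I}}_{{\mathbf {x}}_0}({\mathbf {x}},{\mathbf {u}})\).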

The questions we will consider are classical in control theory: the existence of admissible pairs and, if so, the possibility of minimizing the functional \({\mathcal {I}}_{{\mathbf {x}}_0}({\mathbf {x}},{\mathbf {u}})\) evaluated on the set of these pairs. The existence of a finite minimum is not a trivial question even if admissible pairs exist: since no assumption is made on the sign of \({\mathcal {Q}}\), the infimum can be \(-\,\infty \).

This infinite horizon problem was considered for T-periodic coefficients \(A_0\), \(B_0\), \(G_0\), \(g_0\) and \(R_0\) by Yakubovich in [26, 27], where he explains the origin of the problem and summarizes the previously known results, providing numerous references. Under the hypothesis that at least one admissible pair exists for every \({\mathbf {x}}_0\in {\mathbb {R}}^n\), it is proved in [26, 27] that the solvability of the minimization problem for every \({\mathbf {x}}_0\in {\mathbb {R}}^n\) is equivalent to seven other conditions, formulated in terms of the properties of a 2n-dimensional periodic linear Hamiltonian system determined by the coefficients of the minimization problem:

$$\begin{aligned} \left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] ' =H_0(t)\,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] \end{aligned}$$
(1.4)

with

$$\begin{aligned} H_0:=\left[ \begin{array}{cc} A_0-B_0\,R_0^{-1}g_0^{{\mathrm{T}}}&{}\quad B_0\,R_0^{-1}B_0^{{\mathrm{T}}}\\ G_0-g_0\,R_0^{-1}g_0^{{\mathrm{T}}} &{}\quad -A_0^{{\mathrm{T}}}+g_0\,R_0^{-1}B_0^{{\mathrm{T}}} \end{array}\right] . \end{aligned}$$

Among all these equivalences, one of the most meaningful reads as follows: there exists a unique admissible pair providing the minimum value of \({\mathcal {I}}_{{\mathbf {x}}_0}\) for every \({\mathbf {x}}_0\in {\mathbb {R}}^n\) if and only if, in Yakubovich’s words, the frequency condition and the nonoscillation condition are satisfied. That is, if (1.4) admits exponential dichotomy on \({\mathbb {R}}\) and, in addition, the Lagrange plane \(l^+\) composed of those initial data \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \) giving rise to a solution bounded at \(+\,\infty \) admits a basis whose vectors are the columns of a matrix \(\left[ \begin{array}{c} I_n \\ M^+\end{array}\right] \). Here, \(I_n\) is the \(n\times n\) identity matrix, and \(M^+\) turns out to be a symmetric matrix (since \(l^+\) is a Lagrange plane). In addition, the minimizing pair \((\widetilde{{\mathbf {x}}},\widetilde{{\mathbf {u}}})\) can be obtained from the solution \(\left[ \begin{array}{c} \widetilde{{\mathbf {x}}}(t)\\ \widetilde{{\mathbf {y}}}(t)\end{array}\right] \) of (1.4) with initial data \(\left[ \begin{array}{c} {\mathbf {x}}_0\\ M^+{\mathbf {x}}_0\end{array}\right] \) via the feedback rule

$$\begin{aligned} \widetilde{{\mathbf {u}}}(t)=R_0^{-1}(t)\,B_0^{{\mathrm{T}}}(t)\,\widetilde{{\mathbf {y}}}(t)- R_0^{-1}(t)\,g_0^{{\mathrm{T}}}(t)\,\widetilde{{\mathbf {x}}}(t), \end{aligned}$$
(1.5)

and the value of the minimum is \({\mathcal {I}}_{{\mathbf {x}}_0}(\widetilde{{\mathbf {x}}},\widetilde{{\mathbf {u}}})=-(1/2)\,{\mathbf {x}}_0^{{\mathrm{T}}}M^+{\mathbf {x}}_0\).
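Before moving on, here is a hedged numerical sketch of this circle of ideas in the simplest special case of constant coefficients (a particular case of periodic ones), where \(l^+\) is just the stable invariant subspace of the constant matrix \(H_0\); the double-integrator data are hypothetical and serve only to illustrate the frequency and nonoscillation conditions, the matrix \(M^+\), and the value of the minimum.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_are

# Hypothetical constant data (n = 2, m = 1): a double integrator with G0 = I,
# g0 = 0, R0 = 1.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
B0 = np.array([[0.0], [1.0]])
G0 = np.eye(2)
g0 = np.zeros((2, 1))
R0 = np.array([[1.0]])
Ri = np.linalg.inv(R0)

# The Hamiltonian matrix of (1.4), built as in the displayed formula for H0.
H0 = np.block([[A0 - B0 @ Ri @ g0.T,           B0 @ Ri @ B0.T],
               [G0 - g0 @ Ri @ g0.T, -A0.T + g0 @ Ri @ B0.T]])

# Frequency condition (constant case): no eigenvalues on the imaginary axis;
# l+ is then the stable invariant subspace of H0, obtained here from an
# ordered real Schur form.
_, Z, sdim = schur(H0, sort='lhp')
assert sdim == 2
L1, L2 = Z[:2, :sdim], Z[2:, :sdim]

# Nonoscillation condition: L1 is invertible, and M+ = L2 L1^{-1} is symmetric.
M_plus = L2 @ np.linalg.inv(L1)
print("asymmetry of M+:", np.max(np.abs(M_plus - M_plus.T)))      # ~ 0

# Cross-check (possible here because g0 = 0): M+ = -P with P the stabilizing
# Riccati solution, so the minimum -(1/2) x0' M+ x0 equals (1/2) x0' P x0.
P = solve_continuous_are(A0, B0, G0, R0)
x0 = np.array([1.0, 0.0])
print(-0.5 * x0 @ M_plus @ x0, 0.5 * x0 @ P @ x0)                 # both ~ 0.866
```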

In this paper we go deeper into the analysis of these problems: we will establish conditions, including the existence of exponential dichotomy, under which it is possible to characterize the set of those \({\mathbf {x}}_0\in {\mathbb {R}}^n\) for which admissible pairs exist, in terms of a relation between \({\mathbf {x}}_0\) and \(l^+\); we will check that for \({\mathbf {x}}_0\) in this set the minimum is finite; and we will determine the value of the minimum as well as an admissible pair on which it is attained.

But we will not limit ourselves to the periodic case. Yakubovich’s Frequency Theorem was later extended to the general nonautonomous case of bounded and uniformly continuous coefficients: in Fabbri et al. [4], six equivalences were proved in the case of recurrent coefficients; in Johnson and Núñez [12], the theorem was proved in the general (not necessarily recurrent) case; and in Chapter 7 of Johnson et al. [14] the list of eight equivalences was completed by adding two more, related to the rotation number. When dealing with this general case, the problem is analyzed by including it in a family of problems of the same type by means of the so-called hull or Bebutov construction, which we will recall in Sect. 2.1. This procedure provides the following families of control systems and functionals:

$$\begin{aligned}&{\mathbf {x}}'=\,A(\omega {\cdot }t)\,{\mathbf {x}}+B(\omega {\cdot }t)\,{\mathbf {u}}, \end{aligned}$$
(1.6)
$$\begin{aligned}&{\mathcal {Q}}_\omega (t,{\mathbf {x}},{\mathbf {u}}):=\,\frac{1}{2}\left( \langle \,{\mathbf {x}}, G(\omega {\cdot }t)\,{\mathbf {x}}\,\rangle +2\langle \,{\mathbf {x}},g(\omega {\cdot }t)\,{\mathbf {u}}\,\rangle +\langle \,{\mathbf {u}}, R(\omega {\cdot }t)\,{\mathbf {u}}\,\rangle \right) , \end{aligned}$$
(1.7)
$$\begin{aligned}&{\mathcal {I}}_{{\mathbf {x}}_0,\omega }:{\mathcal {P}}_{{\mathbf {x}}_0,\omega }\rightarrow {\mathbb {R}}\cup \{\pm \infty \},\qquad ({\mathbf {x}},{\mathbf {u}})\mapsto \int _0^{\infty } {\mathcal {Q}}_\omega (t,{\mathbf {x}}(t),{\mathbf {u}}(t))\,\mathrm{d}t \end{aligned}$$
(1.8)

for \(\omega \in \varOmega \) and \({\mathbf {x}}_0\in {\mathbb {R}}^n\). Here, \(\varOmega \) is a compact metric space admitting a continuous flow \(\sigma :{\mathbb {R}}\times \varOmega \rightarrow \varOmega ,\,(t,\omega )\mapsto \sigma (t,\omega )=:\omega {\cdot }t\); A, B, G, g, and R are bounded and uniformly continuous matrix-valued functions on \(\varOmega \); G and R are symmetric with \(R>0\) (which, since \(\varOmega \) is compact, ensures that \(R(\omega )\ge \rho \,I_m\) for some \(\rho >0\) and all \(\omega \in \varOmega \)); and \({\mathcal {P}}_{{\mathbf {x}}_0,\omega }\) is the set of measurable pairs \(({\mathbf {x}},{\mathbf {u}}):[0,\infty )\rightarrow {\mathbb {R}}^n\times {\mathbb {R}}^m\) solving (1.6) for \(\omega \) with \({\mathbf {x}}(0)={\mathbf {x}}_0\). The admissible pairs are defined for each \(\omega \in \varOmega \) and \({\mathbf {x}}_0\in {\mathbb {R}}^n\) as for the single problem (1.1), and their existence is guaranteed by the following hypothesis (see, e.g., Proposition 7.4 of [14]):

H

    There exists a continuous \(m\times n\) matrix-valued function \(K_0:\varOmega \rightarrow {\mathbb {M}}_{m\times n}({\mathbb {R}})\) such that the family of linear systems

    $$\begin{aligned} {\mathbf {x}}'=(A(\omega {\cdot }t)+ B(\omega {\cdot }t)\,K_0(\omega {\cdot }t))\,{\mathbf {x}},\quad \omega \in \varOmega , \end{aligned}$$

    is of Hurwitz type at \(+\,\infty \).

The definition of the Hurwitz character of the family, related to the concept of exponential dichotomy, is given in Sect. 2. Under this condition, the equivalences are formulated in [4] in terms of the properties of the family of linear Hamiltonian systems

$$\begin{aligned} \left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] '= H(\omega {\cdot }t)\,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array} \right] ,\quad \omega \in \varOmega \end{aligned}$$
(1.9)

given by

$$\begin{aligned} H:=\left[ \begin{array}{cc} A-B\,R^{-1}g^{{\mathrm{T}}}&{}\quad B\,R^{-1}B^{{\mathrm{T}}}\\ G-g\,R^{-1}g^{{\mathrm{T}}} &{}\quad -A^{{\mathrm{T}}} +g\,R^{-1}B^{{\mathrm{T}}} \end{array}\right] . \end{aligned}$$
(1.10)

We will use the notation (1.9)\({}_\omega \) to refer to the system of the family corresponding to the element \(\omega \) of \(\varOmega \), and we will do the same for the remaining equations defined along the orbits of \(\varOmega \). The results of [4, 12] show, in particular, the equivalence of the following situations if H holds:

(1)

    The family of linear Hamiltonian systems (1.9) admits an exponential dichotomy over \(\varOmega \), and the (symmetric) Weyl matrix-valued function \(M^+\) globally exists; that is, each one of the systems of the family admits an exponential dichotomy on \({\mathbb {R}}\) and for any \(\omega \in \varOmega \) the Lagrange plane \(l^+(\omega )\) of the solutions bounded at \(+\,\infty \) admits as basis the column vectors of a matrix \(\left[ \begin{array}{c} I_n\\ M^+(\omega )\end{array}\right] \) (see Sect. 2).

(2)

    The minimization problem is solvable for each \(\omega \in \varOmega \) and \({\mathbf {x}}_0\in {\mathbb {R}}^n\).

In addition, in this case the minimizing pair \((\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )\) comes from the solution \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}_\omega (t)\\ \widetilde{{\mathbf {y}}}_\omega (t)\end{array}\right] \) of (1.9)\(_\omega \) with initial data \(\left[ \begin{array}{c} {\mathbf {x}}_0\\ M^+(\omega )\,{\mathbf {x}}_0\end{array}\right] \) via the feedback rule

$$\begin{aligned} \widetilde{{\mathbf {u}}}_\omega (t)= R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,\widetilde{{\mathbf {y}}}_\omega (t)- R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,\widetilde{{\mathbf {x}}}_\omega (t), \end{aligned}$$
(1.11)

and the value of the minimum is \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }(\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )=-\,(1/2)\,{\mathbf {x}}_0^{{\mathrm{T}}}M^+(\omega )\,{\mathbf {x}}_0\). And, in fact, H holds when there is exponential dichotomy and \(M^+\) globally exists. (We point out that rule (1.11)\(_\omega \) provides a pair “state, control” solving (1.6)\(_\omega \) whenever we have a solution of (1.9)\(_\omega \).)

Among the remaining equivalences, we want to call attention to another one, formulated in terms of the rotation number of the family (1.9), which holds when \(\varOmega \) admits an ergodic measure m with total support. If so, and always under hypothesis H, the previous situation is equivalent to

(3)

    the rotation number of family (1.9) for m is zero.

Now, we will formulate our main result. It is fundamental to note that hypothesis H is not in force: otherwise, the assumptions of the theorem would imply the global solvability of the family of minimization problems. Recall that we have represented by \(l^+(\omega )\) the Lagrange plane of the solutions bounded at \(+\,\infty \) in the case of exponential dichotomy over \(\varOmega \) of the family (1.9). And recall also that the conditions assumed on A, B, G, g and R (described after (1.8)) are in force.

Theorem 1.1

Let us assume that \(\varOmega \) is minimal, that there exists \(\omega _0\in \varOmega \) such that the \(n\times m\) matrix \(B(\omega _0)\) has full rank, that the family of systems (1.9) admits exponential dichotomy over \(\varOmega \), and that there exists a \(\sigma \)-ergodic measure on \(\varOmega \) for which the corresponding rotation number is zero. Let \(l^+(\omega )\) be the Lagrange plane of the solutions of (1.9)\(_\omega \) bounded at \(+\,\infty \). And let us fix \(\omega \in \varOmega \) and \({\mathbf {x}}_0\in {\mathbb {R}}^n\). Then,

(i)

    there exist admissible pairs \(({\mathbf {x}},{\mathbf {u}})\) for the functional \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\) given by (1.8)\(_\omega \) if and only if there exists \({\mathbf {y}}_0\in {\mathbb {R}}^n\) such that \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\).

(ii)

    In this case, the infimum of \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\) is finite. In addition, if the columns of the \(2n\times n\) matrix \(\left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \) are a basis of \(l^+(\omega )\) and \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] =\left[ \begin{array}{c} L_{\omega ,1}\,{\mathbf {c}}\\ L_{\omega ,2}\,{\mathbf {c}}\end{array}\right] \) for a vector \({\mathbf {c}}\in {\mathbb {R}}^n\), then the infimum is given by \(-(1/2)\,{\mathbf {c}}^{{\mathrm{T}}} L^{{\mathrm{T}}}_{\omega ,2}\,L_{\omega ,1}\,{\mathbf {c}}\), and a minimizing pair \((\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )\in {\mathcal {P}}_{{\mathbf {x}}_0,\omega }\) is obtained from the solution \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}_\omega (t)\\ \widetilde{{\mathbf {y}}}_\omega (t)\end{array}\right] \) of (1.9)\(_\omega \) with initial data \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \) via the feedback rule (1.11)\(_\omega \).

(iii)

If the situation in (i) does not hold, then \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }({\mathbf {x}},{\mathbf {u}})=\infty \) for any pair \(({\mathbf {x}},{\mathbf {u}})\in {\mathcal {P}}_{{\mathbf {x}}_0,\omega }\).

Section 2 contains the notions and basic properties required to fully understand the hypotheses and statements of Theorem 1.1, whose proof is given in Sect. 3. In that section, we will also analyze the hypotheses; we will explain how these hypotheses can be formulated for the initial problem, for which we give a less general version of the main theorem; and we will show autonomous and nonautonomous scenarios in which admissible pairs do not always exist.

2 Preliminaries

All the contents of this preliminary section can be found in Johnson et al. [14], together with a quite exhaustive list of references for the origin of the results here summarized.

Let us fix some notation. The set of \(d\times m\) matrices with real entries is represented by \({\mathbb {M}}_{d\times m}({\mathbb {R}})\). As usual, \({\mathbb {R}}^d:={\mathbb {M}}_{d\times 1}({\mathbb {R}})\), and \(A^{{\mathrm{T}}}\) is the transpose of the matrix A. The subset \({\mathbb {S}}_d({\mathbb {R}})\subset {\mathbb {M}}_{d\times d}({\mathbb {R}})\) consists of the symmetric matrices. The expressions \(M>0\), \(M\ge 0\), \(M<0\), and \(M\le 0\) for \(M\in {\mathbb {S}}_d({\mathbb {R}})\) mean that M is positive definite, positive semidefinite, negative definite, and negative semidefinite, respectively. If \(M:\varOmega \rightarrow {\mathbb {S}}_d({\mathbb {R}})\) is a map, \(M>0\) means that \(M(\omega )>0\) for all the elements \(\omega \in \varOmega \), and \(M<0\), \(M\ge 0\), and \(M\le 0\) have the analogous meaning. The meaning of \(M_1>M_2\), \(M_1\ge M_2\), \(M_1<M_2\), and \(M_1\le M_2\) is also clear. We represent by \(I_d\) and \(0_d\) the identity and zero \(d\times d\) matrices, by \({\mathbf {0}}\) the null vector of \({\mathbb {R}}^d\) for all d, and by \(\left\| \cdot \right\| \) the Euclidean norm in \({\mathbb {R}}^d\).

A real Lagrange plane is an n-dimensional linear subspace of \({\mathbb {R}}^{2n}\) such that \([{\mathbf {x}}_1^{{\mathrm{T}}}\,{\mathbf {y}}_1^{{\mathrm{T}}}]\,J\,\left[ \begin{array}{c}{\mathbf {x}}_2\\ {\mathbf {y}}_2\end{array}\right] =0\) for any pair of its elements \(\left[ \begin{array}{c}{\mathbf {x}}_1\\ {\mathbf {y}}_1\end{array}\right] \) and \(\left[ \begin{array}{c}{\mathbf {x}}_2\\ {\mathbf {y}}_2\end{array}\right] \), where \(J:=\left[ \begin{array}{cc} 0_n&{}-I_n\\ I_n&{}0_n\end{array}\right] \). A Lagrange plane l is represented by \(\left[ \begin{array}{c} L_1\\ L_2\end{array}\right] \) (which we write as \(l\equiv \left[ \begin{array}{c} L_1\\ L_2\end{array}\right] \)) if the column vectors of this matrix form a basis of the n-dimensional linear space l. In this case, \(L_2^{{\mathrm{T}}}L_1=L_1^{{\mathrm{T}}}L_2\). Note that l can also be represented by \(\left[ \begin{array}{c} I_n\\ M\end{array}\right] \) if and only if \(\det L_1\ne 0\), in which case the matrix \(M=L_2L_1^{-1}\) is symmetric.
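A minimal numerical check of these two facts (the J-isotropy of a Lagrange plane and the symmetry of \(M=L_2L_1^{-1}\)), with a randomly generated symmetric matrix M standing in for a generic example:

```python
import numpy as np

n = 3
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

# A hypothetical Lagrange plane: the span of the columns of [I_n; M], M symmetric.
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n)); M = (M + M.T) / 2
L = np.vstack([np.eye(n), M])
print(np.allclose(L.T @ J @ L, 0))               # isotropy: z1^T J z2 = 0 on l

# Any other representation [L1; L2] = [I_n; M] C (C invertible) gives back the
# same symmetric matrix M = L2 L1^{-1}.
C = rng.standard_normal((n, n))                  # almost surely invertible
L1, L2 = np.eye(n) @ C, M @ C
M_rec = L2 @ np.linalg.inv(L1)
print(np.allclose(M_rec, M_rec.T), np.allclose(M_rec, M))
```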

The next concepts and properties are basic in topological dynamics and measure theory. A (real and continuous) global flow on a complete metric space \(\varOmega \) is a continuous map \(\sigma :{\mathbb {R}}\times \varOmega \rightarrow \varOmega ,\; (t,\omega )\mapsto \sigma (t,\omega )\) such that \(\sigma _0=\text {Id}\) and \(\sigma _{s+t}=\sigma _t\circ \sigma _s\) for each \(s,t\in {\mathbb {R}}\), where \(\sigma _t(\omega )=\sigma (t,\omega )\). The \(\sigma \)-orbit of a point \(\omega \in \varOmega \) is the set \(\{\sigma _t(\omega )\,|\;t\in {\mathbb {R}}\}\). A subset \(\varOmega _1\subset \varOmega \) is \(\sigma \)-invariant if \(\sigma _t(\varOmega _1)=\varOmega _1\) for every \(t\in {\mathbb {R}}\). A \(\sigma \)-invariant subset \(\varOmega _1\subset \varOmega \) is minimal if it is compact and does not properly contain any nonempty compact \(\sigma \)-invariant set. And the continuous flow \((\varOmega ,\sigma )\) is minimal if \(\varOmega \) itself is minimal.

Let m be a normalized Borel measure on \(\varOmega \), i.e., a finite regular measure defined on the Borel subsets of \(\varOmega \) and with \(m(\varOmega )=1\). The measure m is \(\sigma \)-invariant if \(m(\sigma _t(\varOmega _1))=m(\varOmega _1)\) for every Borel subset \(\varOmega _1\subset \varOmega \) and every \(t\in {\mathbb {R}}\). A \(\sigma \)-invariant measure m is \(\sigma \)-ergodic if \(m(\varOmega _1)=0\) or \(m(\varOmega _1)=1\) for every \(\sigma \)-invariant subset \(\varOmega _1\subset \varOmega \). A real continuous flow \((\varOmega ,\sigma )\) admits at least one ergodic measure whenever \(\varOmega \) is compact. And the topological support of m, \({{\,\mathrm{Supp}\,}}m\), is the complement of the largest open set \(O\subset \varOmega \) for which \(m(O)=0\). We say that m has total support if its topological support is \(\varOmega \). If \(\varOmega \) is minimal, then any \(\sigma \)-ergodic measure has total support.

In the rest of the paper, \((\varOmega ,\sigma )\) will be a minimal continuous global flow on a compact metric space, and we will denote \(\omega {\cdot }t:=\sigma (t,\omega )\). We will work with families of linear systems of type (1.9) depending on continuous matrix-valued functions A, B, G, g and R with the properties defined in the Introduction. Since \(H:\varOmega \rightarrow \mathfrak {sp}(n,{\mathbb {R}})\), where

$$\begin{aligned} \mathfrak {sp}(n,{\mathbb {R}}):=\{H\in {\mathbb {M}}_{2n\times 2n}({\mathbb {R}})\,|\;H^{{\mathrm{T}}} J+JH=0_{2n}\} \end{aligned}$$

for \(J=\left[ \begin{array}{cc} 0_n&{}-I_n\\ I_n&{}0_n\end{array}\right] \), the systems of the family are said to be Hamiltonian. Let \(U_{\mathrm{H}}(t,\omega )\) denote the fundamental matrix solution of system (1.9)\(_\omega \) with \(U_{\mathrm{H}}(0,\omega )=I_{2n}\). The family (1.9) induces a real continuous global flow on the linear bundle \(\varOmega \times {\mathbb {R}}^{2n}\),

$$\begin{aligned} \tau _{\mathrm{H}}:{\mathbb {R}}\times \varOmega \times {\mathbb {R}}^{2n}\rightarrow \varOmega \times {\mathbb {R}}^{2n},\quad \left( t,\omega ,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] \right) \mapsto \left( \omega {\cdot }t,U_{\mathrm{H}}(t,\omega )\,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] \right) . \end{aligned}$$
(2.1)
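As an illustration of the objects \(U_{\mathrm{H}}\) and \(\tau _{\mathrm{H}}\), the following sketch integrates the fundamental matrix solution for a hypothetical quasi-periodic Hamiltonian coefficient (so that \(\omega {\cdot }t\) is realized as a time shift) and checks numerically that \(U_{\mathrm{H}}(t,\omega )\) is symplectic, which reflects the Hamiltonian structure behind (2.1).

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 1
J = np.array([[0.0, -1.0], [1.0, 0.0]])

# Hypothetical quasi-periodic coefficient H(w.t) realized as a time shift; for
# n = 1, sp(1,R) consists of the trace-free 2x2 matrices.
def H(t):
    return np.array([[0.0, 1.0 + 0.5 * np.cos(t)],
                     [-1.0 - 0.5 * np.cos(np.sqrt(2.0) * t), 0.0]])

def rhs(t, u_flat):
    U = u_flat.reshape(2 * n, 2 * n)
    return (H(t) @ U).ravel()

# Fundamental matrix solution U_H(t, w) with U_H(0, w) = I_{2n}.
sol = solve_ivp(rhs, (0.0, 20.0), np.eye(2 * n).ravel(), rtol=1e-10, atol=1e-12)
U = sol.y[:, -1].reshape(2 * n, 2 * n)

# Hamiltonian structure: U_H(t, w)^T J U_H(t, w) = J (symplectic) for every t.
print(np.max(np.abs(U.T @ J @ U - J)))      # ~ 0 up to integration error
```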

An analogous statement holds for any family of linear systems

$$\begin{aligned} {\mathbf {w}}'=S(\omega {\cdot }t)\,{\mathbf {w}},\quad \omega \in \varOmega \end{aligned}$$
(2.2)

for \({\mathbf {w}}\in {\mathbb {R}}^d\) if \(S:\varOmega \rightarrow {\mathbb {M}}_{d\times d}({\mathbb {R}})\) is continuous. We represent the corresponding fundamental matrix solution as \(U_S:{\mathbb {R}}\times \varOmega \rightarrow {\mathbb {M}}_{d\times d}({\mathbb {R}})\), and the flow that it provides as \(\tau _S:{\mathbb {R}}\times \varOmega \times {\mathbb {R}}^d\rightarrow \varOmega \times {\mathbb {R}}^d\).

In the rest of this section, we recall some basic concepts and associated properties related to families of the forms (2.2) and (1.9). Some of them are directly related to the statements of our main result (as in the case of the exponential dichotomy and the rotation number), while others are used as tools in its proof (as is the case for the uniform weak disconjugacy property).

We begin with the exponential dichotomy of a family of linear systems over \(\varOmega \), which is one of the hypotheses of our main theorem.

Definition 2.1

The family (2.2) has exponential dichotomy over \(\varOmega \) if there exist constants \(\eta \ge 1\) and \(\beta >0\) and a splitting \(\varOmega \times {\mathbb {R}}^d=L^+\oplus L^-\) of the bundle into the Whitney sum of two closed subbundles such that

  • \(L^+\) and \(L^-\) are invariant under the flow \(\tau _S\) induced by (2.2) on \(\varOmega \times {\mathbb {R}}^d\); that is, if \((\omega ,{\mathbf {w}})\) belongs to \(L^+\) (or to \(L^-\)), so does \((\omega {\cdot }t,U_S(t,\omega )\,{\mathbf {w}})\) for all \(t\in {\mathbb {R}}\) and \(\omega \in \varOmega \).

  • \(\left\| U_S(t,\omega )\,{\mathbf {w}}\right\| \le \eta \,{\mathrm{e}}^{-\beta t}\left\| {\mathbf {w}}\right\| \quad \) for every \(t\ge 0\) and \((\omega ,{\mathbf {w}})\in L^+\).

  • \(\left\| U_S(t,\omega )\,{\mathbf {w}}\right\| \le \eta \,{\mathrm{e}}^{\beta t}\left\| {\mathbf {w}}\right\| \quad \;\;\,\) for every \(t\le 0\) and \((\omega ,{\mathbf {w}})\in L^-\).

In the case that \(L^+=\varOmega \times {\mathbb {R}}^d\), the family (2.2) is said to be of Hurwitz type at \(+\,\infty \).
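In the autonomous case the definition is easy to visualize: a single system \({\mathbf {w}}'=S\,{\mathbf {w}}\) with constant S has exponential dichotomy on \({\mathbb {R}}\) exactly when S has no purely imaginary eigenvalues, and \(L^+\), \(L^-\) are then the stable and unstable invariant subspaces. The sketch below (with a hypothetical matrix S) computes these subspaces and checks the decay estimates of Definition 2.1 at a sample time; the Hurwitz case corresponds to all eigenvalues having negative real part.

```python
import numpy as np
from scipy.linalg import schur, expm

# Autonomous illustration of Definition 2.1: w' = S w with a hypothetical
# constant S having no purely imaginary eigenvalues.
S = np.array([[-1.0, 3.0, 0.0],
              [ 0.0, -2.0, 1.0],
              [ 0.0,  0.0, 0.5]])
assert not np.any(np.isclose(np.linalg.eigvals(S).real, 0.0))

_, Zs, k = schur(S, sort='lhp')      # first k Schur vectors span the stable subspace
_, Zu, j = schur(S, sort='rhp')      # first j Schur vectors span the unstable one
L_plus, L_minus = Zs[:, :k], Zu[:, :j]

# Decay estimates of Definition 2.1, checked at the sample time t = 10:
t = 10.0
print(np.linalg.norm(expm(S * t) @ L_plus, 2))     # small: forward decay on L+
print(np.linalg.norm(expm(-S * t) @ L_minus, 2))   # small: backward decay on L-
# Hurwitz type at +infinity corresponds to k = d (all eigenvalues in Re < 0).
```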

In general, we will omit the words “over \(\varOmega \)” when the family (2.2) has exponential dichotomy, since no confusion arises. Let us summarize in the next list of remarks some well-known fundamental properties satisfied by a family of linear systems which has exponential dichotomy.

Remarks 2.2

1.

If \(\varOmega \) is minimal (as we assume in Theorem 1.1), the exponential dichotomy of the family (2.2) over \(\varOmega \) is equivalent to the exponential dichotomy on \({\mathbb {R}}\) of any one of its systems (see, e.g., [2] for the definition of this classical concept). This property is proved in [18, Theorem 2 and Section 3]. In addition, the exponential dichotomy of the family is equivalent to the unboundedness of every nontrivial solution of any one of the systems, as proved in Theorem 10.2 of [20].

2.

Suppose that the family (2.2) has exponential dichotomy. Then there exists \(\delta >0\) such that if \(T:\varOmega \rightarrow {\mathbb {M}}_{d\times d}({\mathbb {R}})\) is a continuous map and \(\max _{\omega \in \varOmega }\left\| S(\omega )-T(\omega )\right\| <\delta \) (where \(\left\| \cdot \right\| \) denotes the Euclidean operator norm), then the family \({\mathbf {w}}'=T(\omega {\cdot }t)\,{\mathbf {w}},\;\omega \in \varOmega \) also has exponential dichotomy. This well-known robustness property is a consequence of the Sacker–Sell Spectral Theorem (Theorem 6 of [19]).

3.

    Assume that the family of linear systems is of Hamiltonian type, as in the case of (1.9), and that it has exponential dichotomy. Then, the sections

    $$\begin{aligned} l^\pm (\omega ):=\left\{ \left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] \in {\mathbb {R}}^{2n}\,\bigg |\;\left( \omega ,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array}\right] \right) \in L^\pm \right\} \end{aligned}$$
    (2.3)

    are real Lagrange planes. In addition,

    $$\begin{aligned} \begin{aligned} l^\pm (\omega )&=\left\{ {\mathbf {z}}\in {\mathbb {R}}^{2n}\,|\;\lim _{t\rightarrow \pm \infty }\left\| U_{\mathrm{H}}(t,\omega )\,{\mathbf {z}}\right\| =0\right\} \\&=\left\{ {\mathbf {z}}\in {\mathbb {R}}^{2n}\,|\;\sup _{\pm t\in [0,\infty )}\left\| U_{\mathrm{H}}(t,\omega )\,{\mathbf {z}}\right\| <\infty \right\} . \end{aligned} \end{aligned}$$

    These properties are proved, for example, in Section 1.4 of [14].

4.

    Also in the Hamiltonian case, assume that for all \(\omega \in \varOmega \) the Lagrange plane \(l^+(\omega )\) can be represented by the matrix \(\left[ \begin{array}{c} I_n\\ M^+(\omega )\end{array}\right] \). Or, equivalently, that for all \(\omega \in \varOmega \) the Lagrange plane \(l^+(\omega )\) can be represented by a matrix \(\left[ \begin{array}{c} L_{\omega ,1}^+\\ L_{\omega ,2}^+\end{array}\right] \) with \(\det L_{\omega ,1}^+\ne 0\) (so that \(M^+(\omega )=L_{\omega ,2}^+\,(L_{\omega ,1}^+)^{-1}\)). In this case, \(M^+:\varOmega \rightarrow {\mathbb {S}}_n({\mathbb {R}})\) is a continuous matrix-valued function, and it is known as one of the Weyl functions for (1.9). In this situation, we say that the Weyl function \(M^+\) globally exists. (The other Weyl function is \(M^-\), associated with the subbundle \(L^-\), and it satisfies the same properties if it exists.)

The other fundamental hypothesis of our main theorem refers to the value of the rotation number of the family (1.9), whose definition we recall now. This object depends on a given \(\sigma \)-ergodic measure on \(\varOmega \). Among the many equivalent definitions of this quantity, we give one which extends the one that is possibly best known in dimension 2 (see [7, 9]). We write as \(U_{\mathrm{H}}(t,\omega )=\left[ \begin{array}{cc} U_1(t,\omega )&{}U_3(t,\omega )\\ U_2(t,\omega )&{}U_4(t,\omega )\end{array}\right] \) the matrix-valued solution of (1.9)\(_\omega \) with \(U_{\mathrm{H}}(0,\omega )=I_{2n}\). And \(\arg :{\mathbb {C}}\rightarrow {\mathbb {R}}\) is the continuous branch of the argument of a complex number for which \(\arg 1=0\).

Definition 2.3

Let m be a \(\sigma \)-ergodic measure on \(\varOmega \). The rotation number \(\alpha (m)\) of the family of linear Hamiltonian systems (1.9) with respect to m is the value of

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\,\arg \det (U_1(t,\omega )-iU_2(t,\omega )) \end{aligned}$$

for m-a.a. \(\omega \in \varOmega \).

It is proved in [17] that this limit exists and takes the same finite value for m-a.a. \(\omega \in \varOmega \). The analysis of \(\alpha (m)\) made in [17] is completed in [3] and in Chapter 2 of [14], where the interested reader may find many other equivalent definitions and an exhaustive description of the properties of the rotation number.
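The following sketch computes the limit of Definition 2.3 numerically for the simplest autonomous example, the harmonic oscillator (n = 1), whose rotation number is 1; the continuous branch of the argument is followed by sampling densely and unwrapping.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rotation number of the harmonic oscillator x' = y, y' = -x (n = 1, constant
# coefficients), computed as in Definition 2.3; the exact value is 1.
n = 1
H = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rhs(t, u_flat):
    return (H @ u_flat.reshape(2 * n, 2 * n)).ravel()

T = 200.0
ts = np.linspace(0.0, T, 20001)
sol = solve_ivp(rhs, (0.0, T), np.eye(2 * n).ravel(), t_eval=ts, rtol=1e-10)

# arg det(U1(t,w) - i U2(t,w)) along a continuous branch (dense sampling + unwrap).
args = []
for col in sol.y.T:
    U = col.reshape(2 * n, 2 * n)
    U1, U2 = U[:n, :n], U[n:, :n]
    args.append(np.angle(np.linalg.det(U1 - 1j * U2)))
print(np.unwrap(args)[-1] / T)      # ~ 1.0
```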

Now, we introduce the concept of uniform weak disconjugacy.

Definition 2.4

The family of linear Hamiltonian systems (1.9) is uniformly weakly disconjugate on \([0,\infty )\) if there exists \(t_0\ge 0\), independent of \(\omega \), such that for every nonzero solution \({\mathbf {z}}(t,\omega )=\left[ \begin{array}{c} {\mathbf {z}}_1(t,\omega )\\ {\mathbf {z}}_2(t,\omega )\end{array}\right] \) of the system (1.9)\(_\omega \) with \({\mathbf {z}}_1(0,\omega )={\mathbf {0}}\), there holds \({\mathbf {z}}_1(t,\omega )\ne {\mathbf {0}}\) for all \(t>t_0\).

In the next remarks, some results proved in [5, 11] and in Chapter 5 of [14] are summarized. Note that the fact that the submatrix \(B\,R^{-1}B^{{\mathrm{T}}}\) of H [see (1.10)] is positive semidefinite is fundamental in what follows.

Remarks 2.5

1.

     If the family (1.9) is uniformly weakly disconjugate, then there exist uniform principal solutions at \(\pm \infty \), \(\left[ \begin{array}{c} L_1^\pm (t,\omega )\\ L_2^\pm (t,\omega )\end{array}\right] \) (see Theorem 5.2 of [11] or Theorem 5.26 of [14]). They are real \(2n\times n\) matrix-valued solutions of (1.9) satisfying the following properties: for all \(t\in {\mathbb {R}}\) and \(\omega \in \varOmega \), the matrices \(L_1^\pm (t,\omega )\) are nonsingular and \(\left[ \begin{array}{c} L_1^\pm (t,\omega )\\ L_2^\pm (t,\omega )\end{array}\right] \) represent Lagrange planes; and for all \(\omega \in \varOmega \),

    $$\begin{aligned} \lim _{\pm t\rightarrow \infty }\left( \int _0^t (L_1^\pm )^{-1}(s,\omega )\,H_3(\omega {\cdot }s)\, ((L_1^\pm )^{{\mathrm{T}}})^{-1}(s,\omega )\,\mathrm{d}s\right) ^{-1} =0_n. \end{aligned}$$

    Here, \(H_3:=B\,R^{-1}B^{{\mathrm{T}}}\) denotes the upper-right block of H in (1.10). In addition, if the matrix-valued functions \(\left[ \begin{array}{c} L_1^\pm (t,\omega )\\ L_2^\pm (t,\omega )\end{array}\right] \) are uniform principal solutions at \(\pm \infty \), then the real matrix-valued functions \(N^\pm :\varOmega \rightarrow {\mathbb {S}}_n({\mathbb {R}}),\;\omega \mapsto N^\pm (\omega ):= L_2^\pm (0,\omega )\,(L_1^\pm (0,\omega ))^{-1}\) are uniquely determined, i.e., they do not depend on the choice of the uniform principal solutions. The functions \(N^\pm \) are called principal functions of (1.9). The interested reader can find in Chapter 5 of [14] a careful description of the uniform principal solutions and the principal functions for families of linear Hamiltonian systems of type (1.9). A more general theory concerning principal solutions under less restrictive assumptions is developed in [23, 24] and references therein. And a recent in-depth analysis of the corresponding Riccati equations (which the principal functions solve) can be found in [22].

2.

    According to Theorem 5.2 of [11] or to Theorem 5.17 of [14], the uniform weak disconjugacy of the family (1.9) ensures the validity of condition D2: for all \(\omega \in \varOmega \) and for any nonzero solution \(\left[ \begin{array}{c} {\mathbf {x}}(t,\omega )\\ {\mathbf {y}}(t,\omega )\end{array}\right] \) of system (1.9)\(_\omega \) with \({\mathbf {x}}(0,\omega )={\mathbf {0}}\), the vector \({\mathbf {x}}(t,\omega )\) does not vanish identically on \([0,\infty )\). It also ensures that the rotation number \(\alpha (m)\) with respect to any ergodic measure m on \(\varOmega \) vanishes: see Theorem 2 of [5] or Theorem 5.67 of [14]. Conversely, if there exists a \(\sigma \)-ergodic measure on \(\varOmega \) with total support (which is always the case if \(\varOmega \) is minimal) for which the rotation number is zero and D2 holds, then the family (1.9) is uniformly weakly disconjugate. This assertion follows from Theorems 5.67 and 5.17 of [14].

2.1 The hull construction

Let us complete Sect. 2 by explaining briefly how we obtain the family of problems given by (1.6), (1.7) and (1.8) from the initial one, given by (1.1), (1.2) and (1.3).

Let us denote \(C_0:=(A_0,B_0,G_0,g_0,R_0)\), so that

$$\begin{aligned} C_0:{\mathbb {R}}\rightarrow {\mathbb {M}}_{n\times n}({\mathbb {R}})\times {\mathbb {M}}_{n\times m}({\mathbb {R}}) \times {\mathbb {M}}_{n\times n}({\mathbb {R}})\times {\mathbb {M}}_{n\times m}({\mathbb {R}})\times {\mathbb {M}}_{m\times m}({\mathbb {R}}), \end{aligned}$$
(2.4)

and define \(\varOmega \) as its hull, that is, the closure of the set of time-translates \(\{C_s\,|\;s\in {\mathbb {R}}\}\), where \(C_s(t):=C_0(t+s)\), with respect to the compact-open topology on the space of continuous functions on \({\mathbb {R}}\). It turns out that \(\varOmega \) is a compact metric space and that the time translation map

$$\begin{aligned} \sigma :{\mathbb {R}}\times \varOmega \rightarrow \varOmega ,\quad (t,\omega )\mapsto \omega {\cdot }t, \end{aligned}$$

where \((\omega {\cdot }t)(s)=\omega (t+s),\) defines a continuous flow on \(\varOmega \). The proofs of these assertions can be found in Sell [21].

Note that any element \(\omega \in \varOmega \) can be written as \(\omega =(\omega _1,\omega _2,\omega _3,\omega _4,\omega _5)\), and that \(\omega {\cdot }t=(\omega _1{\cdot }t,\omega _2{\cdot }t,\omega _3{\cdot }t,\omega _4{\cdot }t,\omega _5{\cdot }t)\) with \((\omega _i{\cdot }t)(s)=\omega _i(t+s)\) for \(i=1,\ldots ,5\). We define

$$\begin{aligned} A:\varOmega \rightarrow {\mathbb {M}}_{n\times n}({\mathbb {R}}),\quad (\omega _1,\omega _2,\omega _3,\omega _4,\omega _5)\mapsto \omega _1(0), \end{aligned}$$

and proceed in a similar way to define \(B:\varOmega \rightarrow {\mathbb {M}}_{n\times m}({\mathbb {R}})\), \(G:\varOmega \rightarrow {\mathbb {M}}_{n\times n}({\mathbb {R}})\), \(g:\varOmega \rightarrow {\mathbb {M}}_{n\times m}({\mathbb {R}})\), and \(R:\varOmega \rightarrow {\mathbb {M}}_{m\times m}({\mathbb {R}})\). It is obvious that A, B, G, g and R are continuous maps on \(\varOmega \). In addition, if \(\widetilde{\omega }=C_0\in \varOmega \), then \(A(\widetilde{\omega }{\cdot }t)= (\widetilde{\omega }_1{\cdot }t)(0)=\widetilde{\omega }_1(t)=A_0(t)\), and analogous equalities hold for B, G, g and R. This means that the family of problems given by (1.6), (1.7) and (1.8) for \(\omega \in \varOmega \) includes the initial one, which corresponds to the element \(C_0\) of \(\varOmega \). Note that G and R are symmetric, and that \(R>0\).
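As a hedged illustration, in the quasi-periodic scalar case \(A_0(t)=f(t,\sqrt{2}\,t)\) the hull can be identified with the 2-torus carrying the Kronecker translation flow, and the evaluation map A becomes evaluation at the base point; the function f below is hypothetical.

```python
import numpy as np

# Sketch of the hull construction for a quasi-periodic scalar coefficient
# A0(t) = f(t, sqrt(2) t): in this special case the hull is identified with the
# 2-torus carrying the Kronecker flow w.t = w + t (1, sqrt(2)).
f = lambda th1, th2: 1.0 + 0.3 * np.cos(th1) + 0.2 * np.sin(th2)
A0 = lambda t: f(t, np.sqrt(2.0) * t)

A = lambda w: f(w[0], w[1])                        # evaluation map, A(w) := w1(0)
flow = lambda w, t: ((w[0] + t) % (2 * np.pi), (w[1] + np.sqrt(2.0) * t) % (2 * np.pi))

# The element w~ = C0 corresponds to the torus point (0, 0), and A(w~ . t) = A0(t).
w_tilde = (0.0, 0.0)
ts = np.linspace(0.0, 10.0, 7)
print(np.allclose([A(flow(w_tilde, t)) for t in ts], [A0(t) for t in ts]))   # True
```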

Additional recurrence properties on \(C_0\) ensure that the flow on \(\varOmega \) is minimal, which is one of the hypotheses of Theorem 1.1. This is, for instance, the case when \(C_0\) is almost periodic or almost automorphic (see, e.g., [6, 25] for the definitions). Note that in the minimal case, \(\varOmega \) is the hull of any of its elements.

3 Proof of Theorem 1.1

Let \((\varOmega ,\sigma )\) be a real continuous global flow on a compact metric space, and let us denote \(\omega {\cdot }t:=\sigma (t,\omega )\). We consider the family of control systems (1.6) and functionals (1.8) under the conditions on the coefficients A, B, G, g and R described in the Introduction (which are guaranteed under the initial conditions on \(A_0\), \(B_0\), \(G_0\), \(g_0\) and \(R_0\), as explained in Sect. 2.1), and consider the minimization problem explained there. We also consider the family of linear Hamiltonian systems defined by (1.9) and (1.10). Throughout this section, we will work under the hypotheses of Theorem 1.1, namely

Hypotheses 3.1

\(\varOmega \) is minimal, there exists \(\omega _0\in \varOmega \) such that the \(n\times m\) matrix \(B(\omega _0)\) has full rank, the family of linear Hamiltonian systems (1.9) has exponential dichotomy over \(\varOmega \), and there exists a \(\sigma \)-ergodic measure m on \(\varOmega \) for which the rotation number \(\alpha (m)\) is zero.

We will analyze the scope of these hypotheses later on. Let us begin with a result which includes the assertions of Theorem 1.1 in the simplest situation, that of \(m\ge n\). This result will play a fundamental role in the general proof.

Theorem 3.2

Assume that Hypotheses 3.1 hold and that \(m\ge n\). Then,

(i)

    The family (1.9) is uniformly weakly disconjugate.

(ii)

    The Weyl functions \(M^\pm :\varOmega \rightarrow {\mathbb {S}}_n({\mathbb {R}})\) associated with the exponential dichotomy of the family (1.9) globally exist, and they agree with the principal functions \(N^\pm \).

(iii)

    There exist admissible pairs for the functional \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\) given by (1.8)\(_\omega \) for all \(({\mathbf {x}}_0,\omega )\in {\mathbb {R}}^n \times \varOmega \), and the corresponding minimization problem is solvable. In addition, the (unique) minimizing pair \((\widetilde{{\mathbf {x}}}_\omega (t),\widetilde{{\mathbf {u}}}_\omega (t))\in {\mathcal {P}}_{{\mathbf {x}}_0,\omega }\) comes from the solution \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}_\omega (t)\\ \widetilde{{\mathbf {y}}}_\omega (t)\end{array}\right] \) of (1.9)\(_\omega \) with initial data \(\left[ \begin{array}{c} {\mathbf {x}}_0\\ M^+(\omega )\,{\mathbf {x}}_0\end{array}\right] \) via the feedback rule (1.11)\(_\omega \), and the value of the minimum is \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }(\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )=-(1/2)\,{\mathbf {x}}_0^{{\mathrm{T}}}M^+(\omega )\,{\mathbf {x}}_0\).

Proof

Let \(U_A(t,\omega _0)\) be the fundamental matrix solution of \({\mathbf {x}}'=A(\omega _0{\cdot }t)\,{\mathbf {x}}\) with \(U_A(0,\omega _0)=I_n\). Since the rank of the \(n\times m\) matrix \(B(\omega _0)\) is n (as one of the Hypotheses 3.1 guarantees), the \(n\times n\) matrix \(B(\omega _0)\,B^{{\mathrm{T}}}(\omega _0)\) is nonsingular. Hence,

$$\begin{aligned} \int _0^\infty U_A^{-1}(t,\omega _0)\,B(\omega _0{\cdot }t)\,B^{{\mathrm{T}}}(\omega _0{\cdot }t)\,(U_A^{-1})^{{\mathrm{T}}}(t,\omega _0)\,\mathrm{d}t>0, \end{aligned}$$

which ensures that the system (1.6)\(_{\omega _0}\) is null controllable (see [1, Theorem 7.2.2]). That is, for any \({\mathbf {x}}_0\in {\mathbb {R}}^n\) there exist a time \(t_0=t_0({\mathbf {x}}_0,\omega _0)\) and an integrable control \({\mathbf {u}}:[0,t_0]\rightarrow {\mathbb {R}}^m\) such that the solution of the corresponding system with \({\mathbf {x}}(0)={\mathbf {x}}_0\) satisfies \({\mathbf {x}}(t_0)={\mathbf {0}}\); in other words, any \({\mathbf {x}}_0\) can be steered to \({\mathbf {0}}\) in finite time by an integrable control \({\mathbf {u}}\).
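The following numerical sketch (not part of the proof) illustrates this step for a hypothetical autonomous pair (A, B): it computes a finite-horizon version of the integral above, checks its positive definiteness, and uses the classical Gramian-based control to steer \({\mathbf {x}}_0\) to \({\mathbf {0}}\).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Hypothetical autonomous pair (A, B), n = 2, m = 1, so U_A(t) = e^{At}.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
t0 = 2.0

UAinv = lambda t: expm(-A * t)                                 # U_A^{-1}(t)
ts = np.linspace(0.0, t0, 2001)
samples = np.array([UAinv(t) @ B @ B.T @ UAinv(t).T for t in ts])
W = trapezoid(samples, ts, axis=0)        # finite-horizon version of the integral
print("min eigenvalue of W:", np.linalg.eigvalsh(W).min())     # > 0

# Classical Gramian-based steering control u(t) = -B^T (U_A^{-1}(t))^T W^{-1} x0,
# which drives x(0) = x0 to x(t0) = 0.
x0 = np.array([1.0, -1.0])
u = lambda t: -B.T @ UAinv(t).T @ np.linalg.solve(W, x0)
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, t0), x0, rtol=1e-9)
print("x(t0) =", sol.y[:, -1])                                 # ~ (0, 0)
```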

According to the results of [10] (see also Theorem 6.4 of [14]), the minimality of \(\varOmega \) and the previous property ensure the uniform null controllability of the family (1.6); that is, all the systems are null controllable and the time \(t_0\) can be chosen independently of \(({\mathbf {x}}_0,\omega )\in {\mathbb {R}}^n\times \varOmega \). As explained in Remark 6.8.1 of [14], this uniform null controllability holds if and only if the family (1.9) satisfies condition D2. On the other hand, Hypotheses 3.1 ensure the existence of an ergodic measure m for which the rotation number vanishes; and, as said in Sect. 2, the minimality of the set \(\varOmega \) ensures that m has total support. In this situation, according to Remark 2.5.2, the family (1.9) is uniformly weakly disconjugate, which proves (i). The simultaneous occurrence of uniform weak disconjugacy and exponential dichotomy ensures (ii): Theorem 5.58 of [14] proves the global existence of the Weyl functions \(M^\pm \), which agree with the principal functions (see Remark 2.5.1). Finally, as recalled in the Introduction, and according to Theorem 4.3 of [4] (see also Remark 7.7 and Theorem 7.10 of [14]), the presence of exponential dichotomy and the global existence of \(M^+\) ensure the assertions in (iii). \(\square \)

Remark 3.3

Note that in the situation of the previous theorem, the global existence of \(M^+\) ensures that if \(l^+(\omega )\equiv \left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \), then \(M^+(\omega )=L_{\omega ,2}\,(L_{\omega ,1})^{-1}\) and, for every \({\mathbf {x}}_0\in {\mathbb {R}}^n\), there exists a unique \({\mathbf {c}}\in {\mathbb {R}}^n\) such that \({\mathbf {x}}_0=L_{\omega ,1}\,{\mathbf {c}}\), and hence a unique \({\mathbf {y}}_0\) such that \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\), namely \({\mathbf {y}}_0=L_{\omega ,2}\,{\mathbf {c}}=M^+(\omega )\,{\mathbf {x}}_0\). Note also that \({\mathbf {x}}_0^{{\mathrm{T}}}M^+(\omega )\,{\mathbf {x}}_0={\mathbf {c}}^{{\mathrm{T}}} L_{\omega ,1}^{{\mathrm{T}}}L_{\omega ,2}\,(L_{\omega ,1})^{-1}L_{\omega ,1}\,{\mathbf {c}}= {\mathbf {c}}^{{\mathrm{T}}}L_{\omega ,1}^{{\mathrm{T}}}L_{\omega ,2}\,{\mathbf {c}}={\mathbf {c}}^{{\mathrm{T}}}L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}\,{\mathbf {c}}\). These are the reasons why we asserted that Theorem 3.2 proves Theorem 1.1 if \(m\ge n\): under its hypotheses, every functional \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\) can be minimized.

The next technical lemma will also be used in the proof of Theorem 1.1. Note that Hypotheses 3.1 are not required.

Lemma 3.4

Let us fix \(\omega \in \varOmega \), and let \(\left[ \begin{array}{c} L_1(t)\\ L_2(t)\end{array}\right] \) be a \(2n\times n\) solution of the linear Hamiltonian system (1.9)\(_\omega \). Let us also fix \({\mathbf {c}}\in {\mathbb {R}}^n\) and define

$$\begin{aligned} \begin{aligned} {\mathbf {x}}(t)&:=L_1(t)\,{\mathbf {c}},\quad {\mathbf {y}}(t):=L_2(t)\,{\mathbf {c}},\\ {\mathbf {u}}(t)&:=R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {y}}(t)-R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {x}}(t),\\ V(t)&:={\mathbf {y}}^{{\mathrm{T}}}(t)\,{\mathbf {x}}(t), \end{aligned} \end{aligned}$$

and \({\mathcal {Q}}_\omega \) by (1.7). Then,

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\,V(t)=2\,{\mathcal {Q}}_\omega (t,{\mathbf {x}}(t),{\mathbf {u}}(t)). \end{aligned}$$

Proof

It follows from the definitions in the statement that

$$\begin{aligned} \begin{aligned} {\mathbf {u}}(t)&=\big (R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,L_2(t)-R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,L_1(t)\big )\,{\mathbf {c}},\\ V(t)&={\mathbf {c}}^{{\mathrm{T}}}L_2^{{\mathrm{T}}}(t)\,L_1(t)\,{\mathbf {c}}. \end{aligned} \end{aligned}$$

A straightforward and simple computation shows that

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\,V(t)&={\mathbf {c}}^{{\mathrm{T}}}\Big (L_1^{{\mathrm{T}}}(t)\,G(\omega {\cdot }t)\,L_1(t)-L_1^{{\mathrm{T}}}(t)\,g(\omega {\cdot }t)\,R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,L_1(t)\\&\quad +\,L_2^{{\mathrm{T}}}(t)\,B(\omega {\cdot }t)\,R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,L_2(t)\Big )\,{\mathbf {c}}. \end{aligned} \end{aligned}$$

And a longer computation shows that \(2\,{\mathcal {Q}}_\omega \) evaluated at \((t,{\mathbf {x}}(t),{\mathbf {u}}(t))\), namely

$$\begin{aligned} 2\,{\mathcal {Q}}_\omega \Big (t,\,L_1(t)\,{\mathbf {c}}, \,\big (R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,L_2(t)-R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,L_1(t)\big )\,{\mathbf {c}}\Big ), \end{aligned}$$

takes the same value.\(\square \)
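Since the identity is purely algebraic once \(\left[ \begin{array}{c} L_1\\ L_2\end{array}\right] \) solves (1.9)\(_\omega \), it can be spot-checked numerically at \(t=0\) with randomly generated constant data; the following sketch does so (the data are hypothetical and only need G and R symmetric with \(R>0\)).

```python
import numpy as np

# Numerical spot check of Lemma 3.4 with random constant data (n = 3, m = 2).
rng = np.random.default_rng(1)
n, m = 3, 2
A, B, G, g = (rng.standard_normal(s) for s in [(n, n), (n, m), (n, n), (n, m)])
G = (G + G.T) / 2
R = rng.standard_normal((m, m)); R = R @ R.T + m * np.eye(m)   # R > 0
Ri = np.linalg.inv(R)

# Hamiltonian coefficient matrix (1.10).
H = np.block([[A - B @ Ri @ g.T,           B @ Ri @ B.T],
              [G - g @ Ri @ g.T, -A.T + g @ Ri @ B.T]])

L = rng.standard_normal((2 * n, n))        # [L1(0); L2(0)], arbitrary
c = rng.standard_normal(n)
x, y = L[:n] @ c, L[n:] @ c
u = Ri @ B.T @ y - Ri @ g.T @ x            # feedback rule (1.11)

dL = H @ L                                 # [L1'(0); L2'(0)]
dV = (dL[n:] @ c) @ x + y @ (dL[:n] @ c)   # d/dt (y^T x) at t = 0
twoQ = x @ G @ x + 2 * x @ g @ u + u @ R @ u
print(np.isclose(dV, twoQ))                # True
```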

Proof of Theorem 1.1

Note that we can assume that \(m<n\), since otherwise Theorem 3.2 proves all the assertions (see also Remark 3.3). We first prove the result in a particular case, from where the general one will follow easily.

Particular case. Let us first assume that \(B(\omega _0)=\left[ \begin{array}{c} B_1(\omega _0)\\ B_2(\omega _0)\end{array}\right] \), where \(B_1(\omega _0)\) is an \(m\times m\) nonsingular matrix. We define the \(n\times n\) matrix-valued functions

$$\begin{aligned} B_\varepsilon :=\left[ \begin{array}{cc} B_1&{}\quad 0\\ B_2&{}\quad \varepsilon \,I_{n-m}\end{array}\right] , \qquad R_\varepsilon :=\left[ \begin{array}{cc}R&{}\quad 0\\ 0&{}\quad \varepsilon \,I_{n-m}\end{array}\right] , \quad \text {and}\quad g_\varepsilon :=\left[ \begin{array}{cc}g&0\end{array}\right] \end{aligned}$$

for \(\varepsilon >0\), where 0 stands for the null matrix of the suitable dimension whenever it appears. Let us consider the families of control problems and functionals given by

$$\begin{aligned}&\displaystyle {\mathbf {x}}'=\,A(\omega {\cdot }t)\,{\mathbf {x}}+B_\varepsilon (\omega {\cdot }t)\,{\mathbf {v}}, \end{aligned}$$
(3.1)
$$\begin{aligned}&\displaystyle {\mathcal {Q}}^\varepsilon _\omega (t,{\mathbf {x}},{\mathbf {v}}):=\,\frac{1}{2}\left( \langle \,{\mathbf {x}}, G(\omega {\cdot }t)\,{\mathbf {x}}\,\rangle +2\langle \,{\mathbf {x}},g_\varepsilon (\omega {\cdot }t)\,{\mathbf {v}}\,\rangle +\langle \,{\mathbf {v}}, R_\varepsilon (\omega {\cdot }t)\,{\mathbf {v}}\,\rangle \right) , \end{aligned}$$
(3.2)
$$\begin{aligned}&\displaystyle {\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }:{\mathcal {P}}^\varepsilon _{{\mathbf {x}}_0,\omega }\rightarrow {\mathbb {R}}\cup \{\pm \infty \},\quad ({\mathbf {x}},{\mathbf {v}})\mapsto \int _0^{\infty } {\mathcal {Q}}^\varepsilon _\omega (t,{\mathbf {x}}(t),{\mathbf {v}}(t))\,\mathrm{d}t, \end{aligned}$$
(3.3)

where \({\mathcal {P}}^\varepsilon _{{\mathbf {x}}_0,\omega }\) is the set of measurable pairs \(({\mathbf {x}},{\mathbf {v}}):[0,\infty )\rightarrow {\mathbb {R}}^n \times {\mathbb {R}}^n\) solving (3.1)\(_\omega \) with \({\mathbf {x}}(0)={\mathbf {x}}_0\). Note that both the state \({\mathbf {x}}\) and the control \({\mathbf {v}}\) are now n-dimensional.

The associated family of linear Hamiltonian systems,

$$\begin{aligned} \left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array} \right] ' =H_\varepsilon (\omega {\cdot }t)\,\left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array} \right] ,\qquad \omega \in \varOmega , \end{aligned}$$
(3.4)

is given by

$$\begin{aligned} \begin{aligned} H_\varepsilon :=&\,\left[ \begin{array}{cc} A-B_\varepsilon \,R_\varepsilon ^{-1}g_\varepsilon ^{{\mathrm{T}}}&{}\quad B_\varepsilon \,R_\varepsilon ^{-1}B_\varepsilon ^{{\mathrm{T}}}\\ G-g_\varepsilon \,R_\varepsilon ^{-1}g_\varepsilon ^{{\mathrm{T}}} &{}\quad -A^{{\mathrm{T}}} +g_\varepsilon \,R_\varepsilon ^{-1}B_\varepsilon ^{{\mathrm{T}}} \end{array}\right] =\left[ \begin{array}{cc} A-B\,R^{-1}g^{{\mathrm{T}}}&{}\quad B_\varepsilon \,R_\varepsilon ^{-1}B_\varepsilon ^{{\mathrm{T}}}\\ G-g\,R^{-1}g^{{\mathrm{T}}} &{}\quad -A^{{\mathrm{T}}} +g\,R^{-1}B^{{\mathrm{T}}} \end{array}\right] . \end{aligned} \end{aligned}$$

The unique submatrix depending on \(\varepsilon \) is

$$\begin{aligned} B_\varepsilon \,R_\varepsilon ^{-1}B_\varepsilon ^{{\mathrm{T}}}=B\,R^{-1}B^{{\mathrm{T}}}+\varepsilon \left[ \begin{array}{cc}0_m&{}\quad 0\\ 0&{}\quad I_{n-m} \end{array}\right] . \end{aligned}$$
(3.5)

Therefore, (3.4) agrees with (1.9) for \(\varepsilon =0\), and hence, according to Hypotheses 3.1, it has exponential dichotomy over \(\varOmega \). The robustness of this property (see Remark 2.2.2) ensures the exponential dichotomy of the family (3.4) if \(\varepsilon \in [0,\varepsilon _0]\) for \(\varepsilon _0>0\) small enough.

Let m be the \(\sigma \)-ergodic measure in \(\varOmega \) appearing in Hypotheses 3.1, and represent by \(\alpha _\varepsilon (m)\) the corresponding rotation number of family (3.4), so that \(\alpha _0(m)=0\). According to Theorem 5.2 of [3] (see also Theorem 2.28 of [14]), the exponential dichotomy forces \(\alpha _\varepsilon (m)\) to take values in a discrete group if \(\varepsilon \in [0,\varepsilon _0]\). On the other hand, Theorem 4.3 of [3] (or Theorem 2.25 of [14]) ensures that \(\alpha _\varepsilon (m)\) varies continuously with respect to \(\varepsilon \). Since \(\alpha _0(m)=0\), we conclude that \(\alpha _\varepsilon (m)=0\) for \(\varepsilon \in [0,\varepsilon _0]\).

Since \(B_\varepsilon (\omega _0)\) is nonsingular for any \(\varepsilon >0\), the problems for \(\varepsilon \in (0,\varepsilon _0]\) fulfill the corresponding Hypotheses 3.1. Hence, Theorem 3.2 ensures the uniform weak disconjugacy of the families (3.4), the global existence of the Weyl function \(M^+_\varepsilon :\varOmega \rightarrow {\mathbb {S}}_n({\mathbb {R}})\) associated with (3.4), and the solvability of the minimization problems for \({\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }\) subject to (3.1). In addition, given \(\omega \in \varOmega \) and \({\mathbf {x}}_0\in {\mathbb {R}}^n\), the pair \((\widetilde{{\mathbf {x}}}^\varepsilon _\omega (t),\widetilde{{\mathbf {v}}}^\varepsilon _\omega (t))\) minimizing \({\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }\) comes from the solution \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}^\varepsilon _\omega (t)\\ \widetilde{{\mathbf {y}}}^\varepsilon _\omega (t)\end{array}\right] \) of (3.4)\(_\omega \) with initial data \(\left[ \begin{array}{c} {\mathbf {x}}_0\\ M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0\end{array}\right] \) via the analogue of the feedback rule (1.11)\(_\omega \), and the value of the minimum is

$$\begin{aligned} {\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }(\widetilde{{\mathbf {x}}}^\varepsilon _\omega ,\widetilde{{\mathbf {v}}}^\varepsilon _\omega )=-\frac{1}{2}\,{\mathbf {x}}_0^{{\mathrm{T}}}M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0. \end{aligned}$$
(3.6)

For further purposes, we point out that if \({\mathbf {v}}:{\mathbb {R}}\rightarrow {\mathbb {R}}^n\) is written as \({\mathbf {v}}(t)=\left[ \begin{array}{c}{\mathbf {u}}(t)\\ {\mathbf {v}}_2(t)\end{array}\right] \) with \({\mathbf {u}}:{\mathbb {R}}\rightarrow {\mathbb {R}}^m\), then

$$\begin{aligned} 2\,{\mathcal {Q}}^\varepsilon _\omega (t,{\mathbf {x}}(t),{\mathbf {v}}(t))=2\,{\mathcal {Q}}_\omega (t,{\mathbf {x}}(t),{\mathbf {u}}(t))+\varepsilon \,|{\mathbf {v}}_2(t)|^2, \end{aligned}$$
(3.7)

as easily deduced from the respective definitions (1.7)\(_\omega \) and (3.2)\(_\omega \) of \({\mathcal {Q}}_\omega \) and \({\mathcal {Q}}_\omega ^\varepsilon \).

It follows from (3.5) that \(H_\varepsilon \) satisfies the condition of monotonicity required by Proposition 5.51 of [14]. This result, combined with Theorem 3.2(ii), ensures a monotone behavior of the Weyl functions. In particular, \(M_{\varepsilon _2}^+\le M_{\varepsilon _1}^+\) if \(\varepsilon _0>\varepsilon _1>\varepsilon _2>0\). Therefore, we can apply Theorem 1 of [16], which establishes two alternatives for the limiting behavior of \({\mathbf {x}}_0^{{\mathrm{T}}} M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0\) as \(\varepsilon \rightarrow 0^+\), which depend on the pair \(({\mathbf {x}}_0,\omega )\in {\mathbb {R}}^n\times \varOmega \):

(a):

\(\lim _{\varepsilon \rightarrow 0^+} {\mathbf {x}}_0^{{\mathrm{T}}} M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0\) belongs to \({\mathbb {R}}\) if and only if there exists \({\mathbf {y}}_0\in {\mathbb {R}}^n\) such that \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\). In this case, if \(l^+(\omega )\equiv \left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \) and \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] =\left[ \begin{array}{c} L_{\omega ,1}\,{\mathbf {c}}\\ L_{\omega ,2}\,{\mathbf {c}}\end{array}\right] \) for a vector \({\mathbf {c}}\in {\mathbb {R}}^n\), then \(\lim _{\varepsilon \rightarrow 0^+} {\mathbf {x}}_0^{{\mathrm{T}}} M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0={\mathbf {c}}^{{\mathrm{T}}}L^{{\mathrm{T}}}_{\omega ,2}\,L_{\omega ,1}\,{\mathbf {c}}\).

(b):

\(\lim _{\varepsilon \rightarrow 0^+} {\mathbf {x}}_0^{{\mathrm{T}}} M_\varepsilon ^+(\omega )\,{\mathbf {x}}_0=-\,\infty \) otherwise.

We will prove that

1.

if the pair \(({\mathbf {x}}_0,\omega )\) is in case (a), then there exist admissible pairs for \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\), and all the assertions in Theorem 1.1(ii) are true.

2.

If the pair \(({\mathbf {x}}_0,\omega )\) is in case (b), then \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }({\mathbf {x}},{\mathbf {u}})=\infty \) for all \(({\mathbf {x}},{\mathbf {u}})\in {\mathcal {P}}_{{\mathbf {x}}_0,\omega }\), so that there are no admissible pairs.

These two facts will complete the proof in the particular case with which we started.

Let us assume that \(({\mathbf {x}}_0,\omega )\) is in situation (a), and take \({\mathbf {c}}\in {\mathbb {R}}^n\) as there. We define \(\left[ \begin{array}{c} L_1(t)\\ L_2(t)\end{array}\right] \) as the \(2n\times n\) matrix-valued solution of (1.9)\(_\omega \) with \(\left[ \begin{array}{c} L_1(0)\\ L_2(0)\end{array}\right] =\left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \).

We also define

$$\begin{aligned} \begin{aligned} \widetilde{{\mathbf {x}}}_\omega (t)&:=L_1(t)\,{\mathbf {c}},\quad \widetilde{{\mathbf {y}}}_\omega (t):=L_2(t)\,{\mathbf {c}},\\ \widetilde{{\mathbf {u}}}_\omega (t)&:=R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,\widetilde{{\mathbf {y}}}_\omega (t)-R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,\widetilde{{\mathbf {x}}}_\omega (t),\\ V(t)&:=\widetilde{{\mathbf {y}}}_\omega ^{{\mathrm{T}}}(t)\,\widetilde{{\mathbf {x}}}_\omega (t). \end{aligned} \end{aligned}$$

Note that the pair \((\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )\) belongs to \({\mathcal {P}}_{{\mathbf {x}}_0,\omega }\) and is that of Theorem 1.1(ii). It follows from Definition 2.1, from the fact that \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}_\omega (0)\\ \widetilde{{\mathbf {y}}}_\omega (0)\end{array}\right] =\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\), and from the definition of \(\widetilde{{\mathbf {u}}}_\omega \) that \((\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )\in L^2({\mathbb {R}}^+,{\mathbb {R}}^n)\times L^2({\mathbb {R}}^+,{\mathbb {R}}^m)\); thus, \((\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )\) is an admissible pair for \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\). Lemma 3.4 yields

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0,\omega }(\widetilde{{\mathbf {x}}}_\omega ,\widetilde{{\mathbf {u}}}_\omega )= \frac{1}{2}\left( \lim _{t\rightarrow \infty } \widetilde{{\mathbf {y}}}_\omega ^{{\mathrm{T}}}(t)\,\widetilde{{\mathbf {x}}}_\omega (t)- \widetilde{{\mathbf {y}}}_\omega ^{{\mathrm{T}}}(0)\,\widetilde{{\mathbf {x}}}_\omega (0)\right) =-\frac{1}{2}\,{\mathbf {c}}^{{\mathrm{T}}}L^{{\mathrm{T}}}_{\omega ,2}\,L_{\omega ,1}\,{\mathbf {c}}. \end{aligned}$$

Here, we use that \(\lim _{t\rightarrow \infty }\left[ \begin{array}{c} \widetilde{{\mathbf {x}}}_\omega (t)\\ \widetilde{{\mathbf {y}}}_\omega (t)\end{array}\right] =\left[ \begin{array}{c}{\mathbf {0}}\\ {\mathbf {0}}\end{array}\right] \), which also follows from \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\).

Our next step is proving that \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})\ge -(1/2)\,{\mathbf {c}}^{{\mathrm{T}}}L^{{\mathrm{T}}}_{\omega ,2}\,L_{\omega ,1}\,{\mathbf {c}}\) for any other admissible pair \(({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})\). Given such a pair, we define \({{\bar{{\mathbf {v}}}}}:{\mathbb {R}}\rightarrow {\mathbb {R}}^n\) by \({{\bar{{\mathbf {v}}}}}(t)=\left[ \begin{array}{c}{{\bar{{\mathbf {u}}}}}(t)\\ {\mathbf {0}}\end{array}\right] \). Since \(B(\omega {\cdot }t)\,{{\bar{{\mathbf {u}}}}}(t)=B_\varepsilon (\omega {\cdot }t)\,{{\bar{{\mathbf {v}}}}}(t)\), the pair \(({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {v}}}}})\) is admissible for \({\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }\) for all \(\varepsilon >0\). It follows from (3.7) and (3.6) that

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0,\omega }({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})={\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {v}}}}})\ge {\mathcal {I}}^\varepsilon _{{\mathbf {x}}_0,\omega }(\widetilde{{\mathbf {x}}}^\varepsilon _\omega ,\widetilde{{\mathbf {v}}}^\varepsilon _\omega )=-\frac{1}{2}\,{\mathbf {x}}_0^{{\mathrm{T}}}M^+_\varepsilon (\omega )\,{\mathbf {x}}_0, \end{aligned}$$
(3.8)

so that the assertion follows from the information provided by (a) by taking the limit as \(\varepsilon \rightarrow 0^+\). This completes the proof of 1.

In order to check 2, we assume that the pair \(({\mathbf {x}}_0,\omega )\) is in case (b) and assume for contradiction that there exists an admissible pair \(({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})\) for \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\). Repeating the previous procedure, we obtain (3.8). Taking the limit as \(\varepsilon \rightarrow 0^+\) and using the information provided by (b), we conclude that \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})=\infty \), which precludes the admissibility of the pair. This completes the proofs of point 2 and of the particular case.

General case. Elementary linear algebra provides an orthogonal \(n\times n\) matrix \(P\) such that \(P\,B(\omega _0)=\left[ \begin{array}{c} B_1(\omega _0)\\ B_2(\omega _0)\end{array}\right] \) for an \(m\times m\) nonsingular matrix \(B_1(\omega _0)\). Let us define

$$\begin{aligned} \begin{aligned} \widetilde{A}(\omega )&:= P\,A(\omega )\,P^{{\mathrm{T}}},\\ \widetilde{B}(\omega )&:= P\,B(\omega ),\\ \widetilde{G}(\omega )&:= P\,G(\omega )\,P^{{\mathrm{T}}},\\ \widetilde{g}(\omega )&:= P\,g(\omega ), \end{aligned} \end{aligned}$$

and consider the families of control systems and functionals

$$\begin{aligned} {\mathbf {z}}'=&\,\widetilde{A}(\omega {\cdot }t)\,{\mathbf {z}}+\widetilde{B}(\omega {\cdot }t)\,{\mathbf {u}}, \end{aligned}$$
(3.9)
$$\begin{aligned} \widetilde{{\mathcal {Q}}}_\omega (t,{\mathbf {z}},{\mathbf {u}}):=&\,\frac{1}{2}\left( \langle \,{\mathbf {z}},\widetilde{G}(\omega {\cdot }t)\,{\mathbf {z}}\,\rangle +2\langle \,{\mathbf {z}},\widetilde{g}(\omega {\cdot }t)\,{\mathbf {u}}\,\rangle +\langle \,{\mathbf {u}}, R(\omega {\cdot }t)\,{\mathbf {u}}\,\rangle \right) , \end{aligned}$$
(3.10)
$$\begin{aligned} \widetilde{{\mathcal {I}}}_{{\mathbf {z}}_0,\omega }({\mathbf {z}},{\mathbf {u}}):=&\, \int _0^{\infty }\widetilde{{\mathcal {Q}}}_\omega (t,{\mathbf {z}}(t),{\mathbf {u}}(t))\,\mathrm{d}t, \end{aligned}$$
(3.11)

obtained from the initial ones by means of the change of variables \({\mathbf {z}}(t)=P\,{\mathbf {x}}(t)\). The corresponding family of linear Hamiltonian systems is

$$\begin{aligned} \left[ \begin{array}{c}{\mathbf {z}}\\ {\mathbf {w}}\end{array} \right] '= \widetilde{H}(\omega {\cdot }t)\,\left[ \begin{array}{c}{\mathbf {z}}\\ {\mathbf {w}}\end{array} \right] ,\qquad \omega \in \varOmega \end{aligned}$$
(3.12)

with

$$\begin{aligned} \widetilde{H}:=\left[ \begin{array}{cc} \widetilde{A}-\widetilde{B}\,R^{-1}\widetilde{g}^{{\mathrm{T}}}&{}\quad \widetilde{B}\,R^{-1}\widetilde{B}^{{\mathrm{T}}}\\ \widetilde{G}-\widetilde{g}\,R^{-1}\widetilde{g}^{{\mathrm{T}}} &{}\quad -\widetilde{A}^{{\mathrm{T}}} +\widetilde{g}\,R^{-1}\widetilde{B}^{{\mathrm{T}}} \end{array}\right] . \end{aligned}$$

A straightforward computation shows that (3.12) comes from (1.9) by means of the change of variables

$$\begin{aligned} \left[ \begin{array}{c}{\mathbf {z}}\\ {\mathbf {w}}\end{array} \right] = \left[ \begin{array}{cc}P&{}0_n\\ 0_n&{}P\end{array}\right] \left[ \begin{array}{c}{\mathbf {x}}\\ {\mathbf {y}}\end{array} \right] . \end{aligned}$$

It is clear that the family (3.12) has exponential dichotomy over \(\varOmega \), since the change of variables is given by a constant matrix, and that \(\left[ \begin{array}{cc} P&{}0_n\\ 0_n&{}P\end{array}\right] \left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \) represents the Lagrange plane \(\widetilde{l}^+(\omega )\) of the solutions of (3.12) bounded at \(+\,\infty \) if and only if \(\left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \) represents \(l^+(\omega )\). In addition, since the matrix \(\left[ \begin{array}{cc} P&{}0_n\\ 0_n&{}P\end{array}\right] \) is symplectic, it follows from the results of Section 2 of [17] (see also Section 2.1.1 of [14]) that the rotation number is also preserved: it is 0 for any \(\sigma \)-ergodic measure. Therefore, the transformed families are in the situation analyzed in the particular case. It is clear that:

  • The pair \(({{\bar{{\mathbf {x}}}}},{{\bar{{\mathbf {u}}}}})\) is admissible for \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\) if and only if the pair \(({{\bar{{\mathbf {z}}}}},{{\bar{{\mathbf {u}}}}})\) given by \({{\bar{{\mathbf {z}}}}}(t)=P\,{{\bar{{\mathbf {x}}}}}(t)\) is admissible for \(\widetilde{{\mathcal {I}}}_{P{\mathbf {x}}_0,\omega }\).

  • There exists \({\mathbf {y}}_0\) such that \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+(\omega )\) if and only if there exists \({\mathbf {w}}_0\) such that \(\left[ \begin{array}{c} P\,{\mathbf {x}}_0\\ {\mathbf {w}}_0\end{array}\right] \in \widetilde{l}^+(\omega )\): just write \({\mathbf {w}}_0=P{\mathbf {y}}_0\).

  • \({\mathbf {c}}^{{\mathrm{T}}} L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}\,{\mathbf {c}}={\mathbf {c}}^{{\mathrm{T}}} \big (PL_{\omega ,2}\big )^{{\mathrm{T}}}\big (P\,L_{\omega ,1}\big )\,{\mathbf {c}}\).

  • The equality \({\mathbf {u}}(t)= R^{-1}(\omega {\cdot }t)\,B^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {y}}(t)- R^{-1}(\omega {\cdot }t)\,g^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {x}}(t)\) holds if and only if \({\mathbf {u}}(t)= R^{-1}(\omega {\cdot }t)\,\widetilde{B}^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {w}}(t)- R^{-1}(\omega {\cdot }t)\,\widetilde{g}^{{\mathrm{T}}}(\omega {\cdot }t)\,{\mathbf {z}}(t)\) for \({\mathbf {z}}(t)=P\,{\mathbf {x}}(t)\) and \({\mathbf {w}}(t)=P\,{\mathbf {y}}(t)\).

It is easy to deduce the statements of Theorem 1.1 from all these properties. The proof is hence complete. \(\square \)
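As an aside for the reader interested in explicit computations, an orthogonal matrix \(P\) with the property used in the general case can be obtained, for instance, from a complete QR factorization of \(B(\omega _0)\): if \(B(\omega _0)=QR\) with \(Q\) orthogonal, then \(P=Q^{{\mathrm{T}}}\) satisfies \(P\,B(\omega _0)=R\), whose upper \(m\times m\) block is nonsingular whenever \(B(\omega _0)\) has full rank \(m\). The following minimal sketch is an illustration only (not part of the argument); it assumes that numpy is available and uses a randomly generated stand-in for \(B(\omega _0)\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
B = rng.standard_normal((n, m))             # stand-in for B(omega_0), full rank m a.s.

Q, R = np.linalg.qr(B, mode='complete')     # B = Q R with Q orthogonal n x n
P = Q.T                                     # then P B = R = [B1; B2] with B2 = 0

PB = P @ B
B1 = PB[:m, :m]
print(np.allclose(P @ P.T, np.eye(n)))      # True: P is orthogonal
print(abs(np.linalg.det(B1)) > 1e-12)       # True: B1 is nonsingular
print(np.allclose(PB[m:, :], 0.0))          # True: the lower block vanishes
```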

Remark 3.5

Let us represent \(l^+(\omega )\equiv \left[ \begin{array}{c} L_{\omega ,1}\\ L_{\omega ,2}\end{array}\right] \) and assume that \(L_{\omega ,1}{\mathbf {c}}=L_{\omega ,1}{\mathbf {d}}\) for \({\mathbf {d}}\ne {\mathbf {c}}\). Then, since \(l^+(\omega )\) is a Lagrange plane (so that the matrix \(L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}\) is symmetric), \({\mathbf {c}}^{{\mathrm{T}}}L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}{\mathbf {c}}={\mathbf {d}}^{{\mathrm{T}}}L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}{\mathbf {d}}={\mathbf {d}}^{{\mathrm{T}}}L_{\omega ,2}^{{\mathrm{T}}}L_{\omega ,1}{\mathbf {c}}\). We point out this property to clarify that there is no ambiguity in the assertion of Theorem 1.1 concerning the value of the minimum of \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\), which is of course unique.

3.1 Coming back to a single problem

Recall that our starting point was the single problem given by (1.1), (1.2) and (1.3), which we included in the family given by (1.6), (1.7) and (1.8) by means of the hull procedure explained in Sect. 2.1. The initial problem is one of the problems of this family: it corresponds to the point \(\widetilde{\omega }=C_0=(A_0,B_0,G_0,g_0,R_0)\) of \(\varOmega \).

The conclusions of Theorem 1.1 apply to every \(\omega \in \varOmega \), so that they also apply to our initial problem. What about the hypotheses?

  • The hull \(\varOmega \) is minimal whenever the initial coefficients are recurrent, which includes (at least) the autonomous, periodic, quasi-periodic, almost periodic and almost automorphic cases.

  • The hypothesis on B holds if there exists \(t_0\in {\mathbb {R}}\) such that \(B_0(t_0)\) has full rank.

  • If \(\varOmega \) is minimal, then the family (1.9) has exponential dichotomy over \(\varOmega \) if and only if the initial Hamiltonian system (1.4) has exponential dichotomy on \({\mathbb {R}}\): see Remark 2.2.1.

  • If the base flow is minimal and uniquely ergodic (which is the case at least if \(C_0\) is almost periodic: see [6]), then the value of the rotation number can be obtained from any one of the systems of the family, for instance from the initial one (see Theorem 2.6 of [14]).

Therefore, a less general formulation of our main theorem reads as follows:

Theorem 3.6

Assume that the map \(C_0\) given by (2.4) is almost periodic with \(R_0(t)\ge \rho \,I_m\) for a common \(\rho >0\) and any \(t\in {\mathbb {R}}\), that there exists \(t_0\in {\mathbb {R}}\) such that the \(n\times m\) matrix \(B_0(t_0)\) has full rank, that Hamiltonian system (1.4) has exponential dichotomy on \({\mathbb {R}}\), and that

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\,\arg \det (U_1(t)-iU_2(t))=0, \end{aligned}$$

where \(U_{\mathrm{H}}=\left[ \begin{array}{cc} U_1&{}U_3\\ U_2&{}U_4\end{array}\right] \) is the matrix solution of (1.4) with \(U_{\mathrm{H}}(0)=I_{2n}\). Let \(l^+\) be the Lagrange plane of the solutions of (1.4) bounded at \(+\,\infty \), and let us fix \({\mathbf {x}}_0\in {\mathbb {R}}^n\). Then,

  (i) there exist admissible pairs \(({\mathbf {x}},{\mathbf {u}})\) for the functional \({\mathcal {I}}_{{\mathbf {x}}_0}\) given by (1.3) if and only if there exists \({\mathbf {y}}_0\in {\mathbb {R}}^n\) such that \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \in l^+\).

  (ii) In this case, the infimum of \({\mathcal {I}}_{{\mathbf {x}}_0}\) is finite. In addition, if the columns of the \(2n\times n\) matrix \(\left[ \begin{array}{c} L_1\\ L_2\end{array}\right] \) are a basis of \(l^+\) and \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] =\left[ \begin{array}{c} L_1\,{\mathbf {c}}\\ L_2\,{\mathbf {c}}\end{array}\right] \) for a vector \({\mathbf {c}}\in {\mathbb {R}}^n\), then the infimum is given by \(-(1/2)\,{\mathbf {c}}^{{\mathrm{T}}} L^{{\mathrm{T}}}_2\,L_1\,{\mathbf {c}}\), and a minimizing pair \((\widetilde{{\mathbf {x}}},\widetilde{{\mathbf {u}}})\) is obtained from the solution \(\left[ \begin{array}{c}\widetilde{{\mathbf {x}}}(t)\\ \widetilde{{\mathbf {y}}}(t)\end{array}\right] \) of (1.4) with initial data \(\left[ \begin{array}{c}{\mathbf {x}}_0\\ {\mathbf {y}}_0\end{array}\right] \) via the feedback rule (1.5).

Note also that the condition of almost periodicity of \(C_0\) can be replaced by the less restrictive one of minimality and unique ergodicity of the flow on its hull.
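When no closed form of \(U_{\mathrm{H}}\) is available, the rotation-number hypothesis of Theorem 3.6 can also be explored numerically: one integrates \(U'=H(t)\,U\) with \(U(0)=I_{2n}\) and follows a continuous branch of \(\arg \det (U_1(t)-iU_2(t))\). The sketch below is a minimal illustration of this recipe; it is not part of our arguments, it assumes that numpy and scipy are available, and it uses as test data the constant Hamiltonian matrix of Example 3.7 below, for which the limit is known to be 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
H = np.array([[0.0, 0.0, 1.0, 0.0],         # constant test Hamiltonian (Example 3.7)
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, -1.0]])

def rhs(t, u_flat):                         # U' = H U, with U stored as a flat vector
    U = u_flat.reshape(2 * n, 2 * n)
    return (H @ U).ravel()

t_final = 30.0
ts = np.linspace(0.1, t_final, 300)
sol = solve_ivp(rhs, (0.0, t_final), np.eye(2 * n).ravel(),
                t_eval=ts, rtol=1e-10, atol=1e-12)

angles = []
for col in sol.y.T:
    U = col.reshape(2 * n, 2 * n)
    U1, U2 = U[:n, :n], U[n:, :n]
    angles.append(np.angle(np.linalg.det(U1 - 1j * U2)))
angles = np.unwrap(np.array(angles))        # continuous branch of the argument
print(angles[-1] / ts[-1])                  # decays like 1/t toward the rotation number 0
```

The branch of the argument is tracked here simply by sampling it on a fine time grid and unwrapping; for genuinely almost periodic coefficients the quotient typically converges slowly, like \(1/t\), so large final times may be needed.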

3.2 Examples of non-global solvability

We complete the paper with three examples. The first one, of autonomous type, presents a simple scenario in which every required computation can be done by hand and in which the existence of admissible pairs depends on the choice of the initial data. The second one is an almost periodic generalization of the first.

The third example, more complex, shows a situation of non-global solvability for which the associated linear Hamiltonian system does not have exponential dichotomy, so that one of the hypotheses of our results is not fulfilled. In fact, the Hamiltonian system of this example is of nonuniformly hyperbolic type. Its aim is to show that the ideas involved in the previous description can also be very useful in the analysis of other situations.

Example 3.7

We consider the autonomous control problem and quadratic functional

$$\begin{aligned} \begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] '=&\left[ \begin{array}{cc} 1&{}1\\ 0&{}1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + \left[ \begin{array}{c} 1\\ 0\end{array}\right] u,\\ {\mathcal {Q}}\left( t,\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) :=&\frac{1}{2}\,\left( [\,x_1\;x_2\,]\left[ \begin{array}{cc} 2&{}1\\ 1&{}1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + 2\,\,[\,x_1\;x_2\,]\left[ \begin{array}{c}1\\ 1\end{array}\right] u+u^2\right) , \end{aligned} \end{aligned}$$

and we pose the problem of minimizing the corresponding functional

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0}:{\mathcal {P}}_{{\mathbf {x}}_0}\rightarrow {\mathbb {R}}\cup \{\pm \infty \}, \quad \left( \left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) \mapsto \int _0^\infty {\mathcal {Q}}\left( t,\left[ \begin{array}{c} x_1(t)\\ x_2(t)\end{array}\right] ,u(t)\right) \,\mathrm{d}t \end{aligned}$$
(3.13)

defined on the set \({\mathcal {P}}_{{\mathbf {x}}_0}\) of the measurable pairs \(({\mathbf {x}},u)\) satisfying the control system with \({\mathbf {x}}(0)={\mathbf {x}}_0\).

Let us check that all the hypotheses of Theorem 3.6 are satisfied. It is obvious that the map \(C_0\) given by the expression (2.4) corresponding to this problem is constant (and hence almost periodic), and that the rank of \(B_0(t)\equiv \left[ \begin{array}{c} 1\\ 0\end{array}\right] \) is 1 for all \(t\in {\mathbb {R}}\). It is also easy to check that the linear Hamiltonian system (1.4) takes the form

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] '= H\,\left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] \quad \text {for } H:=\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }r}0&{}0&{}1&{}0\\ 0&{}1&{}0&{}0\\ 1&{}0&{}0&{}0\\ 0&{} 0&{} 0&{} -1\end{array}\right] . \end{aligned}$$

Note that this system can be uncoupled to

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] ' =\left[ \begin{array}{cc} 0&{}1\\ 1&{}0\end{array}\right] \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] \quad \text {and}\quad \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] ' =\left[ \begin{array}{rc} 1&{}\;\,0\\ 0&{}-1\end{array}\right] \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] . \end{aligned}$$

It is very simple to check that these two-dimensional systems (of Hamiltonian type) have exponential dichotomy on \({\mathbb {R}}\). In addition, the initial data of the solutions bounded at \(+\,\infty \) and \(-\,\infty \) are given for the first one by \(\left[ \begin{array}{c}\;\,\,1\\ -1\end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ 1\end{array}\right] \) (up to constant multiples) and for the second one by \(\left[ \begin{array}{c} 0\\ 1\end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ 0\end{array}\right] \). It follows that our four-dimensional Hamiltonian system has exponential dichotomy on \({\mathbb {R}}\) (see, e.g., Remark 2.2.1), and that the Lagrange planes

$$\begin{aligned} l^+\equiv \left[ \begin{array}{rr} 1&{}\quad 0\\ 0&{}\quad 0\\ -1&{}0\\ 0&{}\;1\end{array}\right] \quad \text {and}\quad l^-\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ 1&{}\quad 0\\ 0&{}\quad 0\end{array}\right] \end{aligned}$$

are composed of the initial data of the solutions of the Hamiltonian system which are bounded as \(t\rightarrow +\,\infty \) and \(t\rightarrow -\,\infty \), respectively. It is also easy to compute fundamental matrix solutions of the two-dimensional systems, and from them

$$\begin{aligned} U_{\mathrm{H}}(t)=\frac{1}{2}\left[ \begin{array}{cccc} {\mathrm{e}}^t+{\mathrm{e}}^{-t}&{}\quad 0&{}\quad {\mathrm{e}}^t-{\mathrm{e}}^{-t}&{}\quad 0 \\ 0 &{}\quad 2\,{\mathrm{e}}^t&{}\quad 0&{}\quad 0\\ {\mathrm{e}}^t-{\mathrm{e}}^{-t}&{}\quad 0&{}\quad {\mathrm{e}}^t+{\mathrm{e}}^{-t}&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 2\,{\mathrm{e}}^{-t}\end{array}\right] , \end{aligned}$$

which is the matrix solution of the Hamiltonian system with \(U_{\mathrm{H}}(0)=I_4\). It follows from here that the last hypothesis of Theorem 3.6 holds, since

$$\begin{aligned}&\begin{aligned}&\lim _{t\rightarrow \infty }\frac{1}{t}\arg \det \frac{1}{2} \left( \left[ \begin{array}{cc}{\mathrm{e}}^t+{\mathrm{e}}^{-t}&{}0\\ 0&{}2\,{\mathrm{e}}^t \end{array}\right] -i\left[ \begin{array}{cc} {\mathrm{e}}^t-{\mathrm{e}}^{-t}&{}0\\ 0&{}0\end{array}\right] \right) \\&\quad =\lim _{t\rightarrow \infty }\frac{1}{t}\arctan \frac{1-{\mathrm{e}}^{2t}}{1+{\mathrm{e}}^{2t}}=0. \end{aligned} \end{aligned}$$
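These computations can also be cross-checked symbolically. The following sketch is an illustration only, assuming that sympy is available: it verifies that the displayed matrix solves \(U'=HU\) with \(U(0)=I_4\) and that the argument of \(\det (U_1(t)-iU_2(t))\) stays bounded, so that its quotient by \(t\) tends to 0.

```python
import sympy as sp

t = sp.symbols('t', real=True)
H = sp.Matrix([[0, 0, 1, 0],
               [0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, -1]])
U = sp.Rational(1, 2) * sp.Matrix([
    [sp.exp(t) + sp.exp(-t), 0, sp.exp(t) - sp.exp(-t), 0],
    [0, 2 * sp.exp(t), 0, 0],
    [sp.exp(t) - sp.exp(-t), 0, sp.exp(t) + sp.exp(-t), 0],
    [0, 0, 0, 2 * sp.exp(-t)]])

print(sp.simplify(U.diff(t) - H * U))       # zero matrix: U solves U' = H U
print(U.subs(t, 0))                         # identity matrix: U(0) = I_4
detw = (U[:2, :2] - sp.I * U[2:, :2]).det()
ratio = sp.simplify(sp.im(detw) / sp.re(detw))
print(sp.limit(ratio, t, sp.oo))            # -1: arg det -> -pi/4, so (1/t) arg det -> 0
```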

Theorem 3.6 ensures that there exist admissible pairs if and only if \({\mathbf {x}}_0=\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] =\left[ \begin{array}{cc} 1&{}0\\ 0&{}0\end{array}\right] \,\left[ \begin{array}{c} c_1\\ c_2\end{array}\right] \) for some \({\mathbf {c}}=\left[ \begin{array}{c} c_1\\ c_2\end{array}\right] \in {\mathbb {R}}^2\). That is, if and only if \(x_2=0\), in which case we can take \({\mathbf {c}}=\left[ \begin{array}{c} x_1\\ c_2\end{array}\right] \) for any \(c_2\in {\mathbb {R}}\). This provides \({\mathbf {y}}_0=\left[ \begin{array}{cc}-1&{}0\\ \;\,\,0&{}1\end{array}\right] \left[ \begin{array}{c} x_1\\ c_2\end{array}\right] = \left[ \begin{array}{c} -x_1\\ \;\,\,c_2\end{array}\right] \). In addition, also according to Theorem 3.6, the minimum is given by \(-(1/2)[\,x_1\;c_2\,]\left[ \begin{array}{cc} -1&{}0\\ \;\,\,0&{}1\end{array}\right] \left[ \begin{array}{cc} 1&{}0\\ 0&{}0\end{array}\right] \left[ \begin{array}{c} x_1\\ c_2\end{array}\right] = x_1^2/2\). Now, we compute

$$\begin{aligned} \left[ \,\begin{array}{r}x_1(t)\\ x_2(t)\\ y_1(t)\\ y_2(t)\end{array} \right] = U_{\mathrm{H}}(t)\left[ \, \begin{array}{r}x_1\\ 0\,\,\\ -\,x_1\\ c_2\end{array} \right] = \left[ \, \begin{array}{r}x_1\,{\mathrm{e}}^{-t}\\ 0\;\quad \\ -\,x_1\,{\mathrm{e}}^{-t}\\ c_2\,{\mathrm{e}}^{-t}\end{array} \right] , \end{aligned}$$

apply the feedback rule (1.5) in order to obtain the control

$$\begin{aligned} \widetilde{u}(t)= [\,1\;0\,]\,\left[ \, \begin{array}{r}-x_1\,{\mathrm{e}}^{-t}\\ c_2\,{\mathrm{e}}^{-t}\end{array} \right] - [\,1\;1\,]\,\left[ \, \begin{array}{c}x_1\,{\mathrm{e}}^{-t}\\ 0\end{array} \right] =-\,2\,x_1\,{\mathrm{e}}^{-t}, \end{aligned}$$

and conclude that there is a unique minimizing pair \((\widetilde{{\mathbf {x}}},\widetilde{u})\) of the form described in Theorem 3.6, given by

$$\begin{aligned} \widetilde{{\mathbf {x}}}(t)=\left[ \, \begin{array}{c}x_1\,{\mathrm{e}}^{-t}\\ 0\end{array} \right] ,\qquad \widetilde{u}(t)=-\,2\,x_1\,{\mathrm{e}}^{-t}. \end{aligned}$$
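Although every computation in this example can be done by hand, a quick numerical sanity check may be reassuring. The following sketch is an illustration only, assuming that numpy and scipy are available: it verifies that the pair just obtained solves the control system and that the value of the functional along it equals \(x_1^2/2\).

```python
import numpy as np
from scipy.integrate import quad

x1 = 1.3                                    # any initial state with x2 = 0 is admissible

def x(t):                                   # claimed minimizing state
    return np.array([x1 * np.exp(-t), 0.0])

def u(t):                                   # claimed minimizing control
    return -2.0 * x1 * np.exp(-t)

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([1.0, 0.0])
G = np.array([[2.0, 1.0], [1.0, 1.0]])
g = np.array([1.0, 1.0])

def residual(t):                            # x' - (A x + B u), should vanish identically
    xdot = np.array([-x1 * np.exp(-t), 0.0])
    return np.linalg.norm(xdot - (A @ x(t) + B * u(t)))

def Q(t):                                   # supply rate (1.2) with R = 1
    xt = x(t)
    return 0.5 * (xt @ G @ xt + 2.0 * (xt @ g) * u(t) + u(t) ** 2)

print(max(residual(t) for t in np.linspace(0.0, 20.0, 201)))   # ~ 1e-16
value, _ = quad(Q, 0.0, np.inf)
print(value, x1 ** 2 / 2)                   # both ~ 0.845
```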

Example 3.8

Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be an almost periodic function such that

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ y_1\end{array} \right] ' =\left[ \begin{array}{cccc} 0&{}1\\ f(t)&{}0\end{array}\right] \left[ \begin{array}{c}x_1\\ y_1\end{array} \right] \end{aligned}$$
(3.14)

satisfies two conditions. The first one is the existence of exponential dichotomy on \({\mathbb {R}}\) for (3.14), combined with the fact that the initial data of the solutions bounded at \(+\,\infty \) and \(-\,\infty \) are multiples of \(\left[ \begin{array}{c} 1\\ m^+\end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ m^-\end{array}\right] \). The second one is that the rotation number (with respect to the unique ergodic measure which exists on the hull in the almost periodic case) is zero. As explained in the comments before Theorem 3.6, this fact is equivalent to saying that, if \(V(t)=\left[ \begin{array}{cc} V_1(t)&{}V_3(t)\\ V_2(t)&{}V_4(t)\end{array}\right] \) is the matrix solution of (3.14) with \(V(0)=I_2\), then

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\,\arg (V_1(t)-iV_2(t))=0. \end{aligned}$$
(3.15)

The function f can be very simple. For instance, \(f\equiv 1\), as in the previous example. But it can also be extremely complex. For instance, \(f:=f_0+\lambda \) for any \(\lambda >0\), where \(f_0\) is the almost periodic function described by Johnson in [8], giving rise to a nonuniformly hyperbolic family of Schrödinger equations: as explained in Example 7.37 of [14], the corresponding system (3.14) has exponential dichotomy (if \(\lambda >0\)), the initial data of the solutions bounded at \(+\,\infty \) and \(-\,\infty \) are \(\left[ \begin{array}{c} 1\\ m^+\end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ m^-\end{array}\right] \) (up to a multiple), and the solution \(\left[ \begin{array}{c} x_1(t)\\ y_1(t)\end{array}\right] =V(t)\left[ \begin{array}{c} 1\\ m^+\end{array}\right] \) satisfies \(x_1(t)\ne 0\) for all \(t\in {\mathbb {R}}\), from which we deduce that the rotation number is zero (use, for instance, Propositions 5.8 and 5.65 of [14]). We will refer again to the function \(f_0\) in Example 3.9.

Let \(h:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be any other almost periodic function. Given the control problem and the quadratic functional

$$\begin{aligned} \begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] '&=\left[ \begin{array}{cc} 1&{}h(t)\\ 0&{}1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + \left[ \begin{array}{c} 1\\ 0\end{array}\right] u,\\ {\mathcal {Q}}\left( t,\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right)&:=\frac{1}{2}\left( [\,x_1\;x_2\,]\left[ \begin{array}{cc} f(t)+1&{}\quad h(t)\\ h(t)&{}\quad h(t)^2\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + 2\,[\,x_1\;x_2\,]\left[ \begin{array}{c}1\\ h(t)\end{array} \right] u+u^2\right) \end{aligned} \end{aligned}$$

we pose the problem of minimizing the corresponding functional (3.13). As in the previous example, we will begin by checking that all the hypotheses of Theorem 3.6 are satisfied. It is obvious that the map \(C_0\) given by (2.4) is almost periodic, and the rank of \(B_0(t)\equiv \left[ \begin{array}{c} 1\\ 0\end{array}\right] \) is 1 for all \(t\in {\mathbb {R}}\). Now, the Hamiltonian system (1.4) takes the form

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] '= \left[ \begin{array}{cccc} 0&{}\quad 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 1&{}\quad 0&{}\quad 0\\ f(t)&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}-\,1 \end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] , \end{aligned}$$

which can be uncoupled to

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] ' =\left[ \begin{array}{cc} 0&{}\quad 1\\ f(t)&{}\quad 0\end{array}\right] \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] \quad \text {and}\quad \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] ' =\left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad -\,1\end{array}\right] \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] . \end{aligned}$$

As seen in Example 3.7, the second system also has exponential dichotomy, and \(\left[ \begin{array}{c} 0\\ 1\end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ 0\end{array}\right] \) are the initial data of the solutions bounded at \(+\,\infty \) and \(-\,\infty \). Hence, our four-dimensional Hamiltonian system has exponential dichotomy on \({\mathbb {R}}\), and the Lagrange planes

$$\begin{aligned} l^+\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 0\\ \,m^+&{}\quad 0\\ 0&{}\quad 1\end{array}\right] \quad \text {and}\quad l^-\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \,m^-&{}\quad 0\\ 0&{}\quad 0\end{array}\right] \end{aligned}$$

are composed of the initial data of the solutions of the four-dimensional system which are bounded as \(t\rightarrow +\,\infty \) and \(t\rightarrow -\,\infty \), respectively. In addition, the matrix solution \(U_{\mathrm{H}}(t)\) with \(U_{\mathrm{H}}(0)=I_4\) is given by

$$\begin{aligned} U_{\mathrm{H}}(t)=\left[ \begin{array}{cccc} V_1(t)&{}\quad 0&{}\quad V_3(t)&{}\quad 0 \\ 0 &{}\quad {\mathrm{e}}^t&{}\quad 0&{}\quad 0 \\ V_2(t)&{}\quad 0&{}\quad V_4(t)&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad {\mathrm{e}}^{-t}\end{array}\right] , \end{aligned}$$

and, using (3.15), we see that

$$\begin{aligned} \begin{aligned}&\lim _{t\rightarrow \infty }\frac{1}{t}\arg \det \left( \left[ \begin{array}{cc}V_1(t)&{}0\\ 0&{}{\mathrm{e}}^t \end{array}\right] -i\left[ \begin{array}{cc} V_2(t)&{}0\\ 0&{}0\end{array}\right] \right) \\&\quad =\lim _{t\rightarrow \infty }\frac{1}{t}\, \arg \big ({\mathrm{e}}^t(V_1(t)-iV_2(t))\big )=0. \end{aligned} \end{aligned}$$

Therefore, all the hypotheses of Theorem 3.6 are satisfied, as asserted. As in Example 3.7, we conclude that there exist admissible pairs if and only if \({\mathbf {x}}_0=\left[ \begin{array}{c} x_1\\ 0\end{array}\right] =\left[ \begin{array}{cc} 1&{}0\\ 0&{}0\end{array}\right] \,\left[ \begin{array}{c} x_1\\ c_2\end{array}\right] \), where \(c_2\in {\mathbb {R}}\) is arbitrary; this choice provides \({\mathbf {y}}_0=\left[ \begin{array}{cc} m^+&{}0\\ 0&{}1\end{array}\right] \left[ \begin{array}{c} x_1\\ c_2\end{array}\right] = \left[ \begin{array}{c} m^+ x_1\\ c_2\end{array}\right] \). In this case, the minimum of the functional is \(-x_1^2\,m^+ /2\). And, since

$$\begin{aligned} \left[ \begin{array}{cccc} V_1(t)&{}\quad 0&{}\quad V_3(t)&{}\quad 0 \\ 0 &{}\quad {\mathrm{e}}^t&{}\quad 0&{}\quad 0 \\ V_2(t)&{}\quad 0&{}\quad V_4(t)&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad {\mathrm{e}}^{-t}\end{array}\right] \left[ \begin{array}{c} x_1\\ 0\\ m^+ x_1\\ c_2\end{array} \right] = \left[ \begin{array}{c}(V_1(t)+V_3(t)\,m^+)\,x_1\\ 0\\ (V_2(t)+V_4(t)\,m^+)\,x_1\\ c_2\,{\mathrm{e}}^{-t} \end{array} \right] , \end{aligned}$$

the feedback rule (1.5) provides the minimizing pair \((\widetilde{{\mathbf {x}}},\widetilde{u})\), with

$$\begin{aligned} \widetilde{{\mathbf {x}}}(t)=\left[ \begin{array}{c} (V_1(t)+V_3(t)\,m^+)\,x_1\\ 0\end{array} \right] , \qquad \widetilde{u}(t)=\big (V_2(t)-V_1(t)+(V_4(t)-V_3(t))\,m^+\big )\,x_1. \end{aligned}$$
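Since \(m^+\) is not explicitly computable in general, the following sketch shows one way to approximate it and to check numerically that the value of the functional along the minimizing pair described above agrees with \(-x_1^2\,m^+/2\). It is an illustration only, assuming that numpy and scipy are available: the function f used below is a simple quasi-periodic stand-in (not the function \(f_0\) of [8]), and the fact that (3.14) has exponential dichotomy for it is an assumption of the experiment. The key observation is that \(m=y_1/x_1\) satisfies the Riccati equation \(m'=f(t)-m^2\) along solutions of (3.14), and \(m^+\) attracts the backward orbits of this equation; the forward horizon is kept moderate because the decaying solution is numerically unstable in forward time.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t):                                   # quasi-periodic stand-in, NOT the f_0 of [8]
    return 1.0 + 0.3 * np.cos(t) + 0.3 * np.cos(np.sqrt(2.0) * t)

# m^+ at t = 0: backward integration of the Riccati equation m' = f(t) - m^2
# from a generic value at a large time
ric = solve_ivp(lambda t, m: f(t) - m ** 2, (60.0, 0.0), [0.0], rtol=1e-10, atol=1e-12)
m_plus = ric.y[0, -1]

# closed-loop solution (x1, y1) and the value of the functional along it (x2 = 0,
# so the terms containing h vanish)
x1_0 = 1.0
def rhs(t, z):
    x1, y1, acc = z
    u = y1 - x1                             # feedback rule (1.5) with x2 = 0
    Q = 0.5 * ((f(t) + 1.0) * x1 ** 2 + 2.0 * x1 * u + u ** 2)
    return [y1, f(t) * x1, Q]

sol = solve_ivp(rhs, (0.0, 12.0), [x1_0, m_plus * x1_0, 0.0], rtol=1e-11, atol=1e-13)
print(sol.y[2, -1], -0.5 * m_plus * x1_0 ** 2)   # both values should almost coincide
```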

Example 3.9

Let \(f_0\) be the almost periodic function described in [8], already mentioned in Example 3.8, and let \(h_0\) be any almost periodic function with frequency module contained in that of \(f_0\). Our aim is to analyze the minimization problem for the functional

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0}:{\mathcal {P}}_{{\mathbf {x}}_0}\rightarrow {\mathbb {R}}\cup \{\pm \infty \},\qquad \left( \left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) \mapsto \int _0^\infty {\mathcal {Q}}\left( t,\left[ \begin{array}{c} x_1(t)\\ x_2(t)\end{array}\right] ,u(t)\right) \,\mathrm{d}t, \end{aligned}$$

which is evaluated on the set \({\mathcal {P}}_{{\mathbf {x}}_0}\) of measurable pairs \(({\mathbf {x}},u)\) solving

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] '=\left[ \begin{array}{cc} 1&{}\quad h_0(t)\\ 0&{}\quad 1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + \left[ \begin{array}{c} 1\\ 0\end{array}\right] u \end{aligned}$$

with \({\mathbf {x}}(0)={\mathbf {x}}_0\), and which is given by

$$\begin{aligned} {\mathcal {Q}}\left( t,\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) :=\frac{1}{2}\left( [\,x_1\;x_2\,]\left[ \begin{array}{cc} f_0(t)+1&{}\quad h_0(t)\\ h_0(t)&{}\quad h_0(t)^2\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + 2\,[\,x_1\;x_2\,]\left[ \begin{array}{c}1\\ h_0(t)\end{array} \right] u+u^2\right) . \end{aligned}$$

For reasons which will become clear later, we must consider in this case the (common) hull \(\varOmega \) of \(f_0\) and \(h_0\), whose construction we explained in Sect. 2.1. This provides the families

$$\begin{aligned}&\displaystyle \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] '=\left[ \begin{array}{cc} 1&{}\quad h(\omega {\cdot }t)\\ 0&{}\quad 1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + \left[ \begin{array}{c} 1\\ 0\end{array}\right] u,\\&\displaystyle {\mathcal {Q}}_\omega \left( t,\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) :=\frac{1}{2}\left( [\,x_1\;x_2\,]\left[ \begin{array}{cc} f(\omega {\cdot }t)+1&{} \quad h(\omega {\cdot }t)\\ h(\omega {\cdot }t)&{} \quad h(\omega {\cdot }t)^2\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + 2\,[\,x_1\;x_2\,]\left[ \begin{array}{c}1\\ h(\omega {\cdot }t)\end{array} \right] u+u^2\right) ,\\&\displaystyle {\mathcal {I}}_{{\mathbf {x}}_0,\omega }:{\mathcal {P}}_{{\mathbf {x}}_0,\omega }\rightarrow {\mathbb {R}}\cup \{\pm \infty \},\quad \left( \left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,u\right) \mapsto \int _0^\infty {\mathcal {Q}}_\omega \left( t,\left[ \begin{array}{c} x_1(t)\\ x_2(t)\end{array}\right] ,u(t)\right) \,\mathrm{d}t \end{aligned}$$

for \(\omega \in \varOmega \), where \({\mathcal {P}}_{{\mathbf {x}}_0,\omega }\) is the set of the measurable pairs solving the control problem corresponding to \(\omega \) with initial state \({\mathbf {x}}_0\).

The corresponding family (3.12) takes the form

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] '= \left[ \begin{array}{cccc} 0&{}\quad 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 1&{}\quad 0&{}\quad 0\\ f(\omega {\cdot }t)&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad -\,1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] , \end{aligned}$$
(3.16)

which can be uncoupled to

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] ' =\left[ \begin{array}{cc} 0&{}\quad 1\\ f(\omega {\cdot }t)&{}\quad 0\end{array}\right] \left[ \begin{array}{c}x_1\\ y_1\end{array}\right] \quad \text {and}\quad \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] ' =\left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad -\,1\end{array}\right] \left[ \begin{array}{c}x_2\\ y_2\end{array}\right] . \end{aligned}$$
(3.17)

As recalled in Example 3.8, it is proved in [8] that the left family of systems in (3.17) does not have exponential dichotomy over \(\varOmega \). According to Remark 2.2.1, this assertion is equivalent to the existence of at least one nontrivial bounded solution for at least one \(\omega \in \varOmega \). Clearly, this bounded solution provides a bounded solution of (3.16)\(_\omega \), so that the four-dimensional family does not have exponential dichotomy over \(\varOmega \). Hence, we cannot apply Theorem 1.1.

Let us now take \(\lambda >0\) and consider the new families

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] '&=\left[ \begin{array}{cc} 1&{}\quad h(\omega {\cdot }t)\\ 0&{}\quad 1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad \lambda \end{array}\right] \left[ \begin{array}{c}v_1\\ v_2\end{array} \right] , \end{aligned}$$
$$\begin{aligned} {\mathcal {Q}}_\omega ^\lambda \left( t,\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,\left[ \begin{array}{c} v_1\\ v_2\end{array}\right] \right)&:= \frac{1}{2}\left( [\,x_1\;x_2\,]\left[ \begin{array}{cc} f(\omega {\cdot }t)+\lambda +1&{}\quad h(\omega {\cdot }t)\\ h(\omega {\cdot }t)&{}\quad h(\omega {\cdot }t)^2\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] + 2\,[\,x_1\;x_2\,]\left[ \begin{array}{cc}1&{}\quad 0\\ h(\omega {\cdot }t)&{}\quad 0\end{array} \right] \left[ \begin{array}{c}v_1\\ v_2\end{array} \right] \right. \\&\quad \left. +\, [\,v_1\;v_2\,]\left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad \lambda \end{array}\right] \left[ \begin{array}{c}v_1\\ v_2\end{array} \right] \right) ,\\ {\mathcal {I}}^\lambda _{{\mathbf {x}}_0,\omega }:{\mathcal {P}}^\lambda _{{\mathbf {x}}_0,\omega }&\rightarrow {\mathbb {R}}\cup \{\pm \infty \}, \quad \left( \left[ \begin{array}{c} x_1\\ x_2\end{array}\right] ,\left[ \begin{array}{c} v_1\\ v_2\end{array}\right] \right) \mapsto \int _0^\infty {\mathcal {Q}}^\lambda _\omega \left( t,\left[ \begin{array}{c} x_1(t)\\ x_2(t)\end{array}\right] , \left[ \begin{array}{c} v_1(t)\\ v_2(t)\end{array}\right] \right) \,\mathrm{d}t, \end{aligned}$$

with associated family of linear Hamiltonian systems given by

$$\begin{aligned} \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] '= \left[ \begin{array}{cccc}0&{}\quad 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 1&{}\quad 0&{}\quad \lambda \\ f(\omega {\cdot }t)+\lambda &{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad -\,1\end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\\ y_1\\ y_2\end{array} \right] . \end{aligned}$$
(3.18)

Let us fix \(\lambda >0\). Then, the family of systems \(\left[ \begin{array}{c} x_1\\ y_1\end{array}\right] ' =\left[ \begin{array}{cc}0&{}1\\ f(\omega {\cdot }t)+\lambda &{}0\end{array}\right] \left[ \begin{array}{c} x_1\\ y_1\end{array}\right] \) has exponential dichotomy over \(\varOmega \), and the Lagrange planes of the solutions which are bounded at \(\pm \infty \) are represented by \(\left[ \begin{array}{c} 1\\ m^\pm _\lambda (\omega )\end{array}\right] \): see again Example 7.37 of [14]. It is easy to check that also the family \(\left[ \begin{array}{c} x_2\\ y_2\end{array}\right] '=\left[ \begin{array}{cc} 1&{}\;\lambda \\ 0&{}-\,1\end{array}\right] \left[ \begin{array}{c} x_2\\ y_2\end{array}\right] \) has exponential dichotomy, with Lagrange planes of the solutions bounded at \(+\,\infty \) and \(-\,\infty \) given by \(\left[ \begin{array}{c} 1\\ -\,2/\lambda \end{array}\right] \) and \(\left[ \begin{array}{c} 1\\ 0\end{array}\right] \), respectively. Therefore, the family (3.18) has exponential dichotomy, with

$$\begin{aligned} l^+_\lambda (\omega )\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \,m^+_\lambda (\omega )&{}\quad 0\\ 0&{}\quad -\,2/\lambda \end{array}\right] \equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad -\,\lambda /2\\ \,m^+_\lambda (\omega )&{}\quad 0\\ 0&{}\quad 1\end{array}\right] ,\qquad l^-_\lambda (\omega )\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \,m^-_\lambda (\omega )&{}\quad 0\\ 0&{}\quad 0\end{array}\right] . \end{aligned}$$

As in Example 3.8, we can check by direct computation that the rotation number of (3.18) is zero. Under these conditions, Theorem 3.2 ensures the solvability of the minimization problem for \({\mathcal {I}}^\lambda _{{\mathbf {x}}_0,\omega }\) for all \(({\mathbf {x}}_0,\omega )\); the value of the minimum for \({\mathbf {x}}_0=\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] \) is

$$\begin{aligned} -\frac{1}{2}\;[\,x_1\;x_2\,]\left[ \begin{array}{cc} m^+_\lambda (\omega )&{}\quad 0 \\ 0&{}\quad -\,2/\lambda \end{array}\right] \left[ \begin{array}{c}x_1\\ x_2\end{array} \right] =-\frac{1}{2}\,x_1^2\,m_\lambda ^+(\omega )+\frac{1}{\lambda }\,x_2^2. \end{aligned}$$
(3.19)

On the other hand, it follows from Theorem 5.58 and Proposition 5.51 of [14] that, if \(0<\lambda _1\le \lambda _2\), then \(m^+_{\lambda _2}(\omega )\le m^+_{\lambda _1}(\omega )\le m^-_{\lambda _1}(\omega )\le m^-_{\lambda _2}(\omega )\). Therefore, the limits \(n^\pm (\omega ):=\lim _{\lambda \rightarrow 0^+}m^\pm _\lambda (\omega )\) exist for all \(\omega \in \varOmega \). (As a matter of fact, \(n^\pm \) are the principal functions of the system corresponding to \(\lambda =0\), which is uniformly weakly disconjugate: see for instance Theorem 5.61 of [14].) Moreover, there exists an invariant subset \(\varOmega _0\subsetneq \varOmega \) with full measure (for the unique ergodic measure on the hull) such that, if \(\omega \in \varOmega _0\), then \(n^+(\omega )<n^-(\omega )\) and the solution of the first (two-dimensional) system in (3.17)\(_\omega \) with initial data \(\left[ \begin{array}{c} x_0\\ n^\pm (\omega )\,x_0\end{array}\right] \), which can be written as \(\left[ \begin{array}{c} x_1(t)\\ n^\pm (\omega {\cdot }t)\,x_1(t)\end{array}\right] \), belongs to \(L^2([0,\infty ),{\mathbb {R}}^2)\) and satisfies

$$\begin{aligned} \lim _{t\rightarrow \pm \infty }\left[ \begin{array}{c} x_1(t)\\ n^\pm (\omega {\cdot }t)\,x_1(t)\end{array} \right] =\left[ \begin{array}{c}0\\ 0\end{array} \right] . \end{aligned}$$
(3.20)

The proofs of the last assertions follow from the properties of this family of two-dimensional systems described in [8], and are based on the fact that the functions \(n^+\) and \(n^-\) provide the Oseledets subbundles associated with the negative and positive Lyapunov exponents of the family. The interested reader can find in Theorem 6.3(iii) of [15] a detailed proof (formulated for the quasi-periodic case, but applicable without changes to the almost periodic case), and in Section 8.7 of [14] an exhaustive description of the construction of an example with similar behavior.
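The monotone behavior of \(\lambda \mapsto m^+_\lambda (\omega )\) and the existence of the limit \(n^+(\omega )\) can be visualized with the same backward Riccati integration used in the sketch of Example 3.8. The illustration below (assuming that numpy and scipy are available) replaces \(f_0\) by a simple quasi-periodic stand-in, which of course does not reproduce the nonuniformly hyperbolic behavior of Johnson's example; it only displays the monotone convergence of \(m^+_\lambda \) as \(\lambda \rightarrow 0^+\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t):                                   # stand-in for f_0 (illustration only)
    return 0.5 + 0.25 * np.cos(t) + 0.25 * np.cos(np.sqrt(5.0) * t)

def m_plus(lam, T=80.0):
    # Weyl function of [x1; y1]' = [[0, 1], [f + lam, 0]] [x1; y1] at t = 0,
    # approximated by backward integration of m' = f(t) + lam - m^2
    sol = solve_ivp(lambda t, m: f(t) + lam - m ** 2, (T, 0.0), [0.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for lam in [1.0, 0.5, 0.1, 0.01, 0.001]:
    print(lam, m_plus(lam))                 # the values increase as lambda decreases
```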

Let us now observe that the limits as \(\lambda \rightarrow 0^+\) of \(l^\pm _\lambda (\omega )\) are the Lagrange planes

$$\begin{aligned} l^+(\omega )\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 0\\ \,n^+(\omega )&{}\quad 0\\ 0&{}\quad 1\end{array}\right] \quad \text {and}\quad l^-(\omega )\equiv \left[ \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \,n^-(\omega )&{}\quad 0\\ 0&{}\quad 0\end{array}\right] . \end{aligned}$$

In addition, if \(\omega \in \varOmega _0\), then the solutions of (3.16)\(_\omega \) with initial data in \(l^\pm (\omega )\) tend to \({\mathbf {0}}\) as \(t\rightarrow \pm \infty \), as deduced from (3.20) and from the behavior of the solutions of the right-hand system of (3.17). We will see that, if \(\omega \in \varOmega _0\), then the initial states \({\mathbf {x}}_0\) for which the minimization problem is solvable, as well as the value of the minimum and a minimizing pair, can be obtained from \(l^+(\omega )\). The analogy with the situation described in Theorem 1.1 will be obvious.

Let us fix \(\omega \in \varOmega _0\) and \({\mathbf {x}}_0=\left[ \begin{array}{c} x_1\\ x_2\end{array}\right] \in {\mathbb {R}}^2\), and let \(\big (\left[ \begin{array}{c} {\bar{x}}_1\\ {\bar{x}}_2\end{array}\right] ,{\bar{u}}\big )\) be an admissible pair for \({\mathcal {I}}_{{\mathbf {x}}_0,\omega }\). It is easy to check that \(\big (\left[ \begin{array}{c} {\bar{x}}_1\\ {\bar{x}}_2\end{array}\right] ,\left[ \begin{array}{c} {\bar{u}}\\ 0\end{array}\right] \big )\) is admissible for \({\mathcal {I}}^\lambda _{{\mathbf {x}}_0,\omega }\) and that \({\mathcal {I}}^\lambda _{{\mathbf {x}}_0,\omega }\big (\left[ \begin{array}{c} {\bar{x}}_1\\ {\bar{x}}_2\end{array}\right] ,\left[ \begin{array}{c} {\bar{u}}\\ 0\end{array}\right] \big )= {\mathcal {I}}_{{\mathbf {x}}_0,\omega }\big (\left[ \begin{array}{c} {\bar{x}}_1\\ {\bar{x}}_2\end{array}\right] ,{\bar{u}}\big )+\frac{\lambda }{2}\int _0^\infty {\bar{x}}_1^2(t)\,\mathrm{d}t\) for any \(\lambda >0\), which combined with (3.19) ensures that

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0,\omega }\left( \left[ \begin{array}{c} {\bar{x}}_1\\ {\bar{x}}_2 \end{array}\right] ,{\bar{u}}\right) \ge -\frac{1}{2}\,x_1^2\,m_\lambda ^+(\omega )+\frac{1}{\lambda }\,x_2^2-\frac{\lambda }{2}\int _0^\infty {\bar{x}}_1^2(t)\,\mathrm{d}t \end{aligned}$$
(3.21)

for any \(\lambda >0\). By taking the limit as \(\lambda \rightarrow 0^+\) and using that the last term of (3.21) tends to 0 (since \({\bar{x}}_1\in L^2([0,\infty ),{\mathbb {R}})\)), we conclude from the admissibility and from the existence of the real limit \(n^+(\omega )=\lim _{\lambda \rightarrow 0^+}m_\lambda ^+(\omega )\) that \(x_2\) must be 0. And this is equivalent to ensuring that there exists \(\left[ \begin{array}{c} y_1\\ y_2\end{array}\right] \in {\mathbb {R}}^2\) such that \(\left[ \begin{array}{c} x_1\\ x_2\\ y_1\\ y_2\end{array}\right] \in l^+(\omega )\): we can take \(\left[ \begin{array}{c} y_1\\ y_2\end{array}\right] =\left[ \begin{array}{c} n^+(\omega )\,x_1\\ c_2\end{array}\right] \) for any \(c_2\in {\mathbb {R}}\).

We work from now on with \({\mathbf {x}}_0=\left[ \begin{array}{c} x_1\\ 0\end{array}\right] \). As just said, taking the limit as \(\lambda \rightarrow 0^+\) in (3.21) yields

$$\begin{aligned} {\mathcal {I}}_{{\mathbf {x}}_0,\omega }\left( \left[ \begin{array}{c}{\bar{x}}_1\\ {\bar{x}}_2 \end{array}\right] ,{\bar{u}}\right) \ge -\frac{1}{2}\,x_1^2\,n^+(\omega ) \end{aligned}$$

for any admissible pair. In fact, the right-hand value is the minimum, since it is attained at the admissible pair \(\Bigg (\left[ \begin{array}{c} \widetilde{x}_1\\ 0\end{array}\right] ,\widetilde{u}\Bigg )\) defined from the solution \(\left[ \begin{array}{c} \widetilde{x}_1(t)\\ 0\\ n^+(\omega {\cdot }t)\,\widetilde{x}_1(t)\\ 0\end{array}\right] \) of (3.16)\(_\omega \) with initial data \(\left[ \begin{array}{c} x_1\\ 0\\ n^+(\omega )\,x_1\\ 0\end{array}\right] \) via the feedback rule (analogous to (1.11)\(_\omega \))

$$\begin{aligned} \widetilde{u}(t)= [\,1\;0\,]\,\left[ \, \begin{array}{c}n^+(\omega {\cdot }t)\,\widetilde{x}_1(t)\\ 0\end{array} \right] - [\,1\;\;h(\omega {\cdot }t)\,]\,\left[ \, \begin{array}{c}\widetilde{x}_1(t)\\ 0\end{array} \right] =(n^+(\omega {\cdot }t)-1)\,\widetilde{x}_1(t). \end{aligned}$$

This assertion follows from Lemma 3.4 (which does not require Hypotheses 3.1) combined with (3.20).

We finally observe that there is no way to know whether the initial problem of this example corresponds to a point \(\omega \in \varOmega _0\) (although the probability is 1, since \(\varOmega _0\) has full measure). In other words, this procedure does not allow us to provide conditions under which the initial minimization problem is solvable. This is one more sample of the extreme complexity which may arise in nonautonomous dynamics.