Abstract
This article proposes a new approach for the design of low-dimensional suboptimal controllers to optimal control problems of nonlinear partial differential equations (PDEs) of parabolic type. The approach fits into the long tradition of seeking slaving relationships between the small scales and the large ones (to be controlled), but differs by the introduction of a new type of manifolds to do so, namely the finite-horizon parameterizing manifolds (PMs). Given a finite horizon [0,T] and a low-mode truncation of the PDE, a PM provides an approximate parameterization of the high modes by the controlled low ones so that the unexplained high-mode energy is reduced—in a mean-square sense over [0,T]—when this parameterization is applied.
Analytic formulas for such PMs are derived by applying the method of pullback approximation of the high modes introduced in Chekroun et al. (2014). These formulas allow for an effective derivation of reduced systems of ordinary differential equations (ODEs) aimed at modeling the evolution of the low-mode truncation of the controlled state variable, in which the high-mode part is approximated by the PM function applied to the low modes. The design of low-dimensional suboptimal controllers is then obtained by (indirect) techniques from finite-dimensional optimal control theory, applied to the PM-based reduced ODEs.
A priori error estimates between the resulting PM-based low-dimensional suboptimal controller \(u_{R}^{\ast}\) and the optimal controller \(u^{\ast}\) are derived under a second-order sufficient optimality condition. These estimates demonstrate that the closeness of \(u_{R}^{\ast}\) to \(u^{\ast}\) is mainly conditioned on two factors: (i) the parameterization defect of a given PM, associated respectively with the suboptimal controller \(u_{R}^{\ast}\) and the optimal controller \(u^{\ast}\); and (ii) the energy kept in the high modes of the PDE solution driven either by \(u_{R}^{\ast}\) or by \(u^{\ast}\) itself.
The practical performance of such PM-based suboptimal controllers is numerically assessed for optimal control problems associated with a Burgers-type equation; both the locally and the globally distributed cases are considered. The numerical results show that a PM-based reduced system allows for the design of suboptimal controllers with good performance provided that the associated parameterization defects and the energy kept in the high modes are small enough, in agreement with the rigorous results.
Introduction
In this article, we propose a new approach for the synthesis of low-dimensional suboptimal controllers for optimal control problems of nonlinear partial differential equations (PDEs) of parabolic type. Optimal control of PDEs has been extensively studied in the past few decades, due largely to its broad applications in engineering and various scientific disciplines, and fruitful results have been obtained; see e.g. the monographs [8, 10, 31, 44, 49, 56, 78, 100].
Due to the complexity of most applications, optimal control problems of parabolic PDEs are often solved numerically. Among the commonly used methods one finds methods that solve at once the associated optimality system, using techniques such as Newton or quasi-Newton methods [14, 56, 60], and methods that use optimization algorithms involving, for instance, an approximation of the gradient of the cost functional; see e.g. [13, 56, 60, 100]. In the latter case, the gradient can be approximated by sensitivity methods or by methods based on the adjoint equation; see e.g. [1, 15, 16, 51, 52, 62, 85, 86]. Efficient (and accurate) solutions can be designed by such methods [1, 7, 15, 30, 55, 85, 86], which may however lead to high-dimensional problems that can turn out to be computationally expensive to solve, especially for fluid-flow applications. The task becomes even more challenging when a dynamic programming approach is adopted, as it typically involves solving (infinite-dimensional) Hamilton–Jacobi–Bellman (HJB) equations [8, 9, 24, 35–38].
As an alternative, various reduction techniques have been proposed in the literature to seek instead low-dimensional suboptimal controllers. The main issue with such techniques lies, however, in the ability to design suboptimal solutions close enough to the genuine optimal one [40, 50, 57, 61, 101] while keeping the numerical effort to do so cheap enough. A general class of model reduction techniques used extensively in this context is the so-called reduced-order modeling (ROM) approach, based on approximating the nonlinear dynamics by a Galerkin technique relying on basis functions, possibly empirical [48, 54, 55, 89]. The various ROM techniques differ in the choice of the basis functions. One popular method that falls into this category is the so-called proper orthogonal decomposition (POD); see among many others [6, 12, 57, 58, 74, 75, 83, 90], and [50, 63, 64] for other methods for constructing the reduced basis. We refer also to [76] for suboptimal controllers designed from the solutions of low-dimensional HJB equations associated with POD-based Galerkin reduced-order models.
Such Galerkin/ROM-based techniques can lead to the synthesis of very efficient suboptimal controllers when, at a given truncation, the disregarded high modes do not contribute significantly to the dynamics of the low modes. When this is not the case, however, seeking parameterizations of the disregarded modes in terms of the low ones becomes central to the design of surrogate low-dimensional models of good performance. The idea of seeking slaving relationships between the unstable or least stable modes and the more stable ones has a long tradition in control theory of large-dimensional or distributed-parameter systems. For instance, using methods from singular perturbation theory, the authors in [69–72] investigated the construction of such slaving functions for slow–fast systems in terms of invariant (slow) manifolds.^{Footnote 1} Such manifolds are then used to decouple the slow and fast parts of the dynamics and to feed back the slow component of the state only. This is especially important since the fast components of the state are in general difficult to measure/estimate and consequently to feed back.
Complementary to singular perturbation methods, the authors of [29] used tools of center manifold and normal form theory to design a nonlinear controller and obtained a closed-loop center manifold for a truncated distributed-parameter system; in their case, proximity to a bifurcation guarantees the separation of the relevant time scales of the problem. In [32, 33], the authors went beyond the finite-dimensional singular perturbation work of [69] and the center-manifold-based work of [29] to exploit approximate inertial manifolds (AIMs) [46] in the infinite-dimensional case; the latter are global manifolds in phase space that can be thought of as generalizations of slow/center manifolds. Using AIMs, the authors of [32, 33] then designed observer-based nonlinear feedback controllers (through the corresponding closed-loop AIMs) and demonstrated their performance.
The potential usefulness of inertial manifolds (IMs) [34, 47, 98] or AIMs in control theory of nonlinear parabolic PDEs was in fact identified soon after IM theory started to be established [22, 32, 33, 94]; see e.g. [93, 96] for a state of the art of the literature at the end of the 90s. Since these works, however, IMs and AIMs have mainly been employed to derive low-dimensional vector fields for the design of feedback controllers [3, 92]. With the exception of [4, 61], the use of IMs or AIMs to design suboptimal solutions to optimal control problems has received much less attention.
The main purpose of this article is to introduce a general framework—in continuity with, but distinct from, the AIM approach—for the effective derivation of suboptimal low-dimensional solutions to optimal control problems associated with nonlinear PDEs such as (1.1) given below. To be more specific, given an ambient Hilbert space \(\mathcal{H}\), the control problems of PDEs we will consider hereafter take the following abstract form:
where L denotes a linear operator, F some nonlinearity, and \(\mathfrak{C}\) a bounded linear operator on \(\mathcal{H}\); the state variable y and the controller u both live in \(L^{2}(0,T; \mathcal{H})\) for a given horizon T>0; see Sect. 2 for more details.
The underlying idea consists of seeking manifolds \(\mathfrak{M}\) that provide—over a finite horizon [0,T]—an approximate parameterization of the small scales of the solutions to the uncontrolled PDE associated with Eq. (1.1), namely
in terms of their large scales, so that \(\mathfrak{M}\) allows in turn to derive low-dimensional reduced models from which suboptimal controllers can be efficiently designed by standard methods of finite-dimensional optimal control theory such as found in e.g. [18, 23, 67, 68, 95]. In that respect, the notion of finite-horizon parameterizing manifold (PM) is introduced in Definition 1 below. Finite-horizon PMs are distinguished from the more classical AIMs in the sense that they provide an approximate parameterization of the small scales by the large ones in the \(L^{2}\)-sense (over [0,T]) rather than a hard ε-approximation valid for each time t∈[0,T]; cf. [46]. In particular, a finite-horizon PM allows one to reduce the (cumulative) unexplained high-mode energy (over [0,T]) from the low modes to be controlled, in a way different from other slaving relationships considered so far; the high-mode energy is reduced in a mean-square sense in the case of finite-horizon PMs.
Obviously, the difficulty still lies in the ability of such an approach to give access to suboptimal controllers of good performance. A priori the task is not easy, and a key feature to ensure that a “good” performance is achieved by such a suboptimal low-dimensional controller, \(u_{R}^{*}\), is the ability of the manifold \(\mathfrak{M}\), derived from the uncontrolled problem, to still achieve a sufficiently “small” parameterization defect (over the horizon [0,T]) of the small scales by the large ones once the controller \(u_{R}^{*}\) is used to drive the PDE (1.1); see (3.5) in Definition 1. This point is rigorously formulated as Theorem 1 in Sect. 4 (see also Corollary 2), which provides—under a second-order sufficient optimality condition—error estimates on how “close” a low-dimensional suboptimal controller \(u_{R}^{\ast}\), designed from a PM-based reduced system, is to the optimal controller \(u^{\ast}\). The error estimates (4.5) and (4.10) show in particular that the closeness of \(u_{R}^{\ast}\) to \(u^{\ast}\) is mainly conditioned on two factors: (i) the parameterization defect of a given PM, associated respectively with the suboptimal controller \(u_{R}^{\ast}\) and the optimal controller \(u^{\ast}\); and (ii) the energy kept in the high modes of the PDE solution driven either by \(u_{R}^{\ast}\) or by \(u^{\ast}\) itself.
The article is organized as follows. The functional framework associated with optimal control problems related to (1.1) is introduced in Sect. 2. The definition of finite-horizon PMs and a practical procedure to get access to such PMs are introduced in Sect. 3. In particular, analytic formulas of leading-order PMs are provided; the latter are subject to a cross non-resonance condition (NR) to be satisfied between the high and the low modes; see Sect. 3.2. Section 4 is devoted, given an arbitrary PM, to the derivation of rigorous a priori error estimates between a low-dimensional PM-based suboptimal controller and the optimal one; see Theorem 1 and Corollary 2. The performance of the resulting PM-based reduction approach is numerically investigated on a Burgers-type equation in the context of globally and locally distributed control laws; see Sects. 5–6 and Sect. 7. As a main byproduct, the numerical results strongly indicate that a PM-based reduced system allows for the design of suboptimal controllers with good performance provided that the aforementioned parameterization defects and the energy contained in the high modes are small enough, in agreement with the theoretical predictions of Theorem 1 and Corollary 2. This is particularly demonstrated in Sect. 6, where the analytic formulas derived in Theorem 2 give access to higher-order PMs with reduced parameterization defects compared to those of the leading-order PMs introduced in Sect. 3. In all cases, the analytic formulas of the PMs used hereafter allow for an efficient design of suboptimal controllers by standard (and simple) application of the Pontryagin maximum principle [18, 19, 67, 88] to the PM-based reduced systems.
Optimal Control of Nonlinear PDEs, and Functional Framework
The functional framework for the optimal control problem considered in this article takes place in Hilbert spaces. Let us first introduce the class of partial differential equations (PDEs) to be controlled. For a given Hilbert space \(\mathcal{H}\), we consider \(\mathcal{H}_{1}\) to be a subspace compactly and densely embedded in \(\mathcal{H}\) such that \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}\) is a sectorial operator [53, Definition 1.3.1] satisfying
To include in our framework PDEs for which the nonlinear terms are responsible for a loss of regularity compared to the ambient space \(\mathcal{H}\), we consider standard interpolated spaces \(\mathcal{H}_{\alpha}\) between \(\mathcal{H}_{1}\) and \(\mathcal{H}\) (with α∈[0,1)),^{Footnote 2} along with perturbations of the linear operator −A given by a one-parameter family, \(\{B_{\lambda}\}_{\lambda\in\mathbb{R}}\), of bounded linear operators from \(\mathcal{H}_{\alpha}\) to \(\mathcal{H}\) that depend continuously on a real parameter λ.
By defining
we are thus left with a one-parameter family of sectorial operators \(\{L_{\lambda}\}_{\lambda\in\mathbb{R}}\), each of them mapping \(\mathcal{H}_{1}\) into \(\mathcal{H}\). Finally, \(F: \mathcal{H}_{\alpha}\rightarrow \mathcal{H}\) will denote a continuous k-linear mapping (k≥2) for some α∈[0,1).^{Footnote 3}
The nonlinear evolution equation to be controlled takes then the following abstract form:
where \(y \in L^{2}(0,T; \mathcal{H})\) denotes the state variable, \(u \in L^{2}(0,T; \mathcal{H})\) denotes the controller; T>0 being a fixed horizon, and
denoting a bounded (and nonzero) linear control operator. In particular, we will be mainly concerned with distributed control problems (control inside the domain) and not with problems involving a control on the boundary, which typically leads to an unbounded control operator; see e.g. [10, Part V, Chaps. 2 and 3] and [43–45]. The parameter λ typically governs the presence of (linearly) unstable modes for (2.1). In the applications considered in Sects. 5–7, it will be chosen so that the linear operator L _{ λ } admits large-scale unstable modes.
We introduce next the cost functional \(J: L^{2}(0,T; \mathcal{H}) \times L^{2}(0,T; \mathcal{H}) \rightarrow\mathbb{R}\) given by
where \(\mathcal{G}: \mathcal{H} \rightarrow\mathbb{R}^{+}\) and \(\mathcal {E}: \mathcal{H} \rightarrow\mathbb{R}^{+}\) are assumed to be continuous, and to satisfy the following conditions:
and
where ∥⋅∥ denotes the \(\mathcal{H}\)norm.
Given such a cost functional,^{Footnote 4} we will consider in this article the following type of optimal control problem:
To simplify the presentation, we will make the following assumptions on L _{ λ } and F throughout this article:
Standing Hypothesis
L _{ λ } is self-adjoint; its eigenvalues (arranged in descending order) are denoted by \(\{\beta_{i}(\lambda)\}_{i \in\mathbb{N}}\), and the eigenvectors \(\{e_{i}(\lambda)\}_{i \in\mathbb{N}}\) of L _{ λ } form a Hilbert basis of \(\mathcal{H}\). The eigenvectors are regular enough that \(e_{i}(\lambda) \in\mathcal{H}_{\alpha}\) for all \(i\in\mathbb{N}\). The nonlinearity \(F: \mathcal{H}_{\alpha}\rightarrow\mathcal{H}\) is a continuous k-linear mapping for some k≥2 and some α∈[0,1). In particular, F(0)=0.
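To fix ideas, a hypothetical instance of this hypothesis, in the spirit of the Burgers-type application of Sects. 5–7 (the domain length l, viscosity ν, coefficient γ and Dirichlet boundary conditions below are illustrative assumptions, not the article's precise setting), can be sketched as follows:

```latex
% Ambient space H = L^2(0,l); A = -\nu \partial_{xx} with Dirichlet
% boundary conditions; B_\lambda = \lambda I, so that
% L_\lambda = -A + B_\lambda is self-adjoint with eigenpairs
\begin{equation*}
  \beta_n(\lambda) \;=\; \lambda - \frac{\nu\, n^2 \pi^2}{l^2},
  \qquad
  e_n(x) \;=\; \sqrt{\frac{2}{l}}\,\sin\!\Bigl(\frac{n\pi x}{l}\Bigr),
  \qquad n \ge 1,
\end{equation*}
% arranged in descending order as required. A bilinear (k = 2) nonlinearity
% such as F(y) = -\gamma\, y\, \partial_x y then maps H_\alpha into H for a
% suitable \alpha \in [0,1), with F(0) = 0.
```

For λ large enough, finitely many leading eigenvalues β _{ n }(λ) become positive, which is the regime of large-scale unstable modes mentioned in Sect. 2.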
We also assume that for any initial datum \(y_{0}\in\mathcal{H}\), any T>0, and any given \(u \in L^{2}(0, T; \mathcal{H})\), the Cauchy problem
has a unique solution \(y(\cdot,y_{0};u) \in C([0,T]; \mathcal{H}) \cap L^{2}(0,T; \mathcal{H}_{\alpha})\), which lives furthermore in the space \(C^{1}((0,T]; \mathcal{H}) \cap C([0,T]; \mathcal{H}_{\alpha}) \cap L^{2}(0,T; \mathcal{H}_{1})\) when \(y_{0} \in\mathcal{H}_{\alpha}\); see e.g. [53, Chap. 3] and [82, Chap. 7] for conditions under which such properties are guaranteed. Sect. 5.1 below deals with such an example.
Finite-Horizon Parameterizing Manifolds: Definition, Pullback Characterization and Analytic Formulas
This section is devoted to the definition of finite-horizon parameterizing manifolds (PMs) for a given PDE of type (2.4), and to a general method to obtain explicit formulas of such finite-horizon PMs in practice through pullback limits associated with certain backward–forward systems built from the uncontrolled Eq. (1.2).
The key idea takes its roots in the notion of (asymptotic) parameterizing manifold introduced in [27],^{Footnote 5} which amounts here to approximating—over some prescribed finite time interval [0,T]—the modes with “high” wave numbers as a pullback limit depending on the time history of (some approximation of) the dynamics of the modes with “low” wave numbers. The cut between what is “low” and what is “high” is organized in an abstract setting as follows; we refer to Sect. 7 for a more concrete specification of such a cut in the case of locally distributed controls. The subspace \(\mathcal{H}^{\mathfrak{c}} \subset\mathcal{H}\) defined by,
spanned by the m leading modes, will be considered as our subspace associated with the low modes. Its topological complements, \(\mathcal{H}^{\mathfrak{s}}\) and \(\mathcal{H}^{\mathfrak{s}}_{\alpha}\), in \(\mathcal{H}\) and \(\mathcal{H}_{\alpha}\) respectively, will be considered as associated with the high modes, leading to the following decomposition
We will use \(P_{\mathfrak{c}}\) and \(P_{\mathfrak{s}}\) to denote the canonical projectors associated with \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak{s}}\), respectively. Here, the eigenbasis is used in the decomposition of the phase space for the sake of the analytic formulations derived hereafter. In practice, the methodology presented below can be (numerically) adapted when the phase space \(\mathcal{H}\) is decomposed using other bases; see also Remark 1(ii).
Finite-Horizon Parameterizing Manifolds
Let t ^{∗}>0 be fixed, \(\mathcal{V}\) be an open set in \(\mathcal{H}_{\alpha}\), and \(\mathcal{U}\) an open set in \(L^{2}(0,t^{\ast}; \mathcal{H})\). For a given PDE of type (2.4), a finite-horizon parameterizing manifold \(\mathfrak{M}\) over the interval [0,t ^{∗}] is defined as the graph of a function h ^{pm} from \(\mathcal{H}^{\mathfrak{c}}\) to \(\mathcal{H}^{\mathfrak{s}}_{\alpha}\), which aims to provide, for any solution y(t,y _{0};u) of (2.4) with initial datum \(y_{0} \in\mathcal{V}\) and control \(u \in\mathcal{U}\), an approximate parameterization of its “high-frequency” part, \(y_{\mathfrak{s}}(t, y_{0};u)=P_{\mathfrak{s}} y(t, y_{0};u)\), in terms of its “low-frequency” part, \(y_{\mathfrak{c}}(t, y_{0};u)=P_{\mathfrak{c}} y(t, y_{0}; u)\), so that the mean-square error, \(\int_{0}^{t^{\ast}} \|y_{\mathfrak{s}}(t, y_{0}; u) - h^{\mathrm{pm}}(y_{\mathfrak{c}}(t, y_{0}; u))\|_{\alpha}^{2} \,\mathrm{d}t\), is strictly smaller than the high-mode energy of \(y_{\mathfrak{s}}\), \(\int_{0}^{t^{\ast}} \|y_{\mathfrak{s}}(t, y_{0}; u)\|_{\alpha}^{2} \,\mathrm{d}t\). Here the frequencies are understood in a spatial sense, i.e. in terms of wave numbers.^{Footnote 6} In statistical terms, a finite-horizon PM function h ^{pm} can thus be thought of as a slaving relationship between the high modes and the low ones such that the fraction of energy^{Footnote 7} of \(y_{\mathfrak{s}}\) unexplained by \(h^{\mathrm{pm}}(y_{\mathfrak{c}})\) (i.e. via this slaving relationship) is less than unity.
In more precise terms, we are left with the following definition:
Definition 1
Let t ^{∗}>0 be fixed, \(\mathcal{V}\) be an open set in \(\mathcal {H}_{\alpha}\), and \(\mathcal{U}\) an open set in \(L^{2}(0,t^{\ast}; \mathcal {H})\). A manifold \(\mathfrak{M}\) of the form
is called a finite-horizon parameterizing manifold (PM) over the time interval [0,t ^{∗}] associated with the PDE (2.4) if the following conditions are satisfied:

(i)
The function \(h^{\mathrm{pm}}: \mathcal{H}^{\mathfrak {c}} \rightarrow \mathcal{H}^{\mathfrak{s}}_{\alpha}\) is continuous.

(ii)
The following inequality holds for any \(y_{0} \in\mathcal {V}\) and any \(u \in\mathcal{U}\):
$$ \begin{aligned} \int_0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t,y_0; u) - h^{\mathrm{pm}}\bigl(y_{\mathfrak{c}}(t, y_0; u)\bigr) \bigr\|_\alpha^2 \, \mathrm{d}t < \int_0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t, y_0; u) \bigr\|_\alpha^2 \, \mathrm{d}t, \end{aligned} $$ (3.4)
where \(y_{\mathfrak{c}}(\cdot, y_{0}; u)\) and \(y_{\mathfrak{s}}(\cdot, y_{0}; u)\) are the projections onto the subspaces \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak{s}}_{\alpha}\), respectively, of the solution y(⋅,y _{0};u) of the PDE (2.4) driven by u and emanating from y _{0}.
For a given initial datum y _{0}, if \(y_{\mathfrak{s}}(\cdot, y_{0}; u)\) is not identically zero, the parameterization defect of \(\mathfrak{M}\) over [0,t ^{∗}], and associated with the control u, is defined as the following ratio:
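Explicitly, combining the two integrals appearing in (3.4), this ratio can be written as follows (the symbol Q below is our shorthand for it):

```latex
\begin{equation*}
  Q\bigl(t^{\ast}, y_0; u\bigr)
  \;=\;
  \frac{\displaystyle \int_0^{t^{\ast}}
        \bigl\| y_{\mathfrak{s}}(t, y_0; u)
        - h^{\mathrm{pm}}\bigl(y_{\mathfrak{c}}(t, y_0; u)\bigr)
        \bigr\|_{\alpha}^{2}\,\mathrm{d}t}
       {\displaystyle \int_0^{t^{\ast}}
        \bigl\| y_{\mathfrak{s}}(t, y_0; u) \bigr\|_{\alpha}^{2}\,\mathrm{d}t},
\end{equation*}
% so that h^{pm} is a finite-horizon PM over [0, t*] precisely when this
% ratio stays strictly below one for the data and controls considered.
```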
Note that in Sects. 5, 6 and 7, we will illustrate numerically that finite-horizon PMs can actually be obtained from the uncontrolled PDE (1.2), with parameterization defects that possibly remain small when a controller u is applied. The procedure to build such PMs in practice from the uncontrolled PDE (1.2) is described in the next section; see also [27, Sect. 4.5] for the construction of PMs over arbitrarily (and sufficiently) large horizons.
Finite-Horizon Parameterizing Manifolds as Pullback Limits of Backward–Forward Systems: The Leading-Order Case
We consider now the important problem of the practical determination of finite-horizon PMs for PDEs of type (2.4). As mentioned above, following [27], the pullback approximation of the high modes in terms of the low ones, via appropriate auxiliary systems associated with the uncontrolled PDE (1.2), will constitute the key ingredient of our solution to this problem; see also [27, Sect. 4.5]. In that respect, we consider first the following backward–forward system associated with the uncontrolled PDE (1.2):
where \(L_{\lambda}^{\mathfrak{c}} := P_{\mathfrak{c}} L_{\lambda}\), \(L_{\lambda}^{\mathfrak{s}} := P_{\mathfrak{s}} L_{\lambda}\), and \(\xi\in\mathcal{H}^{\mathfrak{c}}\). We refer to Sect. 6 for other backward–forward systems used in the construction of higher-order finite-horizon PMs.
In the system above, the initial value of \(y_{\mathfrak{c}}^{(1)}\) is prescribed at s=0, and the initial value of \(y_{\mathfrak{s}}^{(1)}\) at s=−τ. The solution of this system is obtained by a two-step backward–forward integration procedure—Eq. (3.6a) is integrated backward first and Eq. (3.6b) is then integrated forward—made possible by the partial coupling present in (3.6a), (3.6b), where \(y^{(1)}_{\mathfrak{c}}\) forces the evolution equation of \(y^{(1)}_{\mathfrak{s}}\) but not reciprocally. Due to this forcing introduced by \(y_{\mathfrak{c}}^{(1)}\), which emanates (backward) from ξ, the solution process \(y_{\mathfrak{s}}^{(1)}\) naturally depends on ξ. For that reason, we will emphasize this dependence by writing \(y_{\mathfrak{s}}^{(1)}[\xi]\) hereafter.
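As an illustration of this two-step procedure, consider a caricature with a single low mode, a single high mode and a quadratic coupling; all coefficients below are illustrative placeholders, not values taken from the article:

```python
import numpy as np

# Toy sketch of the two-step backward-forward procedure (3.6a)-(3.6b)
# for one low mode y_c and one high mode y_s (all numbers illustrative):
#   dy_c/ds = beta_c * y_c,                  y_c(0)    = xi  (integrated backward)
#   dy_s/ds = beta_s * y_s + gamma * y_c^2,  y_s(-tau) = 0   (integrated forward)
beta_c, beta_s, gamma = 0.5, -2.0, 1.0

def ys_at_zero(xi, tau, n_steps=20000):
    """Integrate y_s forward over [-tau, 0], forced by the backward
    low-mode history y_c(s) = xi * exp(beta_c * s)."""
    ds = tau / n_steps
    s, ys = -tau, 0.0
    for _ in range(n_steps):
        yc = xi * np.exp(beta_c * s)              # backward-obtained history
        ys += ds * (beta_s * ys + gamma * yc**2)  # explicit Euler step
        s += ds
    return ys

xi = 0.3
# Pullback limit (tau -> infinity), available in closed form for this toy model:
h1 = gamma * xi**2 / (2.0 * beta_c - beta_s)
print(ys_at_zero(xi, tau=20.0), h1)  # the two values should nearly agree
```

For this toy model the pullback limit is available in closed form, which makes the convergence of the forward-integrated high-mode value as τ grows easy to check numerically.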
It is clear that the solution to the above system is given by:
The dependence on τ and s in \(y_{\mathfrak{s}}^{(1)}[\xi]\) is made apparent to emphasize the two-time description employed for the nonautonomous dynamics inherent to (3.6b); see e.g. [25, 28]. Adopting the language of nonautonomous dynamical systems [25, 28], we then define \(h^{(1)}_{\lambda}(\xi)\) as the following pullback limit of the \(y_{\mathfrak{s}}^{(1)}\)-component of the solution to the above system, i.e.,
when the latter limit exists. We derive hereafter necessary and sufficient conditions for such a limit to exist.
In that respect, first note that since L _{ λ } is selfadjoint, we have
where ξ _{ i }=〈ξ,e _{ i }〉, \(i \in\mathcal{I} :=\{1, \ldots, m\}\) with \(m=\operatorname{dim}(\mathcal{H}^{\mathfrak{c}})\), and 〈⋅,⋅〉 denotes the inner product in the ambient Hilbert space \(\mathcal{H}\).
Now, for a fixed τ>0, by projecting \(y_{\mathfrak{s}}^{(1)}[\xi](\tau,0)\) against each eigenmode e _{ n } for n>m, we obtain, by using (3.9) and the k-linearity of F,
From this identity, we infer that \(h^{(1)}_{\lambda}\) is well defined if and only if each integral
converges, whenever the corresponding nonlinear interaction \(F(e_{i_{1}}, \ldots, e_{i_{k}})\) as projected against e _{ n }, is nonzero. Namely, \(h^{(1)}_{\lambda}\) exists if and only if the following (weak) nonresonance condition holds:
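For orientation, the convergence requirement on these integrals amounts to the positivity of the exponential rates involved, which can be sketched as follows (a sketch consistent with (3.10); the authoritative formulation is the one the text refers to):

```latex
% (NR): for all n > m and all (i_1, ..., i_k) in I^k such that
% \langle F(e_{i_1}, \dots, e_{i_k}), e_n \rangle \neq 0,
\begin{equation*}
  \sum_{j=1}^{k} \beta_{i_j}(\lambda) \;-\; \beta_n(\lambda) \;>\; 0.
\end{equation*}
```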
see also [26, Chap. 7].
Assuming the above (NR)-condition, it follows then from (3.8) and (3.10) that \(h^{(1)}_{\lambda}\) takes the following form:
In particular, under the (NR)-condition, each e _{ n }-component of \(h^{(1)}_{\lambda}(\xi)\) is—in the ξ-variable—a homogeneous polynomial of order k, the order of the nonlinearity F. For that reason, \(h^{(1)}_{\lambda}\) will be referred to as the leading-order finite-horizon PM when appropriate, that is, when it indeed provides a finite-horizon PM. We clarify in the remainder of this section some (idealistic) conditions under which such a property is met by the manifold function \(h^{(1)}_{\lambda}\) for the PDE (2.4). In practice these conditions can be violated while the manifold function \(h^{(1)}_{\lambda}\) defined by (3.11) still constitutes a finite-horizon PM; see Sects. 5.5 and 7 for numerical illustrations.
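To make this structure concrete in the quadratic case k=2, the following sketch evaluates a manifold function of the form \(h_{n}(\xi)=\sum_{i,j\le m}\langle F(e_{i},e_{j}),e_{n}\rangle\,\xi_{i}\xi_{j}/(\beta_{i}+\beta_{j}-\beta_{n})\); the eigenvalues and interaction coefficients below are illustrative placeholders, not values from the article:

```python
import numpy as np

# Sketch of a leading-order PM for k = 2 (quadratic F):
#   h_n(xi) = sum_{i,j <= m} <F(e_i, e_j), e_n> / (beta_i + beta_j - beta_n)
#             * xi_i * xi_j,   for each high mode n > m.
# All numerical values below are illustrative, not taken from the article.
m, n_high = 2, 3                                 # low-mode cut, high modes kept
beta = np.array([0.8, 0.3, -1.0, -2.5, -4.0])    # eigenvalues in descending order
rng = np.random.default_rng(0)
# Hypothetical interaction coefficients F_int[i, j, n] ~ <F(e_i, e_j), e_{m+1+n}>:
F_int = rng.standard_normal((m, m, n_high))

def h1(xi):
    """Evaluate the leading-order PM at a low-mode vector xi (length m)."""
    out = np.zeros(n_high)
    for n in range(n_high):
        # (NR) denominators beta_i + beta_j - beta_n, all positive here:
        denom = beta[:m, None] + beta[None, :m] - beta[m + n]
        out[n] = np.sum(F_int[:, :, n] / denom * np.outer(xi, xi))
    return out

xi = np.array([0.2, -0.1])
print(h1(xi))            # h^(1) is homogeneous of degree 2 in xi:
print(h1(2 * xi) / 4)    # so h1(2 xi) = 4 * h1(xi)
```

The degree-2 homogeneity in ξ noted above provides a simple sanity check on any such implementation.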
Delineating conditions under which \(h^{(1)}_{\lambda}\) is a finite-horizon PM is still valuable for the theory. This is the purpose of Lemma 1 below, which relies on another key property of \(h^{(1)}_{\lambda}\) as defined by (3.8), a property that can be explained using the language of invariant manifold theory for PDEs [26, 84]. The latter states that the manifold function \(h^{(1)}_{\lambda}\) constitutes—for the uncontrolled PDE (1.2)—the leading-order approximation of some local invariant manifold near the trivial steady state; see [84, Appendix A] and [26, Chap. 7]. Based on this result we formulate the following lemma about the existence of finite-horizon PMs.
Lemma 1
Let λ be fixed and \(\mathcal{H}^{\mathfrak{c}}\) be the subspace spanned by the first m eigenmodes of the linear operator L _{ λ }. Assume that the standing hypothesis of Sect. 2 holds, and that
Assume furthermore that the nonresonance condition (NR) holds so that the pullback limit \(h^{(1)}_{\lambda}\) defined by (3.8) exists.
Assume that \(h^{(1)}_{\lambda}\) is nondegenerate in the sense that there exists C>0 such that
Then, for any fixed t ^{∗}>0, there exist open neighborhoods \(\mathcal{V} \subset\mathcal{H}_{\alpha}\) and \(\mathcal{U} \subset L^{2}(0, t^{\ast}; \mathcal{H})\) containing the origins of the respective spaces, such that \(h^{(1)}_{\lambda}\) is a finite-horizon parameterizing manifold over the time interval [0,t ^{∗}] for the PDE (2.4) driven by any control \(u \in\mathcal{U}\) and with initial data taken from \(\mathcal{V}\).
Proof
Let us first recall some related elements from [26]. Note that the PDE (1.2) fits into the framework of [26, Corollary 7.1].^{Footnote 8} Since the nonlinearity F is assumed to be klinear for some k≥2, according to [26, Corollary 7.1], under the assumption (3.12), there exists a local invariant manifold associated with the PDE (1.2) of the form,
where \(h_{\lambda}^{\mathrm{loc}}: \mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal{H}^{\mathfrak{s}}_{\alpha}\) is the corresponding local manifold function, \(\mathfrak{B} \subset\mathcal{H}^{\mathfrak{c}}\) is an open neighborhood of the origin in \(\mathcal{H}^{\mathfrak{c}}\), and \(h_{\lambda}^{\mathrm{loc}}(0)=0\). Recall that the (NR)-condition ensures that the pullback limit \(h^{(1)}_{\lambda}\) given in (3.8) is well defined. According to [26, Corollary 7.1], the manifold function \(h^{(1)}_{\lambda}\) under its form (3.11) then provides the leading-order approximation of the local invariant manifold function \(h_{\lambda}^{\mathrm{loc}}\), i.e.
It follows from (3.15) that for all ε>0 sufficiently small, there exists a neighborhood \(\mathfrak{B}_{1} \subset \mathfrak{B}\) such that
This together with the nondegeneracy condition on \(h^{(1)}_{\lambda}\) given by (3.13) implies that
By possibly choosing ε smaller, and \(\mathfrak{B}_{1}\) to be a smaller neighborhood of the origin, we obtain
We show now that the condition (3.4) required in Definition 1 holds for solutions of the uncontrolled PDE (1.2) emanating from sufficiently small initial data on the local invariant manifold \(\mathfrak{M}^{\mathrm{loc}}_{\lambda}\).
For this purpose, we note that for any fixed t ^{∗}>0, by continuous dependence of the solutions to (1.2) on the initial data, given any sufficiently small initial datum on the local invariant manifold \(\mathfrak{M}^{\mathrm{loc}}_{\lambda}\), the solution stays on \(\mathfrak{M}^{\mathrm{loc}}_{\lambda}\) over [0,t ^{∗}]. Let \(\mathfrak{B}_{2} \subset\mathfrak{B}_{1}\) be a neighborhood of the origin in \(\mathcal{H}^{\mathfrak{c}}\) so that each initial datum of the form \(y_{0} :=\xi+h_{\lambda}^{\mathrm{loc}}(\xi)\), \(\xi\in\mathfrak{B}_{2}\), satisfies the aforementioned property, and the corresponding solution y(⋅,y _{0};0) satisfies furthermore that
where the latter property can be guaranteed by choosing \(\mathfrak {B}_{2}\) properly thanks again to the continuous dependence of the solution on the initial data.
By the local invariant property of \(\mathfrak{M}^{\mathrm{loc}}_{\lambda}\), we have
Now, for each such chosen initial datum, thanks to (3.16) and (3.19), we get
Besides, by (3.18) we have
We obtain then for all \(y_{0} = \xi+ h_{\lambda}^{\mathrm{loc}}(\xi)\) with \(\xi\in\mathfrak{B}_{2}\) that
The RHS can be made less than one by invoking again the continuity argument and by possibly choosing \(\mathfrak{B}_{2}\) to be an even smaller neighborhood.
By appealing to the continuous dependence of the solution y(⋅,y _{0};u) of the controlled PDE (2.4) on the initial datum y _{0} and the control u, there exist an open set \(\mathcal{V}\) in \(\mathcal{H}_{\alpha}\) containing the set \(\{ y_{0} = \xi+ h_{\lambda}^{\mathrm{loc}}(\xi) \mid\xi\in\mathfrak{B}_{2}\}\), and an open set \(\mathcal{U}\) containing the origin in \(L^{2}(0, t^{\ast}; \mathcal{H})\), such that the solution y(⋅,y _{0};u) satisfies (3.22) with the RHS of (3.22) staying less than one as y _{0} varies in \(\mathcal{V}\) and the control u varies in \(\mathcal{U}\). The proof is complete. □
We conclude this section by some remarks regarding possible ways of constructing more elaborated finitehorizon PMs as well as PMs relying on decompositions of the phase space \(\mathcal{H}\) involving other bases than a standard eigenbasis.
Remark 1

(i)
More elaborate backward–forward systems than (3.6a), (3.6b) can be imagined in order to design finite-horizon PMs of smaller parameterization defect than that offered by \(h^{(1)}_{\lambda}\); see [27, Sect. 4.3]. The idea remains however the same, namely to parameterize the high modes as pullback limits of some approximation of the time history of the dynamics of the low modes. We refer to Sect. 6 for such a parameterization, leading in particular to finite-horizon PMs whose e _{ n }-components are polynomials of higher order than those constituting \(h^{(1)}_{\lambda}\). As we will see in Sect. 6.2, such higher-order PMs can give rise to a better design of suboptimal solutions to a given optimal control problem (including terminal payoff terms) than those accessible from the leading-order finite-horizon PM \(h^{(1)}_{\lambda}\); see also Remark 4 below.

(ii)
Note also that the use of the eigenbasis in the decomposition of the phase space \(\mathcal{H}\) is essential neither for the definition of finite-horizon PMs nor for the construction of PM candidates based on the backward–forward procedure presented in this section or discussed above. In practice, empirical bases such as the POD basis [58] can be adopted to decompose the phase space into a resolved low-mode part and its orthogonal complement (the high-mode part). By doing so, the resulting subspaces \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak {s}}\) are no longer invariant subspaces of the linear operator L _{ λ }, and explicit formulas such as (3.11) must be revised accordingly; this point, important for applications, will be addressed elsewhere.
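To make remark (ii) concrete, an empirical (POD) basis can be extracted from solution snapshots through a singular value decomposition. The sketch below is a minimal illustration on synthetic snapshot data; every numerical value (grid, snapshot structures, noise level) is an arbitrary assumption, not tied to the article's equations:

```python
import numpy as np

# Illustration of remark (ii): extracting an empirical (POD) basis from solution
# snapshots via an SVD. The snapshot matrix below is synthetic (two prescribed
# spatial structures plus small noise); nothing here is tied to the article's PDE.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
snapshots = np.array([np.sin(np.pi*x)*np.cos(0.1*t)
                      + 0.3*np.sin(2*np.pi*x)*np.sin(0.2*t)
                      + 0.01*rng.standard_normal(x.size)
                      for t in range(100)]).T          # shape: (space, time)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :2]                # first two POD modes: the resolved subspace H^c
P_c = Phi @ Phi.T             # orthogonal projector onto span(Phi)
captured = (s[:2]**2).sum() / (s**2).sum()
print(captured > 0.98)        # the 2-mode subspace captures most snapshot energy
```

In this empirical setting, \(P_{\mathfrak{c}}\) becomes the orthogonal projector onto the span of the retained POD modes, consistent with the caveat above that \(\mathcal{H}^{\mathfrak{c}}\) is then no longer an invariant subspace of L _{ λ }.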
Finite-Horizon Parameterizing Manifolds for Suboptimal Control of PDEs
Abstract Results
Given a finite-horizon PM, we present hereafter an abstract formulation of the corresponding reduced equations, from which we will see how suboptimal solutions to the problem (\(\mathcal {P}\)) can be efficiently synthesized once an analytic formulation of such reduced equations is available; see Sects. 5, 6 and 7.
The approach consists of reducing the PDE (2.4) governing the evolution of the state y(t) to an ordinary differential equation (ODE) system aimed at modeling the evolution of the low modes \(P_{\mathfrak{c}} y(t)\), in which the interactions with the high modes \(P_{\mathfrak{s}}y(t)\) are approximated by means of the parameterizing function h associated with a given PM.
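The mechanics of this substitution can be illustrated on a deliberately minimal toy system with one resolved and one unresolved mode; all coefficients below are placeholders (this is a sketch of the reduction strategy, not the article's equations):

```python
# Toy illustration of the reduction strategy described above, on a PLACEHOLDER
# two-mode system (one resolved mode y_c, one unresolved mode y_s); the
# coefficients are illustrative, not the article's:
#   dy_c/dt = beta_c*y_c + b*y_c*y_s,     dy_s/dt = beta_s*y_s + c*y_c^2.
# A leading-order slaving function h(y_c) = alpha*y_c^2 (with alpha solving the
# homological balance alpha*(2*beta_c - beta_s) = c) closes the low-mode
# equation into the one-dimensional reduced model dz/dt = beta_c*z + b*z*h(z).

beta_c, beta_s, b, c = 0.5, -10.0, -1.0, 1.0
alpha = c / (2.0*beta_c - beta_s)      # requires the non-resonance 2*beta_c != beta_s

def h(z):                              # parameterization of the high mode
    return alpha * z * z

def rk4(f, x, dt):                     # one classical Runge-Kutta step
    k1 = f(x)
    k2 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt*ki for xi, ki in zip(x, k3)])
    return [xi + dt/6.0*(p + 2*q + 2*r + s)
            for xi, p, q, r, s in zip(x, k1, k2, k3, k4)]

full    = lambda y: [beta_c*y[0] + b*y[0]*y[1], beta_s*y[1] + c*y[0]*y[0]]
reduced = lambda z: [beta_c*z[0] + b*z[0]*h(z[0])]

y, z, dt = [0.1, 0.0], [0.1], 1e-3
for _ in range(2000):                  # integrate both systems up to t = 2
    y, z = rk4(full, y, dt), rk4(reduced, z, dt)

err = abs(y[0] - z[0])                 # low-mode modeling error stays small
print(err < 0.02)
```

The slaving coefficient α mimics the leading-order, pullback-type formulas discussed in Sect. 3: it balances the linear decay of the high mode against the quadratic forcing by the low one, and requires a non-resonance condition (here 2β_c ≠ β_s).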
For simplicity, we assume that the nonlinearity F is bilinear, denoted by B hereafter, so that
is thus a continuous bilinear mapping.
For the sake of readability, the notations introduced in the previous sections are completed by those summarized in Table 1 above. Note also that throughout this article, B(v) will sometimes be used in place of B(v,v) to simplify the presentation.
Recall that the subspace \(\mathcal{H}^{\mathfrak{c}}\) is spanned by the first m dominant eigenmodes of the linear operator L _{ λ }, for some positive integer m. We denote as before its topological complements in \(\mathcal{H}\) and \(\mathcal{H}_{\alpha}\) by \(\mathcal{H}^{\mathfrak{s}}\) and \(\mathcal{H}^{\mathfrak{s}}_{\alpha}\), respectively. Let \(h:\mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal{H}^{\mathfrak {s}}_{\alpha}\) be a finite-horizon PM function associated with (2.4); see Definition 1. The corresponding PM-based reduced optimal control problem (\(\mathcal {P}_{\mathrm {sub}}\)) below is then built from the following m-dimensional PM-based reduced system:
supplemented by
the system (4.1a) being aimed at modeling the dynamics of the low modes \(P_{\mathfrak{c}}y(t)\) by z(t), and the dynamics of the high modes \(P_{\mathfrak{s} } y(t)\) by h(z(t)). To avoid pathological situations, we will assume throughout this article that \(P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{c}}\) is nonzero.
To simplify the presentation, we will furthermore assume that the PM function h has been chosen so that for any z(0) in \(\mathcal{H}^{\mathfrak{c}}\), the problem (4.1a), (4.1b) admits a well-defined global (\(\mathcal{H}^{\mathfrak{c}}\)-valued) solution that is continuous in time. Such PM functions are identified in the case of a Burgers-type equation in Sects. 5–7; see also Appendix B for more details on the corresponding well-posedness problem for the associated reduced systems.
Note that only the low-mode projection \(P_{\mathfrak{c}} u\) of the controller u is kept in the above reduced model. In the following we denote by \(u_{R}:= P_{\mathfrak{c}} u \in L^{2}(0,T; \mathcal {H}^{\mathfrak{c}})\) this m-dimensional controller. Then, the problem (4.1a), (4.1b) can be rewritten as:
and the cost functional (2.3) is substituted by
The finitehorizon PMbased reduced optimal control problem is then given by:
Throughout this section, we assume that the original problem (\(\mathcal {P}\)) as well as its reduced form (\(\mathcal {P}_{\mathrm {sub}}\)) each admit an optimal control, denoted respectively by u ^{∗} and \(u_{R}^{\ast}\). Theorem 1 below then provides an important a priori estimate for the theory: it gives a measure of how far a suboptimal control \(u^{*}_{R}\) built on a given PM is from the optimal control u ^{∗}. More precisely, under a second-order sufficient optimality condition on the cost functional J, an a priori estimate of \(\|u_{R}^{\ast}-u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}\) is expressed in terms of key quantities associated with a given PM on the one hand, and key quantities associated with the optimal control u ^{∗} on the other; see (4.5) below. These quantities involve the parameterization defects associated with u ^{∗} and \(u_{R}^{*}\); the energy contained in the high modes of the optimal and suboptimal PDE trajectories driven by u ^{∗} and \(u_{R}^{*}\), respectively; and the high-mode energy remainder \(\|P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}\) of u ^{∗}. Our treatment is inspired by [61] but differs from the latter by the use of PMs instead of AIMs; the framework of PMs allows for a natural interpretation of the error estimate (4.5) derived hereafter, which, as we will see in the applications, will help analyze the performances of a PM-based suboptimal controller; see Sects. 5–6 and Sect. 7.
Theorem 1
Assume that the optimal control problem (\(\mathcal {P}\)) admits an optimal controller u ^{∗}, where the cost functional J defined in (2.3) satisfies the assumptions of Sect. 2.
Assume furthermore that there exists σ>0 such that the following (local) second-order sufficient optimality condition holds:
where \(v \in L^{2}(0, T; \mathcal{H})\) is chosen from some neighborhood \(\mathcal{U}\) of u ^{∗}, and y(⋅;v) denotes the solution to (2.4) with v in place of the controller u.
Assume finally that the corresponding PMbased reduced optimal control problem (\(\mathcal {P}_{\mathrm {sub}}\)) admits an optimal controller \(u_{R}^{\ast}\), which is furthermore contained in \(\mathcal{U}\), and that the underlying PM function \(h: \mathcal{H}^{\mathfrak{c}}\rightarrow\mathcal{H}_{\alpha}^{\mathfrak{s}}\) is locally Lipschitz.
Then, the suboptimal controller \(u_{R}^{\ast}\) satisfies the following error estimate
where \(Q(T, y_{0}; u_{R}^{\ast})\) and Q(T,y _{0};u ^{∗}) denote the parameterization defects of the finite-horizon PM function h when the controller in Eq. (2.4) is taken to be respectively \(u_{R}^{\ast}\) and u ^{∗}; \(y_{R,\mathfrak{s}}^{\ast}:= P_{\mathfrak{s}} y_{R}^{\ast}\) and \(y_{\mathfrak{s}}^{\ast}:= P_{\mathfrak{s}} y^{\ast}\) denote the high-mode projections of the suboptimal trajectory \(y_{R}^{\ast}\) and the optimal trajectory y ^{∗} of Eq. (2.4) driven respectively by \(\mathfrak {C}u_{R}^{\ast}\) and \(\mathfrak{C}u^{\ast}\); and \(\mathcal{C}\) denotes a positive constant depending in particular on T and the local Lipschitz constant of h; see (4.38) below.
Besides the suboptimal trajectory \(y_{R}^{\ast}\), another trajectory of theoretical interest is the “lifted” trajectory by the PM function h, of the (lowdimensional) optimal trajectory \(z_{R}^{\ast}:=z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R}^{\ast})\) of the reduced optimal control problem (\(\mathcal {P}_{\mathrm {sub}}\)). This lifted trajectory is defined as
If \(z_{R}^{\ast}\) constitutes a good approximation of the low-mode projection \(P_{\mathfrak{c}}y^{\ast}\) and h has a small parameterization defect,^{Footnote 9} then l _{ R } provides a good approximation of the optimal trajectory y ^{∗} itself.
This intuitive idea is made precise in Corollary 1 below, which provides a general condition under which an error estimate on the distance \(\|y^{\ast}-l_{R}\|^{2}_{L^{2}(0,T; \mathcal{H})}\) between the lifted trajectory l _{ R } and the optimal trajectory y ^{∗} can be deduced from the error estimate (4.5) on the distance between the respective controllers; see (4.8) below. This condition concerns the L ^{2}-response over the interval [0,T] of the PM-based reduced system (4.2a) with respect to perturbations of the control term \(\mathfrak{C} P_{\mathfrak {c}} u^{\ast}\).
Corollary 1
In addition to the assumptions of Theorem 1, assume that the PMbased reduced system (4.2a) satisfies the following sublinear response property:
There exist κ>0 and a neighborhood \(\mathcal{U} \subset L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})\) of \(P_{\mathfrak{c}} u^{\ast}\), such that the following inequality holds for all \(u_{R}\in\mathcal{U}\):
where \(z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R})\) denotes the solution to (4.2a), (4.2b) emanating from \(P_{\mathfrak{c}}y_{0}\) and driven by \(\mathfrak{C} u_{R}\).
Then, the following error estimate holds between the optimal trajectory \(z_{R}^{\ast}:=z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R}^{\ast})\) of the reduced optimal control problem (\(\mathcal {P}_{\mathrm {sub}}\)) and the low-mode projection \(y_{\mathfrak {c}}^{\ast}:= P_{\mathfrak{c}}y^{\ast}\) of the optimal trajectory associated with (\(\mathcal {P}\)):
where \(\mathcal{C}\) is the same positive constant as given by (4.5) in Theorem 1 and \(\widetilde{\mathcal{C}}_{1}\), \(\widetilde{\mathcal{C}}_{2}\) are given by (4.11) in Lemma 2 below.
Moreover, the following error estimate on the distance \(\|y^{\ast}-l_{R}\|^{2}_{L^{2}(0,T; \mathcal{H})}\) between the lifted trajectory l _{ R } and the optimal trajectory y ^{∗} holds
where C _{1} and C _{ α } are some generic constants given by (4.18) and (4.34), respectively; and \(\operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\) is the local Lipschitz constant of the PM function h over some bounded set \(V_{\mathfrak{c}} \subset \mathcal{H}^{\mathfrak{c} }\); see (4.30) and (4.33).
Finally, the last corollary concerns a refinement of the error estimate (4.5), which consists of identifying conditions under which the contribution of the high-mode energy remainder \(\|P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}\) of the optimal control can be removed from the upper bound of \(\|u_{R}^{\ast}-u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}\).
Corollary 2
Assume that the assumptions given in Theorem 1 hold. Assume furthermore that the linear operator \(\mathfrak{C}\) leaves the subspaces \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal {H}^{\mathfrak{s}}\) invariant, i.e.
Then, the error estimate (4.5) reduces to:
Similarly, the corresponding results of Corollary 1 under the additional condition (4.9) amount to dropping the terms involving \(P_{\mathfrak{s}}u^{\ast}\) on the RHS of the estimates (4.7) and (4.8).
Proofs of Theorem 1 and Corollaries 1 and 2
For the proofs of the above results, we will make use of the following preparatory lemma.
Lemma 2
Given any control \(u \in L^{2}(0, T; \mathcal{H})\), we denote by y(t) the corresponding solution to (2.4). Let \(h: \mathcal{H}^{\mathfrak{c}}\rightarrow\mathcal{H}_{\alpha}^{\mathfrak{s}}\) be a PM function assumed to be locally Lipschitz, and z(t) be the solution to the corresponding PMbased reduced system (4.2a) driven by \({P_{\mathfrak{c}}} \mathfrak{C}P_{\mathfrak{c}}u\) and emanating from \(P_{\mathfrak{c}}y(0)\).
Then, there exist \(\widetilde{\mathcal{C}}_{1}, \widetilde{\mathcal {C}}_{2} > 0\) such that
where \(y_{\mathfrak{c}}:=P_{\mathfrak{c}}y\), \(y_{\mathfrak {s}}:=P_{\mathfrak{s}}y\); and \(\widetilde{\mathcal {C}}_{1}\), \(\widetilde{\mathcal{C}}_{2}\) depend in particular on T and the local Lipschitz constant of h; see (4.23) below.
Proof
Let us introduce \(w(t):=y_{\mathfrak{c}}(t) - z(t)\). By projecting (2.4) onto the subspace \(\mathcal{H}^{\mathfrak{c}}\), we obtain
This together with (4.1a), (4.1b) implies that w satisfies the following problem:
recalling that \(u - P_{\mathfrak{c}}u = P_{\mathfrak{s}}u\).
By taking the \(\mathcal{H}\)inner product on both sides of (4.12) with w, we obtain:
Since \(B: \mathcal{H}_{\alpha}\times\mathcal{H}_{\alpha}\rightarrow \mathcal{H}\) is a continuous bilinear mapping, there exists C _{ B }>0 such that for any v _{1} and v _{2} in \(\mathcal{H}_{\alpha}\), it holds that
Thanks to the above bilinear estimate, we thus get
On the other hand, the assumptions made at the end of Sect. 2 and in this section, regarding the well-posedness problem associated respectively with Eq. (2.4) and the reduced system (4.2a), ensure the existence of a bounded set V in \(\mathcal{H}_{\alpha}\) such that y(t) and z(t)+h(z(t)) stay in V for all t∈[0,T]. As a consequence, there exists a constant C(V)>0 such that
Note also that by using the local Lipschitz property of h, we get
where \(V_{\mathfrak{c}}=P_{\mathfrak{c}}V\), and C _{1} in the last inequality denotes the generic positive constant for which
due to the finitedimensional nature of \(\mathcal{H}^{\mathfrak{c}}\).
By using now the estimates (4.16) and (4.17) in (4.15), we get
where we have applied Young’s inequality \(ab \le\frac{a^{2}}{2} +\frac{b^{2}}{2}\) to derive the last inequality.
Since L _{ λ } is assumed to be selfadjoint with dominant eigenvalue β _{1}(λ), we obtain
Note also that
Using (4.19)–(4.21) in (4.13), we obtain
Now, by a standard application of Grönwall’s inequality, we obtain for all t∈[0,T],
taking into account that \(w(0) = y_{\mathfrak{c}}(0) - z(0) = 0\), by assumption. The estimate (4.11) is thus proved. □
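For the reader's convenience, the differential form of Grönwall's inequality invoked in the last step reads as follows (a standard statement, recalled here with the notation of the proof):

```latex
\text{If }\ \frac{\mathrm{d}}{\mathrm{d}t}\|w(t)\|^{2} \le a(t)\,\|w(t)\|^{2} + b(t)
\ \text{ on } [0,T]\ \text{ with } \|w(0)\|^{2}=0, \text{ then }
\|w(t)\|^{2} \le \int_{0}^{t} \exp\!\Big(\int_{s}^{t} a(r)\,\mathrm{d}r\Big)\, b(s)\,\mathrm{d}s,
\qquad t\in[0,T].
```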
We present now the proofs of Theorem 1 and Corollaries 1 and 2.
Proof of Theorem 1
Let us denote by y ^{∗} in \(C^{1}([0,T]; \mathcal{H}) \cap C([0,T]; \mathcal {H}_{\alpha})\) the optimal trajectory to the optimal control problem (\(\mathcal {P}\)), and by \(y^{\ast}_{R}\) (in the same functional space) the trajectory of Eq. (2.4) corresponding to the control u taken to be the optimal (lowdimensional) controller \(u_{R}^{\ast}\) of the reduced optimal control problem (\(\mathcal {P}_{\mathrm {sub}}\)).
Let us also introduce the lifted trajectories
where \(z^{\ast}_{R}\) and z ^{∗} are the solutions to (4.2a), (4.2b) driven respectively by \(P_{\mathfrak{c}}\mathfrak{C} u_{R}^{\ast}(t)\) and \(P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{c}} u^{\ast}(t)\), t∈[0,T].
Thanks to the second-order optimality condition (4.4), the proof boils down to the derivation of a suitable upper bound for \(\varDelta:=J(y^{\ast}_{R}, u_{R}^{\ast}) - J(y^{\ast}, u^{\ast})\), which is organized as follows.
In Step 1, we reduce the control of Δ to the control of \(J(y^{\ast}_{R}, u_{R}^{\ast}) - J(l_{R}, u_{R}^{\ast}) + J(l^{\ast}, u^{\ast}) - J(y^{\ast}, u^{\ast})\) by using the optimality property of the pair \((z^{\ast}_{R}, u_{R} ^{\ast})\) for the reduced problem (\(\mathcal {P}_{\mathrm {sub}}\)). The main interest in doing so lies in the fact that only \(\|y^{\ast}_{R}-l_{R}\|\) and ∥y ^{∗}−l ^{∗}∥ are then determining in the control of Δ; see Step 2. This leads in turn to an upper bound of Δ expressed in terms of key quantities for the design of suboptimal controllers in our PM-based theory.
In that respect, the upper bound of Δ derived in (4.36) involves \(\|y^{\ast}_{R,\mathfrak{s}} - h(y^{\ast}_{R,\mathfrak {c}})\|_{L^{2}(0,T; \mathcal{H})}\) and \(\|y_{\mathfrak{s}}^{\ast}- h(y_{\mathfrak {c}}^{\ast})\|_{L^{2}(0,T; \mathcal {H})}\), the energies (over the interval [0,T]) of the high modes unexplained by the PM function when applied respectively to \(y^{\ast}_{R,\mathfrak{c}}\) and \(y_{\mathfrak{c}}^{\ast}\); it also involves \(\|y^{\ast}_{R,\mathfrak{c}} - z^{\ast}_{R}\|_{L^{2}(0,T; \mathcal{H})}\) and \(\|y_{\mathfrak{c}}^{\ast}- z^{\ast}\|_{L^{2}(0,T; \mathcal{H})}\), the errors associated with the modeling of the \(y^{\ast}_{R,\mathfrak{c}}\)- and \(y_{\mathfrak{c}}^{\ast}\)-dynamics by the reduced system (4.2a).
Thanks to Lemma 2, we can bound the latter two quantities by the former ones together with a term involving the energy contained in the high modes of u ^{∗}. This is the purpose of Step 3. The desired result then follows by rewriting the relevant unexplained energies in terms of the parameterization defects associated with the PM function h and the controllers u ^{∗} and \(u^{\ast}_{R}\).
Step 1. Since (y ^{∗},u ^{∗}) is an optimal pair for (\(\mathcal {P}\)), we get
Since \((z^{\ast}_{R}, u_{R}^{\ast})\) is an optimal pair for the reduced problem (\(\mathcal {P}_{\mathrm {sub}}\)), we obtain
Note also that
and that according to (C2)
since \(\|P_{\mathfrak{c}}u^{\ast}\|\leq\|u^{\ast}\|\).
Consequently,
We obtain then from (4.25) that
Step 2. Let \(V \subset\mathcal{H}_{\alpha}\) be a bounded set such that
Let also
It is clear that \(P_{\mathfrak{c}}y^{\ast}_{R}(t)\), \(P_{\mathfrak {c}}y^{\ast}(t)\), \(z^{\ast}_{R}(t)\) and z ^{∗}(t) are contained in \(V_{\mathfrak{c}}\) for all t∈[0,T].
Recalling (C1), we denote by \(\operatorname{Lip}(\mathcal{G})\vert_{V}\) the Lipschitz constant of \(\mathcal{G}:\mathcal{H} \rightarrow \mathbb {R}^{+}\) restricted to the bounded set V. In (4.28), by applying Lipschitz estimates to the \(\mathcal{G}\)-part of the cost functional J, we obtain
where the last inequality follows from Hölder’s inequality.
Recall that \(l_{R}(t) = z^{\ast}_{R}(t) + h(z^{\ast}_{R}(t))\). Let us also rewrite \(y^{\ast}_{R}(t)\) as \(y^{\ast}_{R,\mathfrak{c}}(t) + y^{\ast}_{R,\mathfrak{s}}(t)\) with \(y^{\ast}_{R,\mathfrak{c}}(t)=P_{\mathfrak{c}}y^{\ast}_{R}(t)\) and \(y^{\ast}_{R,\mathfrak{s}}(t)=P_{\mathfrak{s}}y^{\ast}_{R}(t)\). We obtain then
Let us denote by \(\operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\) the Lipschitz constant of \(h: \mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal{H}^{\mathfrak {s}}_{\alpha}\) restricted to the bounded set \(V_{\mathfrak{c}}\). We get
where we have used the equivalence between the norms on \(\mathcal {H}^{\mathfrak{c} }\); see (4.18).
Since \(\mathcal{H}_{\alpha}\) is continuously embedded into \(\mathcal{H}\), there exists a generic positive constant C _{ α }, such that
We obtain then
This together with (4.32) leads to
Similarly,
Reporting the above two estimates into (4.31), we obtain
Step 3. By using Lemma 2 (see (4.23) above), we obtain:
where we have used \(P_{\mathfrak{s}}u_{R}^{\ast}= 0\) since \(u_{R}^{\ast}\) lives in \(L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})\); and the same lemma leads to
Now, by reporting these estimates in (4.36) and using again the property of continuous embedding (4.34), we obtain:
where
In terms of parameterization defects defined in (3.5), the above estimate (4.37) can be rewritten as:
where \(Q(T, y_{0}; u_{R}^{\ast})\) and Q(T,y _{0};u ^{∗}) are the parameterization defects of the finitehorizon PM function h when the control in (2.4) is taken to be \(u_{R}^{\ast}\) and u ^{∗}, respectively.
The proof is complete. □
Proof of Corollary 1
The estimate given by (4.7) can be derived directly from Theorem 1 and Lemma 2 by noting that
Indeed, the first term on the RHS above can be controlled as follows by Lemma 2:
For the term \(\|z^{\ast}- z_{R}^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}\), according to the sublinear response condition (4.6) and Theorem 1, we obtain
We obtain then (4.7) by combining the above two estimates.
The estimate (4.8) follows from (4.7) by noting that
and that
see (4.35) for more details about the derivation of this last inequality (with \(y_{R,\mathfrak{c}}^{\ast}\) therein replaced by \(y_{\mathfrak{c}}^{\ast}\) here). □
Proof of Corollary 2
Note that if \(\mathfrak{C}\) leaves the two subspaces \(\mathcal {H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak{s}}\) invariant, then in Lemma 2, Eq. (4.12) satisfied by the difference \(w(t):=y_{\mathfrak {c}}(t) - z(t)\) simplifies into the following:
where the term \(P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{s}}u\) vanishes. Consequently, the terms involving \(P_{\mathfrak{s}}u\) in the subsequent estimates drop out, leading to the estimate given in (4.10). □
2D Suboptimal Controller Synthesis Based on the Leading-Order Finite-Horizon PM: Application to a Burgers-Type Equation
We apply in this section and the next the PM-based reduction approach introduced above for the design of suboptimal solutions to an optimal control problem of a Burgers-type equation, in the case of globally distributed control laws. The more challenging case of locally distributed control laws is addressed in Sect. 7.
Cost Functional of Terminal Payoff Type for a Burgers-Type Equation, and Existence of an Optimal Solution
The model considered here takes the following form, posed on the interval (0,l) and driven by a globally distributed control term \(\mathfrak{C} u(x, t)\):
where ν,λ and γ are positive parameters, the final time T>0 is fixed, and conditions on the linear operator \(\mathfrak{C}\) are specified in Sect. 5.2 below.
The equation is supplemented with the Dirichlet boundary condition
and with an appropriate initial condition
The classical Burgers equation (with λ=0 in (5.1)) has widely served as a theoretical laboratory to test various methodologies devoted to the design of optimal/suboptimal controllers of nonlinear distributedparameter systems; see e.g. [7, 30, 73, 76, 102] and references therein. The inclusion of the term λy here allows for the presence of linearly unstable modes, which lead in turn to the existence of nontrivial (and nonlinearly) stable steady states for the uncontrolled version of (5.1) provided that λ is large enough; see [59]. The latter property will be used in the choices of initial data and targets for the associated optimal control problems analyzed hereafter. From a physical perspective, we mention that (5.1) arises in the modeling of flame front propagation [11]. This model will serve us here to demonstrate the effectiveness of the PM approach introduced above in the design of suboptimal solutions to optimal control problems.
In that respect, we consider the following cost functional associated with (5.1)–(5.3),
constituted by a running cost along the controlled trajectory and a terminal payoff term defining a penalty on the final state; here μ _{1} and μ _{2} are some positive constants, Y∈L ^{2}(0,l) is some given target profile, and ∥⋅∥ denotes the L ^{2}(0,l)norm.
Compared to the cost functional (2.3) associated with the optimal control problem (\(\mathcal {P}\)) given in Sect. 2, we have added here a terminal payoff \(\frac{\mu_{2}}{2} \|y(\cdot, T; y_{0}, u) - Y\|^{2}\) to the running cost \(\int_{0}^{T} ( \frac{1}{2}\|y(\cdot,t; y_{0}, u)\|^{2} + \frac{\mu_{1}}{2}\|u(\cdot,t)\|^{2} ) \,\mathrm{d}t\). In Sect. 4, the optimal control problem (\(\mathcal {P}\)), involving only the latter type of running cost, served to identify the determining quantities that control the distance between an optimal control and a suboptimal solution to (\(\mathcal {P}\)) built from a PM-reduced system; see Theorem 1 and Corollary 2. For a cost functional of type (5.4), error estimates similar to (4.5) and (4.10) can be derived by appropriately controlling the contribution of the terminal payoff term to \(J(y^{\ast}_{R}, u_{R}^{\ast}) - J(y^{\ast}, u^{\ast})\) in the estimate (4.31). For instance, the error estimate (4.10) becomes
where \(C_{T}(v, Y) := \frac{\mu_{2}}{2} \|v - Y\|^{2}\), \(y^{*}_{R,T} = y^{*}_{R}(T)\) and \(y^{*}_{T} = y^{*}(T)\). We dealt with the simpler situation of a running-cost-only functional in Sect. 4 in order not to overburden the presentation. Furthermore, as we will see in this section and the forthcoming ones, the error estimates derived in Sect. 4 are sufficient to provide useful (and computable) insights that help analyze the performances of a PM-based suboptimal controller.^{Footnote 10}
The interest of cost functionals such as (5.4) is that they arise naturally when the goal is to drive the state y(⋅;u) of (5.1) as close as possible to a target profile Y at the final time T, while keeping the cost of the control, expressed by \(\frac{\mu_{1}}{2} \int_{0}^{T} \|u(t)\|^{2} \,\mathrm{d}t\), as low as possible. Here, the terminal payoff term gives a measure of the proximity of the final-time PDE profile to the target Y. If one can take μ _{2}=+∞, the problem is exactly controllable; otherwise, the system is only approximately controllable [81].
We turn now to the precise description of the optimal control problem considered in this section and the next. Adopting the notations of Sect. 2, the functional spaces are
the linear operator \(L_{\lambda}: \mathcal{H}_{1} \rightarrow\mathcal{H}\) is given by
and the nonlinearity F is expressed by the bilinear term
with a slight abuse of notation, understanding (5.7) and y∂ _{ x } y in (5.8) within the appropriate weak sense.
The optimal control problem for which we will propose suboptimal solutions takes here the following form:
It can be checked by standard energy estimates that for any given controller \(u \in L^{2}(0,T; \mathcal{H})\), initial datum \(y_{0} \in \mathcal{H}\) and any finite T>0, there exists a unique weak solution^{Footnote 11} y(⋅;y _{0},u) of the problem (5.1)–(5.3) such that \(y(\cdot; y_{0}, u) \in L^{2}(0,T; \mathcal {H}_{1/2})\) and \(y'(\cdot; y_{0}, u) \in L^{2}(0,T; (\mathcal{H}_{1/2})^{-1})\), where \((\mathcal{H}_{1/2})^{-1} = H^{-1}(0,l)\) is the dual of \(\mathcal {H}_{1/2} = H_{0}^{1}(0,l)\); see e.g. [102] for the standard Burgers equation subject to affine control.
Note also that \(y(\cdot; y_{0}, u) \in C([0,T]; \mathcal{H}) \) thanks to the continuous embedding
see e.g. [41, Sect. 5.9, Theorem 3] for more details. This last property implies thus that the cost functional J given by (5.4) is well defined for any pair \((y,u) \in\mathcal{W} \times L^{2}(0,T; \mathcal{H}) \) that satisfies the problem (5.1)–(5.3) in the weak sense (5.10).
Within this functional setting, the existence of an optimal pair to (5.9) in \(\mathcal{W} \times L^{2}(0,T; \mathcal{H})\) can be achieved by application of the direct method of the calculus of variations [39]. The closest application of such a method serving our purpose can be found in the proof of [102, Proposition 4] for the standard Burgers equation, where the author considered a cost functional of tracking type; the arguments are easily adaptable to cost functionals of the form (5.4). We provide below a sketch of such arguments.
First note that given a minimizing sequence \(\{(y^{n}, u^{n})\} \in (\mathcal{W} \times L^{2}(0,T; \mathcal{H}))^{\mathbb{N}}\), since the cost functional J defined by (5.4) is positive (and thus bounded from below) and satisfies
the minimizing sequence lives in a bounded subset of the functional space \(\mathcal{W} \times L^{2}(0,T; \mathcal{H})\). We can then extract a subsequence, say \(\{(y^{n_{j}}, u^{n_{j}})\}\), which converges weakly to some element \((y^{\ast}, u^{\ast}) \in\mathcal{W} \times L^{2}(0,T; \mathcal {H})\); see e.g. [21, Theorem 3.18]. By using the fact that \(\mathcal{W}\) is compactly embedded in L ^{2}(0,T;L ^{∞}(0,l)) [97], standard energy estimates on the nonlinear term allow one to show that (y ^{∗},u ^{∗}) actually satisfies (5.1)–(5.3) in the following weak sense: for any \(\varphi\in L^{2}(0,T; \mathcal{H}_{1/2})\) and any T>0,
with y ^{∗}(0)=y _{0}.
Invoking now the lower semicontinuity of the norm of a Banach space with respect to weak convergence (see e.g. [21, Proposition 3.5(iii)]), we conclude from the functional form of J given in (5.4) that (y ^{∗},u ^{∗}) is an optimal pair for the optimal control problem (5.9). Having ensured the existence of an optimal pair to (5.9), we turn now to the design of low-dimensional suboptimal pairs based on the (leading-order) parameterizing manifold introduced in Sect. 3.2.
Analytic Derivation of the \(h^{(1)}_{\lambda}\)-Based 2D Reduced System for the Design of Suboptimal Controllers
We present in this section the analytic derivation of the \(h^{(1)}_{\lambda}\)-based reduced system on which we will rely to design suboptimal solutions to problem (5.9). In this respect, we consider the particular case where the subspace \(\mathcal{H}^{\mathfrak{c}}\) of the low modes is chosen to be the subspace spanned by the first two eigenmodes of the linear operator L _{ λ } defined in (5.7). Recall that the eigenvalues of L _{ λ } are given by
and the corresponding eigenvectors are
Throughout the numerical applications presented hereafter, we will choose λ to be bigger than the critical value \(\lambda_{c}:= \frac {\nu\pi^{2}}{l^{2}}\) such that L _{ λ } admits one and only one unstable eigenmode. The subspace \(\mathcal{H}^{\mathfrak{c}}\) given by
is thus spanned by one unstable and one stable mode.
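As a quick numerical sanity check of these spectral elements: for an operator of the form \(L_{\lambda}= \nu\partial_{xx} + \lambda I\) with Dirichlet conditions on (0,l), the standard formulas are \(\beta_{n}(\lambda) = \lambda- \nu n^{2}\pi^{2}/l^{2}\) and \(e_{n}(x) = \sqrt{2/l}\,\sin(n\pi x/l)\), consistent with the critical value λ _{ c } just recalled; this is a reconstruction from context, not a quotation of (5.11). The script below verifies these relations with illustrative parameter values (ν and l chosen arbitrarily):

```python
import math

# Sanity check of the spectral elements for L_lam = nu*d^2/dx^2 + lam*I on (0,l)
# with Dirichlet boundary conditions (illustrative parameter values only):
#   beta_n(lam) = lam - nu*(n*pi/l)**2,   e_n(x) = sqrt(2/l)*sin(n*pi*x/l).

nu, l = 0.25, 1.3
lam_c = nu * math.pi**2 / l**2          # critical value: beta_1(lam_c) = 0
lam = 2.0 * lam_c                       # exactly one unstable mode for lam_c < lam < 4*lam_c

def e(n, x):  return math.sqrt(2.0/l) * math.sin(n*math.pi*x/l)
def beta(n):  return lam - nu*(n*math.pi/l)**2

print([round(beta(n), 4) for n in (1, 2, 3)])   # only beta_1 is positive here

# eigenrelation nu*e_n'' + lam*e_n = beta_n*e_n, checked by finite differences
x0, h = 0.3*l, 1e-4
for n in (1, 2, 3):
    lap = (e(n, x0-h) - 2.0*e(n, x0) + e(n, x0+h)) / h**2
    assert abs(nu*lap + lam*e(n, x0) - beta(n)*e(n, x0)) < 1e-5

# orthonormality <e_m, e_n> = delta_mn, via a midpoint-rule quadrature
N = 4000; dx = l/N
def inner(m, n): return dx * sum(e(m, (i+0.5)*dx) * e(n, (i+0.5)*dx) for i in range(N))
assert abs(inner(1, 1) - 1.0) < 1e-4 and abs(inner(1, 2)) < 1e-4
print("spectral checks passed")
```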
For the regimes considered hereafter, it can be checked that the (NR)-condition is satisfied, leading in particular to a well-defined \(h^{(1)}_{\lambda}\). We take as a finite-horizon PM candidate the manifold function \(h^{(1)}_{\lambda}\) provided by the explicit formula (3.11), which we apply to the PDE (5.1). Recall that according to Lemma 1, the manifold function \(h^{(1)}_{\lambda}\) provides a natural theoretical PM candidate. The numerical results reported in Fig. 2 will support that this choice is in fact relevant for the regimes analyzed hereafter for the PDE (5.1), leading in particular to manifold functions with parameterization defect less than unity, as required in Definition 1.
To analyze the performances achieved by the \(h^{(1)}_{\lambda}\)-based reduced system in the design of suboptimal solutions to (5.9), we place ourselves within the conditions of Corollary 2. In particular, we assume that the continuous linear operator \(\mathfrak{C}: \mathcal{H} \rightarrow\mathcal{H}\) leaves the subspaces \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak{s}}\) invariant:
Recall that under such assumptions, the high-mode energy remainder \(\|P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}\) of the (unknown) optimal controller u ^{∗} does not contribute to the estimate of \(\|u_{R}^{\ast}-u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}\), leaving the parameterization defect as a key determining parameter in the control of the latter. In particular, we will see in Sect. 6 that other manifold functions, with a smaller parameterization defect than that associated with \(h^{(1)}_{\lambda}\), lead to the design of better suboptimal solutions to (5.9) than those based on \(h^{(1)}_{\lambda}\).
To be more specific, the operator \(\mathfrak{C}\) when restricted to \(\mathcal{H}^{\mathfrak{c}}\) takes the following form
where the coefficient matrix
is chosen to be nontrivial to avoid pathological situations.
Corresponding to the cost functional (5.4), the cost associated with the \(h^{(1)}_{\lambda}\)-based reduced system takes the following form:
where \(Y \in\mathcal{H}\) is some prescribed target.
Recall that, following (4.2a), (4.2b), the \(h^{(1)}_{\lambda}\)-based reduced system intended to model the dynamics of the low modes \(P_{\mathfrak{c} }y\) takes the following abstract form:
where y _{0} is the initial datum of the original PDE (5.1), and \(u_{R}\in L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})\) is a given control of the reduced system.
We are thus left with the following reduced optimal control problem associated with (5.9):
We turn now to the description of the analytic form of (5.19).
Analytic form of ( 5.19 ). We proceed from the explicit expression of \(h^{(1)}_{\lambda}\) provided by (3.11), which we apply to the Burgers-type equation (5.1). In this respect, the nonlinear interactions between the \(\mathcal{H}^{\mathfrak{c}}\)-modes, as projected onto the \(\mathcal{H}^{\mathfrak{s}}\)-modes and given by
constitute key quantities to determine. In the case of the Burgers-type equation (5.1), they take the following form:
where
In particular, we have
for any n≥5 and i _{1},i _{2}∈{1,2}.
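These vanishing relations are easy to confirm numerically: taking the bilinear term, up to the sign convention of (5.8), as \(B(v,w) = -\gamma v\,\partial_{x} w\) with the sine eigenmodes above, products of the first two modes only excite wavenumbers up to 4, so the projections onto e _{ n } vanish for n≥5. The quadrature check below uses illustrative values of γ and l (an assumption for illustration, not the article's regime):

```python
import math

# Check of the vanishing interactions: with e_n(x) = sqrt(2/l)*sin(n*pi*x/l) and
# B(v, w) = -gamma * v * dw/dx (up to the sign convention of (5.8)), the
# projections <B(e_i, e_j), e_n> vanish for n >= 5 whenever i, j in {1, 2}.
# Illustrative parameter values; integrals by the midpoint rule.

gamma, l, N = 1.7, 2.6, 4000
dx = l / N

def e(n, x):  return math.sqrt(2.0/l) * math.sin(n*math.pi*x/l)
def de(n, x): return math.sqrt(2.0/l) * (n*math.pi/l) * math.cos(n*math.pi*x/l)

def proj(i, j, n):   # <B(e_i, e_j), e_n> = -gamma * integral of e_i * e_j' * e_n
    return -gamma * dx * sum(e(i, x)*de(j, x)*e(n, x)
                             for x in ((k+0.5)*dx for k in range(N)))

# the products of the first two modes only excite modes up to n = 4 ...
assert all(abs(proj(i, j, n)) < 1e-8 for i in (1, 2) for j in (1, 2) for n in (5, 6, 7))
# ... while some low-mode projections are genuinely nonzero, e.g. <B(e_1,e_1), e_2>
print(round(proj(1, 1, 2), 6))
```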
By using the above nonlinear interaction relations in (3.11), we obtain thus the following expression of \(h^{(1)}_{\lambda}\):
where
with the β _{ i }(λ) given by (5.11). Note that this set of eigenvalues obeys the (NR)-condition for any λ-value of interest here (i.e., λ>λ _{ c }). Note also that α _{1}(λ)<0 and α _{2}(λ)<0 for any such λ.
Now, by using (5.22), we can rewrite (5.17) into the following explicit form:
where
and
with z _{ i }:=〈z,e _{ i }〉, u _{ R,i }:=〈u _{ R },e _{ i }〉, and Y _{ i }:=〈Y,e _{ i }〉, i=1,2.
By further using the expression of \(h^{(1)}_{\lambda}\) given in (5.22) in (5.18), we finally obtain, after projection onto \(\mathcal{H}^{\mathfrak{c}}\), the following analytic formulation of (5.18):
where α _{1}(λ) and α _{2}(λ) are defined in (5.23), and \(\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}\).
Note that for any given initial datum (z _{1,0},z _{2,0}) and any T>0, the \(h^{(1)}_{\lambda}\)-based reduced system (5.27) admits a unique solution in \(C([0,T]; \mathbb{R}^{2})\); this follows from simple but specific energy estimates that are provided in Appendix B for the sake of clarity.
Synthesis of Suboptimal Controllers by a Pontryagin-Maximum-Principle Approach
The analytic form (5.27) of the \(h^{(1)}_{\lambda}\)-based reduced system (5.18) allows for the use of standard techniques from finite-dimensional optimal control theory to solve the related reduced optimal control problem (5.19) [18, 23, 67, 68, 95]. We follow below an indirect approach relying on the Pontryagin maximum principle (PMP); see e.g. [18, 20, 67, 68, 88, 95]. Usually, the Pontryagin maximum principle allows one to identify a set of necessary conditions to be satisfied by an optimal solution. However, as we will see, due to the particular form of the cost functionals considered here and the nature of the reduced control system (5.27), these conditions turn out to be sufficient to ensure the existence of a (unique) optimal control for the reduced problem. A PMP approach also provides theoretical insights into the reduced optimal control problem (5.19), through the (costate-based) explicit formula of the (reduced) optimal controller reachable by such an approach; see (5.32) and Lemmas 3 and 4 below.
With this in mind, let us denote the \(h^{(1)}_{\lambda}\)-based reduced vector field involved in (5.27) by
We introduce now the following Hamiltonian associated with the reduced optimal control problem (5.19):
where p:=(p _{1},p _{2})^{tr} is the costate (or adjoint state) associated with the state z=(z _{1},z _{2})^{tr}.
It follows from the Pontryagin maximum principle that for a given pair
to be optimal for the reduced problem (5.19), it must satisfy the following constrained Hamiltonian system:
where ∇_{ x } stands for the gradient operator along the x-direction, \(p_{R}^{\ast}= p_{R,1}^{\ast}e_{1} + p_{R,2}^{\ast}e_{2}\) is the costate associated with \(z_{R}^{\ast}\), and the vector field g=(g _{1},g _{2})^{tr} has the following expression
Note also that
The first-order optimality condition (5.29b) then reduces to
which, written in compact form, gives
where M is the matrix introduced in (5.16).
Thanks to the relation (5.31) between \(u_{R}^{\ast}\) and the costate \(p_{R}^{\ast}\), we get
Finally, the terminal condition (5.29c) leads to
By using the above relations, we can reformulate the set of necessary conditions (5.29a)–(5.29c) as the following boundaryvalue problem (BVP) to be satisfied by \(z_{R}^{\ast}\) and \(p_{R}^{\ast}\):
subject to the boundary conditions
where f _{3} and f _{4} are given by (5.33), and g _{1}(z,p) and g _{2}(z,p) are given by (5.30).
Once this BVP is solved, the corresponding controller \(u_{R}^{\ast}\) determined by (5.32) then constitutes a natural candidate to solve the \(h^{(1)}_{\lambda}\)-based reduced optimal control problem (5.19). For the problem at hand, since the cost functional (5.17) is quadratic in u _{ R } and the dependence on the controller is affine for the system of Eqs. (5.27), it is known that the controller \(u_{R}^{\ast}\) so obtained is actually the unique optimal controller of the reduced problem (5.19); see e.g. [67, Sect. 5.3] and [99]. This observation also holds for the other reduced optimal control problems derived in later sections.
It is worth mentioning that the solution of the above BVP depends on the coefficient matrix M defined in (5.16) associated with the linear operator \(\mathfrak{C}\) through the expressions of f _{3} and f _{4} given in (5.33). However, due to the specific form of f _{3} and f _{4}, different choices of M can lead to the same solution of the BVP. More precisely, the solutions of (5.35)–(5.36) remain unchanged as long as M stays in the group of 2×2 orthogonal matrices. The following lemma summarizes this result.
Lemma 3
The solution of (5.35)–(5.36) is the same for any M∈O(2).
Proof
The result follows trivially by noting that given any M∈O(2), it holds that M ^{tr} M=I. In particular, the following basic identities hold:
By using the above identities in (5.33), we obtain for any M∈O(2) that
which is independent of M. The desired result follows. □
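The basic identities used in the proof can be checked numerically; a minimal sketch (the specific matrices below are illustrative members of O(2), not the coefficient matrix of (5.16)):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation, in SO(2) ⊂ O(2)
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])                       # reflection, in O(2) \ SO(2)

u = np.array([0.3, -1.2])
for M in (np.eye(2), R, F, R @ F):
    assert np.allclose(M.T @ M, np.eye(2))        # M^tr M = I
    assert np.isclose(u @ (M.T @ M) @ u, u @ u)   # |M u|^2 = |u|^2
```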
In connection with the above lemma, let us finally make the following basic observation, which will be of some interest in the numerical experiments.
Lemma 4
For any two bounded linear operators \(\mathfrak{C}_{i}: \mathcal{H} \rightarrow\mathcal{H}\) (i=1,2), if they leave invariant the subspaces \(\mathcal{H}^{\mathfrak{c}}\) and \(\mathcal{H}^{\mathfrak{s}}\), and their actions on the low modes differ only by an orthogonal transformation, i.e.,
then the optimal pairs \((z_{R}^{\ast}, u_{R}^{\ast})\) and \((\overline {z}_{R}^{\ast}, \overline{u}_{R}^{\ast})\), corresponding to the reduced optimal control problem (5.19) with \(\mathfrak{C}\) in (5.18) taken to be \(\mathfrak{C}_{1}\) and \(\mathfrak{C}_{2}\) respectively, satisfy the following relation:
If we assume furthermore that \(P_{\mathfrak{s}} \mathfrak{C}_{1}= P_{\mathfrak{s}} \mathfrak {C}_{2}\), then analogous results hold for the original optimal control problem (5.9).
Remark 2
The above result is not limited to the two-dimensional nature of \(\mathcal{H}^{\mathfrak{c}}\) given by (5.13), and can be generalized to a higher dimension m, as long as \(\mathcal{H}^{\mathfrak{c}}\) is spanned by the first m eigenmodes and M lives in O(m).
Suboptimal Pair \((y_{R}^{\ast},u_{R}^{\ast})\) to (5.9) Based on \(h^{(1)}_{\lambda}\): Numerical Aspects
Having clarified in the previous section the method used to solve the reduced optimal control problem (5.19), we now turn to the practical aspects of the synthesis of an \(h^{(1)}_{\lambda}\)-based suboptimal pair \((y_{R}^{\ast},u_{R}^{\ast})\) for the optimal control problem (5.9) associated with the Burgers-type equation (5.1). This synthesis is organized in two steps. First, the BVP (5.35)–(5.36) is solved to get the \(h^{(1)}_{\lambda}\)-based suboptimal controller \(u_{R}^{\ast}\) according to the costate-based explicit expression (5.32). Second, this suboptimal controller is used in (5.1) to get the suboptimal trajectory \(y_{R}^{\ast}\) driven by \(\mathfrak{C} u_{R}^{\ast}\). We explain below how these steps are numerically carried out.
Recall that the uncontrolled Burgers-type equation admits two locally stable steady states y ^{±} (emerging from a pitchfork bifurcation) when λ is above the critical value \(\lambda_{c} = \frac{\nu\pi^{2}}{l^{2}}\) at which the leading eigenmode e _{1} loses its linear stability [59]. In the experiments below we take y ^{+} as the initial datum y _{0}; the target Y is specified in Sect. 5.5.
Shooting and collocation methods are commonly used to solve two-point boundary value problems [5, 19, 23, 65, 91]. A convenient collocation code is the Matlab built-in solver bvp4c.m,^{Footnote 12} which is used to solve the aforementioned BVP (5.35)–(5.36) as well as other BVPs encountered in later sections.
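For readers working outside Matlab, scipy.integrate.solve_bvp provides a comparable collocation solver. The following sketch solves the state–costate BVP produced by the PMP for a scalar toy problem — minimize \(\int_{0}^{T} \frac{1}{2}(z^{2}+u^{2})\,dt + \frac{\mu_{2}}{2}(z(T)-Y)^{2}\) subject to \(\dot{z} = a z + u\) — with illustrative coefficient values; the problem is far simpler than (5.35)–(5.36), but the structure (coupled state–costate dynamics with a terminal condition on the costate) is the same:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative scalar PMP problem; all values below are made up.
# PMP gives u = -p, the costate equation dp/dt = -dH/dz, and p(T) = mu2*(z(T) - Y).
a, z0, Y, mu2, T = -1.0, 1.0, 0.5, 20.0, 3.0

def rhs(t, w):                       # w = [z, p]
    z, p = w
    return np.vstack((a * z - p,     # dz/dt with u = -p substituted
                      -z - a * p))   # dp/dt = -dH/dz

def bc(w0, wT):                      # boundary residuals at t = 0 and t = T
    return np.array([w0[0] - z0, wT[1] - mu2 * (wT[0] - Y)])

t = np.linspace(0.0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)), tol=1e-8)
u_opt = -sol.sol(t)[1]               # optimal control recovered from the costate
```

As in the costate-based formula (5.32), the control is read off the costate once the BVP is solved; like bvp4c, solve_bvp refines its mesh adaptively.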
The simulation of the Burgers equation (5.1) driven by the 2D suboptimal controller \(u_{R}^{\ast}\) is then performed by means of a semi-implicit Euler scheme in which, at each time step, the nonlinear term yy _{ x }=(y ^{2})_{ x }/2 and the controller \(u_{R}^{\ast}(x,t)\) are treated explicitly, while the linear term is treated implicitly. The Laplacian operator is discretized using a standard second-order central difference approximation. The resulting semi-implicit scheme reads as follows:
where \(y_{j}^{n}\) denotes the discrete approximation of y(jδx,nδt); \(u^{R,n}_{j}\), the discrete approximation of \(u_{R}^{\ast}(j\delta x, n\delta t)\); δx, the mesh size of the spatial discretization; δt, the time step; while Δ _{ d } and ∇_{ d } denote the discrete Laplacian and discrete first-order derivative given respectively by
The Dirichlet boundary condition (5.2) becomes
where N _{ x }+1 is the number of grid points used for the discretization of the spatial domain [0,l].
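A minimal sketch of one step of such a semi-implicit scheme is given below (Python; it assumes the PDE has the form \(y_{t} = \nu y_{xx} + \lambda y - (y^{2}/2)_{x} + u\) as a stand-in for (5.1), with homogeneous Dirichlet boundary conditions, and uses a dense linear solve for clarity):

```python
import numpy as np

def step(y, u, nu, lam, dt, dx):
    """One semi-implicit Euler step on the interior points, assuming the PDE
    reads y_t = nu*y_xx + lam*y - (y^2/2)_x + u (a stand-in for (5.1)) with
    homogeneous Dirichlet BCs. The linear part is implicit; the nonlinearity
    and the control are explicit."""
    n = y.size
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2               # discrete Laplacian
    B = (np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)          # centered d/dx
    M = (1.0 - lam * dt) * np.eye(n) - nu * dt * A            # implicit operator
    rhs = y - 0.5 * dt * (B @ (y ** 2)) + dt * u              # explicit terms
    return np.linalg.solve(M, rhs)
```

In practice the dense solve would be replaced by the fast sine-transform solve exploiting the eigenstructure of the discrete Laplacian.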
The time-dependent (N _{ x }−1)-dimensional vector solution to (5.37) is denoted by Y ^{n}, and is intended to be an approximation of the suboptimal trajectory \(y_{R}^{\ast}\) at time t=nδt. Let us also denote by U ^{n} the spatial discretization of \(u_{R}^{\ast}(x,n \delta t)\) for x∈[δx,l−δx], given by
Then after rearranging the terms, Eq. (5.37) can be rewritten into the following algebraic system:
where I is the (N _{ x }−1)×(N _{ x }−1) identity matrix, A is the tridiagonal matrix associated with the discrete Laplacian Δ _{ d }, B is the matrix associated with the discrete spatial derivative ∇_{ d }, and S(Y ^{n}) denotes the vector whose entries are the square of the corresponding entries of Y ^{n}.
Since the eigenvalues of A are given by \(\frac{2}{(\delta x)^{2}} ( \cos( \frac{j \pi\delta x}{l}) - 1 )\) (j=1,…,N _{ x }−1) and the corresponding eigenvectors are the discretized versions of the first N _{ x }−1 sine modes \(e_{1}, \ldots, e_{N_{x}-1}\) given in (5.12), the eigenvalues of the matrix M:=(1−λδt)I−νδt A on the LHS of (5.38) can be obtained easily, and the corresponding eigenvectors are still the discretized sine functions. At each time step, the algebraic system (5.38) can thus be solved efficiently using the discrete sine transform. To do so, we first compute the discrete sine transform of the RHS, then divide the elements of the transformed vector by the eigenvalues of M, and finally apply the inverse discrete sine transform to obtain Y ^{n+1}; see e.g. [42, Sect. 3.2] for more details. In the numerical results that follow, the discrete sine transform has been handled by using the Matlab built-in function dst.m.
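A sketch of this transform-based solve in Python, using scipy.fft's type-I DST, which plays the role of Matlab's dst.m (the helper name is ours):

```python
import numpy as np
from scipy.fft import dst, idst

def solve_helmholtz_dst(b, nu, lam, dt, dx, l):
    """Solve ((1 - lam*dt)*I - nu*dt*A) y = b, with A the Dirichlet discrete
    Laplacian on the N = Nx - 1 interior points of [0, l], by diagonalizing
    A with the type-I discrete sine transform."""
    N = b.size
    j = np.arange(1, N + 1)
    eig_A = (2.0 / dx**2) * (np.cos(j * np.pi * dx / l) - 1.0)  # eigenvalues of A
    eig_M = (1.0 - lam * dt) - nu * dt * eig_A                  # eigenvalues of M
    # DST-I diagonalizes A: transform the RHS, divide, transform back.
    return idst(dst(b, type=1) / eig_M, type=1)
```

Each solve costs two fast transforms and one elementwise division, instead of a tridiagonal (or dense) linear solve.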
Finally, it is worth mentioning that we have used a uniform time mesh for the integration of the PDE, whereas \(u_{R}^{\ast}\) is defined on a nonuniform mesh due to the adaptive mesh feature of the bvp4c solver. This discrepancy is resolved by using linear interpolation to obtain the values of \(u_{R}^{\ast}\) at the uniform mesh used in the PDE scheme.
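This interpolation step is a one-liner in most environments; e.g., in Python (the mesh and control values below are made up for illustration):

```python
import numpy as np

# Hypothetical nonuniform BVP time mesh and one control component on it:
t_bvp = np.array([0.0, 0.05, 0.2, 0.5, 1.0])
u_bvp = np.array([1.0, 0.9, 0.6, 0.3, 0.0])

t_uniform = np.linspace(0.0, 1.0, 11)           # uniform PDE time mesh
u_uniform = np.interp(t_uniform, t_bvp, u_bvp)  # piecewise-linear interpolation
```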
For the sake of comparison, the synthesis of a suboptimal controller based on a two-mode Galerkin approximation has been carried out following the same steps and the same numerical treatment described above. The corresponding suboptimal controller \(u_{G}^{\ast}\) associated with the 2D Galerkin-based reduced optimal problem (A.5) is also obtained via a PMP approach, which leads to solving a BVP described in Appendix A.1; see (A.7). The same procedure is applied to the higher-dimensional Galerkin-based reduced optimal control problems (A.10) derived in Appendix A.2.
2D Suboptimal Controller Synthesis Based on \(h^{(1)}_{\lambda}\), and Control Performances: Numerical Results
We assess in this section the control performances achieved by the \(h^{(1)}_{\lambda}\)-based suboptimal pair \((y_{R}^{\ast},u_{R}^{\ast})\) for the optimal control problem (5.9), as synthesized according to the procedure described above. These performances are compared with those achieved by a suboptimal solution computed from the 2D Galerkin-based reduced optimal control problem (A.5). In that respect, the cost (5.4) evaluated at the suboptimal pair \((y(\cdot; y_{0}, u_{R}^{\ast}), u_{R}^{\ast})\) will be compared with the cost evaluated at the suboptimal pair \((y(\cdot; y_{0}, u_{G}^{\ast}), u_{G}^{\ast})\), where \(u_{G}^{\ast}\) is the suboptimal controller synthesized from (A.5).
We also set the coefficient μ _{2} weighting the terminal payoff part of the cost functional (5.4) to be sufficiently large, so that comparing the solution profile at the final time T of (5.37)—driven by the corresponding synthesized controller—with the prescribed target profile Y provides a way to visualize the performance of the synthesized suboptimal controller.
The simulations reported below are performed for δt=0.001 and N _{ x }=251 with l=1.3π, so that δx≈0.02. The system parameters are taken to be ν=1, γ=2.5, and λ=3λ _{ c }≈1.78. The parameters μ _{1} and μ _{2} in the cost functional (5.4) are taken to be μ _{1}=1 and μ _{2}=20. For all the simulations conducted in this article, the relative tolerance for bvp4c has been set to 10^{−8} and the BVP mesh size parameter has been set to 1.6E4. The linear operator \(\mathfrak{C}: \mathcal{H} \rightarrow\mathcal{H}\) is taken to be the identity mapping for the sake of simplicity. According to Lemma 4, any operator \(\mathfrak{C}\) such that \(P_{\mathfrak{c}} \mathfrak{C} \in O(2)\) and \(P_{\mathfrak{s}}\mathfrak{C} = \mathrm{Id}_{\mathcal{H}^{\mathfrak{s}}}\) can be reduced to this case.
The numerical results at the final time T=3 are reported in Fig. 1. The left panel of this figure presents, for this final time, the solution profiles of (5.37) as driven by \(u_{R}^{\ast}\) and \(u_{G}^{\ast}\), respectively. For these simulations, the target profile has been chosen to be given by
The right panel of Fig. 1 shows the two components of the synthesized suboptimal controllers \(u_{R}^{\ast}\) and \(u_{G}^{\ast}\).
As can be observed, the (approximate) PDE final state \(y(T; u_{R}^{\ast})\) associated with the controller \(u_{R}^{\ast}\) captures the main qualitative features of the target, while \(y(T; u_{G}^{\ast})\) associated with the controller \(u_{G}^{\ast}\) fails in this task. At a more quantitative level, the relative L ^{2}-errors between the respective driven PDE final states and the target Y are given by
This discrepancy in the control performance, as revealed by the above relative L ^{2}-errors, comes with a noticeable discrepancy between the respective numerical values of the cost, namely
These preliminary results clearly indicate that, given a decomposition \(\mathcal{H}^{\mathfrak{c}}\oplus\mathcal{H}^{\mathfrak{s}}\) of \(\mathcal{H}\), the slaving relationships between the \(\mathcal{H}^{\mathfrak{s}}\)-modes and the \(\mathcal{H}^{\mathfrak{c}}\)-modes, as parameterized by \(h^{(1)}_{\lambda}\), help improve the control performance of the suboptimal solutions over those synthesized from a reduced system involving only the (partial) interactions between the \(\mathcal{H}^{\mathfrak{c}}\)-modes, as modeled by a low-dimensional Galerkin approximation.
To better assess the control performance achieved by the \(h^{(1)}_{\lambda}\)-based suboptimal pair \((y_{R}^{\ast},u_{R}^{\ast})\), we compared it with the performance achieved by a (suboptimal) solution to (5.9) based on a high-dimensional Galerkin approximation of (5.1). In that respect, we checked that the cost associated with a suboptimal pair \((y(\cdot; y_{0}, \widetilde{u}_{G}^{\ast}), \widetilde{u}_{G}^{\ast})\), where \(\widetilde{u}_{G}^{\ast}\) is a controller synthesized by solving the BVP (A.13a)–(A.13c) associated with an m-dimensional Galerkin-based reduced optimal problem (A.10), can serve as a good estimate of the cost associated with the (genuine) optimal solution to the problem (5.9), provided that m is sufficiently large. We indeed observed that increasing the dimension beyond m=16 does not result in a significant change of the cost value (up to six significant digits), and we thus retained the results obtained for m=16 as a reference providing a good approximation of the optimal solution to (5.9). For m=16, the corresponding values of the cost (5.4) and the relative L ^{2}-error for the final-time solution profile are given by
These values, when compared with those obtained for the two-dimensional \(h^{(1)}_{\lambda}\)-based reduced problem (5.19), indicate that the two-dimensional controller \(u_{R}^{\ast}\) already provides a fairly good control performance, at a much lower computational expense.
On the other hand, the quantitative discrepancy observed in the cost values and relative L ^{2}-errors between the results based on (5.19) and those for the original optimal control problem (as indicated by the results based on the high-dimensional Galerkin reduced problem) can be attributed to two main factors according to the theoretical results of Sect. 4; see Corollary 2 and in particular the error estimate (4.10). The first factor is related to the parameterization defect associated with the finite-horizon PM used here, namely \(h^{(1)}_{\lambda}\); the second concerns the energy kept in the high modes of the solution, either driven by the suboptimal controller \(u_{R}^{\ast}\) or by the optimal controller u ^{∗} itself.
For the remaining part of this section, we report detailed numerical results which further emphasize the practical relevance of the aforementioned theoretical results provided by Corollary 2. These numerical results, shown in Figs. 2 and 3, are obtained by varying the final time T in the range [0.1,5] while keeping the other parameters the same as used in Fig. 1.
Panel (a) of Fig. 2 shows the cost values, as T is varied, associated with the suboptimal pairs \((y_{R}^{\ast},u_{R}^{\ast})\) on the one hand, and with the suboptimal pairs \((\widetilde{y}_{G}^{\ast}, \widetilde{u}_{G}^{\ast})\) on the other. As one can observe, up to T=3 the suboptimal controllers \(u_{R}^{\ast}\) synthesized from the \(h^{(1)}_{\lambda}\)-based reduced problem (5.19) give access to suboptimal solutions whose cost values are close to those achieved by the optimal ones.^{Footnote 13} Such good performance, however, starts to noticeably deteriorate as T increases beyond T=3.
The reasons for this deterioration are in fact instructive, as we now explain. If the error estimate (4.10) is meaningful, analyzing its main constitutive elements should help understand what causes this deterioration. In that respect, we computed (i) the corresponding parameterization defects^{Footnote 14} associated with \(h^{(1)}_{\lambda}\) and a given suboptimal controller \(u_{R}^{\ast}\), and (ii) the energy contained in the high modes of the PDE solution, either driven by the suboptimal controller \(u_{R}^{\ast}\) (leading to the suboptimal trajectory \(y_{R}^{\ast}\)) or by the (sub)optimal controller \(\widetilde{u}_{G}^{*}\) (leading to the (sub)optimal trajectory \(\widetilde{y}_{G}^{\ast}\)).
As a first result, the panels (b)–(f) of Fig. 2 show that \(h^{(1)}_{\lambda}\) provides a finite-horizon PM for the whole range of T analyzed here. The parameterization defect of \(h^{(1)}_{\lambda}\) is furthermore robust with respect to variations of T, reaching a (nearly) constant value of about 0.57 for T≥1. At the same time, a substantial growth of the energy contained in the high modes of the suboptimal trajectories \(y_{R}^{\ast}\) (i.e. \(\|P_{\mathfrak{s}} y_{R}^{\ast}(t)\|_{H^{1}(0,l)}\)) is observed from T=3 to T=5, while \(\|P_{\mathfrak{s}} \widetilde{y}_{G}^{\ast}(t)\|_{H^{1}(0,l)}\) does not change significantly; see Fig. 3. A closer look at the numbers reveals that
which clearly shows that the RHS of the error estimate (4.10) experiences a growth of about 15 % when T increases from T=3 to T=5. This growth of the RHS of (4.10) comes with a growth of about 10 % in the low-mode part of the LHS of (4.10), i.e. \(\|P_{\mathfrak{c}}(u_{R}^{\ast}-\widetilde{u}_{G}^{\ast})\|_{L^{2}(0,T;L^{2}(0,l))}^{2}\). This deviation from \(\widetilde{u}_{G}^{\ast}\), observed on its low-mode part, is consistent with the substantial growth observed in the cost value \(J(y_{R}^{\ast}, u_{R}^{\ast})\) as shown in Fig. 2(a).
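For reference, the parameterization defect monitored in these experiments can be estimated from a simulated trajectory as the mean-square ratio, over [0,T], of the unexplained high-mode energy to the total high-mode energy. A minimal discrete-in-time sketch is given below (the exact definition, including the spatial norms used, is the one given earlier in the article; the uniform time mesh is an assumption here):

```python
import numpy as np

def parameterization_defect(y_s, h_of_yc, dt):
    """Mean-square ratio of unexplained to total high-mode energy over [0, T]:
        ||y_s - h(y_c)||^2 / ||y_s||^2  (discrete-in-time version).
    y_s      : (n_t, n_s) high-mode coefficients of a simulated trajectory
    h_of_yc  : (n_t, n_s) PM function evaluated along the low-mode coefficients
    dt       : uniform time step."""
    num = dt * np.sum((y_s - h_of_yc) ** 2)
    den = dt * np.sum(y_s ** 2)
    return num / den
```

A defect of 0 corresponds to exact slaving of the high modes, while the trivial parameterization h=0 yields a defect of 1; values below 1, such as the 0.57 reported above, indicate a genuine finite-horizon PM.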
To summarize, the error estimate (4.10) given in Corollary 2 provides useful (and computable) insights that can guide the design of PM-based suboptimal controllers with good control performance. In particular, it underscores the importance of constructing PMs with small parameterization defects on the one hand, while keeping the energy contained in the high modes small, on the other. While the latter factor can conceivably be alleviated by increasing the dimension of the reduced phase space \(\mathcal{H}^{\mathfrak{c}}\), finite-horizon PMs with smaller parameterization defects than that of \(h^{(1)}_{\lambda}\) can thus be expected to be even more useful for the design of low-dimensional suboptimal controllers with good performances. The next section addresses the construction of such finite-horizon PMs.
Remark 3
We mention that the numerical results reported in Fig. 1 have been compared with those obtained by solving the reduced optimal control problem (5.19) with the BOCOP toolbox [17].^{Footnote 15} For the parameters used, the relative error in the L ^{2}-norm between the controllers obtained numerically by this toolbox and by our calculations has been observed to be within a margin of 0.1 %. For the sake of reproducibility of the results for (5.19), we provide the following numerical values of the components of Y used in (5.39): 〈Y,e _{1}〉=0.2561 and 〈Y,e _{2}〉=−1.9193.
2D Suboptimal Controller Synthesis Based on Higher-Order Finite-Horizon PMs
As illustrated in the previous section in the context of a Burgers-type equation, the finite-horizon PM \(h^{(1)}_{\lambda}\) based on the simple one-layer backward–forward system (3.6a), (3.6b) can be used efficiently to obtain low-dimensional suboptimal controllers with relatively good performances in certain cases. Figures 2 and 3 indicate that these performances can deteriorate when the parameterization defect associated with \(h^{(1)}_{\lambda}\) is not especially small, while the energy contained in the high modes of the solution—either driven by the suboptimal controller \(u_{R}^{\ast}\) or by the optimal controller u ^{∗} itself—gets large, in agreement with the theoretical predictions of Corollary 2. The error estimate (4.10) suggests that other finite-horizon PMs with smaller parameterization defects than \(h^{(1)}_{\lambda}\) should help in the synthesis of suboptimal controllers with better performances. The main purpose of this section is to build such PMs effectively; in particular, they add higher-order terms to \(h^{(1)}_{\lambda}\) (Theorem 2 below) which will turn out to play a crucial role in improving the performances of the \(h^{(1)}_{\lambda}\)-based suboptimal controllers encountered so far; see Remark 4 below.
Higher-Order Finite-Horizon PMs Based on a Two-Layer Backward–Forward System: Analytic Derivation
We follow [27, Chap. 7] and consider the following two-layer backward–forward system associated with the uncontrolled version of (5.1):
where \(L_{\lambda}^{\mathfrak{c}} := P_{\mathfrak{c}} L_{\lambda}\), \(L_{\lambda}^{\mathfrak{s}} := P_{\mathfrak{s}} L_{\lambda}\), and \(\xi\in\mathcal{H}^{ \mathfrak{c}}\).
Similar to the one-layer backward–forward system (3.6a), (3.6b), the above system is integrated using a two-step backward–forward integration procedure in which Eqs. (6.1a), (6.1b) are first integrated backward, and Eq. (6.1c) is then integrated forward. We will emphasize the dependence on ξ of the high-mode component \(y_{\mathfrak{s}}^{(2)}\) of this system by writing \(y_{\mathfrak{s}}^{(2)}[\xi]\).
Theorem 2 below identifies nonresonance conditions (NR2) under which the pullback limit of \(y_{\mathfrak{s}}^{(2)}[\xi]\) exists as τ→∞. In particular, it provides an analytical expression for this pullback limit. As will be supported by the numerical results of Sect. 6.2, this pullback limit turns out to give access to finite-horizon PMs for a broad class of targets.
Theorem 2
Consider the two-layer backward–forward system (6.1a)–(6.1c) associated with the uncontrolled Burgers-type equation (5.1), i.e. with \(\mathfrak{C} = 0\). Let \(\mathcal{H}^{\mathfrak{c}}\) be the subspace spanned by the first two eigenmodes e _{1} and e _{2} of the corresponding linear operator L _{ λ } defined in (5.7). Assume that the eigenvalues of L _{ λ } satisfy the following nonresonance conditions:
Then the pullback limit of the solution \(y_{\mathfrak{s}}^{(2)}[\xi]\) to (6.1a)–(6.1c) exists and is given by:
Under the above conditions, \(h^{(2)}_{\lambda}\) has furthermore the following analytic expression:
where
with
and
Remark 4
Note that the analytic expression of \(h^{(2)}_{\lambda}\) given in (6.3) can be written as the sum of \(h^{(1)}_{\lambda}\) given by (5.22),^{Footnote 16} associated with the one-layer backward–forward system (3.6a), (3.6b), and some higher-order terms. It is worth noting that the extra five terms contained in the expression of \(h^{(2)}_{\lambda}\) result from the nonlinear self-interactions between the low modes, as brought by \(P_{\mathfrak{c}} B ( y^{(1)}_{\mathfrak{c}}, y^{(1)}_{\mathfrak{c}} )\) in (6.1b). The numerical results of Sect. 6.2 below support the fact that these extra terms can be interpreted as corrective terms to \(h^{(1)}_{\lambda}\). Indeed, as we will illustrate for the optimal control problem (5.9), these terms can help design low-dimensional suboptimal controllers with better performances than those built from the \(h^{(1)}_{\lambda}\)-based reduced system; the \(h^{(2)}_{\lambda}\)-based reduced system brings extra higher-order terms corresponding to “low-high” and “high-high” interactions absent from the \(h^{(1)}_{\lambda}\)-based reduced system. This last point can be observed by comparing (5.27) with (6.17) below, where both reduced systems are derived from the abstract formulation (4.2a), (4.2b) by setting the PM function h to be \(h^{(1)}_{\lambda}\) or \(h^{(2)}_{\lambda}\), respectively.
Proof
A simple integration of (6.1a)–(6.1c) shows that for any τ>0 and \(\xi\in\mathcal{H}^{\mathfrak{c}}\) the solution to the backward–forward system (6.1a)–(6.1c) is given by:
for all s∈[−τ,0].
Due to (6.7c), the pullback limit of \(y_{\mathfrak{s}}^{(2)}[\xi](\tau,0)\) takes the form given in (6.2) provided that the integral concerned exists. We show below that the (NR2)-condition is necessary and sufficient for such an integral to exist. In that respect, the fact that \(\mathcal{H}^{\mathfrak{c}}\) is spanned by the first two eigenmodes facilitates some of the manipulations described below.
First, note that the projections of \(y^{(1)}_{\mathfrak{c}}\) onto e _{1} and e _{2} give, respectively,
where ξ _{ i }:=〈ξ,e _{ i }〉, i=1,2.
To determine the projections of \(y^{(2)}_{\mathfrak{c}}\) against e _{1} and e _{2}, we recall that the nonlinear interaction laws (5.20) give here
which leads to
The projections of \(y^{(2)}_{\mathfrak{c}}\) against e _{1} and e _{2} are then given by
Relying again on the nonlinear interaction laws (5.20), we have
which leads to
By using the expressions of \(y^{(2)}_{1}\) and \(y^{(2)}_{2}\) given in (6.10) (and using also (6.8)), it can be checked that the limit \(h^{(2),3}_{\lambda}:= \lim_{\tau\rightarrow+\infty} y^{(2)}_{3}[\xi]{(\tau, 0)}\) exists if and only if the first four inequalities in the (NR2)-condition hold, and \(h^{(2),3}_{\lambda}\) is then given by (6.4a). Similarly, the limit \(h^{(2),4}_{\lambda}:= \lim_{\tau\rightarrow+\infty} y^{(2)}_{4}[\xi]{(\tau, 0)}\) exists if and only if the last three inequalities in the (NR2)-condition hold, and \(h^{(2),4}_{\lambda}\) is then given by (6.4b). The theorem is proved. □
Controller Synthesis Based on \(h^{(2)}_{\lambda}\), and Control Performances: Analytic Derivation and Numerical Results
Analytic derivation of the \(h^{(2)}_{\lambda}\)-based reduced optimal control problem. Following (4.2a), (4.2b), the \(h^{(2)}_{\lambda}\)-based reduced system intended to model the dynamics of the low modes \(P_{\mathfrak{c}}y\) of (5.1) takes the following abstract form:
where y _{0} is the initial datum for the original PDE (5.1).
Analogous to (5.17), the cost functional associated with the reduced system (6.13) is given by
where \(C_{T}(z(T), P_{\mathfrak{c}} Y) := \frac{\mu_{2}}{2} \sum_{i=1}^{m} z_{i}(T)  Y_{i}^{2}\) is the terminal payoff term as defined in (5.26), with Y being some prescribed target for (5.1).
By using the analytic expression of \(h^{(2)}_{\lambda}\) given in (6.3)–(6.5), the cost functional (6.14) can be written into the following explicit form:
where
with z _{ i }:=〈z,e _{ i }〉 and u _{ R,i }:=〈u _{ R },e _{ i }〉, i=1,2.
Now, by using again the analytic expression
in (6.13) and projecting this equation against e _{1} and e _{2} respectively, we obtain, after simplification using the nonlinear interaction laws (5.20), the following analytic formulation of the \(h^{(2)}_{\lambda}\)-based reduced system (6.13):
with \(h^{(2),3}_{\lambda}(z_{1},z_{2})\) and \(h^{(2),4}_{\lambda}(z_{1},z_{2})\) given by (6.4a)–(6.4b)–(6.5).^{Footnote 17}
The resulting reduced optimal control problem based on \(h^{(2)}_{\lambda}\) is thus:
By following arguments similar to those provided in Sect. 5.2 and applying the Pontryagin maximum principle, we can conclude that for a given pair
to be optimal for the \(h^{(2)}_{\lambda}\)-based reduced optimal problem (6.18), it is necessary and sufficient^{Footnote 18} that it satisfy the following set of conditions:
where \((\widehat{p}_{R,1}^{\ast}, \widehat{p}_{R,2}^{\ast})\) is the costate associated with \((\widehat{z}_{R,1}^{\ast}, \widehat{z}_{R,2}^{\ast})\), both determined by solving the following BVP:
subject to the boundary condition
where
The vector field (g _{1},g _{2}) given above has been determined by evaluating \(\nabla_{z} \widehat{H}(z,p,u)\), with the Hamiltonian \(\widehat{H}\) formed by application of the PMP to (6.18):
where \((\widehat{f_{1}}, \widehat{f_{2}})\) denotes the vector field constituting the RHS of the zequations in (6.20).
Numerical results. The above BVP is again solved using bvp4c, and the resulting two-dimensional suboptimal controller \(\widehat{u}^{\ast}_{R}\) is obtained according to (6.19). As before, the corresponding suboptimal trajectory \(\widehat{y}_{R}^{\ast}\) of the PDE (5.1) is computed by driving (5.1) with \(\widehat{u}^{\ast}_{R}\), following the numerical procedure described in Sect. 5.4.
The corresponding control performance is shown in Fig. 4, where the performances of the suboptimal controllers \(u^{\ast}_{R}\) and \(u^{\ast}_{G}\), associated respectively with the two-dimensional \(h^{(1)}_{\lambda}\)-based reduced optimal control problem (5.19) and the two-dimensional Galerkin-based one (A.5), are also reported for comparison. In panel (a) of Fig. 4, we present the PDE final-time solution profiles \(y(T,\widehat{u}_{R}^{\ast})\), \(y(T,u_{R}^{\ast})\), and \(y(T,u_{G}^{\ast})\), driven respectively by \(\widehat{u}_{R}^{\ast}\), \(u_{R}^{\ast}\) and \(u_{G}^{\ast}\), for T=3. For these simulations, the target profile Y has been chosen to be again spanned by the first two eigenfunctions, but given this time by
the initial profile is taken to be the positive steady state y ^{+} of the uncontrolled PDE, as used in Sect. 5.5; see panel (b). The two components of the synthesized suboptimal controllers are shown in panel (c), and the parameterization defects associated with \(h^{(1)}_{\lambda}\) and \(h^{(2)}_{\lambda}\), respectively, are shown in panel (d). The corresponding cost values and final-time relative L ^{2}-errors are given in Table 2 above.
The results of Fig. 4(a) and Table 2 illustrate that, for a given reduced phase space—here the two-dimensional vector space \(\mathcal{H}^{\mathfrak{c}}\)—the slaving relationship of the high modes (not in \(\mathcal{H}^{\mathfrak{c}}\)) to the low modes (in \(\mathcal{H}^{\mathfrak{c}}\)) as parameterized by \(h^{(2)}_{\lambda}\) can turn out to be superior to the one proposed by \(h^{(1)}_{\lambda}\) for the synthesis of suboptimal solutions to (5.9), and can be clearly advantageous compared to suboptimal solutions involving no slaving relationship whatsoever, such as those built from the 2D Galerkin-based reduced optimal control problem (A.5). Again, Corollary 2 and the error estimate (4.10) provide theoretical insights that help understand why improving the quality of such a slaving relationship helps improve the performance of a suboptimal controller. For instance, the improvement in getting closer to the prescribed target Y (Fig. 4(a))—accompanied by a noticeable reduction of the cost values (Table 2)—occurs when the PDE (5.1) is driven by the \(h^{(2)}_{\lambda}\)-based suboptimal controller \(\widehat{u}^{\ast}_{R}\) instead of the \(h^{(1)}_{\lambda}\)-based one \(u_{R}^{\ast}\), and goes with a parameterization defect that is (overall) smaller for \(h^{(2)}_{\lambda}\) than for \(h^{(1)}_{\lambda}\) (Fig. 4(d)). Interestingly, this reduction of the parameterization defect comes from the higher-order terms contained in \(h^{(2)}_{\lambda}\) (see Theorem 2), which can thus reasonably be interpreted as correction terms to the parameterization proposed by \(h^{(1)}_{\lambda}\); see also Remark 4.
However, such a statement has to be nuanced: an \(h^{(2)}_{\lambda }\)-based reduced system does not always lead to significant advantages in the design of suboptimal solutions such as those illustrated in Fig. 4. The caveat lies in the fact that the parameterization defect associated with \(h^{(2)}_{\lambda}\) also depends on the target profile. For instance, with the sign-changing target (5.39) used in the experiments of Sect. 5.5, the suboptimal solutions designed from (6.18) achieve performances merely comparable to those designed from (5.19).
These remarks motivate further analysis to determine whether the success achieved for the target prescribed in (6.22) is pathological or robust, to some extent. For that purpose, we considered deformations of the target (6.22) taken to be of the form
with σ _{1}∈[0.2,0.7] and σ _{2}∈[0.01,0.5], and we solved the corresponding \(h^{(2)}_{\lambda}\)-based (resp. \(h^{(1)}_{\lambda}\)-based) reduced optimal problem to obtain the corresponding \(h^{(2)}_{\lambda}\)-based (resp. \(h^{(1)}_{\lambda }\)-based) suboptimal solutions. As a benchmark,^{Footnote 19} these solutions are compared with those obtained from the m-dimensional Galerkin-based reduced optimal problem (A.10) with m=16. The results are reported in Fig. 5 and in Fig. 6 above. Figure 5 shows, for each (σ _{1},σ _{2}), the relative L ^{2}-errors of the final-time solution profiles compared with the target \(Y_{\sigma_{1},\sigma_{2}}\); Fig. 6 shows the cost values associated with the suboptimal controllers \(u_{R}^{\ast}\) and \(\widehat{u}_{R}^{\ast}\), on the one hand, and with \(\widetilde{u}_{G}^{\ast}\) obtained from the m-dimensional Galerkin-based reduced problem, on the other.
Figures 5 and 6 show that the good performance achieved by the \(h^{(2)}_{\lambda}\)-based suboptimal controller of Fig. 4(a) is not isolated, and can even be further improved within a broad region of the (σ _{1},σ _{2})-parameter space when \(Y_{\sigma_{1},\sigma_{2}}\) is changed accordingly. Compared with the poor performances observed in Fig. 5 (top panel) for the \(h^{(1)}_{\lambda}\)-based suboptimal controllers, these \(h^{(2)}_{\lambda}\)-based results provide strong evidence that the higher-order terms brought by \(h^{(2)}_{\lambda}\) with respect to \(h^{(1)}_{\lambda}\) act as corrective terms in the high-mode parameterization proposed by \(h^{(1)}_{\lambda}\).
These numerical results, together with the theoretical results of Corollary 2, suggest that in order to design reduced problems whose solutions would provide even better control performance than those reported here, one can try to construct finite-horizon PMs with smaller parameterization defects than those achieved by \(h^{(2)}_{\lambda}\). In that respect, the discussions and results of [27, Sects. 4.3–4.5], presented in the context of asymptotic PMs, can be valuable. In connection with the discussion concerning Figs. 2 and 3 in Sect. 5.5, the search for better slaving relationships between the \(\mathcal{H}^{\mathfrak{s}}\)-modes and the \(\mathcal {H}^{\mathfrak{c}}\)-modes can be combined with the use of higher-dimensional reduced phase spaces \(\mathcal{H}^{\mathfrak{c}}\), so that the energy kept in the high modes gets reduced. The next section shows that a moderate increase of \(\operatorname{dim}(\mathcal{H}^{\mathfrak{c}})\) can actually already help improve the performances based on \(h^{(1)}_{\lambda}\), in the case of locally distributed control laws.
Synthesis of m-Dimensional Locally Distributed Suboptimal Controllers
In this last section, we consider the more challenging case of optimal locally distributed control problems associated with the Burgers-type equation (5.1). This situation corresponds to the case where the linear operator \(\mathfrak{C}\) is associated with the characteristic function χ _{ Ω } of a subdomain Ω⊂[0,l], such that for any \(u \in\mathcal{H} = L^{2}(0,l)\), the action of \(\mathfrak{C}\) on u is defined by:
As in the fully distributed case treated in the previous sections, we will consider, for some prescribed (time-independent) target Y, cost functionals of terminal-payoff type such as:
but also cost functionals of tracking type:
where in both cases, μ _{1} and μ _{2} are some positive parameters.
The optimal control problem thus takes one of the following forms:
or
The goal of this last section is to show that the PM approach introduced above provides an efficient way to design suboptimal solutions for such optimal control problems associated with locally distributed control laws. For simplicity, we will focus on the performance achieved by the \(h^{(1)}_{\lambda}\)-based reduced system for the design of such suboptimal solutions; that is, the following m-dimensional reduced system
will be at the core of our synthesis of suboptimal controllers.
It is worthwhile to note that, in general, the choice of the reduced dimension m typically depends on the system parameters such as the viscosity ν, the domain size l and the control parameter λ; m is chosen so that the resolved modes explain a sufficiently large portion of the energy contained in the PDE solution. For the particular case of locally distributed control laws, the size and the location of the subdomain Ω also play a determining role in choosing “a good” m. For instance, the smaller the subdomain Ω, the larger the dimension m needs to be in order to obtain a reduced system useful for the design of good suboptimal controllers. Intuitively, this is related to the fact that further eigenmodes are needed to obtain a reasonably good approximation of the characteristic function χ _{ Ω } when the size of the support Ω is reduced. This intuition will be numerically confirmed in Sect. 7.3 below, where a reduction of 40 percent of the domain compared to the globally distributed case analyzed in Sect. 5.5 led to a choice of m=4 for the design of suboptimal controllers with performances comparable to those achieved in Sect. 5.5 from two-dimensional reduced systems.
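This intuition can be checked with a short computation (an illustration, not taken from the article). Assuming the sine eigenbasis \(e_{k}(x) = \sqrt{2/l}\,\sin(k\pi x/l)\) on (0,l) (with l normalized to 1 here), the Fourier coefficients of χ _{ Ω } are available in closed form, and the relative L ^{2}-error of an m-mode truncation can be compared for a wide and a narrow support:

```python
import numpy as np

def rel_l2_error(a, b, m):
    """Relative L^2 error of the m-mode sine expansion of chi_[a, b] on (0, 1).

    Uses the closed-form coefficients <chi_[a, b], e_k> for the basis
    e_k(x) = sqrt(2) sin(k pi x), together with ||chi_[a, b]||^2 = b - a.
    """
    k = np.arange(1, m + 1)
    coeffs = np.sqrt(2.0) / (k * np.pi) * (np.cos(k * np.pi * a) - np.cos(k * np.pi * b))
    captured = np.sum(coeffs ** 2) / (b - a)   # fraction of energy resolved
    return np.sqrt(max(1.0 - captured, 0.0))

# For a fixed number m of resolved modes, shrinking the support of Omega
# degrades the approximation of chi_Omega:
wide = rel_l2_error(0.2, 0.8, m=4)    # Omega = [0.2 l, 0.8 l], as in Sect. 7.3
narrow = rel_l2_error(0.4, 0.6, m=4)  # a smaller support
```

With m=4 the relative truncation error grows markedly when the support shrinks from 60 % to 20 % of the domain, consistent with the statement that a smaller Ω calls for a larger m.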
We now describe the \(h^{(1)}_{\lambda}\)-based reduced optimal control problem that will serve to design the corresponding suboptimal controllers. First, note that the cost functional associated with (7.6) takes one of the following forms
or
depending on whether (7.2) or (7.3) is considered.
The reduced optimal control problem for (7.4) then reads as follows:
Accordingly, the reduced optimal control problem for (7.5) reads:
Analytic Derivation of m-Dimensional \(h_{\lambda}^{(1)}\)-Based Reduced Systems for the Design of Suboptimal Controllers
In this subsection, we derive explicit forms of the reduced suboptimal control problems (7.9) and (7.10). Details are presented for (7.9), while the analogous derivation for (7.10) is left to the interested reader. For this purpose, let us first examine the existence of the finite-horizon PM candidate \(h^{(1)}_{\lambda}\). We know from Sect. 3.2 that the pullback limit \(h^{(1)}_{\lambda}\) associated with the backward–forward system (3.6a), (3.6b) exists when the (NR)-condition holds. For the Burgers equation considered here, due to the nonlinear interaction relations (5.20), the (NR)-condition reads as follows:
By using the analytic expression of the eigenvalues as given in (5.11), we get
which is positive for all values of λ of interest here (\(\lambda> {\lambda_{c} := \frac{\nu\pi^{2}}{l^{2}}}\)). Consequently, the pullback limit \(h^{(1)}_{\lambda}\) always exists for such given λ, and its analytic form provided in (3.11) reads as follows for the problem considered here:
where
From (7.14), it is clear that \(h^{(1),n}_{\lambda}= 0\) for all n>2m. Note also that it follows from the nonlinear interaction laws (5.20) that
where \(\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}\). By using this identity, we can rewrite \(h^{(1),n}_{\lambda}\) for n=m+1,…,2m as follows:
where we adopt the convention that the sum is zero when the lower bound of the summation index is greater than its upper bound.
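The positivity of the quantity in (7.12), noted above for λ>λ _{ c }, can also be checked numerically. The sketch below is an illustration under the assumption (consistent with the threshold \(\lambda_{c} = \nu\pi^{2}/l^{2}\) recalled above) that the linearized eigenvalues take the form \(\beta_{n}(\lambda) = \lambda - \nu (n\pi/l)^{2}\), and that the interaction laws couple modes i,j≤m with n=i+j:

```python
import numpy as np

nu, l = 0.25, 1.3 * np.pi          # parameter values used in Sect. 7.3
lam_c = nu * np.pi ** 2 / l ** 2   # critical value lambda_c = nu pi^2 / l^2
lam = 7.0 * lam_c                  # a lambda > lambda_c, as in the experiments

def beta(n):
    # assumed eigenvalue formula for the linearized operator
    return lam - nu * (n * np.pi / l) ** 2

# Cross non-resonance combinations beta_i + beta_j - beta_{i+j} for the
# mode triads (i, j, i + j) coupled by the quadratic interaction laws:
m = 4
combos = np.array([beta(i) + beta(j) - beta(i + j)
                   for i in range(1, m + 1) for j in range(1, m + 1)])
# Each combination collapses to lam + 2 nu i j (pi / l)^2, positive for lam > 0.
```

Algebraically, \(\beta_{i} + \beta_{j} - \beta_{i+j} = \lambda + 2\nu ij(\pi/l)^{2}\), which is indeed positive for every λ of interest, in line with the existence of the pullback limit asserted above.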
Let us denote by M the matrix whose components are given by
Let us also introduce
By rewriting the reduced system (7.6) as
and by using the expansions
along with the nonlinear interaction relations (5.20), the above system of equations becomes:
where ⌊x⌋ denotes the largest integer less than or equal to x; \(h_{\lambda}^{(1),n}\) is provided by (7.15); and the coefficients ω _{ i,j } are given by
In the above system, the terms gathered in (a) correspond to the self-interactions between the low modes, 〈B(z,z),e _{ i }〉; the terms gathered in (b) correspond to the cross-interactions between the low and (unresolved) high modes as parameterized by \(h_{\lambda}^{(1)}\), \(\langle B(z, h^{(1)}_{\lambda}(z)), e_{i} \rangle+ \langle B(h^{(1)}_{\lambda}(z),z), e_{i} \rangle\); and the terms gathered in (c) correspond to the self-interactions between the high modes (still as parameterized by \(h_{\lambda}^{(1)}\)) as projected onto \(\mathcal{H}^{\mathfrak{c}}\), \(\langle B(h^{(1)}_{\lambda}(z), h^{(1)}_{\lambda}(z)), e_{i} \rangle\).
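The structure of such interaction terms can be probed by direct quadrature. Since the interaction relations (5.20) are not reproduced in this excerpt, the sketch below simply assumes a Burgers-type bilinear term \(B(y,z) = -y\,\partial_{x} z\) and the sine basis on (0,π), and checks that the low-mode self-interaction of e _{1} transfers energy to e _{2} while its projection onto e _{1} vanishes:

```python
import numpy as np

# Sine eigenbasis on (0, l), with l = pi for simplicity.
l = np.pi
x = np.linspace(0.0, l, 4001)
dx = x[1] - x[0]

def e(k):
    return np.sqrt(2.0 / l) * np.sin(k * np.pi * x / l)

def B(y, z):
    # Burgers-type bilinear term, here assumed of the form B(y, z) = -y dz/dx.
    return -y * np.gradient(z, x)

def ip(f, g):
    # L^2(0, l) inner product; the integrands below vanish at both endpoints,
    # so the simple Riemann sum coincides with the trapezoidal rule.
    return np.sum(f * g) * dx

c11_1 = ip(B(e(1), e(1)), e(1))   # self-interaction of e_1 projected onto e_1
c11_2 = ip(B(e(1), e(1)), e(2))   # self-interaction of e_1 projected onto e_2
```

Indeed \(e_{1}\partial_{x}e_{1} \propto \sin(2x)\), so the quadratic self-interaction of the first mode feeds only the second mode; this is the kind of selection rule that the interaction laws (5.20) encode.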
Note that in the case m=2, the system (7.19) takes the same functional form as the \(h^{(1)}_{\lambda}\)-based reduced system (5.27) derived in Sect. 5.2 for the globally distributed control case; only the matrices given in (5.16) and (7.16) differ. We refer again to Appendix B for an analysis of the Cauchy problem associated with (7.19), leaving to the interested reader the generalization to the m-dimensional case.
Synthesis of m-Dimensional Locally Distributed Suboptimal Controllers
We apply once more the Pontryagin maximum principle to derive boundary value problems to be satisfied by an \(h^{(1)}_{\lambda}\)-based suboptimal controller. We focus again on the case with terminal payoff given by (7.9), and indicate the necessary changes for the case of tracking type (7.10) at the end of this subsection.
Let us denote the RHS of (7.19) by f(z,v _{ R }). The Hamiltonian associated with the cost functional (7.7) reads then as follows:
where p:=(p _{1},…,p _{ m })^{tr} is the costate, and v _{ R }=M ^{tr} u _{ R }; see (7.17).
Recall also that the terminal payoff, denoted by \(C_{T}(z(T), P_{\mathfrak{c}}Y)\), reads in this case:
It follows from the Pontryagin maximum principle that for a given pair
to be optimal for the reduced problem (7.9), it must satisfy the following conditions for all i=1,…,m (see e.g. [67, Chap. 5]):
where \(v_{R}^{\ast}=M^{\mathrm{tr}} u_{R}^{\ast}\); \(p_{R}^{\ast}=\sum_{i=1}^{m} p_{R,i}^{\ast}e_{i}\) denotes the costate associated with \(z_{R}^{\ast}\); and the vector field (g _{1},…,g _{ m })^{tr} is defined by
Here the partial derivatives \(\frac{\partial h_{\lambda}^{(1),n}(z)}{\partial z_{i}}\) can be obtained by using the expression of \(h_{\lambda}^{(1),n}\) given in (7.15) which leads to
The formula for \(\frac{\partial f_{j}(z,v_{R})}{\partial z_{i}}\) can be obtained by taking the corresponding partial derivative of the RHS of (7.19), from which we obtain after simplifications
where δ _{ ij } denotes the Kronecker delta, and \(I_{j,i}^{a}\), \(I_{j,i}^{b}\) and \(I_{j,i}^{c}\) are given by
and
We next derive a relation between \(u_{R}^{\ast}\) and \(p_{R}^{\ast}\) which, when used in (7.22a)–(7.22d), leads to a BVP for \((z_{R}^{\ast}, p_{R}^{\ast})\) to be solved in order to find \(u_{R}^{\ast}\). To this end, note that from the expression of the Hamiltonian H given in (7.20), we obtain the following expression of \(\nabla _{u_{R}} H(z^{\ast}_{R}, p^{\ast}_{R}, u^{\ast}_{R})\), which, written componentwise, gives:
The firstorder optimality condition (7.22c) leads to
where M is given by (7.16).
It follows then that the controller \(v_{R}^{\ast}\) in (7.22a) takes the form:
To summarize, corresponding to the \(h^{(1)}_{\lambda}\)-based reduced optimal control problem (7.9), we have derived the following BVP to be satisfied by the optimal trajectory \(z_{R}^{\ast}\) and its costate \(p_{R}^{\ast}\):
where \(v^{\ast}_{R}\) is given by (7.30), y _{0,i } is the projection of the initial datum y _{0} for the underlying PDE (5.1) onto e _{ i }, and the boundary condition for \(p^{\ast}_{R}\) is derived from the terminal condition (7.22d) by using the expression of the terminal payoff C _{ T } given in (7.21). Once (7.31a)–(7.31c) is solved, the m-dimensional controller \(u_{R}^{\ast}\) given by (7.29) constitutes our \(h^{(1)}_{\lambda}\)-based suboptimal controller for the optimal control problem (7.4). Note that \(u_{R}^{\ast}\) synthesized this way turns out to be the unique optimal controller for the reduced problem (7.9), for the same reasons pointed out in Sect. 5.3.
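The overall shape of such a state–costate BVP can be conveyed on a toy example. The sketch below is not the system (7.31a)–(7.31c) itself (the paper's f, M and g _{ i } are m-dimensional and not reproduced here), but a scalar analogue with state equation ż=az+bu, quadratic cost \(\frac{\mu_{1}}{2}\int_{0}^{T}u^{2}\,\mathrm{d}t + \frac{\mu_{2}}{2}(z(T)-Y)^{2}\), stationarity condition u=−(b/μ _{1})p, and terminal condition p(T)=μ _{2}(z(T)−Y), solved with SciPy's collocation solver (the article's Matlab experiments rely on bvp4c instead):

```python
import numpy as np
from scipy.integrate import solve_bvp

a, b = -1.0, 1.0                   # toy dynamics z' = a z + b u
mu1, mu2 = 0.1, 20.0               # control and terminal-payoff weights
z0, Y, T = 1.0, 0.2, 3.0           # initial state, target, horizon

def rhs(t, w):
    z, p = w                       # state and costate rows
    u = -(b / mu1) * p             # first-order optimality condition
    return np.vstack([a * z + b * u,   # z' =  dH/dp
                      -a * p])         # p' = -dH/dz

def bc(w0, wT):
    # z(0) = z0 and p(T) = mu2 * (z(T) - Y)
    return np.array([w0[0] - z0, wT[1] - mu2 * (wT[0] - Y)])

t = np.linspace(0.0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
u_star = -(b / mu1) * sol.sol(t)[1]    # synthesized (sub)optimal control
```

With a cheap control (small μ _{1}) and a heavy terminal payoff (large μ _{2}), the synthesized control drives z(T) very close to the target Y, mirroring the role played by μ _{1} and μ _{2} in (7.2).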
The corresponding BVP associated with the reduced optimal control problem (7.10) can be derived in the same fashion, and we indicate below the necessary changes. In this case, the Hamiltonian associated with the cost functional (7.8) reads:
The resulting BVP reads:
where f(z,v _{ R }) denotes the RHS of (7.19) and \(v^{\ast}_{R}\) is still given by (7.30); but in contrast to g _{ i } given by (5.30), the components \(\widetilde{g}_{i}\) of the vector field involved in the RHS of the p-equations of (7.33a)–(7.33c) are now given by
Once the above BVP (7.33a)–(7.33c) is solved, we take \(u_{R}^{\ast}\) given by (7.29), with \(p^{\ast}_{R}\) obtained from (7.33a)–(7.33c), as the \(h^{(1)}_{\lambda}\)-based suboptimal controller for the optimal control problem (7.5).
Control Performances: Numerical Results
To assess the ability of the \(h^{(1)}_{\lambda}\)-based reduced optimal control problems (7.9) and (7.10) to synthesize suboptimal controllers of good performance for the optimal control problems (7.4) and (7.5), respectively, we consider the case where the characteristic function χ _{ Ω } is supported on the subdomain Ω=[0.2l,0.8l], and the target is taken to be the profile Y used in (5.39) for the experiments of Sect. 5.5. As pointed out prior to Sect. 7.1, to achieve performances comparable to those achieved in Sect. 5.5, it turned out that four-dimensional \(h^{(1)}_{\lambda}\)-based reduced systems were required for the design of suboptimal controllers, instead of the two-dimensional reduced systems of Sect. 5.5. As explained above, this increase of the dimension of the resolved subspace \(\mathcal{H}^{\mathfrak{c}}\) results from the spatial localization of the controller dealt with here.
Figures 7 and 8 show the performances achieved by the resulting four-dimensional \(h^{(1)}_{\lambda}\)-based suboptimal controllers, corresponding to the cost functional of terminal-payoff type (7.2). The left panel of Fig. 7 shows the PDE solution field driven by the corresponding suboptimal controller field shown on the right panel of the same figure. The left panel of Fig. 8 shows the final-time solution profile, while the right panel shows the corresponding parameterization defect associated with \(h^{(1)}_{\lambda}\). The corresponding cost value and relative L ^{2}-error of the final-time solution profile compared with the target are given by
As a comparison, by using an m-dimensional Galerkin-based reduced system with m=16 to design suboptimal solutions to (7.4), the corresponding cost value and relative L ^{2}-error are given by
The above numerical results thus indicate that the four-dimensional \(h^{(1)}_{\lambda}\)-based reduced problem (7.9) can be used to design a very good suboptimal controller (for the prescribed target Y given by (5.39)) for the optimal control problem (7.4), with performance comparable to that of the (more standard) higher-dimensional Galerkin-based reduced systems. This success goes with the relatively small parameterization defect, as well as with the relatively small energy kept in the high modes (not shown); see the right panel of Fig. 8. Note that for these experiments the system parameters are chosen to be l=1.3π, λ=7λ _{ c }, ν=0.25, γ=2.5, while the final time is taken to be T=3. The parameters μ _{1} and μ _{2} in the cost functional (7.2) are taken to be μ _{1}=1 and μ _{2}=20. The initial datum is a scaled version of the corresponding positive steady state y ^{+} of the uncontrolled PDE, namely y _{0}=0.5y ^{+}.
The performances of the four-dimensional \(h^{(1)}_{\lambda}\)-based suboptimal controller for (7.10), associated with the cost functional of tracking type (7.3), are illustrated in Figs. 9 and 10. The experimental conditions are here chosen to be: l=1.3π, λ=3λ _{ c }, ν=0.2, γ=2.5, while the final time is still taken to be T=3. The parameter μ _{1} in the cost functional (7.3) is taken to be μ _{1}=0.02, and the initial datum is y _{0}=0.8y ^{+}.
For these experiments, the corresponding cost value and relative L ^{2}-error are given by
For a high-dimensional Galerkin-based reduced problem with m=16, the corresponding cost value and relative L ^{2}-error are given by
Here again, a fairly good performance of the suboptimal controller^{Footnote 20} synthesized by solving the four-dimensional \(h^{(1)}_{\lambda}\)-based reduced problem (7.10) is achieved. Due to the deterioration of the parameterization defect of \(h^{(1)}_{\lambda}\) that can be observed by comparing the right panel of Fig. 10 with the right panel of Fig. 8, the error estimate (4.10) suggests that such a success has to come with a noticeable reduction of the energy contained in the high modes of the PDE solution driven by the suboptimal controller synthesized for (7.10), compared to the PDE solution driven by the suboptimal controller synthesized for (7.9). This theoretical prediction, based on Corollary 2, can actually be confirmed empirically by looking at the numerical values of these high-mode energies (not shown).
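For readers wishing to reproduce such diagnostics, the following sketch computes a parameterization defect from trajectory data. The exact formula (3.5) and the H ^{1}-norm used in the article (cf. Footnote 14) are not reproduced in this excerpt; the sketch assumes the defect is the time-integrated squared parameterization error normalized by the time-integrated high-mode energy, measured here simply in the ℓ ^{2}-norm of the modal coefficients:

```python
import numpy as np

def parameterization_defect(Ys, Hs, dt):
    """Mean-square parameterization defect over a finite horizon.

    Ys : (n_t, n_s) high-mode coefficients of the (controlled) PDE solution
    Hs : (n_t, n_s) PM prediction h(z(t)) evaluated along the low modes
    Assumed ratio form: a perfect slaving gives 0, while the trivial
    parameterization h = 0 gives 1 (all high-mode energy unexplained).
    """
    err = np.sum((Ys - Hs) ** 2) * dt      # integral of ||y_s - h(z)||^2
    energy = np.sum(Ys ** 2) * dt          # integral of ||y_s||^2
    return err / energy

# Tiny demo on synthetic high-mode trajectories:
t = np.linspace(0.0, 3.0, 301)
Ys = np.column_stack([np.exp(-t) * np.sin(t), 0.5 * np.exp(-2.0 * t)])
defect_perfect = parameterization_defect(Ys, Ys, dt=t[1] - t[0])
defect_trivial = parameterization_defect(Ys, np.zeros_like(Ys), dt=t[1] - t[0])
```

A defect close to 0 indicates a high-quality slaving relationship, while values approaching 1 indicate that the PM explains little of the high-mode energy; this is the quantity monitored in the right panels of Figs. 8 and 10.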
Finally, it is worth mentioning that, as in the globally distributed case, the performances of the \(h^{(1)}_{\lambda}\)-based reduced systems and the associated parameterization defects of \(h^{(1)}_{\lambda}\) depend on the target and the length of the time horizon; cf. Figs. 2, 5 and 6. The dependence on the PDE initial datum also turned out to be an important factor. In particular, it has been observed for both problems (7.4) and (7.5) that the parameterization defects deteriorate when the scaling factor δ used in the construction of the initial datum y _{0}=δy ^{+} increases. Based on the results of Sect. 6 for the globally distributed case, it can be reasonably expected that PM functions such as \(h^{(2)}_{\lambda}\), which bring higher-order terms compared to \(h^{(1)}_{\lambda}\) (cf. Theorem 2), can allow better performance to be reached for a broader range of initial data and target profiles; the parameterization defects can reasonably be expected to get smaller.
Notes
 1.
 2.
Depending on the problem at hand; see e.g. [53].
 3.
In particular, nonlinearities involving a loss of regularity compared to the ambient space \(\mathcal {H}\) are allowed; see e.g. Sect. 5 below.
 4.
 5.
Mainly in a stochastic context; see however [27, Sect. 4.5] for the deterministic setting.
 6.
In particular, the reduction techniques developed in this article should not be confused with the reduction techniques based on slow manifold theory, which have been used to deal with the reduction of optimal control problems arising in slow-fast systems, where the separation of the dynamics holds in time rather than in space; see e.g. [69, 77, 87]. Furthermore, unlike slow manifolds, the finite-horizon PMs considered in this article are not invariant for the dynamics. On the contrary, they correspond to manifolds around which the dynamics wanders, within some margin whose size (in a mean-square sense) is strictly smaller than the energy unexplained by the \(\mathcal {H}^{\mathfrak{c} }\)-modes.
 7.
Over the time interval [0,t ^{∗}].
 8.
 9.
So that \(h(z_{R}^{\ast})\) is a good approximation of the highmode projection \(P_{\mathfrak{s}}y^{\ast}\).
 10.
 11.
In the sense recalled in (5.10) below.
 12.
See [66] for more details about bvp4c. We also mention that all the numerical experiments performed in this article have been carried out using Matlab version 7.13.0.564 (R2011b).
 13.
As approximated from the 16dimensional Galerkinbased reduced optimal problem (A.10).
 14.
Note that, given a suboptimal controller, the computation of the parameterization defects, here and in later sections, has been performed by integrating the discrete form (5.37) of (5.1) and by using the formula (3.5), where the H ^{1}-norm has been used in place of the ∥⋅∥_{ α }-norm; see Definition 1 and Sect. 5.1 for the functional spaces defined in (5.6).
 15.
In contrast to the indirect method adopted above, BOCOP uses a direct method combining discretization and interior-point methods to solve the reduced optimal control problem (5.19), as implemented in the solver IPOPT [103]; see the webpage http://bocop.org for more information.
 16.
Using the symbols introduced here, \(h^{(1)}_{\lambda}(\xi_{1},\xi_{2}) = \boldsymbol{A} \xi_{1} \xi_{2} e_{3} + \boldsymbol{E} (\xi_{2})^{2} e_{4}\) from (5.22).
 17.
 18.
 19.
Here, 4 significant digits of the cost J are ensured with m=16, by comparing with cost values associated with higher-dimensional suboptimal controllers synthesized from (A.10).
 20.
For the optimal control (7.5).
 21.
For any T>0, a given continuous function \(\mathbf{z}: [0, T] \rightarrow \mathbb{R}^{2}\) is called a mild solution to the reduced system (5.27) if it satisfies the corresponding integral form of the system: \(\mathbf{z}(t) = \mathbf{z}(0) + \int_{0}^{t} \mathbf {F}(s,\mathbf{z}(s))\, \mathrm{d}s\), for all t∈[0,T], where z:=(z _{1},z _{2})^{tr} and F denotes the RHS of (5.27).
References
 1.
Abergel, F., Temam, R.: On some control problems in fluid mechanics. Theor. Comput. Fluid Dyn. 1, 303–325 (1990)
 2.
Amann, H.: Ordinary Differential Equations: An Introduction to Nonlinear Analysis. De Gruyter Studies in Mathematics, vol. 13. Walter de Gruyter & Co., Berlin (1990)
 3.
Armaou, A., Christofides, P.D.: Feedback control of the Kuramoto–Sivashinsky equation. Physica D 137(1–2), 49–61 (2000)
 4.
Armaou, A., Christofides, P.D.: Dynamic optimization of dissipative PDE systems using nonlinear order reduction. Chem. Eng. Sci. 57(24), 5083–5114 (2002)
 5.
Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Classics in Applied Mathematics, vol. 13. SIAM, Philadelphia (1995)
 6.
Atwell, J.A., King, B.B.: Proper orthogonal decomposition for reduced basis feedback controllers for parabolic equations. Math. Comput. Model. 33, 1–19 (2001)
 7.
Baker, J., Armaou, A., Christofides, P.D.: Nonlinear control of incompressible fluid flow: application to Burgers’ equation and 2D channel flow. J. Math. Anal. Appl. 252, 230–255 (2000)
 8.
Bardi, M., CapuzzoDolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Springer, Berlin (2008)
 9.
Beeler, S.C., Tran, H.T., Banks, H.T.: Feedback control methodologies for nonlinear systems. J. Optim. Theory Appl. 107(1), 1–33 (2000)
 10.
Bensoussan, A., Da Prato, G., Delfour, M.C., Mitter, S.K.: Representation and Control of Infinite Dimensional Systems. Springer, Berlin (2007)
 11.
Berestycki, H., Kamin, S., Sivashinsky, G.: Metastability in a flame front evolution equation. Interfaces Free Bound. 3(4), 361–392 (2001)
 12.
Bergmann, M., Cordier, L.: Optimal control of the cylinder wake in the laminar regime by trust-region methods and POD reduced-order models. J. Comput. Phys. 227(16), 7813–7840 (2008)
 13.
Betts, J.T.: Survey of numerical methods for trajectory optimization. J. Guid. Control Dyn. 21(2), 193–207 (1998)
 14.
Betts, J.T.: Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, 2nd edn. Advances in Design and Control, vol. 19. SIAM, Philadelphia (2010)
 15.
Bewley, T.R., Moin, P., Temam, R.: DNSbased predictive control of turbulence: an optimal benchmark for feedback algorithms. J. Fluid Mech. 447, 179–225 (2001)
 16.
Bewley, T.R., Temam, R., Ziane, M.: A general framework for robust control in fluid mechanics. Physica D 138(3), 360–392 (2000)
 17.
Bonnans, F.J., Martinon, P., Grélard, V.: Bocop—a collection of examples. Tech. Rep. RR-8053, INRIA (2012). http://hal.inria.fr/hal-00726992
 18.
Bonnard, B., Chyba, M.: Singular Trajectories and Their Role in Control Theory. Mathématiques & Applications (Berlin), vol. 40. Springer, Berlin (2003)
 19.
Bonnard, B., Faubourg, L., Trélat, E.: Mécanique Céleste et Contrôle des Véhicules Spatiaux. Mathématiques & Applications (Berlin), vol. 51. Springer, Berlin (2006)
 20.
Boscain, U., Piccoli, B.: Optimal Syntheses for Control Systems on 2D Manifolds. Mathématiques & Applications (Berlin), vol. 43. Springer, Berlin (2004)
 21.
Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2011)
 22.
Brunovský, P.: Controlling the dynamics of scalar reaction diffusion equations by finite-dimensional controllers. In: Modelling and Inverse Problems of Control for Distributed Parameter Systems, Laxenburg, 1989. Lecture Notes in Control and Inform. Sci., vol. 154, pp. 22–27. Springer, Berlin (1991)
 23.
Bryson, A.E. Jr., Ho, Y.C.: Applied Optimal Control. Hemisphere Publishing Corp., Washington (1975)
 24.
Cannarsa, P., Tessitore, M.E.: Infinite-dimensional Hamilton–Jacobi equations and Dirichlet boundary control problems of parabolic type. SIAM J. Control Optim. 34(6), 1831–1847 (1996)
 25.
Carvalho, A.N., Langa, J.A., Robinson, J.C.: Attractors for Infinite-Dimensional Nonautonomous Dynamical Systems. Applied Mathematical Sciences, vol. 182. Springer, New York (2013)
 26.
Chekroun, M.D., Liu, H., Wang, S.: Approximation of Invariant Manifolds: Stochastic Manifolds for Nonlinear SPDEs I. Springer Briefs in Mathematics. Springer, New York (2014). To appear
 27.
Chekroun, M.D., Liu, H., Wang, S.: Stochastic Parameterizing Manifolds and Non-Markovian Reduced Equations: Stochastic Manifolds for Nonlinear SPDEs II. Springer Briefs in Mathematics. Springer, New York (2014). To appear
 28.
Chekroun, M.D., Simonnet, E., Ghil, M.: Stochastic climate dynamics: random attractors and time-dependent invariant measures. Physica D 240(21), 1685–1700 (2011)
 29.
Chen, C.C., Chang, H.C.: Accelerated disturbance damping of an unknown distributed system by nonlinear feedback. AIChE J. 38(9), 1461–1476 (1992)
 30.
Choi, H., Temam, R., Moin, P., Kim, J.: Feedback control for unsteady flow and its application to the stochastic Burgers equation. J. Fluid Mech. 253, 509–543 (1993)
 31.
Christofides, P.D., Armaou, A., Lou, Y., Varshney, A.: Control and Optimization of Multiscale Process Systems. Springer, Berlin (2008)
 32.
Christofides, P.D., Daoutidis, P.: Nonlinear control of diffusion-convection-reaction processes. Comput. Chem. Eng. 20, S1071–S1076 (1996)
 33.
Christofides, P.D., Daoutidis, P.: Finite-dimensional control of parabolic PDE systems using approximate inertial manifolds. J. Math. Anal. Appl. 216(2), 398–420 (1997)
 34.
Constantin, P., Foias, C., Nicolaenko, B., Temam, R.: Integral Manifolds and Inertial Manifolds for Dissipative Partial Differential Equations. Applied Mathematical Sciences, vol. 70. Springer, New York (1989)
 35.
Crandall, M.G., Ishii, H., Lions, P.L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27(1), 1–67 (1992)
 36.
Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Burgers equation. Ann. Mat. Pura Appl. 178(1), 143–174 (2000)
 37.
Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Navier–Stokes equations. Modél. Math. Anal. Numér. 34, 459–475 (2000)
 38.
Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces, vol. 293. Cambridge University Press, Cambridge (2002)
 39.
Dacorogna, B.: Direct Methods in the Calculus of Variations, vol. 78. Springer, Berlin (2007)
 40.
Dedè, L.: Reduced basis method and a posteriori error estimation for parametrized linear-quadratic optimal control problems. SIAM J. Sci. Comput. 32, 997–1019 (2010)
 41.
Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics, vol. 19. American Mathematical Society, Providence (2010)
 42.
Eyre, D.J.: Unconditionally gradient stable time marching the Cahn–Hilliard equation. Mater. Res. Soc. Symp. Proc. 529, 39–46 (1998)
 43.
Fattorini, H.O.: Boundary control systems. SIAM J. Control 6(3), 349–385 (1968)
 44.
Fattorini, H.O.: Infinite Dimensional Optimization and Control Theory. Encyclopedia of Mathematics and Its Applications, vol. 62. Cambridge University Press, Cambridge (1999)
 45.
Flandoli, F.: Riccati equation arising in a boundary control problem with distributed parameters. SIAM J. Control Optim. 22(1), 76–86 (1984)
 46.
Foias, C., Manley, O., Temam, R.: Modelling of the interaction of small and large eddies in twodimensional turbulent flows. RAIRO. Anal. Numér. 22(1), 93–118 (1988)
 47.
Foias, C., Sell, G.R., Temam, R.: Inertial manifolds for nonlinear evolutionary equations. J. Differ. Equ. 73(2), 309–353 (1988)
 48.
Franke, T., Hoppe, R.H.W., Linsenmann, C., Wixforth, A.: Projection based model reduction for optimal design of the time-dependent Stokes system. In: Constrained Optimization and Optimal Control for Partial Differential Equations, pp. 75–98. Springer, Berlin (2012)
 49.
Fursikov, A.V.: Optimal Control of Distributed Systems: Theory and Applications. Translations of Mathematical Monographs, vol. 187. Am. Math. Soc., Providence (2000)
 50.
Grepl, M.A., Kärcher, M.: Reduced basis a posteriori error bounds for parametrized linear-quadratic elliptic optimal control problems. C. R. Acad. Sci., Ser. 1 Math. 349(15), 873–877 (2011)
 51.
Gunzburger, M.: Adjoint equation-based methods for control problems in incompressible, viscous flows. Flow Turbul. Combust. 65(3–4), 249–272 (2000)
 52.
Gunzburger, M.D.: Sensitivities, adjoints and flow optimization. Int. J. Numer. Methods Fluids 31(1), 53–78 (1999)
 53.
Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer, Berlin (1981)
 54.
Hinze, M., Kunisch, K.: On suboptimal control strategies for the Navier–Stokes equations. In: ESAIM: Proceedings, vol. 4, pp. 181–198 (1998)
 55.
Hinze, M., Kunisch, K.: Three control methods for time-dependent fluid flow. Flow Turbul. Combust. 65, 273–298 (2000)
 56.
Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE constraints. In: Mathematical Modelling: Theory and Applications, vol. 23. Springer, Berlin (2009)
 57.
Hinze, M., Volkwein, S.: Proper orthogonal decomposition surrogate models for nonlinear dynamical systems: error estimates and suboptimal control. In: Dimension Reduction of Large-Scale Systems. Lect. Notes Comput. Sci. Eng., vol. 45, pp. 261–306. Springer, Berlin (2005)
 58.
Holmes, P., Lumley, J.L., Berkooz, G., Rowley, C.W.: Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd edn. Cambridge University Press, Cambridge (2012)
 59.
Hsia, C.H., Wang, X.: On a Burgers’ type equation. Discrete Contin. Dyn. Syst., Ser. B 6(5), 1121–1139 (2006)
 60.
Ito, K., Kunisch, K.: Lagrange Multiplier Approach to Variational Problems and Applications, vol. 15. SIAM, Philadelphia (2008)
 61.
Ito, K., Kunisch, K.: Reduced-order optimal control based on approximate inertial manifolds for nonlinear dynamical systems. SIAM J. Numer. Anal. 46(6), 2867–2891 (2008)
 62.
Ito, K., Ravindran, S.: Optimal control of thermally convected fluid flows. SIAM J. Sci. Comput. 19(6), 1847–1869 (1998)
 63.
Ito, K., Ravindran, S.S.: Reduced basis method for optimal control of unsteady viscous flows. Int. J. Comput. Fluid Dyn. 15(2), 97–113 (2001)
 64.
Ito, K., Schroeter, J.D.: Reduced order feedback synthesis for viscous incompressible flows. Math. Comput. Model. 33, 173–192 (2001)
 65.
Keller, H.B.: Numerical Solution of Two Point Boundary Value Problems. Regional Conference Series in Applied Mathematics, vol. 24. SIAM, Philadelphia (1976)
 66.
Kierzenka, J., Shampine, L.F.: A BVP solver based on residual control and the Matlab PSE. ACM Trans. Math. Softw. 27(3), 299–316 (2001)
 67.
Kirk, D.E.: Optimal Control Theory: An Introduction. Dover, New York (2012)
 68.
Knowles, G.: An Introduction to Applied Optimal Control. Mathematics in Science and Engineering, vol. 159. Academic Press, New York (1981)
 69.
Kokotović, P., Khalil, H.K., O’Reilly, J.: Singular Perturbation Methods in Control: Analysis and Design. Classics in Applied Mathematics, vol. 25. SIAM, Philadelphia (1999)
 70.
Kokotovic, P., O’Malley, R. Jr., Sannuti, P.: Singular perturbations and order reduction in control theory—an overview. Automatica 12(2), 123–132 (1976)
 71.
Kokotovic, P.V.: Applications of singular perturbation techniques to control problems. SIAM Rev. 26(4), 501–550 (1984)
 72.
Kokotovic, P.V., Sannuti, P.: Singular perturbation method for reducing the model order in optimal control design. IEEE Trans. Autom. Control 13(4), 377–384 (1968)
 73.
Krstic, M., Magnis, L., Vazquez, R.: Nonlinear control of the viscous Burgers equation: trajectory generation, tracking, and observer design. J. Dyn. Syst. Meas. Control 131(2), 021012 (2009), 8 pp.
 74.
Kunisch, K., Volkwein, S.: Control of the Burgers’ equation by a reducedorder approach using proper orthogonal decomposition. J. Optim. Theory Appl. 102, 345–371 (1999)
 75.
Kunisch, K., Volkwein, S.: Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM J. Numer. Anal. 40, 492–515 (2002)
 76.
Kunisch, K., Volkwein, S., Xie, L.: HJBPODbased feedback design for the optimal control of evolution problems. SIAM J. Appl. Dyn. Syst. 3(4), 701–722 (2004)
 77.
Lebiedz, D., Rehberg, M.: A numerical slow manifold approach to model reduction for optimal control of multiple time scale ODE (2013). ArXiv preprint arXiv:1302.1759
 78.
Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)
 79.
Lions, J.L.: Some Aspects of the Optimal Control of Distributed Parameter Systems. SIAM, Philadelphia (1972)
 80.
Lions, J.L.: Perturbations Singulières dans les Problèmes aux Limites et en Contrôle Optimal. Lecture Notes in Mathematics, vol. 323. Springer, Berlin (1973)
 81.
Lions, J.L.: Exact controllability, stabilization and perturbations for distributed systems. SIAM Rev. 30(1), 1–68 (1988)
 82.
Lunardi, A.: Analytic Semigroups and Optimal Regularity in Parabolic Problems. Birkhäuser, Basel (1995)
 83.
Ly, H.V., Tran, H.T.: Modeling and control of physical processes using proper orthogonal decomposition. Math. Comput. Model. 33, 223–236 (2001)
 84.
Ma, T., Wang, S.: Phase Transition Dynamics. Springer, Berlin (2014)
 85.
Medjo, T.T., Tebou, L.T.: Adjointbased iterative method for robust control problems in fluid mechanics. SIAM J. Numer. Anal. 42(1), 302–325 (2004)
 86.
Medjo, T.T., Temam, R., Ziane, M.: Optimal and robust control of fluid flows: some theoretical and computational aspects. Appl. Mech. Rev. 61(1), 010802 (2008), 23 pp.
 87.
Motte, I., Campion, G.: A slow manifold approach for the control of mobile robots not satisfying the kinematic constraints. IEEE Trans. Robot. Autom. 16(6), 875–880 (2000)
 88.
Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Macmillan & Co., New York (1964). Translated by D.E. Brown. A Pergamon Press Book
 89.
Ravindran, S.: A reducedorder approach for optimal control of fluids using proper orthogonal decomposition. Int. J. Numer. Methods Fluids 34(5), 425–448 (2000)
 90.
Ravindran, S.S.: Adaptive reducedorder controllers for a thermal flow system using proper orthogonal decomposition. SIAM J. Sci. Comput. 23(6), 1924–1942 (2002)
 91.
Roberts, S.M., Shipman, J.S.: TwoPoint Boundary Value Problems: Shooting Methods. Am. Elsevier, New York (1972)
 92.
Rosa, R.: Exact finite dimensional feedback control via inertial manifold theory with application to the Chafee–Infante equation. J. Dyn. Differ. Equ. 15(1), 61–86 (2003)
 93.
Rosa, R., Temam, R.: Finitedimensional feedback control of a scalar reactiondiffusion equation via inertial manifold theory. In: Foundations of Computational Mathematics, Rio de Janeiro, 1997, pp. 382–391. Springer, Berlin (1997)
 94.
Sano, H., Kunimatsu, N.: An application of inertial manifold theory to boundary stabilization of semilinear diffusion systems. J. Math. Anal. Appl. 196(1), 18–42 (1995)
 95.
Schättler, H., Ledzewicz, U.: Geometric Optimal Control: Theory, Methods and Examples. Interdisciplinary Applied Mathematics, vol. 38. Springer, New York (2012)
 96.
Shvartsman, S.Y., Kevrekidis, I.G.: Nonlinear model reduction for control of distributed systems: a computerassisted study. AIChE J. 44(7), 1579–1595 (1998)
 97.
Temam, R.: Navier–Stokes Equations: Theory and Numerical Analysis. Am. Math. Soc., Providence (1984)
 98.
Temam, R.: Inertial manifolds. Math. Intell. 12(4), 68–74 (1990)
 99.
Trélat, E.: Optimal control and applications to aerospace: some results and challenges. J. Optim. Theory Appl. 154(3), 713–758 (2012)
 100.
Tröltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Graduate Studies in Mathematics, vol. 112. Am. Math. Soc., Providence (2010)
 101.
Tröltzsch, F., Volkwein, S.: POD a posteriori error estimates for linearquadratic optimal control problems. Comput. Optim. Appl. 44, 83–115 (2009)
 102.
Volkwein, S.: Distributed control problems for the Burgers equation. Comput. Optim. Appl. 18(2), 115–140 (2001)
 103.
Wächter, A., Biegler, L.T.: On the implementation of an interiorpoint filter linesearch algorithm for largescale nonlinear programming. Math. Program. 106(1), 25–57 (2006)
Acknowledgements
We are grateful to Monique Chyba and to Bernard Bonnard for their interest in our work on parameterizing manifolds, which led the authors to propose this article. MDC is also grateful to Denis Rousseau and Michael Ghil for the unique environment they provided to complete this work, at the CERES-ERTI, École Normale Supérieure, Paris. This work has been partly supported by the National Science Foundation grant DMS-1049253 and Office of Naval Research grant N00014-12-1-0911.
Appendices
Appendix A: Suboptimal Controller Synthesis Based on Galerkin Projections and Pontryagin Maximum Principle
To assess the performance of the PM-based reduced systems considered in Sects. 5 and 6 in synthesizing suboptimal controllers in the context of a Burgers-type equation, we derive in this appendix suboptimal control problems associated with the globally distributed optimal control problem (5.9) based on Galerkin approximations. Section A.1 concerns a two-mode Galerkin approximation, and Sect. A.2 deals with the more general m-dimensional case. The former serves as a basis of comparison to analyze the performance achieved by the PM-based approach, while the latter can in principle provide a good indication of the true optimal controller of the underlying optimal control problems when the dimension is taken sufficiently large. Results for the general m-dimensional case will also be used in Sect. 7 to derive Galerkin-based reduced systems for the locally distributed problems (7.4) and (7.5).
A.1 Suboptimal Controller Based on a 2D Galerkin Reduced Optimal Problem
We first present the reduced optimal control problem based on a two-mode Galerkin approximation of the underlying PDE (5.1), which can be derived by simply setting \(h^{(1)}_{\lambda}\) in (5.18)–(5.17) to zero. The corresponding operational forms of the cost functional and of the reduced system for the low modes can be obtained from (5.24)–(5.27) by setting α _{1}(λ) and α _{2}(λ) to zero. The resulting cost functional reads:
where \(v = v_{1} e_{1} + v_{2} e_{2} \in L^{2}(0,T; \mathcal{H}^{\mathfrak {c}})\) is the state variable, \(u_{G} = u_{G,1} e_{1} + u_{G,2} e_{2} \in L^{2}(0,T; \mathcal {H}^{\mathfrak{c}})\) is the control, C _{ T } is the terminal payoff term defined by (5.26), and
The equations for v _{1} and v _{2} are given by:
which is subject to the initial conditions:
where \(\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}\).
The corresponding Galerkinbased reduced optimal control problem for (5.9) reads:
It follows again from the Pontryagin maximum principle that for a given pair
to be optimal for the problem (A.5), the following conditions must be satisfied:
where \(v_{G,i}^{\ast}= \langle v_{G}^{\ast}, e_{i} \rangle\), \(u_{G,i}^{\ast}= \langle u_{G}^{\ast}, e_{i} \rangle\), i=1,2, and \(p_{G}^{\ast}= p_{G,1}^{\ast}e_{1} + p_{G,2}^{\ast}e_{2}\) denotes the costate associated with \(v_{G}^{\ast}\).
Thanks to (A.6e), we can express the controller \(u_{G,i}^{\ast}\) in (A.6a)–(A.6b) in terms of the costate \(p_{G,i}^{\ast}\), leading thus to the following BVP for \(v_{G}^{\ast}\) and \(p_{G}^{\ast}\):
subject to the boundary condition
where f _{3} and f _{4} are defined by (5.33), and the boundary condition for the costate is derived in the same way as in (5.34), thanks to the Pontryagin maximum principle. Once this BVP is solved, the corresponding controller \(u_{G}^{\ast}\) is determined by (A.6e); it is the unique optimal controller for the Galerkin-based reduced optimal control problem (A.5), due again to the fact that the cost functional (A.1) is quadratic in u _{ G } and that the system (A.3) depends affinely on the controller; see e.g. [67, Sect. 5.3] and [99]. Note also that results analogous to those presented in Lemma 4 hold for the reduced optimal control problem (A.5) as well.
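In practice, such state–costate BVPs are solved by collocation solvers in the spirit of [66]. The following sketch shows the generic structure in Python with `scipy.integrate.solve_bvp`; the right-hand sides, the weight μ, the horizon T, the initial datum, and the control–costate relation `u = -p/μ` are all hypothetical placeholders standing in for (A.7)/(A.6e), not the actual formulas of the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

mu = 1.0                      # control-cost weight (assumed)
T = 1.0                       # finite horizon (assumed)
v0 = np.array([0.5, -0.2])    # initial low-mode amplitudes (assumed)

def rhs(t, y):
    # y = (v1, v2, p1, p2): state and costate of a toy two-mode system.
    v1, v2, p1, p2 = y
    u1, u2 = -p1 / mu, -p2 / mu          # placeholder for the relation (A.6e)
    dv1 = -v1 + v1 * v2 + u1             # placeholder state equations
    dv2 = -4.0 * v2 - v1**2 + u2
    # Adjoint equations dp_i = -dH/dv_i for H = (|v|^2 + mu|u|^2)/2 + p.f:
    dp1 = p1 * (1.0 - v2) + 2.0 * p2 * v1 - v1
    dp2 = -p1 * v1 + 4.0 * p2 - v2
    return np.vstack([dv1, dv2, dp1, dp2])

def bc(ya, yb):
    # v(0) = v0 and terminal condition p(T) = 0 (no terminal payoff assumed).
    return np.array([ya[0] - v0[0], ya[1] - v0[1], yb[2], yb[3]])

t = np.linspace(0.0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((4, t.size)))
u_opt = -sol.sol(t)[2:] / mu   # recover the controller from the costate
```

Once `sol` converges, evaluating `-sol.sol(t)[2:] / mu` at any time grid yields the suboptimal controller, mirroring how \(u_{G}^{\ast}\) is read off from the costate after solving (A.7).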
A.2 Suboptimal Controller Based on an m-Dimensional Galerkin Reduced Optimal Problem
We now derive a more general reduced optimal control problem based on a higher-dimensional Galerkin approximation, where the subspace \(\mathcal {H}^{\mathfrak{c}}\) is taken to be spanned by the first m eigenmodes:
The main interest is that, by choosing m sufficiently large, such a reduced problem can in principle provide a good estimate of the true optimal controllers of the globally distributed optimal control problem (5.9), which can then be taken as a benchmark for the numerical experiments reported in Sects. 5 and 6. Analogous reduced problems associated with the locally distributed cases (7.4) and (7.5) considered in Sect. 7 can be derived in the same way (and the corresponding results are in fact the same as those presented in Sect. 7.2 with \(h^{(1)}_{\lambda}\) therein set to zero).
The Galerkin-based reduced optimal control problem (A.5), when generalized to the case with m controlled modes, reads:
where \(\mathcal{H}^{\mathfrak{c}}\) is the m-dimensional reduced phase space defined in (A.9), and
The system of equations that \(v(\cdot; \widetilde{u}_{G})\) satisfies is given by:
which is subject to the initial conditions:
where the matrix M _{ m×m } is the representation of the linear operator \(P_{\mathfrak{c}}\mathfrak{C}\) in the basis e _{1},…,e _{ m }, i.e. the elements of M are given by \(a_{ij} = \langle\mathfrak {C}e_{i}, e_{j} \rangle\) (see (5.16) for the case m=2), and \([M^{\mathrm{tr}}\widetilde{u}_{G}(t)]_{i}\) denotes the ith component of the vector \(M^{\mathrm{tr}}\widetilde{u}_{G}(t)\).
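For concreteness, a matrix of the form \(a_{ij} = \langle\mathfrak{C}e_{i}, e_{j}\rangle\) can be assembled by quadrature once a concrete operator and basis are fixed. The sketch below assumes L²-normalized sine modes on (0, l) and takes the operator to be multiplication by a profile χ; both choices are hypothetical illustrations, not the operator \(\mathfrak{C}\) of the paper.

```python
import numpy as np

l, m = 1.0, 4                        # domain length and number of modes (assumed)
x = np.linspace(0.0, l, 2001)
dx = x[1] - x[0]

def e(i):
    # L^2(0, l)-normalized sine eigenmodes (assumed basis)
    return np.sqrt(2.0 / l) * np.sin(i * np.pi * x / l)

def trapz(f):
    # composite trapezoidal rule on the grid x
    return np.sum(f[:-1] + f[1:]) * dx / 2.0

chi = np.exp(-x)   # hypothetical multiplier: (C u)(x) = chi(x) u(x)

# a_ij = <C e_i, e_j>, approximated by quadrature
M = np.array([[trapz(chi * e(i) * e(j)) for j in range(1, m + 1)]
              for i in range(1, m + 1)])
```

Since multiplication by χ is self-adjoint on L²(0, l), the resulting M is symmetric; for a general operator \(\mathfrak{C}\) it need not be.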
As before, by using the Pontryagin maximum principle, we can derive the following BVP to be satisfied by any optimal pair \((v_{G}^{\ast}, \widetilde {u}^{\ast}_{G})\) of (A.10):
where the optimal controller \(\widetilde{u}^{\ast}_{G}\) is related to the corresponding costate \(p_{G}^{\ast}\) by
see (A.6e) for the case m=2. Here, f _{ i }, i=1,…,m, denotes the RHS of (A.13a) and we have used the nonlinear interactions (5.20) to derive the quadratic parts of f _{ i }. The formula for \(\frac{\partial f_{j}(v, p)}{\partial v_{i}}\) is given by:
where δ _{ ij } denotes the Kronecker delta, and
with ⌊x⌋ being the largest integer not exceeding x, and the coefficients ω _{ i,j } given by
Appendix B: Global Well-posedness for the Two-dimensional \(h^{(1)}_{\lambda}\)-based Reduced System (5.27)
In this appendix, we show that for any given initial datum and any fixed T>0, the \(h^{(1)}_{\lambda}\)-based reduced system (5.27) admits a unique mild solution in the space \(C([0,T]; \mathbb{R}^{2})\). The result follows from classical ODE theory [2] once a priori bounds for the solution (z _{1}(t),z _{2}(t)) are established. Similar (but more tedious) estimates can be used to deal with the Cauchy problem associated with the \(h^{(2)}_{\lambda}\)-based reduced system (6.17) derived in Sect. 6 and with the more general m-dimensional \(h^{(1)}_{\lambda}\)-based reduced system (7.19) encountered in Sect. 7.
Let us first recall that the two-dimensional \(h^{(1)}_{\lambda}\)-based reduced system is given by:
where \(u_{R}(\cdot):=u_{R,1}(\cdot)e_{1} + u_{R,2}(\cdot)e_{2} \in L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})\) with T>0 being the fixed finite horizon, α _{1}(λ) and α _{2}(λ) are defined in (5.23), \(\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}\), and a _{ ij }, 1≤i,j≤2, are elements of the coefficients matrix M associated with the operator \(\mathfrak{C}\); see (5.15)–(5.16).
We check below by energy estimates that no finite-time blow-up can occur for solutions of the system (B.1a), (B.1b) emanating from any initial datum \((z_{1,0}, z_{2,0}) \in\mathbb{R}^{2}\). For this purpose, let us define
We claim that
It is clear that we only need to deal with those values of t for which z _{2}(t)>R; if no such time instances exist, we are done. Otherwise, let us fix an arbitrary interval [t _{∗},t ^{∗}]⊂[0,T] such that
Since R≥z _{2,0} and z _{2} depends continuously on t, we can decrease t _{∗} so that z _{2}(t _{∗})=R while the condition (B.3) remains true.
Now, multiplying both sides of (B.1b) by z _{2}(t), we obtain
where
It follows then that
Since z _{2}(t)≥R for all t∈[t _{∗},t ^{∗}] by the choices of t _{∗} and t ^{∗}, we get
where we have used \( \frac{\alpha}{z_{2}} \le\frac{\alpha}{R}\) and 2αα _{2}(λ)(z _{2})^{2}≤2αα _{2}(λ)R ^{2}, which follow from the definition of R and the fact that α>0 and α _{2}(λ)<0.
Using again the definition of R and the facts that α>0, α _{1}(λ)<0 and α _{2}(λ)<0, we get
We obtain then
Inserting the above estimate into (B.5) and using z _{2}(t _{∗})=R, we obtain
and (B.2) is thus proven.
Note also that, multiplying both sides of (B.1a) by z _{1}(t), we obtain, for any t∈[0,T] at which z _{1}(t)≠0, that
It then follows from the boundedness of z _{2} and from (B.6) that z _{1} can grow at most exponentially. Consequently, no finite-time blow-up can occur for the \(h^{(1)}_{\lambda}\)-based reduced system (B.1a), (B.1b).
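The absence of finite-time blow-up can also be observed numerically on a toy system sharing the sign structure exploited above (α>0, α _{1}(λ)<0, α _{2}(λ)<0, with an energy-preserving quadratic coupling). The coefficients and the precise form of the right-hand side below are hypothetical placeholders, not the actual reduced system (B.1a), (B.1b).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficients with the sign structure of Appendix B:
alpha = 0.7                    # alpha = gamma*pi / (sqrt(2) l^{3/2}) > 0
alpha1, alpha2 = -0.4, -0.9    # alpha_1(lambda) < 0, alpha_2(lambda) < 0

def reduced(t, z):
    # Toy two-mode system: linear damping, an energy-preserving quadratic
    # coupling weighted by alpha, and cubic damping from the PM-type terms.
    z1, z2 = z
    dz1 = -z1 - alpha * z1 * z2 + alpha1 * z1**3
    dz2 = -4.0 * z2 + alpha * z1**2 + alpha2 * z2**3
    return [dz1, dz2]

# The quadratic terms cancel in d(|z|^2)/dt, so the energy decays and the
# trajectory stays within the ball of radius |z(0)| for all t in [0, T].
T = 50.0
sol = solve_ivp(reduced, (0.0, T), [3.0, -2.0], rtol=1e-8, atol=1e-10)
```

Multiplying the equations by z _{1} and z _{2} and summing shows d(z _{1}²+z _{2}²)/dt ≤ 0 for this toy system, so `np.max(np.abs(sol.y))` never exceeds the initial norm, in line with the a priori bound (B.2).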
Cite this article
Chekroun, M.D., Liu, H. Finite-Horizon Parameterizing Manifolds, and Applications to Suboptimal Control of Nonlinear Parabolic PDEs. Acta Appl Math 135, 81–144 (2015). https://doi.org/10.1007/s10440-014-9949-1
Keywords
 Parabolic optimal control problems
 Low-order models
 Error estimates
 Burgers-type equation
 Backward–forward systems