# Finite-Horizon Parameterizing Manifolds, and Applications to Suboptimal Control of Nonlinear Parabolic PDEs

## Abstract

This article proposes a new approach for the design of low-dimensional suboptimal controllers for optimal control problems of nonlinear partial differential equations (PDEs) of parabolic type. The approach fits into the long tradition of seeking slaving relationships between the small scales and the large ones (to be controlled), but differs by the introduction of a new type of manifold for this purpose, namely the finite-horizon parameterizing manifolds (PMs). Given a finite horizon [0,T] and a low-mode truncation of the PDE, a PM provides an approximate parameterization of the high modes by the controlled low ones so that the unexplained high-mode energy is reduced, in a mean-square sense over [0,T], when this parameterization is applied.

Analytic formulas for such PMs are derived by application of the method of pullback approximation of the high modes introduced in Chekroun et al. (2014). These formulas allow for an effective derivation of reduced systems of ordinary differential equations (ODEs), aimed at modeling the evolution of the low-mode truncation of the controlled state variable, where the high-mode part is approximated by the PM function applied to the low modes. The design of low-dimensional suboptimal controllers is then obtained by (indirect) techniques from finite-dimensional optimal control theory, applied to the PM-based reduced ODEs.

A priori error estimates between the resulting PM-based low-dimensional suboptimal controller $$u_{R}^{\ast}$$ and the optimal controller $$u^{\ast}$$ are derived under a second-order sufficient optimality condition. These estimates demonstrate that the closeness of $$u_{R}^{\ast}$$ to $$u^{\ast}$$ is mainly conditioned on two factors: (i) the parameterization defect of a given PM, associated respectively with the suboptimal controller $$u_{R}^{\ast}$$ and the optimal controller $$u^{\ast}$$; and (ii) the energy kept in the high modes of the PDE solution driven by either $$u_{R}^{\ast}$$ or $$u^{\ast}$$.

The practical performance of such PM-based suboptimal controllers is numerically assessed for optimal control problems associated with a Burgers-type equation; both the locally and the globally distributed cases are considered. The numerical results show that a PM-based reduced system allows for the design of suboptimal controllers with good performance provided that the associated parameterization defects and the energy kept in the high modes are small enough, in agreement with the rigorous results.

## Introduction

In this article, we propose a new approach for the synthesis of low-dimensional suboptimal controllers for optimal control problems of nonlinear partial differential equations (PDEs) of parabolic type. Optimal control of PDEs has been extensively studied in the past few decades due largely to its broad applications in both engineering and various scientific disciplines, and fruitful results have been obtained; see e.g. the monographs [8, 10, 31, 44, 49, 56, 78, 100].

Due to the complexity of most applications, optimal control problems of parabolic PDEs are often solved numerically. Among the commonly used approaches one finds methods that solve the associated optimality system at once using techniques such as Newton or quasi-Newton methods [14, 56, 60], and methods that use optimization algorithms involving, for instance, an approximation to the gradient of the cost functional; see e.g. [13, 56, 60, 100]. In the latter case, the gradient can be approximated by sensitivity methods or by methods based on the adjoint equation; see e.g. [1, 15, 16, 51, 52, 62, 85, 86]. Efficient (and accurate) solutions can be designed by such methods [1, 7, 15, 30, 55, 85, 86], which may, however, lead to high-dimensional problems that can turn out to be computationally expensive to solve, especially for fluid flow applications. The task becomes even more challenging when a dynamic programming approach is adopted, which typically involves solving (infinite-dimensional) Hamilton–Jacobi–Bellman (HJB) equations [8, 9, 24, 35–38].

As an alternative, various reduction techniques have been proposed in the literature to instead seek low-dimensional suboptimal controllers. The main issue with such techniques lies, however, in the ability to design suboptimal solutions close enough to the genuine optimal one [40, 50, 57, 61, 101], while keeping the numerical cost of doing so low enough. A general class of model reduction techniques used extensively in this context is the so-called reduced-order modeling (ROM) approach, based on approximating the nonlinear dynamics by a Galerkin technique relying on basis functions, possibly empirical [48, 54, 55, 89]. The various ROM techniques differ in the choice of the basis functions. One popular method that falls into this category is the so-called proper orthogonal decomposition (POD); see among many others [6, 12, 57, 58, 74, 75, 83, 90], and [50, 63, 64] for other methods for constructing the reduced basis. We refer also to  for suboptimal controllers designed from the solutions of low-dimensional HJB equations associated with POD-based Galerkin reduced-order models.

Such Galerkin/ROM-based techniques can lead to the synthesis of very efficient suboptimal controllers when, at a given truncation, the disregarded high modes do not contribute significantly to the dynamics of the low modes. When this is not the case, however, seeking parameterizations of the disregarded modes in terms of the low ones becomes central for the design of surrogate low-dimensional models with good performance. The idea of seeking slaving relationships between the unstable or least stable modes and the more stable ones has a long tradition in the control theory of large-dimensional or distributed-parameter systems. For instance, by use of methods from singular perturbation theory, the authors in  investigated the construction of such slaving functions for slow-fast systems in terms of invariant (slow) manifolds.Footnote 1 Such manifolds are then used to decouple the slow and fast parts of the dynamics and to feed back the slow component of the state only. This is especially important since the fast components of the state are in general difficult to measure/estimate and consequently to feed back.

Complementary to singular perturbation methods, the authors of  used tools from center manifold and normal form theory to design a nonlinear controller and obtained a closed-loop center manifold for a truncated distributed-parameter system; in their case, proximity to a bifurcation guarantees the separation of the relevant time scales of the problem. In [32, 33], the authors went beyond the finite-dimensional singular perturbation work of  and the center-manifold-based work of  to exploit approximate inertial manifolds (AIMs)  in the infinite-dimensional case; the latter are global manifolds in phase space that can be thought of as generalizations of slow/center manifolds. Using AIMs, the authors of  then designed observer-based nonlinear feedback controllers (through the corresponding closed-loop AIMs) and demonstrated their performance.

The potential usefulness of inertial manifolds (IMs) [34, 47, 98] or AIMs in the control theory of nonlinear parabolic PDEs was actually identified soon after IM theory started to be established [22, 32, 33, 94]; see e.g. [93, 96] for a state of the art of the literature at the end of the 90s. Since these works, however, IMs and AIMs have mainly been employed to derive low-dimensional vector fields for the design of feedback controllers [3, 92]. With the exception of [4, 61], the use of IMs or AIMs to design suboptimal solutions to optimal control problems has received much less attention.

The main purpose of this article is to introduce a general framework, in the continuity of but distinct from the AIM approach, for the effective derivation of suboptimal low-dimensional solutions to optimal control problems associated with nonlinear PDEs such as (1.1) given below. To be more specific, given an ambient Hilbert space $$\mathcal{H}$$, the control problems of PDEs we will consider hereafter take the following abstract form:

$$\frac{\mathrm{d}y}{\mathrm{d}t} = L y + F(y) + \mathfrak{C} u(t), \quad t \in(0, T],$$
(1.1)

where L denotes a linear operator, F some nonlinearity, and $$\mathfrak{C}$$ a bounded linear operator on $$\mathcal {H}$$; the state variable y and the controller u both live in $$L^{2}(0,T; \mathcal{H})$$ for a given horizon T>0; see Sect. 2 for more details.

The underlying idea consists of seeking manifolds $$\mathfrak{M}$$ aimed at providing, over a finite horizon [0,T], an approximate parameterization of the small scales of the solutions to the uncontrolled PDE associated with Eq. (1.1), namely

$$\frac{\mathrm{d}y}{\mathrm{d}t} = L y + F(y),$$
(1.2)

in terms of their large scales, so that $$\mathfrak{M}$$ in turn allows us to derive low-dimensional reduced models from which suboptimal controllers can be efficiently designed by standard methods of finite-dimensional optimal control theory such as found in e.g. [18, 23, 67, 68, 95]. In that respect, the notion of finite-horizon parameterizing manifold (PM) is introduced in Definition 1 below. Finite-horizon PMs are distinguished from the more classical AIMs in that they provide an approximate parameterization of the small scales by the large ones in the $$L^{2}$$-sense (over [0,T]) rather than a hard ε-approximation valid for each time t∈[0,T], cf. . In particular, a finite-horizon PM allows one to reduce the (cumulative) unexplained high-mode energy (over [0,T]) from the low modes to be controlled, in a way different from other slaving relationships considered so far; the high-mode energy being reduced in a mean-square sense in the case of finite-horizon PMs.

Obviously, the difficulty still lies in the ability of such an approach to give access to suboptimal controllers of good performance. A priori, the task is not easy, and a key feature ensuring that a “good” performance is achieved by such a suboptimal low-dimensional controller, $$u_{R}^{*}$$, is the ability of the manifold $$\mathfrak{M}$$ derived from the uncontrolled problem to still achieve a sufficiently “small” parameterization defect (over the horizon [0,T]) of the small scales by the large ones once the controller $$u_{R}^{*}$$ is used to drive the PDE (1.1); see (3.5) in Definition 1. This point is rigorously formulated as Theorem 1 in Sect. 4 (see also Corollary 2), which provides, under a second-order sufficient optimality condition, error estimates on how “close” a low-dimensional suboptimal controller $$u_{R}^{\ast}$$, designed from a PM-based reduced system, is to the optimal controller $$u^{\ast}$$. The error estimates (4.5) and (4.10) show in particular that the closeness of $$u_{R}^{\ast}$$ to $$u^{\ast}$$ is mainly conditioned on two factors: (i) the parameterization defect of a given PM, associated respectively with the suboptimal controller $$u_{R}^{\ast}$$ and the optimal controller $$u^{\ast}$$; and (ii) the energy kept in the high modes of the PDE solution driven by either $$u_{R}^{\ast}$$ or $$u^{\ast}$$.

The article is organized as follows. The functional framework associated with optimal control problems related to (1.1) is introduced in Sect. 2. The definition of finite-horizon PMs and a practical procedure to get access to such PMs are introduced in Sect. 3. In particular, analytic formulas of leading-order PMs are provided; the latter are subject to a cross non-resonance condition (NR) between the high and the low modes; see Sect. 3.2. Section 4 is devoted, given an arbitrary PM, to the derivation of rigorous a priori error estimates between a low-dimensional PM-based suboptimal controller and the optimal one; see Theorem 1 and Corollary 2. The performance of the resulting PM-based reduction approach is numerically investigated on a Burgers-type equation in the context of globally and locally distributed control laws; see Sects. 5, 6, and 7. As a main byproduct, the numerical results strongly indicate that a PM-based reduced system allows for the design of suboptimal controllers with good performance provided that the aforementioned parameterization defects and the energy contained in the high modes are small enough, in agreement with the theoretical predictions of Theorem 1 and Corollary 2. This is particularly demonstrated in Sect. 6, where analytic formulas derived in Theorem 2 give access to higher-order PMs with reduced parameterization defects compared to those of the leading-order PMs introduced in Sect. 3. In all cases, the analytic formulas of the PMs used hereafter allow for an efficient design of suboptimal controllers by standard (and simple) application of the Pontryagin maximum principle [18, 19, 67, 88] to the PM-based reduced systems.

## Optimal Control of Nonlinear PDEs, and Functional Framework

The functional framework for the optimal control problem considered in this article takes place in Hilbert spaces. Let us first introduce the class of partial differential equations (PDEs) to be controlled. For a given Hilbert space $$\mathcal{H}$$, we consider $$\mathcal{H}_{1}$$ to be a subspace compactly and densely embedded in $$\mathcal{H}$$ such that $$A:\mathcal{H}_{1}\rightarrow\mathcal{H}$$ is a sectorial operator [53, Definition 1.3.1] satisfying

$$-A \mbox{ is stable in the sense that its spectrum satisfies } \operatorname{Re} \bigl(\sigma(-A) \bigr)< 0.$$

To include in our framework PDEs for which the nonlinear terms are responsible for a loss of regularity compared to the ambient space $$\mathcal{H}$$, we consider standard interpolated spaces $$\mathcal {H}_{\alpha}$$ between $$\mathcal{H}_{1}$$ and $$\mathcal{H}$$ (with α∈[0,1))Footnote 2 along with perturbations of the linear operator −A given by a one-parameter family, $$\{B_{\lambda}\}_{\lambda\in\mathbb {R}}$$, of bounded linear operators from $$\mathcal{H}_{\alpha}$$ to $$\mathcal{H}$$, that depend continuously on a real parameter λ.

By defining

$$L_{\lambda}:=-A+B_{\lambda},$$

we are thus left with a one-parameter family of sectorial operators $$\{ -L_{\lambda}\}_{\lambda\in\mathbb{R}}$$, each of them mapping $$\mathcal {H}_{1}$$ into $$\mathcal{H}$$. Finally, $$F: \mathcal{H}_{\alpha}\rightarrow \mathcal{H}$$ will denote a continuous k-linear mapping (k≥2) for some α∈[0,1).Footnote 3

The nonlinear evolution equation to be controlled takes then the following abstract form:

$$\frac{\mathrm{d}y}{\mathrm{d}t} = L_\lambda y + F(y) + \mathfrak{C} u(t), \quad t \in(0, T],$$
(2.1)

where $$y \in L^{2}(0,T; \mathcal{H})$$ denotes the state variable, $$u \in L^{2}(0,T; \mathcal{H})$$ denotes the controller; T>0 being a fixed horizon, and

$$\mathfrak{C}: \mathcal{H} \rightarrow\mathcal{H}$$
(2.2)

denoting a bounded (and non-zero) linear control operator. In particular, we will be mainly concerned with distributed control problems (control inside the domain) and not with problems involving a control on the boundary, which typically leads to an unbounded control operator; see e.g. [10, Part V, Chaps. 2 and 3] and . The parameter λ typically governs the presence of (linearly) unstable modes for (2.1). In the applications considered in Sects. 5–7, it will be chosen so that the linear operator $$L_{\lambda}$$ admits large-scale unstable modes.
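To fix ideas, here is a minimal numerical sketch (an illustration, not the article's setting) of a locally distributed control operator on $$\mathcal{H}=L^{2}(0,\pi)$$: multiplication by the indicator function of a control region [a,b]. The grid, the region [a,b], and the test function are assumptions made for the example; such an operator is bounded (with norm at most 1) and non-zero, as required of $$\mathfrak{C}$$ in (2.2).

```python
import numpy as np

# Sketch (hypothetical discretization): a locally distributed control operator
# acting on H = L^2(0, pi) as multiplication by the indicator of [a, b].

x = np.linspace(0.0, np.pi, 513)
dx = x[1] - x[0]
a, b = 1.0, 2.0
chi = ((x >= a) & (x <= b)).astype(float)   # indicator of the control region

def C(u):
    """Apply the control operator: (C u)(x) = chi_[a,b](x) * u(x)."""
    return chi * u

def l2_norm(f):
    """Discrete L^2(0, pi) norm."""
    return np.sqrt(np.sum(f ** 2) * dx)

u = np.sin(x)
# Multiplication by an indicator is a contraction on L^2, so ||C u|| <= ||u||;
# the globally distributed case of Sect. 5 corresponds to chi = 1 everywhere.
```

The globally versus locally distributed cases treated in Sects. 5–7 then differ only through the support of the indicator.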

We introduce next the cost functional $$J: L^{2}(0,T; \mathcal{H}) \times L^{2}(0,T; \mathcal{H}) \rightarrow\mathbb{R}$$ given by

$$J(y,u) := \int_0^T \bigl[ \mathcal{G}\bigl(y(t)\bigr) + \mathcal{E}\bigl(u(t)\bigr) \bigr] \,\mathrm{d}t,$$
(2.3)

where $$\mathcal{G}: \mathcal{H} \rightarrow\mathbb{R}^{+}$$ and $$\mathcal {E}: \mathcal{H} \rightarrow\mathbb{R}^{+}$$ are assumed to be continuous, and to satisfy the following conditions:

$$\mathcal{G} \mbox{ is uniformly Lipschitz on bounded sets of } \mathcal{H},$$
(C1)

and

$$\| u\|\leq\|v\| \Longrightarrow\mathcal{E}(u)\leq \mathcal{E}(v),$$
(C2)

where ∥⋅∥ denotes the $$\mathcal{H}$$-norm.

Given such a cost functional,Footnote 4 we will consider in this article the following type of optimal control problem:

$$\min_{u} J(y,u) \quad\mbox{subject to } y \mbox{ solving (2.1) with } y(0)=y_{0}\in\mathcal{H}.$$

To simplify the presentation, we will make the following assumptions on $$L_{\lambda}$$ and F throughout this article:

### Standing Hypothesis

$$L_{\lambda}$$ is self-adjoint; its eigenvalues, arranged in descending order, are denoted by $$\{\beta_{i}(\lambda)\}_{i \in\mathbb{N}}$$, and the corresponding eigenvectors $$\{ e_{i}(\lambda)\}_{i \in\mathbb{N}}$$ form a Hilbert basis of $$\mathcal{H}$$. The eigenvectors are regular enough so that $$e_{i}(\lambda) \in\mathcal{H}_{\alpha}$$ for all $$i\in\mathbb{N}$$. The nonlinearity $$F: \mathcal{H}_{\alpha}\rightarrow\mathcal{H}$$ is a continuous k-linear mapping for some k≥2 and some α∈[0,1). In particular, F(0)=0.

We also assume that for any initial datum $$y_{0}\in\mathcal{H}$$, any T>0, and any given $$u \in L^{2}(0, T; \mathcal{H})$$, the Cauchy problem

$$\frac{\mathrm{d}y}{\mathrm{d}t} = L_\lambda y + F(y) + \mathfrak{C} u(t), \quad y(0) = y_0 \in\mathcal{H},$$
(2.4)

has a unique solution $$y(\cdot,y_{0};u) \in C([0,T]; \mathcal{H}) \cap L^{2}(0,T; \mathcal{H}_{\alpha})$$, which lives furthermore in the space $$C^{1}((0,T]; \mathcal{H}) \cap C([0,T]; \mathcal{H}_{\alpha}) \cap L^{2}(0,T; \mathcal{H}_{1})$$ when $$y_{0} \in\mathcal{H}_{\alpha}$$; see e.g. [53, Chap. 3] and [82, Chap. 7] for conditions under which such properties are guaranteed. Sect. 5.1 below deals with such an example.

## Finite-Horizon Parameterizing Manifolds: Definition, Pullback Characterization and Analytic Formulas

This section is devoted to the definition of finite-horizon parameterizing manifolds (PMs) for a given PDE of type (2.4) and a general method to give access to explicit formulas of such finite-horizon PMs in practice through pullback limits associated with certain backward–forward systems built from the uncontrolled Eq. (1.2).

The key idea takes its roots in the notion of (asymptotic) parameterizing manifold introduced in ,Footnote 5 which here reduces to approximating, over some prescribed finite time interval [0,T], the modes with “high” wave numbers as a pullback limit depending on the time history of (some approximation of) the dynamics of the modes with “low” wave numbers. The cut between what is “low” and what is “high” is organized in an abstract setting as follows; we refer to Sect. 7 for a more concrete specification of such a cut in the case of locally distributed controls. The subspace $$\mathcal{H}^{\mathfrak{c}} \subset\mathcal{H}$$ defined by

\begin{aligned} \mathcal{H}^{\mathfrak{c}} := \operatorname{span}\{e_1, \ldots, e_m\}, \end{aligned}
(3.1)

spanned by the m leading modes will be considered as our subspace associated with the low modes. Its topological complements, $$\mathcal{H}^{\mathfrak{s}}$$ and $$\mathcal{H}^{\mathfrak{s} }_{\alpha}$$, in $$\mathcal{H}$$ and $$\mathcal{H}_{\alpha}$$ respectively, will be considered as associated with the high modes, leading to the following decomposition:

$$\mathcal{H} = \mathcal{H}^{\mathfrak{c}} \oplus\mathcal {H}^{\mathfrak{s}}, \qquad\mathcal {H}_\alpha= \mathcal{H}^{\mathfrak{c}} \oplus\mathcal {H}^{\mathfrak{s}}_\alpha.$$
(3.2)

We will use $$P_{\mathfrak{c}}$$ and $$P_{\mathfrak{s}}$$ to denote the canonical projectors associated with $$\mathcal{H}^{\mathfrak{c}}$$ and $$\mathcal {H}^{\mathfrak{s}}$$, respectively. Here, the eigenbasis is used to decompose the phase space for the sake of the analytic formulations derived hereafter. In practice, the methodology presented below can be (numerically) adapted when the phase space $$\mathcal{H}$$ is decomposed using other bases; see also Remark 1(ii).
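The decomposition (3.2) and the projectors $$P_{\mathfrak{c}}$$, $$P_{\mathfrak{s}}$$ can be sketched numerically as follows (a minimal illustration, not the article's code). We assume, for concreteness, the Dirichlet Laplacian on (0,π), whose eigenfunctions $$e_{i}(x)=\sqrt{2/\pi}\,\sin(ix)$$ form a Hilbert basis of $$\mathcal{H}=L^{2}(0,\pi)$$; the grid size, the cutoff m, and the high-mode truncation are choices made for the example.

```python
import numpy as np

# Sketch: low/high-mode decomposition (3.2) for the Dirichlet Laplacian
# on (0, pi), with eigenfunctions e_i(x) = sqrt(2/pi) sin(i x).

N = 512
x = np.linspace(0.0, np.pi, N)
dx = x[1] - x[0]
m = 4                        # dimension of the low-mode subspace H^c

def e(i):
    """i-th eigenfunction of the Dirichlet Laplacian on (0, pi)."""
    return np.sqrt(2.0 / np.pi) * np.sin(i * x)

def inner(f, g):
    """L^2(0, pi) inner product (simple quadrature; endpoints vanish)."""
    return np.sum(f * g) * dx

def P_c(y):
    """Canonical projector onto H^c = span{e_1, ..., e_m}."""
    return sum(inner(y, e(i)) * e(i) for i in range(1, m + 1))

def P_s(y, n_max=64):
    """Projector onto the high modes, truncated at wave number n_max."""
    return sum(inner(y, e(n)) * e(n) for n in range(m + 1, n_max + 1))

# A state mixing one low and one high mode; the projectors recover the two
# components, and y = P_c y + P_s y holds up to quadrature error.
y = 2.0 * e(1) + 0.5 * e(m + 3)
y_c, y_s = P_c(y), P_s(y)
```

Any other orthonormal basis (e.g. a POD basis) can be substituted for `e(i)` without changing the structure of the projectors, in line with Remark 1(ii).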

### Finite-Horizon Parameterizing Manifolds

Let $$t^{\ast}>0$$ be fixed, $$\mathcal{V}$$ be an open set in $$\mathcal {H}_{\alpha}$$, and $$\mathcal{U}$$ an open set in $$L^{2}(0,t^{\ast}; \mathcal{H})$$. For a given PDE of type (2.4), a finite-horizon parameterizing manifold $$\mathfrak{M}$$ over the interval $$[0,t^{\ast}]$$ is defined as the graph of a function $$h^{\mathrm{pm}}$$ from $$\mathcal{H}^{\mathfrak{c}}$$ to $$\mathcal{H}^{\mathfrak {s}}_{\alpha}$$, aimed at providing, for any solution $$y(t,y_{0};u)$$ of (2.4) with initial datum $$y_{0} \in\mathcal{V}$$ and control $$u \in\mathcal{U}$$, an approximate parameterization of its “high-frequency” part, $$y_{\mathfrak{s}}(t, y_{0};u)=P_{\mathfrak{s}} y(t, y_{0};u)$$, in terms of its “low-frequency” part, $$y_{\mathfrak{c}}(t, y_{0};u)=P_{\mathfrak{c}} y(t, y_{0}; u)$$, so that the mean-square error, $$\int_{0}^{t^{\ast}} \|y_{\mathfrak {s}}(t, y_{0}; u) - h^{\mathrm{pm}}(y_{\mathfrak{c}}(t, y_{0}; u)) \|_{\alpha}^{2} \,\mathrm{d}t$$, is strictly smaller than the high-mode energy of $$y_{\mathfrak{s}}$$, $$\int_{0}^{t^{\ast}} \|y_{\mathfrak{s}}(t, y_{0}; u)\|_{\alpha}^{2} \, \mathrm{d}t$$. Here the frequencies are understood in a spatial sense, i.e. in terms of wave numbers.Footnote 6 In statistical terms, a finite-horizon PM function $$h^{\mathrm{pm}}$$ can thus be thought of as a slaving relationship between the high modes and the low ones such that the fraction of energyFootnote 7 of $$y_{\mathfrak{s}}$$ unexplained by $$h^{\mathrm{pm}}(y_{\mathfrak{c}})$$ (i.e. via this slaving relationship) is less than unity.

In more precise terms, we are left with the following definition:

### Definition 1

Let $$t^{\ast}>0$$ be fixed, $$\mathcal{V}$$ be an open set in $$\mathcal {H}_{\alpha}$$, and $$\mathcal{U}$$ an open set in $$L^{2}(0,t^{\ast}; \mathcal {H})$$. A manifold $$\mathfrak{M}$$ of the form

\begin{aligned} \mathfrak{M} := \bigl\{ \xi+ h^{\mathrm{pm}}(\xi) \bigm|\xi\in\mathcal {H}^{\mathfrak{c}}\bigr\} \end{aligned}
(3.3)

is called a finite-horizon parameterizing manifold (PM) over the time interval $$[0,t^{\ast}]$$ associated with the PDE (2.4) if the following conditions are satisfied:

1. (i)

The function $$h^{\mathrm{pm}}: \mathcal{H}^{\mathfrak {c}} \rightarrow \mathcal{H}^{\mathfrak{s}}_{\alpha}$$ is continuous.

2. (ii)

The following inequality holds for any $$y_{0} \in\mathcal {V}$$ and any $$u \in\mathcal{U}$$:

\begin{aligned} \int_0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t,y_0; u) - h^{\mathrm{pm}}\bigl(y_{\mathfrak{c}}(t, y_0; u)\bigr) \bigr\| _\alpha^2 \, \mathrm{d}t < \int _0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t, y_0; u) \bigr\| _\alpha^2 \, \mathrm{d}t, \end{aligned}
(3.4)

where $$y_{\mathfrak{c}}(\cdot, y_{0}; u)$$ and $$y_{\mathfrak{s}}(\cdot , y_{0}; u)$$ are the projections onto the subspaces $$\mathcal{H}^{\mathfrak {c}}$$ and $$\mathcal{H}^{\mathfrak{s}}_{\alpha}$$, respectively, of the solution $$y(\cdot,y_{0};u)$$ of the PDE (2.4) driven by u and emanating from $$y_{0}$$.

For a given initial datum $$y_{0}$$, if $$y_{\mathfrak{s}}(\cdot, y_{0}; u)$$ is not identically zero, the parameterization defect of $$\mathfrak{M}$$ over $$[0,t^{\ast}]$$, associated with the control u, is defined as the following ratio:

$$\boxed{ Q\bigl(t^\ast, y_0; u\bigr) := \frac{\int_0^{t^\ast} \|y_{\mathfrak {s}}(t, y_0; u) - h^{\mathrm{pm}}(y_{\mathfrak{c}}(t, y_0; u)) \|_\alpha^2 \, \mathrm{d}t}{ \int_0^{t^\ast} \|y_{\mathfrak{s}}(t, y_0; u)\|_\alpha^2 \, \mathrm{d}t}.}$$
(3.5)

Note that in Sects. 5, 6 and 7, we will illustrate numerically that finite-horizon PMs can actually be obtained from the uncontrolled PDE (1.2), with parameterization defects that can remain small when a controller u is applied. The procedure for building such PMs from the uncontrolled PDE (1.2) in practice is described in the next section; see also [27, Sect. 4.5] for the construction of PMs over arbitrarily (and sufficiently) large horizons.
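Given time samples of a trajectory, the defect (3.5) is a simple ratio of two quadratures. The following minimal sketch (an illustration under our own discretization assumptions, not the article's code) computes Q from sampled high-mode coefficients and sampled PM predictions; the synthetic trajectory at the end is hypothetical.

```python
import numpy as np

# Sketch: parameterization defect (3.5) from time samples on [0, t*],
# by trapezoidal quadrature.

def _trap(f, t):
    """Trapezoidal rule for samples f at times t."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def parameterization_defect(t, y_s, h_pm_of_y_c):
    """
    t            : sample times in [0, t*]
    y_s          : (len(t), d) high-mode coefficients of the solution
    h_pm_of_y_c  : (len(t), d) PM function evaluated along the low modes
    Returns Q(t*, y0; u) as in (3.5); Q < 1 is the PM inequality (3.4).
    """
    err = np.sum((y_s - h_pm_of_y_c) ** 2, axis=1)   # ||y_s - h_pm(y_c)||^2
    eng = np.sum(y_s ** 2, axis=1)                   # ||y_s||^2
    return _trap(err, t) / _trap(eng, t)

# Synthetic check: if the PM recovers 90% of each high-mode amplitude, the
# pointwise error is (0.1)^2 = 1% of the energy, hence Q = 0.01 exactly.
t = np.linspace(0.0, 2.0, 201)
y_s = np.column_stack([np.exp(-t), np.exp(-2.0 * t)])
Q = parameterization_defect(t, y_s, 0.9 * y_s)
```

In the experiments of Sects. 5–7, `y_s` would come from a resolved simulation of (2.4) and `h_pm_of_y_c` from the analytic PM formulas applied to the simulated low modes.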

### Finite-Horizon Parameterizing Manifolds as Pullback Limits of Backward–Forward Systems: The Leading-Order Case

We consider now the important problem of the practical determination of finite-horizon PMs for PDEs of type (2.4). As mentioned above, following , the pullback approximation of the high modes in terms of the low ones via appropriate auxiliary systems associated with the uncontrolled PDE (1.2) will constitute the key ingredient to propose a solution to this problem; see also [27, Sect. 4.5]. In that respect, we consider first the following backward–forward system associated with the uncontrolled PDE (1.2):

\begin{aligned} &\frac{\mathrm{d} y^{(1)}_{\mathfrak{c}}}{\mathrm{d} s} = L_\lambda^{\mathfrak{c}} y^{(1)}_{\mathfrak{c}}, \quad s \in[ -\tau, 0],\ y^{(1)}_{\mathfrak{c}}(s) \vert_{s=0} = \xi, \end{aligned}
(3.6a)
\begin{aligned} & \frac{\mathrm{d} y^{(1)}_{\mathfrak{s}}}{\mathrm{d} s} = L_\lambda^{\mathfrak{s}} y^{(1)}_{\mathfrak{s}} + P_{\mathfrak{s}} F\bigl(y^{(1)}_{\mathfrak{c}}\bigr), \quad s \in[-\tau, 0], \ y^{(1)}_{\mathfrak{s}}(s) \vert_{s=-\tau }= 0, \end{aligned}
(3.6b)

where $$L_{\lambda}^{\mathfrak{c}} := P_{\mathfrak{c}} L_{\lambda}$$, $$L_{\lambda}^{\mathfrak{s}} := P_{\mathfrak{s}} L_{\lambda}$$, and $$\xi\in\mathcal{H}^{\mathfrak {c}}$$. We refer to Sect. 6 for other backward–forward systems used in the construction of higher-order finite-horizon PMs.

In the system above, the initial value of $$y_{\mathfrak{c}}^{(1)}$$ is prescribed at s=0, and the initial value of $$y_{\mathfrak{s}}^{(1)}$$ at s=−τ. The solution of this system is obtained by a two-step backward–forward integration procedure, where Eq. (3.6a) is first integrated backward and Eq. (3.6b) is then integrated forward. This is made possible by the partial coupling present in (3.6a), (3.6b), where $$y^{(1)}_{\mathfrak{c}}$$ forces the evolution equation of $$y^{(1)}_{\mathfrak{s}}$$ but not reciprocally. Due to this forcing by $$y_{\mathfrak{c}}^{(1)}$$, which emanates (backward) from ξ, the solution process $$y_{\mathfrak{s}}^{(1)}$$ naturally depends on ξ. For that reason, we will emphasize this dependence as $$y_{\mathfrak{s} }^{(1)}[\xi]$$ hereafter.
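The two-step procedure above can be sketched on a scalar caricature (an illustration, not the article's code): one low mode with eigenvalue beta_c, one high mode with eigenvalue beta_s, and a quadratic interaction $$P_{\mathfrak{s}}F(y_{\mathfrak{c}})=y_{\mathfrak{c}}^{2}$$, so that k=2; the numerical values are assumptions chosen so that the relevant non-resonance gap 2·beta_c − beta_s is positive.

```python
import numpy as np

# Toy sketch of the backward-forward procedure for (3.6a)-(3.6b):
# scalar low and high modes, quadratic forcing P_s F(y_c) = y_c^2.

beta_c, beta_s = -0.2, -2.0      # here 2*beta_c - beta_s = 1.6 > 0
xi, tau, ds = 1.0, 10.0, 1e-4    # datum at s = 0, history length, step size

# Step 1 (backward): Eq. (3.6a) is linear and uncoupled, so its backward
# solution from y_c(0) = xi is known in closed form, as in (3.7).
y_c = lambda s: np.exp(s * beta_c) * xi

# Step 2 (forward): integrate Eq. (3.6b) from s = -tau with y_s(-tau) = 0,
# forced by the low-mode history from step 1 (explicit Euler for simplicity).
s, y_s = -tau, 0.0
n_steps = int(round(tau / ds))
for _ in range(n_steps):
    y_s += ds * (beta_s * y_s + y_c(s) ** 2)
    s += ds

# As tau grows, y_s(0) approaches the pullback limit (3.8); for this toy
# model, formula (3.11) gives h1(xi) = xi^2 / (2*beta_c - beta_s).
h1_exact = xi ** 2 / (2 * beta_c - beta_s)
```

The numerically integrated `y_s` matches `h1_exact` up to the Euler discretization error and the exponentially small tail in τ, illustrating the convergence of the pullback limit discussed next.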

It is clear that the solution to the above system is given by:

\begin{aligned} y^{(1)}_{\mathfrak{c}}(s) & = e^{s L_\lambda^{\mathfrak{c}}}\xi, \quad s \in[-\tau , 0], \ \xi\in \mathcal{H}^{\mathfrak{c}}, \\ y_{\mathfrak{s}}^{(1)}[\xi](-\tau, s) & = \int_{-\tau}^s e^{(s-\tau') L_\lambda^{\mathfrak{s}}} P_{\mathfrak{s}} F\bigl(e^{\tau' L_\lambda ^{\mathfrak{c}}}\xi\bigr) \,\mathrm{d} \tau', \quad s \in[-\tau, 0]. \end{aligned}
(3.7)

The dependence on τ and s in $$y_{\mathfrak{s}}^{(1)}[\xi ]$$ is made apparent to emphasize the two-time description employed for the non-autonomous dynamics inherent to (3.6b); see e.g. [25, 28]. Adopting the language of non-autonomous dynamical systems [25, 28], we then define $$h^{(1)}_{\lambda}(\xi)$$ as the following pullback limit of the $$y_{\mathfrak{s}}^{(1)}$$-component of the solution to the above system, i.e.,

$$\boxed{{h^{(1)}_\lambda(\xi) :=\lim _{\tau\rightarrow+\infty} y^{(1)}_{\mathfrak{s}}[\xi](-\tau, 0)} = \int _{-\infty}^0 e^{-\tau ' L^{\mathfrak{s}}_\lambda } P_{\mathfrak{s}} F\bigl( e^{\tau' L^{\mathfrak{c}}_\lambda}\xi\bigr) \,\mathrm{d} \tau', \quad \forall\xi\in \mathcal{H}^{\mathfrak{c}},}$$
(3.8)

when the latter limit exists. We derive hereafter necessary and sufficient conditions for such a limit to exist.

In that respect, first note that since L λ is self-adjoint, we have

$$e^{\tau' L^{\mathfrak{c}}_\lambda}\xi= \sum_{i = 1}^m e^{\tau' \beta_i(\lambda )} \xi_i e_i,$$
(3.9)

where $$\xi_{i} =\langle\xi,e_{i}\rangle$$, $$i \in\mathcal{I} :=\{1, \ldots , m\}$$ with $$m=\operatorname{dim}(\mathcal{H}^{\mathfrak{c}})$$, and 〈⋅,⋅〉 denotes the inner product in the ambient Hilbert space $$\mathcal{H}$$.

Now, for a fixed τ>0, projecting $$y_{\mathfrak{s}}^{(1)}[\xi ](-\tau,0)$$ against each eigenmode $$e_{n}$$ for n>m, and using (3.9) and the k-linearity of F, we obtain

\begin{aligned} y_{\mathfrak{s}}^{(1)}[\xi](- \tau,0) & = \sum_{n > m} \int_{-\tau}^0 e^{-\tau' \beta_n(\lambda)} \Biggl\langle F \Biggl( \sum_{i = 1}^m e^{\tau' \beta_i(\lambda)} \xi_i e_i \Biggr), e_n \Biggr\rangle \,\mathrm{d}\tau' \, e_n \\ & = \sum_{n > m} \sum_{(i_1, \ldots, i_k) \in\mathcal{I}^k} \int_{-\tau }^0 e^{- \beta_n(\lambda)\tau' + ( \sum_{j = 1}^k \beta _{i_j}(\lambda) ) \tau'} \,\mathrm{d} \tau' \bigl\langle F(e_{i_1}, \ldots, e_{i_k}), e_n \bigr\rangle e_n. \end{aligned}
(3.10)

From this identity, we infer that $$h^{(1)}_{\lambda}$$ is well defined if and only if each integral

$$\int_{-\infty}^0 e^{- \beta_n(\lambda) \tau' + ( \sum_{j = 1}^k \beta_{i_j}(\lambda) ) \tau'} \,\mathrm{d} \tau'$$

converges whenever the corresponding nonlinear interaction $$F(e_{i_{1}}, \ldots, e_{i_{k}})$$, as projected against $$e_{n}$$, is non-zero. Namely, $$h^{(1)}_{\lambda}$$ exists if and only if the following (weak) non-resonance condition holds:

\begin{aligned} & \forall (i_1, \ldots, i_k) \in\mathcal{I}^k, \ n > m, \text{it holds that} \\ &\quad \bigl(\bigl\langle F(e_{i_1}, \ldots, e_{i_k}), e_n \bigr\rangle \neq0 \bigr) \Longrightarrow \Biggl( \sum _{j = 1}^k \beta_{i_j}(\lambda) - \beta _n(\lambda) > 0 \Biggr); \end{aligned}
(NR)

Under the above (NR)-condition, it then follows from (3.8) and (3.10) that $$h^{(1)}_{\lambda}$$ takes the following form:

$$\boxed{h^{(1)}_\lambda(\xi) = \sum _{n > m} \sum_{(i_1, \ldots, i_k) \in \mathcal{I}^k} \frac{\xi_{i_1}\cdots\xi_{i_k}}{\sum_{j = 1}^k \beta _{i_j}(\lambda) - \beta_n(\lambda)} \bigl\langle F(e_{i_1}, \ldots, e_{i_k}), e_n \bigr\rangle e_n.}$$
(3.11)

In particular, under the (NR)-condition, each $$e_{n}$$-component of $$h^{(1)}_{\lambda}(\xi)$$ is, in the ξ-variable, a homogeneous polynomial of order k, the order of the nonlinearity F. For that reason, $$h^{(1)}_{\lambda}$$ will be referred to as the leading-order finite-horizon PM when appropriate, that is, when it indeed provides a finite-horizon PM. We clarify in the remainder of this section some (idealistic) conditions under which this property is met by the manifold function $$h^{(1)}_{\lambda}$$ for the PDE (2.4). In practice these conditions can be violated while the manifold function $$h^{(1)}_{\lambda}$$ defined by (3.11) still constitutes a finite-horizon PM; see Sects. 5.5 and 7 for numerical illustrations.
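Formula (3.11) can be evaluated directly once the eigenvalues and the interaction coefficients are known. The following minimal sketch (hypothetical data, not the article's code) does so for a bilinear F (k=2), raising an error when the (NR)-condition fails for a nonzero interaction; the eigenvalues and the single nonzero coefficient are assumptions made for the example.

```python
import numpy as np

# Sketch: evaluating the leading-order PM formula (3.11) for k = 2, given
# eigenvalues beta and coefficients F_coef[i1, i2, n] = <F(e_i1, e_i2), e_n>
# (all indices 0-based; mode j corresponds to e_{j+1} in the text).

def h1(xi, beta, F_coef, m):
    """e_n-coefficients (n > m) of h1(xi), with the (NR) check built in."""
    n_tot = len(beta)
    out = np.zeros(n_tot - m)
    for n in range(m, n_tot):            # high modes
        for i1 in range(m):
            for i2 in range(m):          # low-mode pairs
                c = F_coef[i1, i2, n]
                if c == 0.0:
                    continue             # (NR) only constrains active triads
                gap = beta[i1] + beta[i2] - beta[n]
                if gap <= 0.0:
                    raise ValueError(f"(NR) fails for ({i1},{i2},{n})")
                out[n - m] += xi[i1] * xi[i2] * c / gap
    return out

# Hypothetical example: m = 2 low modes, 2 high modes, and one nonzero
# interaction <F(e_1, e_1), e_3> = 1, with beta = (-1, -2, -4, -5); the
# e_3-coefficient of h1 is then xi_1^2 / (beta_1 + beta_1 - beta_3).
beta = np.array([-1.0, -2.0, -4.0, -5.0])
F_coef = np.zeros((2, 2, 4)); F_coef[0, 0, 2] = 1.0
coeffs = h1(np.array([3.0, 1.0]), beta, F_coef, m=2)   # -> [4.5, 0.0]
```

For the Burgers-type equation of Sects. 5–7, the coefficients $$\langle F(e_{i_1},e_{i_2}), e_{n}\rangle$$ would be computed once from the convective nonlinearity, after which evaluating $$h^{(1)}_{\lambda}$$ along a trajectory is inexpensive.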

Delineating conditions under which $$h^{(1)}_{\lambda}$$ is a finite-horizon PM is nevertheless valuable for the theory. This is the purpose of Lemma 1 below, which relies on another key property of $$h^{(1)}_{\lambda}$$ as defined by (3.8), one that can be explained using the language of invariant manifold theory for PDEs [26, 84]. The latter theory shows that the manifold function $$h^{(1)}_{\lambda}$$ constitutes, for the uncontrolled PDE (1.2), the leading-order approximation of some local invariant manifold near the trivial steady state; see [84, Appendix A] and [26, Chap. 7]. Based on this result we formulate the following lemma about the existence of finite-horizon PMs.

### Lemma 1

Let λ be fixed and $$\mathcal{H}^{\mathfrak{c}}$$ be the subspace spanned by the first m eigenmodes of the linear operator $$L_{\lambda}$$. Assume that the standing hypothesis of Sect. 2 holds, and that

$$\beta_m(\lambda) > 2k \beta_{m+1}(\lambda).$$
(3.12)

Assume furthermore that the non-resonance condition (NR) holds so that the pullback limit $$h^{(1)}_{\lambda}$$ defined by (3.8) exists.

Assume that $$h^{(1)}_{\lambda}$$ is non-degenerate in the sense that there exists C>0 such that

$$\bigl\| h^{(1)}_\lambda(\xi)\bigr\| _\alpha\ge C \| \xi\|_\alpha^k, \quad\xi \in \mathcal{H}^{\mathfrak{c}}.$$
(3.13)

Then, for any fixed $$t^{\ast}>0$$, there exist open neighborhoods $$\mathcal {V} \subset\mathcal{H}_{\alpha}$$ and $$\mathcal{U} \subset L^{2}(0, t^{\ast}; \mathcal{H})$$ containing the origins of the respective spaces, such that $$h^{(1)}_{\lambda}$$ is a finite-horizon parameterizing manifold over the time interval $$[0,t^{\ast}]$$ for the PDE (2.4) driven by any control $$u \in\mathcal{U}$$ and with initial data taken from $$\mathcal{V}$$.

### Proof

Let us first recall some related elements from . Note that the PDE (1.2) fits into the framework of [26, Corollary 7.1].Footnote 8 Since the nonlinearity F is assumed to be k-linear for some k≥2, it follows from [26, Corollary 7.1] that, under assumption (3.12), there exists a local invariant manifold associated with the PDE (1.2) of the form,

$$\mathfrak{M}_{\lambda}^{\mathrm{loc}} := \bigl\{ \xi+ h_{\lambda }^{\mathrm{loc}} (\xi) \bigm|\xi\in\mathfrak{B}\bigr\} ,$$
(3.14)

where $$h_{\lambda}^{\mathrm{loc}}: \mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal {H}^{\mathfrak{s}}_{\alpha}$$ is the corresponding local manifold function, $$\mathfrak{B} \subset\mathcal{H}^{\mathfrak{c}}$$ is an open neighborhood of the origin in $$\mathcal{H}^{\mathfrak{c}}$$, and $$h_{\lambda }^{\mathrm{loc}}(0)=0$$. Recall that the (NR)-condition ensures that the pullback limit $$h^{(1)}_{\lambda}$$ given in (3.8) is well-defined. According to [26, Corollary 7.1], the manifold function $$h^{(1)}_{\lambda}$$ under its form (3.11) then provides the leading-order approximation of the local invariant manifold function $$h_{\lambda}^{\mathrm{loc}}$$, i.e.

$$\bigl\| h_{\lambda}^{\mathrm{loc}}(\xi) - h^{(1)}_\lambda( \xi)\bigr\| _\alpha = o\bigl(\| \xi\|_\alpha^k\bigr).$$
(3.15)

It follows from (3.15) that for all ε>0 sufficiently small, there exists a neighborhood $$\mathfrak{B}_{1} \subset \mathfrak{B}$$ such that

$$\bigl\| h_{\lambda}^{\mathrm{loc}}(\xi) - h^{(1)}_\lambda( \xi)\bigr\| _\alpha \le {\varepsilon} \|\xi\|_\alpha^{k+1}, \quad\xi\in\mathfrak{B}_1.$$
(3.16)

This together with the non-degeneracy condition on $$h^{(1)}_{\lambda}$$ given by (3.13) implies that

$$\bigl\| h_{\lambda}^{\mathrm{loc}}(\xi)\bigr\| _\alpha\ge \bigl\| h^{(1)}_\lambda (\xi)\bigr\| _\alpha- \bigl\| h_{\lambda}^{\mathrm{loc}}( \xi) - h^{(1)}_\lambda(\xi )\bigr\| _\alpha\ge C \|\xi \|_\alpha^{k} - {\varepsilon} \|\xi\|_\alpha^{k+1}.$$
(3.17)

By possibly choosing ε smaller, and $$\mathfrak{B}_{1}$$ to be a smaller neighborhood of the origin, we obtain

$$\bigl\| h_{\lambda}^{\mathrm{loc}}(\xi)\bigr\| _\alpha\ge \frac{C}{2} \|\xi\| _\alpha^{k}, \quad\xi\in \mathfrak{B}_1.$$
(3.18)

We now show that the condition (3.4) required in Definition 1 holds for solutions of the uncontrolled PDE (1.2) emanating from sufficiently small initial data on the local invariant manifold $$\mathfrak{M}^{\mathrm{loc}}_{\lambda}$$.

For this purpose, we note that for any fixed $$t^{\ast} >0$$, by continuous dependence of the solutions to (1.2) on the initial data, given any sufficiently small initial datum on the local invariant manifold $$\mathfrak{M}^{\mathrm{loc}}_{\lambda}$$, the solution stays on $$\mathfrak{M}^{\mathrm{loc}}_{\lambda}$$ over $$[0,t^{\ast}]$$. Let $$\mathfrak{B}_{2} \subset\mathfrak{B}_{1}$$ be a neighborhood of the origin in $$\mathcal{H}^{\mathfrak{c}}$$ such that each initial datum of the form $$y_{0} :=\xi+h_{\lambda}^{\mathrm{loc}}(\xi)$$, $$\xi\in\mathfrak{B}_{2}$$, satisfies the aforementioned property, and the corresponding solution $$y(\cdot, y_0; 0)$$ satisfies furthermore that

$$y_{\mathfrak{c}}(t, y_0; 0):=P_{\mathfrak{c}} y(t, y_0; 0) \in \mathfrak{B}_1, \quad \forall t \in \bigl[0, t^\ast\bigr],$$
(3.19)

where the latter property can be guaranteed by choosing $$\mathfrak {B}_{2}$$ properly thanks again to the continuous dependence of the solution on the initial data.

By the local invariant property of $$\mathfrak{M}^{\mathrm{loc}}_{\lambda}$$, we have

$$y_{\mathfrak{s}}(t, y_0; 0): = P_{\mathfrak{s}} y(t, y_0; 0) = h_{\lambda}^{\mathrm{loc}}\bigl(y_{\mathfrak{c}}(t, y_0; 0)\bigr), \quad \forall t \in\bigl[0, t^\ast\bigr].$$

Now, for each such chosen initial datum, thanks to (3.16), (3.19), and the invariance relation above, we get

\begin{aligned} &\int_0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t,y_0; 0) - {h^{(1)}_\lambda } \bigl(y_{\mathfrak{c} }(t, y_0; 0)\bigr) \bigr\| _\alpha^2 \, \mathrm{d}t \\ &\quad = \int_0^{t^\ast} \bigl\| h_{\lambda}^{\mathrm{loc}}\bigl(y_{\mathfrak{c}}(t, y_0; 0)\bigr) - {h^{(1)}_\lambda }\bigl(y_{\mathfrak{c}}(t, y_0; 0)\bigr) \bigr\| _\alpha^2 \, \mathrm{d}t \\ &\quad \le\int_0^{t^\ast} {\varepsilon}^2 \bigl\| y_{\mathfrak{c}}(t, y_0; 0)\bigr\| _\alpha ^{2(k+1)} \, \mathrm{d}t \\ &\quad \le {\varepsilon}^2 \max_{t \in[0, t^\ast]} \bigl\| y_{\mathfrak{c}}(t, y_0; 0)\bigr\| _\alpha^{2} \int_0^{t^\ast} \bigl\| y_{\mathfrak{c}}(t, y_0; 0) \bigr\| _\alpha ^{2k} \, \mathrm{d}t. \end{aligned}
(3.20)

Besides, by (3.18) we have

\begin{aligned} \int_0^{t^\ast} \bigl\| y_{\mathfrak{s}}(t, y_0; 0)\bigr\| _\alpha^2 \, \mathrm{d}t = \int_0^{t^\ast} \bigl\| h_{\lambda}^{\mathrm{loc}} \bigl(y_{\mathfrak{c}}(t, y_0; 0)\bigr) \bigr\| _\alpha^2 \, \mathrm{d}t \ge \frac{C^2}{4} \int_0^{t^\ast} \bigl\| y_{\mathfrak{c}}(t, y_0; 0)\bigr\| _\alpha ^{2k}\, \mathrm{d}t. \end{aligned}
(3.21)

We obtain then for all $$y_{0} = \xi+ h_{\lambda}^{\mathrm{loc}}(\xi)$$ with $$\xi\in\mathfrak{B}_{2}$$ that

$$\frac{\int_0^{t^\ast} \|y_{\mathfrak{s}}(t,y_0; 0) - h^{(1)}_{\lambda}(y_{\mathfrak{c}}(t, y_0; 0)) \|_\alpha^2 \, \mathrm{d}t }{\int_0^{t^\ast} \|y_{\mathfrak{s}}(t, y_0; 0)\|_\alpha^2 \, \mathrm{d}t} \le\frac{4{\varepsilon}^2}{C^2} \max_{t \in[0, t^\ast]} \bigl\| y_{\mathfrak {c}}(t, y_0; 0)\bigr\| _\alpha^{2}.$$
(3.22)

The RHS can again be made less than one by the continuity argument and by choosing $$\mathfrak{B}_{2}$$ to be an even smaller neighborhood if necessary.

By appealing to the continuous dependence of the solution $$y(\cdot, y_0; u)$$ to the controlled PDE (2.4) on the initial datum $$y_{0}$$ and the control u, there exist an open set $$\mathcal{V}$$ in $$\mathcal{H}_{\alpha}$$ containing the set $$\{ y_{0} = \xi+ h_{\lambda}^{\mathrm{loc}}(\xi) \mid\xi\in\mathfrak{B}_{2}\}$$, and an open neighborhood $$\mathcal{U}$$ of the origin in $$L^{2}(0, t^{\ast}; \mathcal{H})$$, such that the solution $$y(\cdot, y_0; u)$$ satisfies (3.22) with the RHS of (3.22) staying less than one as $$y_{0}$$ varies in $$\mathcal{V}$$ and the control u varies in $$\mathcal{U}$$. The proof is complete. □

We conclude this section with some remarks regarding possible ways of constructing more elaborate finite-horizon PMs, as well as PMs relying on decompositions of the phase space $$\mathcal{H}$$ involving bases other than a standard eigenbasis.

### Remark 1

1. (i)

More elaborate backward–forward systems than (3.6a), (3.6b) can be devised in order to design finite-horizon PMs with smaller parameterization defects than that offered by $$h^{(1)}_{\lambda}$$; see [27, Sect. 4.3]. The idea remains however the same, namely to parameterize the high modes as pullback limits of some approximation of the time history of the low-mode dynamics. We refer to Sect. 6 for such a parameterization, leading in particular to finite-horizon PMs whose $$e_{n}$$-components are polynomials of higher order than those constituting $$h^{(1)}_{\lambda}$$. As we will see in Sect. 6.2, such higher-order PMs can give rise to a better design of suboptimal solutions to a given optimal control problem (including terminal payoff terms) than those accessible from the leading-order finite-horizon PM $$h^{(1)}_{\lambda}$$; see also Remark 4 below.

2. (ii)

Note also that the use of the eigenbasis in the decomposition of the phase space $$\mathcal{H}$$ is not essential, either for the definition of the finite-horizon PMs or for the construction of PM candidates based on the backward–forward procedure presented in this section or discussed above. In practice, empirical bases such as the POD basis can be adopted to decompose the phase space into a resolved low-mode part and its orthogonal complement (the high-mode part). By doing so, the resulting subspaces $$\mathcal{H}^{\mathfrak{c}}$$ and $$\mathcal{H}^{\mathfrak{s}}$$ are no longer invariant subspaces of the linear operator $$L_{\lambda}$$, and explicit formulas such as (3.11) should be revised accordingly; this point, important for applications, will be addressed elsewhere.
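As a rough illustration of the empirical-basis alternative mentioned above, a POD basis can be extracted from solution snapshots via a singular value decomposition; the leading left singular vectors then span the low-mode subspace playing the role of $$\mathcal{H}^{\mathfrak{c}}$$. The snapshot matrix below is random stand-in data, and the mode count is an arbitrary choice; neither comes from this article.

```python
import numpy as np

# Snapshot matrix: columns are states y(t_j) sampled along a simulation
# (random data here, standing in for actual PDE snapshots).
rng = np.random.default_rng(0)
Y = rng.standard_normal((128, 40))   # 128 grid points, 40 snapshots

# POD modes = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(Y, full_matrices=False)
m = 5                         # number of resolved ("low") modes
Phi_c = U[:, :m]              # empirical analogue of a basis of H^c
P_c = Phi_c @ Phi_c.T         # orthogonal projector onto the low-mode subspace

# Fraction of snapshot energy captured by the first m modes.
energy = np.sum(s[:m] ** 2) / np.sum(s ** 2)
```

The projector `P_c` is the empirical counterpart of $$P_{\mathfrak{c}}$$; its complement `I - P_c` projects onto the unresolved part.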

## Finite-Horizon Parameterizing Manifolds for Suboptimal Control of PDEs

### Abstract Results

Given a finite-horizon PM, we present hereafter an abstract formulation of the corresponding reduced equations from which we will see how suboptimal solutions to the problem ($$\mathcal {P}$$) can be efficiently synthesized once an analytic formulation of such reduced equations is available; see Sects. 5, 6 and 7.

The approach consists of reducing the PDE (2.4) governing the evolution of the state y(t) to an ordinary differential equation (ODE) system aimed at modeling the evolution of the low modes $$P_{\mathfrak{c}} y(t)$$, by substituting, for their interactions with the high modes $$P_{\mathfrak{s}}y(t)$$, the parameterizing function h associated with a given PM.

For simplicity, we assume that the nonlinearity F is bilinear, denoted hereafter by B, so that

$$B: \mathcal{H}_\alpha\times\mathcal{H}_\alpha\rightarrow \mathcal{H},$$

is a continuous bilinear mapping.

For the sake of readability, the notations introduced in the previous sections are completed by those summarized in Table 1 above. Note also that throughout this article, B(v) will sometimes be used in place of B(v,v) to simplify the presentation.

Recall that the subspace $$\mathcal{H}^{\mathfrak{c}}$$ is spanned by the first m dominant eigenmodes associated with the linear operator $$L_{\lambda}$$ for some positive integer m. We denote as before its topological complements in $$\mathcal{H}$$ and $$\mathcal{H}_{\alpha}$$ by $$\mathcal{H}^{\mathfrak{s}}$$ and $$\mathcal{H}^{\mathfrak{s}}_{\alpha}$$, respectively. Let $$h:\mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal{H}^{\mathfrak {s}}_{\alpha}$$ be a finite-horizon PM function associated with (2.4); see Definition 1. The corresponding PM-based reduced optimal control problem ($$\mathcal {P}_{\mathrm {sub}}$$) below is then built from the following m-dimensional PM-based reduced system:

\begin{aligned} \frac{\mathrm{d}z}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda z + P_{\mathfrak{c}} B \bigl(z + h(z)\bigr) + P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{c}} u (t), \quad t \in(0, T], \end{aligned}
(4.1a)

supplemented by

\begin{aligned} z(0) = P_{\mathfrak{c}} y_0 \in\mathcal{H}^{\mathfrak{c}}; \end{aligned}
(4.1b)

the system (4.1a) being aimed at modeling the dynamics of the low modes $$P_{\mathfrak{c}}y(t)$$ by z(t), and that of the high modes $$P_{\mathfrak{s}} y(t)$$ by h(z(t)). To avoid pathological situations, we will assume throughout this article that $$P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{c}}$$ is non-zero.

To simplify the presentation, we will assume furthermore that the PM function h has been chosen so that for any z(0) in $$\mathcal{H}^{\mathfrak{c}}$$, the problem (4.1a), (4.1b) admits a well-defined global ($$\mathcal{H}^{\mathfrak{c}}$$-valued) solution that is continuous in time. Such PM functions are identified in the case of a Burgers-type equation in Sects. 5–7; see also Appendix B for more details on the corresponding well-posedness problem for the associated reduced systems.

Note that only the low-mode projection $$P_{\mathfrak{c}} u$$ of the controller u is kept in the above reduced model. In the following we denote by $$u_{R}:= P_{\mathfrak{c}} u \in L^{2}(0,T; \mathcal {H}^{\mathfrak{c}})$$ this m-dimensional controller. The problem (4.1a), (4.1b) can then be rewritten as:

\begin{aligned} & \frac{\mathrm{d}z}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda z + P_{\mathfrak{c}} B \bigl(z + h(z)\bigr) + P_{\mathfrak{c}} \mathfrak{C} u_{R}(t), \quad t \in(0, T], \end{aligned}
(4.2a)
\begin{aligned} & z(0) = P_{\mathfrak{c}} y_0 \in\mathcal{H}^{\mathfrak{c}}, \end{aligned}
(4.2b)

and the cost functional (2.3) is substituted by

$$J_R(z, u_{R}) := \int _0^T \bigl[ \mathcal{G} \bigl(z(t) + h\bigl(z(t) \bigr) \bigr) + \mathcal{E}\bigl( u_{R}(t)\bigr) \bigr] \,\mathrm{d}t.$$
(4.3)
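The reduction pipeline just described—integrate the reduced system (4.2a), (4.2b) with the high modes slaved by h, and evaluate the reduced cost (4.3) along the trajectory—can be sketched as follows. All concrete ingredients below (the diagonal matrix standing in for $$L^{\mathfrak{c}}_{\lambda}$$, the quadratic toy nonlinearity, the quadratic PM candidate h, and the quadratic cost densities) are illustrative assumptions, not quantities taken from this article.

```python
import numpy as np

# Toy instance of the PM-based reduced system (4.2a) and cost (4.3):
# two resolved modes z, one "high" mode parameterized by a quadratic
# PM candidate h, mimicking the leading-order h^(1)_lambda.

A = np.diag([-0.5, -1.0])                 # L_lambda restricted to H^c

def h(z):
    # hypothetical quadratic PM function H^c -> H^s
    return np.array([0.1 * z[0] * z[1]])

def rhs(z, u_R):
    high = h(z)
    # low-mode projection of a toy quadratic interaction B(z + h(z))
    nonlin = np.array([-z[0] * high[0], z[0] ** 2])
    return A @ z + nonlin + u_R

def reduced_cost(u_R_traj, z0, dt):
    """Euler rollout of (4.2a) with left-endpoint quadrature of J_R."""
    z, J = z0.copy(), 0.0
    for u_R in u_R_traj:
        # G(z + h(z)) and E(u_R) taken quadratic for illustration
        J += (np.dot(z, z) + np.dot(h(z), h(z)) + np.dot(u_R, u_R)) * dt
        z = z + dt * rhs(z, u_R)
    return J

T, n = 2.0, 400
dt = T / n
z0 = np.array([0.5, -0.3])
J_free = reduced_cost(np.zeros((n, 2)), z0, dt)   # uncontrolled rollout
```

A suboptimal controller would then be obtained by minimizing `reduced_cost` over the discretized control trajectory, e.g. via an indirect (Pontryagin-type) method as done later in the article.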

The finite-horizon PM-based reduced optimal control problem is then given by:

Throughout this section, we assume that the original problem ($$\mathcal {P}$$) as well as its reduced form ($$\mathcal {P}_{\mathrm {sub}}$$) each admit an optimal control, denoted respectively by $$u^{\ast}$$ and $$u_{R}^{\ast}$$. Theorem 1 below then provides an important a priori estimate for the theory. It indeed gives a measure of how far from the optimal control $$u^{\ast}$$ a suboptimal control $$u^{*}_{R}$$ built on a given PM is. More precisely, under a second-order sufficient optimality condition on the cost functional J, an a priori estimate of $$\| u_{R}^{\ast}- u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}$$ is expressed in terms of key quantities associated with a given PM on the one hand, and key quantities associated with the optimal control $$u^{\ast}$$ on the other; see (4.5) below. These quantities involve the parameterization defects associated with $$u^{\ast}$$ and $$u_{R}^{*}$$; the energy contained in the high modes of the optimal and suboptimal PDE trajectories associated with $$u^{\ast}$$ and $$u_{R}^{*}$$, respectively; and the high-mode energy remainder $$\| P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}$$ of $$u^{\ast}$$. Our treatment is here inspired by earlier AIM-based approaches but differs from the latter by the use of PMs instead of AIMs; the framework of PMs allows for a natural interpretation of the error estimate (4.5) derived hereafter which, as we will see in the applications, helps analyze the performances of a PM-based suboptimal controller; see Sects. 5, 6, and 7.

### Theorem 1

Assume that the optimal control problem ($$\mathcal {P}$$) admits an optimal controller $$u^{\ast}$$, where the cost functional J defined in (2.3) satisfies the assumptions of Sect. 2.

Assume furthermore that there exists σ>0 such that the following (local) second-order sufficient optimality condition holds:

$$J\bigl(y(\cdot; v), v\bigr) - J\bigl(y^\ast, u^\ast\bigr) \ge\sigma\bigl\| v - u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})},$$
(4.4)

where $$v \in L^{2}(0, T; \mathcal{H})$$ is chosen from some neighborhood $$\mathcal{U}$$ of $$u^{\ast}$$, and y(⋅;v) denotes the solution to (2.4) with v in place of the controller u.

Assume finally that the corresponding PM-based reduced optimal control problem ($$\mathcal {P}_{\mathrm {sub}}$$) admits an optimal controller $$u_{R}^{\ast}$$, which is furthermore contained in $$\mathcal{U}$$, and that the underlying PM function $$h: \mathcal{H}^{\mathfrak{c}}\rightarrow\mathcal{H}_{\alpha}^{\mathfrak{s}}$$ is locally Lipschitz.

Then, the suboptimal controller $$u_{R}^{\ast}$$ satisfies the following error estimate

\begin{aligned} \bigl\| u_{R}^\ast- u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} & \le\frac {\mathcal {C}}{\sigma} \Bigl( \sqrt{Q\bigl(T, y_0; u_{R}^\ast\bigr)} \bigl\| y_{R,\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} \\ &\quad{} + \sqrt{Q\bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} + {\|\mathfrak{C}\|\bigl\| P_{\mathfrak {s}}u^\ast\bigr\| _{L^2(0,T; \mathcal{H})}} \Bigr), \end{aligned}
(4.5)

where $$Q(T, y_{0}; u_{R}^{\ast})$$ and $$Q(T, y_{0}; u^{\ast})$$ denote the parameterization defects of the finite-horizon PM function h associated with the controllers in Eq. (2.4) taken to be respectively $$u_{R}^{\ast}$$ and $$u^{\ast}$$; $$y_{R,\mathfrak{s}}^{\ast}:= P_{\mathfrak{s}} y_{R}^{\ast}$$ and $$y_{\mathfrak{s}}^{\ast}:= P_{\mathfrak{s}} y^{\ast}$$ denote the high-mode projections of the suboptimal trajectory $$y_{R}^{\ast}$$ and the optimal trajectory $$y^{\ast}$$ of Eq. (2.4) driven respectively by $$\mathfrak {C}u_{R}^{\ast}$$ and $$\mathfrak{C}u^{\ast}$$; and $$\mathcal{C}$$ denotes a positive constant depending in particular on T and the local Lipschitz constant of h; see (4.38) below.

Besides the suboptimal trajectory $$y_{R}^{\ast}$$, another trajectory of theoretical interest is the lifting, by the PM function h, of the (low-dimensional) optimal trajectory $$z_{R}^{\ast}:=z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R}^{\ast})$$ of the reduced optimal control problem ($$\mathcal {P}_{\mathrm {sub}}$$). This lifted trajectory is defined as

$$l_R(t):= z_{R}^\ast(t) + h \bigl(z_{R}^\ast(t)\bigr),$$

If $$z_{R}^{\ast}$$ constitutes a good approximation of the low-mode projection $$P_{\mathfrak{c}}y^{\ast}$$ and h has a small parameterization defect, then $$l_{R}$$ provides a good approximation of the optimal trajectory $$y^{\ast}$$ itself.

This intuitive idea is made precise in Corollary 1 below, which provides a general condition under which an error estimate on the distance $$\|y^{\ast}-l_{R} \|^{2}_{L^{2}(0,T; \mathcal{H})}$$ between the lifted trajectory $$l_{R}$$ and the optimal trajectory $$y^{\ast}$$ can be deduced from the error estimate (4.5) about the distance between the respective controllers; see (4.8) below. This condition concerns the $$L^{2}$$-response over the interval [0,T] of the PM-based reduced system (4.2a) with respect to perturbations of the control term $$\mathfrak{C} P_{\mathfrak{c}} u^{\ast}$$.

### Corollary 1

In addition to the assumptions of Theorem  1, assume that the PM-based reduced system (4.2a) satisfies the following sublinear response property:

There exist κ>0 and a neighborhood $$\mathcal{U} \subset L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})$$ of $$P_{\mathfrak{c}} u^{\ast}$$, such that the following inequality holds for all $$u_{R}\in\mathcal{U}$$:

\begin{aligned} \bigl\| z(\cdot, P_{\mathfrak{c}}y_0; u_{R}) - z^\ast\bigl(\cdot, P_{\mathfrak {c}}y_0; P_{\mathfrak{c}} u^\ast\bigr)\bigr\| _{L^2(0,T; \mathcal{H})} \le\kappa \bigl\| u_{R}- P_{\mathfrak{c}} u^\ast \bigr\| _{L^2(0,T; \mathcal{H})}, \end{aligned}
(4.6)

where $$z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R})$$ denotes the solution to (4.2a), (4.2b) emanating from $$P_{\mathfrak{c}}y_{0}$$ and driven by $$\mathfrak{C} u_{R}$$.

Then, the following error estimate between the optimal trajectory $$z_{R}^{\ast}:=z(\cdot, P_{\mathfrak{c}}y_{0}; u_{R}^{\ast})$$ for the reduced optimal control problem ($$\mathcal {P}_{\mathrm {sub}}$$) and the low-mode projection $$y_{\mathfrak {c}}^{\ast}:= P_{\mathfrak{c}}y^{\ast}$$ of the optimal trajectory associated with ($$\mathcal {P}$$), holds:

\begin{aligned} & \bigl\| y_{\mathfrak{c}}^\ast- z_R^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} \\ &\quad \le2 T \bigl( \widetilde{\mathcal{C}}_1 Q\bigl(T,y_0; u^\ast\bigr) \bigl\| y^\ast _{\mathfrak{s}}\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)}^2 + {\widetilde {\mathcal {C}}_2 \|\mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})}} \bigr) \\ &\qquad{} + \frac{2 \kappa^2 \mathcal{C}}{\sigma} \Bigl( \sqrt{Q\bigl(T, y_0; u_{R} ^\ast\bigr)} \bigl\| y_{R,{\mathfrak{s}}}^\ast \bigr\| _{L^2(0,T; \mathcal{H}_\alpha )} \\ &\qquad{}+ \sqrt {Q\bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + {\|\mathfrak{C}\|\bigl\| P_{\mathfrak{s}}u^\ast\bigr\| _{L^2(0,T; \mathcal {H})}} \Bigr), \end{aligned}
(4.7)

where $$\mathcal{C}$$ is the same positive constant as given by (4.5) in Theorem  1 and $$\widetilde{\mathcal{C}}_{1}$$, $$\widetilde{\mathcal{C}}_{2}$$ are given by (4.11) in Lemma  2 below.

Moreover, the following error estimate regarding the distance $$\|y^{\ast}-l_{R} \|^{2}_{L^{2}(0,T; \mathcal{H})}$$ between the lifted trajectory $$l_{R}$$ and the optimal trajectory $$y^{\ast}$$ holds:

\begin{aligned} & \bigl\| y^\ast-l_R \bigr\| ^2_{L^2(0,T; \mathcal{H})} \\ &\quad \le4 \bigl[C_\alpha^2 + \widetilde{\mathcal{C}}_1 T \bigl(1+ 2 \bigl(C_1C_\alpha \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\bigr)^2 \bigr) \bigr] Q \bigl(T,y_0; u^\ast\bigr) \bigl\| y^\ast_{\mathfrak{s}} \bigr\| _{L^2(0,T; \mathcal{H}_\alpha )}^2 \\ &\qquad{} + \frac{4 \kappa^2 \mathcal{C}}{\sigma} \bigl[1 + 2 \bigl(C_1C_\alpha \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\bigr)^2\bigr] \\ &\qquad{}\times \Bigl( \sqrt {Q\bigl(T, y_0; u_{R}^\ast\bigr)} \bigl\| y_{R,{\mathfrak{s}}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + \sqrt {Q \bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast \bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} \Bigr) \\ &\qquad{} + 4 \bigl(1+ 2 \bigl(C_1C_\alpha \operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}}\bigr)^2 \bigr) \\ &\qquad{}\times \biggl[ \widetilde{\mathcal{C}}_2 T \| \mathfrak{C}\| ^2 \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} + \frac {\kappa^2 \mathcal {C}}{\sigma} \|\mathfrak{C}\|\bigl\| P_{\mathfrak{s}}u^\ast \bigr\| _{L^2(0,T; \mathcal{H})} \biggr], \end{aligned}
(4.8)

where $$C_{1}$$ and $$C_{\alpha}$$ are generic constants given by (4.18) and (4.34), respectively; and $$\operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}$$ is the local Lipschitz constant of the PM function h over some bounded set $$V_{\mathfrak{c}} \subset \mathcal{H}^{\mathfrak{c}}$$; see (4.30) and (4.33).

Finally, the last corollary concerns a refinement of the error estimate (4.5), which consists of identifying conditions under which the contribution of the high-mode energy remainder $$\| P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}$$ of the optimal control can be removed from the upper bound of $$\|u_{R}^{\ast}- u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}$$.

### Corollary 2

Assume that the assumptions given in Theorem  1 hold. Assume furthermore that the linear operator $$\mathfrak{C}$$ leaves the subspaces $$\mathcal{H}^{\mathfrak{c}}$$ and $$\mathcal{H}^{\mathfrak{s}}$$ invariant, i.e.

$$\mathfrak{C} \mathcal{H}^{\mathfrak{c}} \subset\mathcal {H}^{\mathfrak{c}} \quad \mathit{and}\quad \mathfrak{C} \mathcal{H}^{\mathfrak{s}} \subset\mathcal {H}^{\mathfrak{s}}.$$
(4.9)

Then, the error estimate (4.5) reduces to:

\begin{aligned} &\bigl\| u_{R}^\ast- u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} \\ &\quad \le\frac {\mathcal {C}}{\sigma} \Bigl( \sqrt{Q\bigl(T, y_0; u_{R}^\ast\bigr)} \bigl\| y_{R,\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + \sqrt{Q \bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast \bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} \Bigr). \end{aligned}
(4.10)

Similarly, under the additional condition (4.9), the corresponding results of Corollary 1 amount to dropping the terms involving $$P_{\mathfrak{s}}u^{\ast}$$ on the RHS of the estimates (4.7) and (4.8).

### Proofs of Theorem 1 and Corollaries 1 and 2

For the proofs of the above results, we will make use of the following preparatory lemma.

### Lemma 2

Given any control $$u \in L^{2}(0, T; \mathcal{H})$$, we denote by y(t) the corresponding solution to (2.4). Let $$h: \mathcal{H}^{\mathfrak{c}}\rightarrow\mathcal{H}_{\alpha}^{\mathfrak{s}}$$ be a PM function assumed to be locally Lipschitz, and z(t) be the solution to the corresponding PM-based reduced system (4.2a) driven by $${P_{\mathfrak{c}}} \mathfrak{C}P_{\mathfrak{c}}u$$ and emanating from $$P_{\mathfrak{c}}y(0)$$.

Then, there exist $$\widetilde{\mathcal{C}}_{1}, \widetilde{\mathcal {C}}_{2} > 0$$ such that

$$\bigl\| y_{\mathfrak{c}}(t) - z(t) \bigr\| ^2 \le\widetilde{ \mathcal{C}}_1 \int_0^t \bigl\| y_{\mathfrak{s} }(s) - h\bigl(y_{\mathfrak{c}}(s)\bigr)\bigr\| _\alpha^2 \,\mathrm{d}s + \widetilde{\mathcal{C}}_2 \|\mathfrak{C} \|^2 \int_0^t \bigl\| P_{\mathfrak{s}}u(s)\bigr\| ^2 \,\mathrm{d}s, \quad t \in[0, T],$$
(4.11)

where $$y_{\mathfrak{c}}:=P_{\mathfrak{c}}y$$, $$y_{\mathfrak {s}}:=P_{\mathfrak{s}}y$$; and $$\widetilde{\mathcal {C}}_{1}$$, $$\widetilde{\mathcal{C}}_{2}$$ depend in particular on T and the local Lipschitz constant of h; see (4.23) below.

### Proof

Let us introduce $$w(t):=y_{\mathfrak{c}}(t) - z(t)$$. By projecting (2.4) onto the subspace $$\mathcal{H}^{\mathfrak{c}}$$, we obtain

$$\frac{\mathrm{d}y_{\mathfrak{c}}}{\mathrm{d}t} = L^{\mathfrak {c}}_\lambda y_{\mathfrak{c}} + P_{\mathfrak{c}}B(y_{\mathfrak{c}} + y_{\mathfrak{s} }) + P_{\mathfrak{c}}\mathfrak{C} u(t), \quad y_{\mathfrak{c}}(0) = P_{\mathfrak{c}} y_0 \in\mathcal {H}^{\mathfrak{c}}.$$

This together with (4.1a), (4.1b) implies that w satisfies the following problem:

$$\frac{\mathrm{d}w}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda w + P_{\mathfrak{c}} \bigl( B(y_{\mathfrak{c}} + y_{\mathfrak{s}}) - B\bigl(z + h(z) \bigr) \bigr) + {P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{s}}u}, \quad w(0) = 0,$$
(4.12)

recalling that $$u-P_{\mathfrak{c}}u= P_{\mathfrak{s}}u$$.

By taking the $$\mathcal{H}$$-inner product on both sides of (4.12) with w, we obtain:

$$\frac{1}{2}\frac{\mathrm{d}\|w\|^2}{\mathrm{d}t} = \bigl\langle L^{\mathfrak{c}}_\lambda w, w \bigr\rangle + \bigl\langle P_{\mathfrak{c}} \bigl( B(y_{\mathfrak{c}} + y_{\mathfrak{s}}) - B\bigl(z + h(z)\bigr) \bigr), w \bigr\rangle + {\langle P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak {s}}u, w \rangle}.$$
(4.13)

Since $$B: \mathcal{H}_{\alpha}\times\mathcal{H}_{\alpha}\rightarrow \mathcal{H}$$ is a continuous bilinear mapping, there exists $$C_{B} >0$$ such that for any $$v_{1}$$ and $$v_{2}$$ in $$\mathcal{H}_{\alpha}$$, it holds that

\begin{aligned} \bigl\| B(v_1) - B(v_2)\bigr\| & = \bigl\| B(v_1, v_1) - B(v_2, v_2)\bigr\| \\ & \le\bigl\| B(v_1, v_1) - B(v_1, v_2) \bigr\| + \bigl\| B(v_1, v_2) - B(v_2, v_2)\bigr\| \\ & \le C_B\|v_1\|_\alpha\|v_1 - v_2\|_\alpha+ C_B \|v_1 - v_2\|_\alpha \|v_2\|_\alpha \\ & \le C_B \bigl( \|v_1\|_\alpha+ \|v_2 \|_\alpha\bigr) \|v_1 - v_2\|_\alpha. \end{aligned}
(4.14)
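The telescoping bound (4.14) can be checked numerically on a concrete bilinear map. Here B(u,v) is the elementwise product, the Euclidean norm stands in for both $$\|\cdot\|_{\alpha}$$ and $$\|\cdot\|$$, and $$C_{B}=1$$; this is a toy stand-in chosen for the check, not the nonlinearity of the article.

```python
import numpy as np

# Sanity check of the bilinear estimate (4.14) for the toy bilinear map
# B(u, v) = u * v (elementwise), with C_B = 1:
#   ||B(v1,v1) - B(v2,v2)|| <= (||v1|| + ||v2||) ||v1 - v2||.
rng = np.random.default_rng(1)

def B(u, v):
    return u * v

ok = True
for _ in range(1000):
    v1, v2 = rng.standard_normal(10), rng.standard_normal(10)
    lhs = np.linalg.norm(B(v1, v1) - B(v2, v2))
    rhs = (np.linalg.norm(v1) + np.linalg.norm(v2)) * np.linalg.norm(v1 - v2)
    ok = ok and lhs <= rhs + 1e-12   # small tolerance for rounding
```

For this particular B one even has the identity $$B(v_1,v_1)-B(v_2,v_2)=(v_1+v_2)\odot(v_1-v_2)$$, which makes the bound immediate.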

Thanks to the above bilinear estimate, we get thus

\begin{aligned} &\bigl\langle P_{\mathfrak{c}} \bigl( B(y_{\mathfrak{c}} + y_{\mathfrak {s}}) - B\bigl(z + h(z)\bigr) \bigr), w \bigr\rangle \\ &\quad\le C_B \bigl( \|y_{\mathfrak{c}} + y_{\mathfrak{s}}\|_\alpha+ \bigl\| z + h(z)\bigr\| _\alpha \bigr) \bigl\| y_{\mathfrak{c}} + y_{\mathfrak{s}} - z - h(z)\bigr\| _\alpha \|w\|. \end{aligned}
(4.15)

On the other hand, the assumptions made at the end of Sect. 2 and in this section regarding the well-posedness problem associated respectively with Eq. (2.4) and the reduced system (4.2a), ensure the existence of a bounded set V in $$\mathcal{H}_{\alpha}$$, such that y(t) and z(t)+h(z(t)) stay in V for all t∈[0,T]. As a consequence, there exists a constant C(V)>0, such that

$$C_B \bigl( \bigl\| y_{\mathfrak{c}}(t) + y_{\mathfrak{s}}(t)\bigr\| _\alpha+ \bigl\| z(t) + h\bigl(z(t)\bigr) \bigr\| _\alpha \bigr) \le C(V), \quad t \in[0,T].$$
(4.16)

Note also that by using the local Lipschitz property of h, we get

\begin{aligned} & \bigl\| y_{\mathfrak{c}}(t) + y_{\mathfrak{s}}(t) - z(t) - h\bigl(z(t)\bigr)\bigr\| _\alpha \\ &\quad \le\bigl\| y_{\mathfrak{c}}(t) - z(t)\bigr\| _\alpha+ \bigl\| y_{\mathfrak{s}}(t) - h \bigl(y_{\mathfrak{c}}(t)\bigr)\bigr\| _\alpha+ \bigl\| h\bigl(y_{\mathfrak{c}}(t) \bigr) - h\bigl(z(t)\bigr)\bigr\| _\alpha \\ &\quad \le\bigl(1 + \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\bigr) \bigl\| y_{\mathfrak {c}}(t) - z(t) \bigr\| _\alpha + \bigl\| y_{\mathfrak{s}}(t) - h\bigl(y_{\mathfrak{c}}(t) \bigr)\bigr\| _\alpha \\ &\quad \le C_1 \bigl(1 + \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\bigr) \bigl\| w(t)\bigr\| + \bigl\| y_{\mathfrak{s}}(t) - h\bigl(y_{\mathfrak{c}}(t)\bigr) \bigr\| _\alpha, \quad t \in[0,T], \end{aligned}
(4.17)

where $$V_{\mathfrak{c}}=P_{\mathfrak{c}}V$$, and $$C_{1}$$ in the last inequality denotes the generic positive constant for which

$$\|v\|_\alpha\le C_1 \|v\|, \quad \forall v \in\mathcal {H}^{\mathfrak{c}},$$
(4.18)

due to the finite-dimensional nature of $$\mathcal{H}^{\mathfrak{c}}$$.

By using now the estimates (4.16) and (4.17) in (4.15), we get

\begin{aligned} & \bigl\langle P_{\mathfrak{c}} \bigl( B\bigl(y_{\mathfrak{c}}(t) + y_{\mathfrak{s}}(t)\bigr) - B\bigl(z(t) + h \bigl(z(t)\bigr)\bigr) \bigr), w(t) \bigr\rangle \\ &\quad \le C_1 C(V) \bigl(1 + \operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}}\bigr) \bigl\| w(t)\bigr\| ^2 + C(V) \bigl\| y_{\mathfrak{s}}(t) - h \bigl(y_{\mathfrak{c}}(t)\bigr)\bigr\| _\alpha\bigl\| w(t)\bigr\| \\ & \quad \le C_1 C(V) \bigl(1 + \operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}}\bigr) \bigl\| w(t)\bigr\| ^2 + \frac{[C(V)]^2}{2} \bigl\| y_{\mathfrak{s}}(t) - h\bigl(y_{\mathfrak {c}}(t)\bigr)\bigr\| _\alpha^2 + \frac{1}{2}\bigl\| w(t)\bigr\| ^2, \end{aligned}
(4.19)

where we have applied Young’s inequality $$ab \le \frac {a^{2}}{2} +\frac{b^{2}}{2}$$ to derive the last inequality.

Since $$L_{\lambda}$$ is assumed to be self-adjoint with dominant eigenvalue $$\beta_{1}(\lambda)$$, we obtain

$$\bigl\langle L^{\mathfrak{c}}_\lambda w(t), w(t) \bigr\rangle {= \sum_{i=1}^m \beta _i(\lambda) \bigl|w_i(t)\bigr|^2}\le \beta_1(\lambda) \bigl\| w(t)\bigr\| ^2.$$
(4.20)

Note also that

$$\langle P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{s}}u, w \rangle \le\|\mathfrak{C}\| \bigl\| P_{\mathfrak{s}}u(t)\bigr\| \bigl\| w(t)\bigr\| \le\frac{1}{2} \| \mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u(t)\bigr\| ^2 + \frac{1}{2} \bigl\| w(t)\bigr\| ^2.$$
(4.21)

Using (4.19)–(4.21) in (4.13), we obtain

\begin{aligned} \frac{1}{2}\frac{\mathrm{d}\|w(t)\|^2}{\mathrm{d}t} & \le \bigl( 1 + \beta_1(\lambda) + C_1 C(V) \bigl(1 + \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}\bigr) \bigr) \bigl\| w(t)\bigr\| ^2 \\ &\quad{} + \frac{[C(V)]^2}{2} \bigl\| y_{\mathfrak{s}}(t) - h \bigl(y_{\mathfrak{c}}(t)\bigr)\bigr\| _\alpha^2 + \frac{1}{2} \|\mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u(t) \bigr\| ^2. \end{aligned}
(4.22)

Now, by a standard application of Gronwall’s inequality, we obtain for all t∈[0,T],

\begin{aligned} \bigl\| w(t)\bigr\| ^2 & = \bigl\| y_{\mathfrak{c}}(t) - z(t)\bigr\| ^2 \\ & \le\int_0^t e^{2[1 + \beta_1(\lambda) + C_1 C(V) (1 + \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}})](t-s)} \\ &\quad{}\times \bigl( \bigl[C(V)\bigr]^2 \bigl\| y_{\mathfrak{s}}(s) - h\bigl(y_{\mathfrak{c} }(s) \bigr)\bigr\| _\alpha^2 + \|\mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u(s)\bigr\| ^2 \bigr) \,\mathrm{d}s \\ & \le e^{2[1 + \beta_1(\lambda) + C_1 C(V) (1 + \operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}})]T} \\ &\quad{}\times\biggl( \bigl[C(V)\bigr]^2\int _0^t \bigl\| y_{\mathfrak {s}}(s) - h \bigl(y_{\mathfrak{c}}(s)\bigr)\bigr\| _\alpha^2 \,\mathrm{d}s + \| \mathfrak{C}\|^2 \int_0^t \bigl\| P_{\mathfrak {s}}u(s)\bigr\| ^2 \,\mathrm{d}s \biggr), \end{aligned}
(4.23)

taking into account that $$w(0) = y_{\mathfrak{c}}(0) - z(0) = 0$$, by assumption. The estimate (4.11) is thus proved. □
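The Gronwall step leading to (4.23) can be illustrated numerically on a scalar toy problem: if $$w' \le a w + f$$ with w(0)=0 and $$a \ge 0$$, then $$w(t) \le \int_0^t e^{a(t-s)} f(s)\,\mathrm{d}s \le e^{aT}\int_0^t f(s)\,\mathrm{d}s$$. The coefficient, horizon, and forcing below are arbitrary illustrative choices, not the constants of the lemma.

```python
import numpy as np

# Discrete illustration of the Gronwall bound used in (4.23):
# solve w' = a*w + f by explicit Euler with w(0) = 0 and compare w(T)
# against the crude bound e^{aT} * integral of f.  Toy data only.
a, T, n = 1.0, 1.0, 10_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
f = 1.0 + 0.5 * np.sin(5 * t)          # arbitrary positive forcing

w = np.zeros(n + 1)
for k in range(n):
    w[k + 1] = w[k] + dt * (a * w[k] + f[k])   # explicit Euler step

integral_f = dt * f[:-1].sum()         # left-endpoint quadrature
gronwall_bound = np.exp(a * T) * integral_f
```

Unwinding the Euler recursion gives $$w_n = \sum_k \Delta t\, f_k (1+a\Delta t)^{n-1-k} \le e^{aT} \sum_k \Delta t\, f_k$$, so the discrete bound holds exactly, mirroring the continuous estimate.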

We present now the proofs of Theorem 1 and Corollaries 1 and 2.

### Proof of Theorem 1

Let us denote by $$y^{\ast}$$ in $$C^{1}([0,T]; \mathcal{H}) \cap C([0,T]; \mathcal {H}_{\alpha})$$ the optimal trajectory of the optimal control problem ($$\mathcal {P}$$), and by $$y^{\ast}_{R}$$ (in the same functional space) the trajectory of Eq. (2.4) corresponding to the control u taken to be the optimal (low-dimensional) controller $$u_{R}^{\ast}$$ of the reduced optimal control problem ($$\mathcal {P}_{\mathrm {sub}}$$).

Let us also introduce the lifted trajectories

$$l_R = z^\ast_R + h\bigl(z^\ast_R \bigr), \quad\mbox{and}\quad l^\ast= z^\ast+ h\bigl(z^\ast \bigr),$$
(4.24)

where $$z^{\ast}_{R}$$ and $$z^{\ast}$$ are the solutions to (4.2a), (4.2b) driven respectively by $$P_{\mathfrak{c}}\mathfrak{C} u_{R}^{\ast}(t)$$ and $$P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{c}} u^{\ast}(t)$$, t∈[0,T].

Thanks to the second-order optimality condition (4.4), the proof boils down to the derivation of a suitable upper bound for $$\varDelta:=J(y^{\ast}_{R}, u_{R}^{\ast}) - J(y^{\ast}, u^{\ast})$$, which is organized as follows.

In Step 1, we reduce the control of Δ to the control of $$J(y^{\ast}_{R}, u_{R}^{\ast}) - J(l_{R}, u_{R}^{\ast}) + J(l^{\ast}, u^{\ast}) - J(y^{\ast}, u^{\ast})$$ by using the optimality of the pair $$(z^{\ast}_{R}, u_{R}^{\ast})$$ for the reduced problem ($$\mathcal {P}_{\mathrm {sub}}$$). The main interest in doing so lies in the fact that only $$\| y^{\ast}_{R}-l_{R}\|$$ and $$\|y^{\ast}- l^{\ast}\|$$ are then determining in the control of Δ; see Step 2. This leads in turn to an upper bound of Δ expressed in terms of key quantities for the design of suboptimal controllers in our PM-based theory.

In that respect, the upper bound of Δ derived in (4.36) involves $$\| y^{\ast}_{R,\mathfrak{s}} - h(y^{\ast}_{R,\mathfrak {c}})\|_{L^{2}(0,T; \mathcal{H})}$$ and $$\| y_{\mathfrak{s}}^{\ast}- h(y_{\mathfrak {c}}^{\ast})\|_{L^{2}(0,T; \mathcal {H})}$$, the energy (over the interval [0,T]) of the high modes unexplained by the PM function when applied respectively to $$y^{\ast}_{R,\mathfrak{c}}$$ and $$y_{\mathfrak{c}}^{\ast}$$; and involves $$\| y^{\ast}_{R,\mathfrak{c}} - z^{\ast}_{R} \| _{L^{2}(0,T; \mathcal{H})}$$ and $$\|y_{\mathfrak{c}}^{\ast}- z^{\ast}\|_{L^{2}(0,T; \mathcal{H})}$$, the errors associated with the modeling of the $$y^{\ast}_{R,\mathfrak{c}}$$- and $$y_{\mathfrak{c}}^{\ast}$$-dynamics by the reduced system (4.2a).

Thanks to Lemma 2, we can bound the two latter quantities by the former ones together with a term involving the energy contained in the high modes of $$u^{\ast}$$. This is the purpose of Step 3. The desired result then follows by rewriting the relevant unexplained energies using the parameterization defects associated with the PM function h and the controllers $$u^{\ast}$$ and $$u^{\ast}_{R}$$.

Step 1. Since $$(y^{\ast}, u^{\ast})$$ is an optimal pair for ($$\mathcal {P}$$), we get

\begin{aligned} 0 & \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \\ & = J\bigl(y^\ast_R, u_{R}^\ast \bigr) - J\bigl(l_R, u_{R}^\ast\bigr) + J \bigl(l_R, u_{R}^\ast \bigr) - J \bigl(l^\ast, u^\ast\bigr) + J\bigl(l^\ast, u^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr). \end{aligned}
(4.25)

Since $$(z^{\ast}_{R}, u_{R}^{\ast})$$ is an optimal pair for the reduced problem ($$\mathcal {P}_{\mathrm {sub}}$$), we obtain

$$J_R\bigl(z^\ast_R, u_{R}^\ast\bigr) - J_R\bigl(z^\ast, P_{\mathfrak{c}}u^\ast\bigr) \le0.$$
(4.26)

Note also that

$$J\bigl(l_R, u_{R}^\ast\bigr) = J_R\bigl(z^\ast_R, u_{R}^\ast \bigr),$$

and that according to (C2)

$$J\bigl(l^\ast, u^\ast\bigr) \ge J_R \bigl(z^\ast, P_{\mathfrak{c}}u^\ast\bigr),$$

since $$\|P_{\mathfrak{c}}u^{\ast}\|\leq\|u^{\ast}\|$$.

Consequently,

$$J\bigl(l_R, u_{R}^\ast\bigr) - J \bigl(l^\ast, u^\ast\bigr) \le0.$$
(4.27)

We obtain then from (4.25) that

$$0 \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \le J\bigl(y^\ast_R, u_{R}^\ast \bigr) - J\bigl(l_R, u_{R}^\ast\bigr) + J\bigl(l^\ast, u^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr).$$
(4.28)

Step 2. Let $$V \subset\mathcal{H}_{\alpha}$$ be a bounded set such that

$$y^\ast_R(t), \qquad l_R(t), \qquad y^\ast(t), \qquad l^\ast(t) \in V \quad \forall t \in[0, T].$$
(4.29)

Let also

$$V_{\mathfrak{c}} = P_{\mathfrak{c}} V.$$
(4.30)

It is clear that $$P_{\mathfrak{c}}y^{\ast}_{R}(t)$$, $$P_{\mathfrak {c}}y^{\ast}(t)$$, $$z^{\ast}_{R}(t)$$ and $$z^{\ast}(t)$$ are contained in $$V_{\mathfrak{c}}$$ for all t∈[0,T].

Recalling (C1), we denote by $$\operatorname{Lip}(\mathcal{G})\vert_{V}$$ the Lipschitz constant of $$\mathcal{G}:\mathcal{H} \rightarrow \mathbb {R}^{+}$$ restricted to the bounded set V. In (4.28), by applying Lipschitz estimates to the $$\mathcal{G}$$-part of the cost functional J, we obtain

\begin{aligned} 0 & \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \\ & \le\operatorname{Lip}(\mathcal{G})\vert_{V} \bigl(\bigl\| y^\ast_R - l_R\bigr\| _{L^1(0,T; \mathcal{H})} + \bigl\| l^\ast- y^\ast \bigr\| _{L^1(0,T; \mathcal{H})}\bigr) \\ & \le\sqrt{T} \operatorname{Lip}(\mathcal{G})\vert_{V} \bigl( \bigl\| y^\ast_R - l_R\bigr\| _{L^2(0,T; \mathcal{H})} + \bigl\| l^\ast- y^\ast\bigr\| _{L^2(0,T; \mathcal{H})}\bigr), \end{aligned}
(4.31)

where the last inequality follows from Hölder’s inequality.

Recall that $$l_{R}(t) = z^{\ast}_{R}(t) + h(z^{\ast}_{R}(t))$$. Let us also rewrite $$y^{\ast}_{R}(t)$$ as $$y^{\ast}_{R,\mathfrak{c}}(t) + y^{\ast}_{R,\mathfrak{s}}(t)$$ with $$y^{\ast}_{R,\mathfrak{c}}(t)=P_{\mathfrak{c}}y^{\ast}_{R}(t)$$ and $$y^{\ast}_{R,\mathfrak{s}}(t)=P_{\mathfrak{s}}y^{\ast}_{R}(t)$$. We obtain then

\begin{aligned} \bigl\| y^\ast_R(t) - l_R(t)\bigr\| & \le\bigl\| y^\ast_{R,\mathfrak{c}}(t) - z^\ast _R(t) \bigr\| + \bigl\| y^\ast_{R,\mathfrak{s}}(t) - h\bigl(z^\ast_R(t)\bigr) \bigr\| \\ & \le\bigl\| y^\ast_{R,\mathfrak{c}}(t) - z^\ast_R(t) \bigr\| + \bigl\| y^\ast _{R,\mathfrak{s}}(t) - h\bigl(y^\ast_{R,\mathfrak{c}}(t) \bigr)\bigr\| + \bigl\| h\bigl(y^\ast_{R,\mathfrak{c}}(t)\bigr)- h \bigl(z^\ast_R(t)\bigr) \bigr\| . \end{aligned}
(4.32)

Let us denote by $$\operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}$$ the Lipschitz constant of $$h: \mathcal{H}^{\mathfrak{c}} \rightarrow\mathcal{H}^{\mathfrak {s}}_{\alpha}$$ restricted to the bounded set $$V_{\mathfrak{c}}$$. We get

\begin{aligned} \bigl\| h\bigl(y^\ast_{R,\mathfrak{c}}(t) \bigr)- h\bigl(z^\ast_R(t)\bigr) \bigr\| _\alpha& \le \operatorname{Lip}(h)\vert_{V_\mathfrak{c}} \bigl\| y^\ast_{R,\mathfrak{c}}(t) - z^\ast_R(t) \bigr\| _\alpha \\ & \le C_1 \operatorname{Lip}(h)\vert_{V_\mathfrak{c}} \bigl\| y^\ast _{R,\mathfrak{c}}(t) - z^\ast_R(t) \bigr\| , \quad t \in[0, T], \end{aligned}
(4.33)

where we have used the equivalence between the norms on $$\mathcal {H}^{\mathfrak{c} }$$; see (4.18).

Since $$\mathcal{H}_{\alpha}$$ is continuously embedded into $$\mathcal{H}$$, there exists a generic positive constant $$C_{\alpha}$$ such that

$$\|v\| \le C_\alpha\|v\|_\alpha, \quad \forall v \in\mathcal {H}_\alpha.$$
(4.34)

We obtain then

$$\bigl\| h\bigl(y^\ast_{R,\mathfrak{c}}(t)\bigr)- h \bigl(z^\ast_R(t)\bigr) \bigr\| \le C_1 C_\alpha \operatorname{Lip}(h)\vert_{V_\mathfrak{c}} \bigl\| y^\ast_{R,\mathfrak{c}}(t) - z^\ast_R(t) \bigr\| .$$
(4.35)

This together with (4.32) leads to

\begin{aligned} \bigl\| y^\ast_R(t) - l_R(t)\bigr\| \le&\bigl(1 + C_1 C_\alpha \operatorname{Lip}(h)\vert _{V_\mathfrak{c}}\bigr) \bigl\| y^\ast_{R,\mathfrak{c}}(t) - z^\ast_R(t) \bigr\| \\ &{} + \bigl\| y^\ast_{R,\mathfrak{s}}(t) - h\bigl(y^\ast_{R,\mathfrak{c}}(t) \bigr)\bigr\| , \quad t \in[0, T]. \end{aligned}

Similarly,

\begin{aligned} \bigl\| l^\ast(t) - y^\ast(t)\bigr\| \le& \bigl(1 + C_1 C_\alpha\operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}} \bigr) \bigl\| y_{\mathfrak{c}}^\ast(t) - z^\ast(t) \bigr\| \\ &{} + \bigl\| y_{\mathfrak{s}}^\ast(t) - h\bigl(y_{\mathfrak{c}}^\ast(t) \bigr)\bigr\| , \quad t \in[0, T]. \end{aligned}

Reporting the above two estimates into (4.31), we obtain

\begin{aligned} 0 & \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \\ & \le2 \sqrt{T} \operatorname{Lip}(\mathcal{G})\vert_{V} \bigl( \bigl\| y^\ast _{R,\mathfrak{s}} - h\bigl(y^\ast_{R,\mathfrak{c}} \bigr)\bigr\| _{L^2(0,T; \mathcal {H})} + \bigl\| y_{\mathfrak{s}}^\ast- h \bigl(y_{\mathfrak{c}}^\ast\bigr)\bigr\| _{L^2(0,T; \mathcal{H})} \\ &\quad{} + \bigl(1 + C_1 C_\alpha\operatorname{Lip}(h) \vert _{V_{\mathfrak{c}}}\bigr) \bigl( \bigl\| y^\ast_{R,\mathfrak{c}} - z^\ast_R \bigr\| _{L^2(0,T; \mathcal {H})} + \bigl\| y_{\mathfrak{c} }^\ast- z^\ast\bigr\| _{L^2(0,T; \mathcal{H})} \bigr) \bigr). \end{aligned}
(4.36)

Step 3. By using Lemma 2 (see (4.23) above), we obtain:

$${\bigl\| y^\ast_{R,\mathfrak{c}} - z^\ast_R\bigr\| _{L^2(0,T; \mathcal{H})} \le\sqrt{T} C(V) e^{[1 + \beta_1(\lambda) + C_1 C(V) (1 + \operatorname{Lip}(h)\vert _{V_{\mathfrak{c}}})]T} \bigl\| y^\ast_{R,\mathfrak{s}} - h\bigl(y^\ast _{R,\mathfrak{c}}\bigr)\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)},}$$

where we have used $$P_{\mathfrak{s}}u_{R}^{\ast}= 0$$ since $$u_{R}^{\ast}$$ lives in $$L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})$$; and the same lemma leads to

\begin{aligned} \bigl\| y_{\mathfrak{c}}^\ast- z^\ast\bigr\| _{L^2(0,T; \mathcal{H})} \le& \sqrt{T} e^{[1 + \beta_1(\lambda) + C_1 C(V) (1 + \operatorname{Lip}(h)\vert _{V_{\mathfrak{c} }})]T} \\ &{}\times\bigl(C(V) \bigl\| y_{\mathfrak{s}}^\ast- h\bigl(y_{\mathfrak{c}}^\ast \bigr) \bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + \|\mathfrak{C}\| \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| _{L^2(0,T; \mathcal {H})} \bigr). \end{aligned}

Now, by reporting these estimates in (4.36) and using again the property of continuous embedding (4.34), we obtain:

\begin{aligned} 0 & \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \\ & \le\mathcal{C}\bigl(V, \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}, T\bigr) \bigl( \bigl\| y^\ast _{R,\mathfrak{s}} - h\bigl(y^\ast_{R,\mathfrak{c}} \bigr)\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + \bigl\| y_{\mathfrak{s} }^\ast- h \bigl(y_{\mathfrak{c}}^\ast\bigr)\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} \\ &\quad{} + {\| \mathfrak{C} \| \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| _{L^2(0,T; \mathcal{H})} } \bigr), \end{aligned}
(4.37)

where

\begin{aligned} &\mathcal{C}\bigl(V, \operatorname{Lip}(h) \vert_{V_{\mathfrak{c}}}, T\bigr) \\ &\quad:= 2 C_\alpha\sqrt {T}\operatorname{Lip}( \mathcal{G})\vert_{V} \\ &\qquad{} + 2 {\max\bigl\{ C(V), 1\bigr\} } T \operatorname{Lip}(\mathcal{G}) \vert_{V} \bigl( 1 + C_1 C_\alpha\operatorname{Lip}(h) \vert_{V_{\mathfrak{c}}}\bigr) e^{[1 + \beta _1(\lambda) + C_1 C(V) (1 + \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}})]T}. \end{aligned}
(4.38)

In terms of parameterization defects defined in (3.5), the above estimate (4.37) can be rewritten as:

\begin{aligned} 0 & \le J\bigl(y^\ast_R, u_{R}^\ast\bigr) - J\bigl(y^\ast, u^\ast\bigr) \\ & \le\mathcal{C}\bigl(V, \operatorname{Lip}(h)\vert_{V_{\mathfrak{c}}}, T\bigr) \Bigl( \sqrt {Q\bigl(T, y_0; u_{R}^\ast \bigr)} \bigl\| y^\ast_{R,\mathfrak{s}}\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} \\ &\quad{} + \sqrt{Q\bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} + {\|\mathfrak{C}\| \bigl\| P_{\mathfrak {s}}u^\ast\bigr\| _{L^2(0,T; \mathcal{H})} } \Bigr), \end{aligned}
(4.39)

where $$Q(T, y_{0}; u_{R}^{\ast})$$ and $$Q(T, y_{0}; u^{\ast})$$ are the parameterization defects of the finite-horizon PM function h when the control in (2.4) is taken to be $$u_{R}^{\ast}$$ and $$u^{\ast}$$, respectively.
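Comparing (4.37) with (4.39) shows how the parameterization defect enters: it is the squared ratio of the unexplained high-mode energy to the total high-mode energy over [0,T]. The sketch below is a minimal discrete-time illustration of this ratio; the function name and the toy data are ours, the trajectories are assumed sampled on a uniform time grid, and a plain Euclidean norm stands in for the $$\mathcal{H}_{\alpha}$$-norm of the text.

```python
import numpy as np

def parameterization_defect(ys, h_yc, dt):
    """Q(T, y0; u): squared L^2(0,T)-energy of the high modes left
    unexplained by the PM function h, relative to the total high-mode
    energy; cf. the passage from (4.37) to (4.39).
    ys[k, j]  : j-th high-mode coefficient of y(t_k);
    h_yc[k, j]: same coefficient of h(y_c(t_k))."""
    unexplained = dt * np.sum((ys - h_yc) ** 2)
    total = dt * np.sum(ys ** 2)
    return unexplained / total

# toy data: a manifold that explains 90% of each high-mode amplitude
t = np.linspace(0.0, 1.0, 101)
ys = np.column_stack([np.sin(2 * np.pi * t), 0.5 * np.cos(2 * np.pi * t)])
Q = parameterization_defect(ys, 0.9 * ys, dt=t[1] - t[0])
assert abs(Q - 0.01) < 1e-10   # defect (0.1)^2, well below unity
```

A defect smaller than unity is precisely the requirement placed on a PM in Definition 1.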

The proof is complete. □

### Proof of Corollary 1

The estimate given by (4.7) can be derived directly from Theorem 1 and Lemma 2 by noting that

$$\bigl\| y_{\mathfrak{c}}^\ast- z_R^\ast \bigr\| ^2_{L^2(0,T; \mathcal{H})} \le2 \bigl\| y_{\mathfrak{c} }^\ast- z^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} + 2 \bigl\| z^\ast- z_R^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})}.$$

Indeed, the first term on the RHS above can be controlled by means of Lemma 2 as follows:

\begin{aligned} \bigl\| y_{\mathfrak{c}}^\ast- z^\ast \bigr\| ^2_{L^2(0,T; \mathcal{H})} & \le \int_{0}^T \biggl( \widetilde{\mathcal{C}}_1 \int_0^t \bigl\| y^\ast_{\mathfrak{s}}(s) - h\bigl(y^\ast_{\mathfrak{c}}(s)\bigr)\bigr\| _\alpha^2 \,\mathrm{d}s + \widetilde{\mathcal{C}}_2 \|\mathfrak{C}\|^2 \int_0^t \bigl\| P_{\mathfrak{s}}u^\ast(s)\bigr\| ^2 \,\mathrm{d}s \biggr) \,\mathrm{d}t \\ & \le T \bigl( \widetilde{\mathcal{C}}_1 \bigl\| y_{\mathfrak{s}}^\ast- h\bigl(y_{\mathfrak{c}}^\ast\bigr)\bigr\| ^2_{L^2(0,T; \mathcal{H}_\alpha)} + \widetilde{\mathcal{C}}_2 \|\mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} \bigr) \\ & \le T \bigl( \widetilde{\mathcal{C}}_1 Q\bigl(T,y_0; u^\ast\bigr) \bigl\| y^\ast_{\mathfrak{s}}\bigr\| _{L^2(0,T; \mathcal{H}_\alpha)}^2 + \widetilde{\mathcal{C}}_2 \|\mathfrak{C}\|^2 \bigl\| P_{\mathfrak{s}}u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} \bigr). \end{aligned}

For the term $$\|z^{\ast}- z_{R}^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}$$, according to the condition (4.6) on the sublinear response and Theorem 1, we obtain

\begin{aligned} \bigl\| z^\ast- z_R^\ast \bigr\| ^2_{L^2(0,T; \mathcal{H})} \le&\kappa^2 \bigl\| u_{R} ^\ast- P_{\mathfrak{c}} u^\ast\bigr\| _{L^2(0,T; \mathcal{H})}^2 \le \kappa^2 \bigl\| u_{R} ^\ast- u^\ast \bigr\| _{L^2(0,T; \mathcal{H})}^2 \\ \le&\frac{\mathcal{C}\kappa^2}{\sigma} \Bigl( \sqrt{Q\bigl(T, y_0; u_{R}^\ast\bigr)} \bigl\| y_{R,\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} \\ &{} + \sqrt{Q\bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast \bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} + {\|\mathfrak{C}\| \bigl\| P_{\mathfrak {s}}u^\ast\bigr\| _{L^2(0,T; \mathcal{H})} } \Bigr). \end{aligned}

We obtain then (4.7) by combining the above two estimates.

The estimate (4.8) follows from (4.7) by noting that

\begin{aligned} &\bigl\| y^\ast-\bigl( z_R^\ast+ h\bigl(z_R^\ast\bigr)\bigr) \bigr\| ^2_{L^2(0,T; \mathcal{H})}\\ &\quad \le 2 \bigl\| y_{\mathfrak{c}}^\ast- z_R^\ast \bigr\| ^2_{L^2(0,T; \mathcal{H})} + 2 \bigl\| y_{\mathfrak{s} }^\ast- h \bigl(z_R^\ast\bigr) \bigr\| ^2_{L^2(0,T; \mathcal{H})} \\ &\quad \le2 \bigl\| y_{\mathfrak{c}}^\ast- z_R^\ast \bigr\| ^2_{L^2(0,T; \mathcal {H})} + 4 \bigl\| y_{\mathfrak{s}}^\ast- h \bigl(y_{\mathfrak{c}}^\ast\bigr) \bigr\| ^2_{L^2(0,T; \mathcal{H})} + 4 \bigl\| h\bigl(y_{\mathfrak{c} }^\ast\bigr) - h\bigl(z_R^\ast \bigr) \bigr\| ^2_{L^2(0,T; \mathcal{H})} \\ &\quad \le2 \bigl\| y_{\mathfrak{c}}^\ast- z_R^\ast \bigr\| ^2_{L^2(0,T; \mathcal {H})} + 4 C_\alpha^2 \bigl\| y_{\mathfrak{s}}^\ast- h\bigl(y_{\mathfrak{c}}^\ast\bigr) \bigr\| ^2_{L^2(0,T; \mathcal {H}_\alpha)} + 4 \bigl\| h\bigl(y_{\mathfrak{c}}^\ast \bigr) - h\bigl(z_R^\ast\bigr) \bigr\| ^2_{L^2(0,T; \mathcal{H})}; \end{aligned}

and that

$$\bigl\| h\bigl(y_{\mathfrak{c}}^\ast\bigr) - h\bigl(z_R^\ast \bigr) \bigr\| _{L^2(0,T; \mathcal{H})} \le C_1 C_\alpha\operatorname{Lip}(h) \vert_{V_\mathfrak{c}} \bigl\| y_{\mathfrak {c}}^\ast- z_R^\ast \bigr\| _{L^2(0,T; \mathcal{H})};$$

see (4.35) for more details about the derivation of this last inequality (with $$y_{R,\mathfrak{c}}^{\ast}$$ therein replaced by $$y_{\mathfrak{c}}^{\ast}$$ here). □

### Proof of Corollary 2

Note that if $$\mathfrak{C}$$ leaves stable the two subspaces $$\mathcal {H}^{\mathfrak{c}}$$ and $$\mathcal{H}^{\mathfrak{s}}$$, then in Lemma 2, Eq. (4.12) satisfied by the difference $$w(t):=y_{\mathfrak {c}}(t) - z(t)$$ is simplified into the following:

$$\frac{\mathrm{d}w}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda w + P_{\mathfrak{c}} \bigl( B(y_{\mathfrak{c}} + y_{\mathfrak{s}}) - B\bigl(z + h(z)\bigr) \bigr), \quad w(0) = 0,$$

where the term $$P_{\mathfrak{c}} \mathfrak{C} P_{\mathfrak{s}}u$$ vanishes here. Consequently, the terms involving $$P_{\mathfrak{s}}u$$ in the subsequent estimates drop out, leading to the estimate given in (4.10). □

## 2D-Suboptimal Controller Synthesis Based on the Leading-Order Finite-Horizon PM: Application to a Burgers-type Equation

We apply, in this section and the next, the PM-based reduction approach introduced above to the design of suboptimal solutions to an optimal control problem of a Burgers-type equation, in the case of globally distributed control laws. The more challenging case of locally distributed control laws is addressed in Sect. 7.

### Cost Functional of Terminal Payoff Type for a Burgers-Type Equation, and Existence of Optimal Solution

The model considered here takes the following form, posed on the interval (0,l) and driven by a globally distributed control term $$\mathfrak{C} u(x, t)$$:

$$\frac{\mathrm{d} y}{\mathrm{d}t} = \nu y_{xx} + \lambda y - \gamma y y_x + \mathfrak{C} u(x, t), \quad(x,t) \in(0,l) \times(0, T],$$
(5.1)

where ν,λ and γ are positive parameters, the final time T>0 is fixed, and conditions on the linear operator $$\mathfrak{C}$$ are specified in Sect. 5.2 below.

The equation is supplemented with the Dirichlet boundary condition

$$y(0,t;u) = y(l,t;u) = 0, \quad t \in[0, T];$$
(5.2)

and an appropriate initial condition

$$y(x, 0) = y_0(x), \quad x\in(0,l).$$
(5.3)

The classical Burgers equation (with λ=0 in (5.1)) has widely served as a theoretical laboratory to test various methodologies devoted to the design of optimal/suboptimal controllers of nonlinear distributed-parameter systems; see e.g. [7, 30, 73, 76, 102] and references therein. The inclusion of the term λy here allows for the presence of linearly unstable modes, which lead in turn to the existence of non-trivial (and nonlinearly) stable steady states for the uncontrolled version of (5.1), provided that λ is large enough. The latter property will be used in the choices of initial data and targets for the associated optimal control problems analyzed hereafter. From a physical perspective, we mention that (5.1) arises in the modeling of flame front propagation. This model will serve here to demonstrate the effectiveness of the PM approach introduced above in the design of suboptimal solutions to optimal control problems.

In that respect, we consider the following cost functional associated with (5.1)–(5.3),

$$J(y, u) = \int_0^T \biggl( \frac{1}{2}\bigl\| {y(\cdot, t; y_0, u)}\bigr\| ^2 + \frac{\mu_1}{2}\bigl\| u(\cdot, t)\bigr\| ^2 \biggr) \,\mathrm{d}t + \frac{\mu _2}{2} \bigl\| y(\cdot , T; y_0, u) - Y\bigr\| ^2,$$
(5.4)

constituted by a running cost along the controlled trajectory and a terminal payoff term defining a penalty on the final state; here $$\mu_{1}$$ and $$\mu_{2}$$ are positive constants, $$Y \in L^{2}(0,l)$$ is a given target profile, and $$\|\cdot\|$$ denotes the $$L^{2}(0,l)$$-norm.

Compared to the cost functional (2.3) associated with the optimal control problem ($$\mathcal {P}$$) given in Sect. 2, we have added here a terminal payoff $$\frac{\mu_{2}}{2} \|{y(\cdot, T; y_{0}, u) - Y}\|^{2}$$ to the running cost $$\int_{0}^{T} ( \frac{1}{2}\|{y(\cdot,t; y_{0}, u)}\|^{2} + \frac{\mu_{1}}{2}\|{u(\cdot ,t)}\|^{2} ) \,\mathrm{d}t$$. In Sect. 4, the optimal control problem ($$\mathcal {P}$$), involving only the latter type of running cost, has served to identify the determining quantities that control the distance to an optimal control of a suboptimal solution to ($$\mathcal {P}$$) built from a PM-reduced system; see Theorem 1 and Corollary 2. For a cost functional of type (5.4), error estimates similar to (4.5) and (4.10) can be derived by controlling appropriately the contribution of the terminal payoff term to $$J(y^{\ast}_{R}, u_{R}^{\ast}) - J(y^{\ast}, u^{\ast})$$ in the estimate (4.31). For instance, the error estimate (4.10) becomes

\begin{aligned} \bigl\| u_{R}^\ast- u^\ast\bigr\| ^2_{L^2(0,T; \mathcal{H})} &\le\frac {\mathcal {C}}{\sigma} \Bigl( \sqrt{Q\bigl(T, y_0; u_{R}^\ast\bigr)} \bigl\| y_{R,\mathfrak{s}}^\ast\bigr\| _{L^2(0,T; \mathcal {H}_\alpha)} + \sqrt{Q \bigl(T,y_0; u^\ast\bigr)} \bigl\| y_{\mathfrak{s}}^\ast \bigr\| _{L^2(0,T; \mathcal{H}_\alpha)} \Bigr) \\ &\quad{} + \frac{|C_T(y^*_{R,T}, Y) - C_T(y_T^*, Y)|}{\sigma}, \end{aligned}
(5.5)

where $$C_{T}(v, Y) := \frac{\mu_{2}}{2} \|v - Y\|^{2}$$, $$y^{*}_{R,T} = y^{*}_{R}(T)$$ and $$y^{*}_{T} = y^{*}(T)$$. We dealt with the simpler situation of a functional with a single running cost in Sect. 4 in order not to overburden the presentation. Furthermore, as we will see in this section and the forthcoming ones, the error estimates derived in Sect. 4 are sufficient to provide useful (and computable) insights that help analyze the performance of a PM-based suboptimal controller.

The interest of cost functionals such as (5.4) is that they arise naturally when the goal is to drive the state y(⋅;u) of (5.1) as close as possible to a target profile Y at the final time T, while keeping the cost of the control, expressed by $$\frac{\mu_{1}}{2} \int_{0}^{T} \|u(t)\|^{2} \,\mathrm{d}t$$, as low as possible. Here, the terminal payoff term measures the “proximity” of the final-time PDE profile to the target Y. If one could take $$\mu_{2}=+\infty$$, the problem would be one of exact controllability; for finite $$\mu_{2}$$, the system is only approximately controllable.

We turn now to the precise description of the optimal control problem considered in this section and the next. Adopting the notations of Sect. 2, the functional spaces are

$$\mathcal{H}:=L^2(0,l), \qquad\mathcal{H}_1:=H^2(0,l) \cap H_0^1(0,l), \qquad\mathcal{H}_{1/2}:= H_0^1(0,l),$$
(5.6)

the linear operator $$L_{\lambda}: \mathcal{H}_{1} \rightarrow\mathcal{H}$$ is given by

$$L_\lambda y := \nu\partial_{xx}^2 y + \lambda y,$$
(5.7)

and the nonlinearity F is expressed by the bilinear term

\begin{aligned} B{:}\quad & \mathcal{H}_{1/2} \times \mathcal{H}_{1/2} \rightarrow\mathcal {H} \\ & (y,y) \mapsto B(y,y) := - \gamma y \partial_x y, \end{aligned}
(5.8)

with a slight abuse of notation, (5.7) and $$y\partial_{x} y$$ in (5.8) being understood in the appropriate weak sense.

The optimal control problem for which we will propose suboptimal solutions takes here the following form:

\begin{aligned} & \min J(y, u) \quad \text{with } J \text{ defined in (5.4)} \quad\text{s.t.} \\ &\quad (y, u) \in L^2(0,T; \mathcal{H}) \times L^2(0,T; \mathcal{H}) \text{ solves the problem (5.1)--(5.3)}. \end{aligned}
(5.9)

It can be checked by standard energy estimates that for any given controller $$u \in L^{2}(0,T; \mathcal{H})$$, initial datum $$y_{0} \in \mathcal{H}$$ and any finite T>0, there exists a unique weak solution $$y(\cdot; y_{0}, u)$$ of the problem (5.1)–(5.3) such that $$y(\cdot; y_{0}, u) \in L^{2}(0,T; \mathcal {H}_{1/2})$$ and $$y'(\cdot; y_{0}, u) \in L^{2}(0,T; (\mathcal{H}_{1/2})^{-1})$$, where $$(\mathcal{H}_{1/2})^{-1} = H^{-1}(0,l)$$ is the dual of $$\mathcal {H}_{1/2} = H_{0}^{1}(0,l)$$; the arguments parallel those for the standard Burgers equation subject to affine control.

Note also that $$y(\cdot; y_{0}, u) \in C([0,T]; \mathcal{H})$$ thanks to the continuous embedding

$$\mathcal{W}:=\biggl\{ y \Bigm| y \in L^2(0,T; \mathcal{H}_{1/2}) \text{ and } \frac{\mathrm{d}y}{\mathrm{d}t} \in L^2\bigl(0,T; (\mathcal {H}_{1/2})^{-1}\bigr) \biggr\} \subset C\bigl([0,T]; \mathcal{H}\bigr);$$

see e.g. [41, Sect. 5.9, Theorem 3] for more details. This last property thus implies that the cost functional J given by (5.4) is well defined for any pair $$(y,u) \in\mathcal{W} \times L^{2}(0,T; \mathcal{H})$$ that satisfies the problem (5.1)–(5.3) in the weak sense (5.10).

Within this functional setting, the existence of an optimal pair for (5.9) in $$\mathcal{W} \times L^{2}(0,T; \mathcal{H})$$ can be achieved by application of the direct method of the calculus of variations. The closest application of such a method that serves our purpose can be found in the proof of [102, Proposition 4] for the standard Burgers equation, where the author considered a cost functional of tracking type; the arguments are easily adaptable to cost functionals of the form (5.4). We provide below a sketch of such arguments.

First note that given a minimizing sequence $$\{(y^{n}, u^{n})\} \in (\mathcal{W} \times L^{2}(0,T; \mathcal{H}))^{\mathbb{N}}$$, since the cost functional J defined by (5.4) is positive (and thus bounded from below) and satisfies

$$J(y,u) \rightarrow\infty\quad\text{if } \|y\|_{L^2(0,T; \mathcal{H})} \rightarrow \infty\quad\text{or}\quad\|u\| _{L^2(0,T; \mathcal{H})} \rightarrow\infty,$$

the minimizing sequence lives in a bounded subset of the functional space $$\mathcal{W} \times L^{2}(0,T; \mathcal{H})$$. We can then extract a subsequence, say $$\{(y^{n_{j}}, u^{n_{j}})\}$$, which converges weakly to some element $$(y^{\ast}, u^{\ast}) \in\mathcal{W} \times L^{2}(0,T; \mathcal {H})$$; see e.g. [21, Theorem 3.18]. By using the fact that $$\mathcal{W}$$ is compactly embedded in $$L^{2}(0,T; L^{\infty}(0,l))$$, standard energy estimates on the nonlinear term allow one to show that $$(y^{\ast}, u^{\ast})$$ actually satisfies (5.1)–(5.3) in the following weak sense: for any $$\varphi\in L^{2}(0,T; \mathcal{H}_{1/2})$$ and any T>0,

$$\int_0^T \biggl( \biggl\langle \frac{\mathrm{d} y^\ast}{\mathrm{d}t},\varphi \biggr\rangle _{\mathcal{H}_{1/2}^{-1}; \mathcal{H}_{1/2}} + \bigl\langle B \bigl(y^\ast ,y^\ast\bigr), \varphi \bigr\rangle _{\mathcal{H}}+ \nu \bigl\langle y^\ast, \varphi \bigr\rangle _{ \mathcal{H}_{1/2}} - \bigl\langle \lambda y^\ast+ \mathfrak{C} u^\ast,\varphi \bigr\rangle _{\mathcal{H}} \biggr)\,\mathrm{d} t =0,$$
(5.10)

with $$y^{\ast}(0)=y_{0}$$.

Invoking now the lower semi-continuity of the norm in a Banach space with respect to weak convergence (see e.g. [21, Proposition 3.5(iii)]), we conclude from the functional form of J given in (5.4) that $$(y^{\ast}, u^{\ast})$$ is an optimal pair for the optimal control problem (5.9). Having ensured the existence of an optimal pair for (5.9), we turn now to the design of low-dimensional suboptimal pairs based on the (leading-order) parameterizing manifold introduced in Sect. 3.2.

### Analytic Derivation of the $$h^{(1)}_{\lambda}$$-Based 2D Reduced System for the Design of Suboptimal Controllers

We present in this section the analytic derivation of the $$h^{(1)}_{\lambda}$$-based reduced system on which we will rely to design suboptimal solutions to problem (5.9). In this respect, we consider the particular case where the subspace $$\mathcal{H}^{\mathfrak{c}}$$ of low modes is chosen to be the subspace spanned by the first two eigenmodes of the linear operator $$L_{\lambda}$$ defined in (5.7). Recall that the eigenvalues of $$L_{\lambda}$$ are given by

$$\beta_n(\lambda) := \lambda- \frac{\nu n^2\pi^2}{l^2}, \quad n \in\mathbb{N},$$
(5.11)

and the corresponding eigenvectors are

$$e_n(x) := \sqrt{\frac{2}{l}}\sin \biggl( \frac{n\pi x}{l} \biggr), \quad x\in(0,l).$$
(5.12)
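As a quick numerical companion to (5.11)–(5.12), the sketch below tabulates the first few eigenvalues for an illustrative parameter choice (ν=1, l=π, λ=2λ_c; these values are ours, not those of the paper's experiments) and confirms that exactly one mode is then unstable.

```python
import numpy as np

def beta(n, lam, nu=1.0, l=np.pi):
    """Eigenvalue beta_n(lambda) = lambda - nu*n^2*pi^2/l^2 of L_lambda, cf. (5.11)."""
    return lam - nu * n**2 * np.pi**2 / l**2

def e(n, x, l=np.pi):
    """L^2(0,l)-normalized sine eigenfunction e_n, cf. (5.12)."""
    return np.sqrt(2.0 / l) * np.sin(n * np.pi * x / l)

nu, l = 1.0, np.pi
lam_c = nu * np.pi**2 / l**2       # critical value: beta_1 changes sign
lam = 2.0 * lam_c                  # lam_c < lam < 4*lam_c: one unstable mode

betas = [beta(n, lam, nu, l) for n in range(1, 5)]
print(betas)                       # [1.0, -2.0, -7.0, -14.0]
```

Only β₁ is positive here, in line with the choice of λ made in the text.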

Throughout the numerical applications presented hereafter, we will choose λ to be bigger than the critical value $$\lambda_{c}:= \frac {\nu\pi^{2}}{l^{2}}$$ (while keeping $$\lambda < 4\lambda_{c}$$, so that $$\beta_{2}(\lambda)<0$$), ensuring that $$L_{\lambda}$$ admits one and only one unstable eigenmode. The subspace $$\mathcal{H}^{\mathfrak{c}}$$ given by

$$\mathcal{H}^{\mathfrak{c}} := \operatorname{span}\{e_1, e_2\},$$
(5.13)

is thus spanned by one unstable and one stable mode.

For the regimes considered hereafter, it can be checked that the (NR)-condition is satisfied, leading in particular to a well-defined $$h^{(1)}_{\lambda}$$. We take as a finite-horizon PM candidate the manifold function $$h^{(1)}_{\lambda}$$ provided by the explicit formula (3.11), which we apply to the PDE (5.1). Recall that according to Lemma 1, the manifold function $$h^{(1)}_{\lambda}$$ provides a natural theoretical PM candidate. Numerical results reported in Fig. 2 will support that this choice is in fact relevant for the regimes of the PDE (5.1) analyzed hereafter, leading in particular to manifold functions with parameterization defects smaller than unity, as required in Definition 1.

To analyze the performances achieved by the $$h^{(1)}_{\lambda}$$-based reduced system in the design of suboptimal solutions to (5.9), we place ourselves within the conditions of Corollary 2. In particular, we assume that the continuous linear operator $$\mathfrak{C}: \mathcal{H} \rightarrow\mathcal{H}$$ leaves stable the subspaces $$\mathcal{H}^{\mathfrak{c}}$$ and $$\mathcal{H}^{\mathfrak{s}}$$:

$$\mathfrak{C} \mathcal{H}^{\mathfrak{c}} \subset\mathcal {H}^{\mathfrak{c}}, \qquad \mathfrak {C} \mathcal{H}^{\mathfrak{s}} \subset\mathcal{H}^{\mathfrak{s}}.$$
(5.14)

Recall that under such assumptions, the high-mode energy remainder $$\| P_{\mathfrak{s}}u^{\ast}\|_{L^{2}(0,T; \mathcal{H})}$$ of the (unknown) optimal controller $$u^{\ast}$$ does not contribute to the estimate of $$\|u_{R}^{\ast}- u^{\ast}\|^{2}_{L^{2}(0,T; \mathcal{H})}$$, leaving the parameterization defect as the key determining parameter in the control of the latter. In particular, we will see in Sect. 6 that other manifold functions with a smaller parameterization defect than that of $$h^{(1)}_{\lambda}$$ lead to better suboptimal solutions to (5.9) than those based on $$h^{(1)}_{\lambda}$$.

To be more specific, the operator $$\mathfrak{C}$$ when restricted to $$\mathcal{H}^{\mathfrak{c}}$$ takes the following form

$$\mathfrak{C} e_1 = a_{11}e_1 + a_{12} e_2, \qquad \mathfrak{C} e_2 = a_{21}e_1 + a_{22} e_2,$$
(5.15)

where the coefficient matrix

$$M := \begin{pmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{pmatrix}$$
(5.16)

is chosen to be non-trivial to avoid pathological situations.

Corresponding to the cost functional (5.4), the cost associated with the $$h^{(1)}_{\lambda}$$-based reduced system takes the following form:

\begin{aligned} J_R(z, u_{R}) =& \int _0^T \biggl( \frac{1}{2}\bigl\| z(t) + h^{(1)}_\lambda\bigl(z(t; P_{\mathfrak{c}}y_0, u_R)\bigr)\bigr\| ^2 + \frac{\mu_1}{2} \bigl\| u_{R}(t)\bigr\| ^2 \biggr) \,\mathrm{d}t \\ &{}+ \frac {\mu_2}{2} \bigl\| z(T; P_{\mathfrak{c}}y_0, u_R) -P_{\mathfrak{c}}Y \bigr\| ^2, \end{aligned}
(5.17)

where $$Y \in\mathcal{H}$$ is some prescribed target.

Recall that following (4.2a), (4.2b), the $$h^{(1)}_{\lambda}$$-based reduced system intended to model the dynamics of the low modes $$P_{\mathfrak{c} }y$$, takes the following abstract form:

\begin{aligned} & \frac{\mathrm{d}z}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda z + P_{\mathfrak{c}} B \bigl(z + h^{(1)}_\lambda(z), z + h^{(1)}_\lambda(z) \bigr) + P_{\mathfrak{c}} \mathfrak{C} u_{R}(t), \\ & \quad{t \in(0, T]},\ z(0) = P_{\mathfrak{c}} y_0 \in\mathcal{H}^{\mathfrak{c}}, \end{aligned}
(5.18)

where y 0 is the initial datum of the original PDE (5.1), and $$u_{R}\in L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})$$ is a given control of the reduced system.

We are thus left with the following reduced optimal control problem associated with (5.9):

$$\min J_R(z, u_{R}) \quad \text{s.t.} \quad (z, u_{R}) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \times L^2 \bigl(0,T; \mathcal {H}^{\mathfrak{c}}\bigr) \text{ solves (5.18)}.$$
(5.19)

We turn now to the description of the analytic form of (5.19).

Analytic form of (5.19). We proceed with the explicit expression of $$h^{(1)}_{\lambda}$$ provided by (3.11) that we apply to the Burgers-type equation (5.1). In that respect, the nonlinear interactions between the $$\mathcal{H}^{\mathfrak{c}}$$-modes as projected onto the $$\mathcal{H}^{\mathfrak{s}}$$-modes given by

$$B_{i_1i_2}^n:=\bigl\langle B(e_{i_1}, e_{i_2}), e_n \bigr\rangle ,$$

constitute key quantities to determine. In the case of the Burgers-type equation (5.1), they take the following form:

$$B_{i_1i_2}^n = - \gamma\bigl\langle e_{i_1} ( e_{i_2})_x, e_n \bigr\rangle = \begin{cases} - \alpha i_2, & n = i_1 + i_2, \\ - \alpha i_2 \operatorname{sgn}(i_1-i_2), & n = |i_1-i_2|, \\ 0, & \text{otherwise}, \end{cases}$$
(5.20)

where

$$\alpha:= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}.$$
(5.21)

In particular, we have

$$\bigl\langle e_{i_1} ( e_{i_2})_x, e_n \bigr\rangle = 0,$$

for any $$n \ge 5$$ and $$i_{1}, i_{2} \in\{1,2\}$$.
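The interaction coefficients (5.20) are easy to verify by quadrature. The sketch below (γ=1 and l=π are illustrative choices of ours) evaluates $$B^{n}_{i_{1}i_{2}}$$ with a composite trapezoid rule on a fine grid and checks the three branches of (5.20).

```python
import numpy as np

gamma, l = 1.0, np.pi                     # illustrative parameter values
alpha = gamma * np.pi / (np.sqrt(2.0) * l**1.5)   # alpha of (5.21)
x = np.linspace(0.0, l, 20001)
dx = x[1] - x[0]

def e(n):    # eigenfunctions (5.12)
    return np.sqrt(2.0 / l) * np.sin(n * np.pi * x / l)

def e_x(n):  # their spatial derivatives
    return np.sqrt(2.0 / l) * (n * np.pi / l) * np.cos(n * np.pi * x / l)

def B(i1, i2, n):
    """B^n_{i1 i2} = <B(e_{i1}, e_{i2}), e_n> = -gamma <e_{i1} (e_{i2})_x, e_n>,
    computed by the composite trapezoid rule."""
    integrand = e(i1) * e_x(i2) * e(n)
    return -gamma * np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dx

# the three branches of (5.20)
assert abs(B(1, 2, 3) + 2 * alpha) < 1e-6              # n = i1 + i2
assert abs(B(2, 1, 1) + alpha) < 1e-6                  # n = |i1 - i2|, sgn = +1
assert abs(B(1, 1, 3)) < 1e-6 and abs(B(1, 2, 5)) < 1e-6   # otherwise zero
```

The last line also illustrates the vanishing of the interactions for n ≥ 5 noted above.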

By using the above nonlinear interaction relations in (3.11), we obtain thus the following expression of $$h^{(1)}_{\lambda}$$:

\begin{aligned} \boxed{h^{(1)}_\lambda(z_1 e_1 + z_2 e_2) = \alpha_1( \lambda) z_1 z_2 e_3 + \alpha_2( \lambda) (z_2)^2 e_4 , \quad(z_1, z_2) \in\mathbb{R}^2,} \end{aligned}
(5.22)

where

\begin{aligned} \alpha_1(\lambda) & := - \frac{ 3 \gamma\pi}{\sqrt{2}l^{3/2} (\beta _1(\lambda) + \beta_2(\lambda) - \beta_3(\lambda))}, \\ \alpha _2(\lambda ) & := - \frac{\sqrt{2} \gamma\pi}{l^{3/2}(2\beta_2(\lambda) - \beta _4(\lambda))}, \end{aligned}
(5.23)

with the $$\beta_{i}(\lambda)$$ as given by (5.11). Note that this set of eigenvalues obeys the (NR)-condition for any λ-value of interest here (i.e., $$\lambda>\lambda_{c}$$). Note also that $$\alpha_{1}(\lambda)<0$$ and $$\alpha_{2}(\lambda)<0$$ for any such λ.
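The coefficients (5.23) can be evaluated directly from (5.11); their denominators are precisely the non-resonant eigenvalue combinations. The sketch below (with illustrative values ν=1, l=π, γ=1 of ours) computes α₁(λ) and α₂(λ) and checks the sign property just noted.

```python
import numpy as np

nu, l, gamma = 1.0, np.pi, 1.0        # illustrative parameter values
beta = lambda n, lam: lam - nu * n**2 * np.pi**2 / l**2   # (5.11)

def pm_coeffs(lam):
    """alpha_1(lam), alpha_2(lam) of (5.23); the denominators d1, d2 are the
    non-resonance combinations beta_1+beta_2-beta_3 and 2*beta_2-beta_4."""
    d1 = beta(1, lam) + beta(2, lam) - beta(3, lam)
    d2 = 2.0 * beta(2, lam) - beta(4, lam)
    a1 = -3.0 * gamma * np.pi / (np.sqrt(2.0) * l**1.5 * d1)
    a2 = -np.sqrt(2.0) * gamma * np.pi / (l**1.5 * d2)
    return a1, a2

lam = 2.0 * nu * np.pi**2 / l**2      # lam > lam_c: one unstable mode
a1, a2 = pm_coeffs(lam)
assert a1 < 0 and a2 < 0              # sign property of alpha_1, alpha_2
```

For λ > λ_c both denominators are strictly positive, so the (NR)-condition holds and both coefficients are negative, as stated in the text.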

Now, by using (5.22), we can rewrite (5.17) into the following explicit form:

$$J_R(z, u_{R}) = \int _0^T \bigl[\mathcal{G}\bigl(z(t)\bigr) + \mathcal{E}\bigl(u_{R}(t)\bigr)\bigr] \,\mathrm{d}t + C_T \bigl(z(T), P_{\mathfrak{c}}Y\bigr),$$
(5.24)

where

\begin{aligned} & \mathcal{G}(z) = \frac{1}{2} \bigl\| z + h^{(1)}_\lambda(z)\bigr\| ^2 = \frac {1}{2} \bigl[(z_1)^2 + (z_2)^2 + \bigl( \alpha_1(\lambda) z_1 z_2 \bigr)^2 + \bigl(\alpha _2(\lambda)z_2^2 \bigr)^2\bigr], \\ & \mathcal{E}(u_{R}) = \frac{\mu_1}{2} \|u_{R} \|^2 = \frac{\mu _1}{2}\bigl[(u_{R,1})^2 + (u_{R,2})^2\bigr], \end{aligned}
(5.25)

and

$$C_T\bigl(z(T), P_{\mathfrak{c}} Y\bigr) := \frac{\mu_2}{2} \sum_{i=1}^2 \bigl|z_i(T) - Y_i\bigr|^2,$$
(5.26)

with $$z_{i}:=\langle z, e_{i}\rangle$$, $$u_{R,i}:=\langle u_{R}, e_{i}\rangle$$, and $$Y_{i}:=\langle Y, e_{i}\rangle$$, i=1,2.

By using furthermore the expression of $$h^{(1)}_{\lambda}$$ given in (5.22) into (5.18), we obtain finally after projection onto $$\mathcal{H}^{\mathfrak{c}}$$, the following analytic formulation of (5.18):

\boxed{ \begin{aligned} & \frac{\mathrm{d}z_1}{\mathrm{d}t} = \beta_1(\lambda) z_1 + \alpha \bigl( z_1z_2 + \alpha_1(\lambda) z_1z_2^2 + \alpha_1(\lambda) \alpha_2(\lambda) z_1 z_2^3 \bigr) + a_{11}u_{R,1}(t) + a_{21}u_{R,2}(t), \\ & \frac{\mathrm{d}z_2}{\mathrm{d}t} = \beta_2(\lambda) z_2 + \alpha \bigl( - z_1^2 + 2 \alpha_1(\lambda) z_1^2z_2 + 2 \alpha_2(\lambda) z_2^3 \bigr) + a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t), \end{aligned} }
(5.27)

where $$\alpha_{1}(\lambda)$$ and $$\alpha_{2}(\lambda)$$ are defined in (5.23), and $$\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}$$.

Note that for any given initial datum $$(z_{1,0}, z_{2,0})$$ and any T>0, the $$h^{(1)}_{\lambda}$$-based reduced system (5.27) admits a unique solution in $$C([0,T]; \mathbb {R}^{2})$$; this is carried out through some simple but specific energy estimates that are provided in Appendix B for the sake of clarity.
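To illustrate, the reduced system (5.27) can be integrated with a few lines of code. The sketch below uses a hand-rolled classical Runge-Kutta scheme on the uncontrolled system (u_R ≡ 0), for illustrative parameter values (ν=1, l=π, γ=1, λ=2λ_c, M=I; none of these are the regimes studied in the paper), and checks that the trajectory stays bounded, consistent with the global well-posedness recalled above.

```python
import numpy as np

# Illustrative parameters (not the regimes used in the paper's experiments)
nu, l, gam, lam = 1.0, np.pi, 1.0, 2.0
beta = lambda n: lam - nu * n**2 * np.pi**2 / l**2             # (5.11)
alpha = gam * np.pi / (np.sqrt(2.0) * l**1.5)                  # (5.21)
a1 = -3.0 * gam * np.pi / (np.sqrt(2.0) * l**1.5 * (beta(1) + beta(2) - beta(3)))
a2 = -np.sqrt(2.0) * gam * np.pi / (l**1.5 * (2.0 * beta(2) - beta(4)))  # (5.23)

def f(z, uR=np.zeros(2), M=np.eye(2)):
    """Right-hand side of the reduced system (5.27)."""
    z1, z2 = z
    return np.array([
        beta(1)*z1 + alpha*(z1*z2 + a1*z1*z2**2 + a1*a2*z1*z2**3)
        + M[0, 0]*uR[0] + M[1, 0]*uR[1],
        beta(2)*z2 + alpha*(-z1**2 + 2*a1*z1**2*z2 + 2*a2*z2**3)
        + M[0, 1]*uR[0] + M[1, 1]*uR[1]])

def rk4(z, T=10.0, dt=1e-3):
    """Classical 4th-order Runge-Kutta integration of the uncontrolled dynamics."""
    for _ in range(int(T / dt)):
        k1 = f(z); k2 = f(z + 0.5*dt*k1); k3 = f(z + 0.5*dt*k2); k4 = f(z + dt*k3)
        z = z + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return z

zT = rk4(np.array([0.1, 0.0]))
assert np.all(np.isfinite(zT))   # trajectory remains bounded on [0, T]
```

The unstable first mode is saturated by the cubic and quartic terms inherited from the PM parameterization, so the uncontrolled trajectory settles rather than blowing up for this parameter choice.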

### Synthesis of Suboptimal Controllers by a Pontryagin-Maximum-Principle Approach

The analytic form (5.27) of the $$h^{(1)}_{\lambda}$$-based reduced system (5.18) allows for the use of standard techniques from finite-dimensional optimal control theory to solve the related reduced optimal control problem (5.19) [18, 23, 67, 68, 95]. We follow below an indirect approach relying on the Pontryagin maximum principle (PMP); see e.g. [18, 20, 67, 68, 88, 95]. Usually, the Pontryagin maximum principle allows one to identify a set of necessary conditions to be satisfied by an optimal solution. However, as we will see, due to the particular form of the cost functionals considered here and the nature of the reduced control system (5.27), these conditions turn out to be sufficient to ensure the existence of a (unique) optimal control for the reduced problem. Relying on a PMP approach also provides theoretical insights into the reduced optimal control problem (5.19), through the (costate-based) explicit formula of the (reduced) optimal controller reachable by such an approach; see (5.32) and Lemmas 3 and 4 below.

In that perspective, let us denote the $$h^{(1)}_{\lambda}$$-based reduced vector field involved in (5.27), by

$$f(z, u_{R}):= \bigl(f_1(z,u_{R}), f_2(z, u_{R})\bigr)^{\mathrm{tr}}.$$

We introduce now the following Hamiltonian associated with the reduced optimal control problem (5.19):

$$H(z, p, u_{R}) := \mathcal{G}(z) + \mathcal{E}(u_{R}) + p_1 f_1(z, u_{R}) + p_2 f_2(z, u_{R}),$$
(5.28)

where $$p:=(p_1,p_2)^{\mathrm{tr}}$$ is the costate (or adjoint state) associated with the state $$z=(z_1,z_2)^{\mathrm{tr}}$$.

It follows from the Pontryagin maximum principle that for a given pair

$$\bigl(z_R^\ast, u_{R}^\ast\bigr) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \times L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr)$$

to be optimal for the reduced problem (5.19), it must satisfy the following constrained Hamiltonian system:

\begin{aligned} & \left . \begin{array}{l} \frac{\mathrm{d}z^\ast_{R}}{\mathrm{d}t} = \nabla _{p}H(z^\ast_{R}, p^\ast_{R}, u^\ast_{R}) = f(z^\ast_{R}, u_{R}^\ast),\\ \frac{\mathrm{d}p^\ast_{R}}{\mathrm{d}t} = - \nabla _{z}H(z^\ast_{R}, p^\ast_{R}, u^\ast_{R})= g(z^\ast_{R}, p^\ast_{R}), \end{array} \right \} \quad \bigl(\text{Hamiltonian system for }\bigl(z^\ast_R, p^\ast_R\bigr)\bigr) \end{aligned}
(5.29a)
\begin{aligned} & \nabla_{u_R} H\bigl(z^\ast_{R}, p^\ast_{R}, u^\ast_{R}\bigr) = 0, \quad (\text{1st-order optimality condition}) \end{aligned}
(5.29b)
\begin{aligned} & p_{R}^\ast(T) = \nabla_z C_T \bigl(z_R^\ast(T), P_{\mathfrak{c}}Y\bigr), \quad (\text{terminal condition}) \end{aligned}
(5.29c)

where $$\nabla_x$$ stands for the gradient operator along the $$x$$-direction, $$p_{R}^{\ast}= p_{R,1}^{\ast}e_{1} + p_{R,2}^{\ast}e_{2}$$ is the costate associated with $$z_{R}^{\ast}$$, and the vector field $$g=(g_1,g_2)^{\mathrm{tr}}$$ has the following expression:

\begin{aligned} g_1(z, p) & := - z_1 - \beta_1(\lambda) p_1 - \alpha p_1 z_2 + 2 \alpha p_2 z_1 - \alpha\alpha_1(\lambda) p_1 (z_2)^2 \\ &\quad{} - 4 \alpha\alpha_1(\lambda) p_2 z_1 z_2 - \bigl(\alpha_1(\lambda)\bigr)^2 z_1 (z_2)^2 - \alpha\alpha_1(\lambda) \alpha_2(\lambda) p_1 (z_2)^3, \\ g_2(z, p) & := - z_2 - \beta_2(\lambda) p_2 - \alpha p_1 z_1 - 2 \alpha\alpha_1(\lambda) p_1 z_1 z_2 - 2 \alpha\alpha_1(\lambda) p_2 (z_1)^2 \\ & \quad{} - 6 \alpha\alpha_2(\lambda) p_2 (z_2)^2 - \bigl(\alpha_1(\lambda)\bigr)^2 (z_1)^2 z_2 \\ & \quad{} - 3 \alpha\alpha_1(\lambda) \alpha_2(\lambda) p_1 z_1 (z_2)^2 - 2 \bigl(\alpha_2(\lambda)\bigr)^2 (z_2)^3. \end{aligned}
(5.30)

Note also that

$$\nabla_{u_R} H\bigl(z^\ast_{R}, p^\ast_{R}, u^\ast_{R}\bigr) = \bigl(\mu_1 u_{R,1}^\ast + a_{11} p_{R,1}^\ast + a_{12} p_{R,2}^\ast,\ \mu_1 u_{R,2}^\ast + a_{21} p_{R,1}^\ast + a_{22} p_{R,2}^\ast \bigr)^{\mathrm{tr}}.$$

The first-order optimality condition (5.29b) then reduces to

$$\bigl(u_{R,1}^\ast, u_{R,2}^\ast\bigr) = - \biggl( \frac{a_{11} p_{R,1}^\ast+ a_{12} p_{R,2}^\ast}{\mu_1}, \frac{a_{21} p_{R,1}^\ast+ a_{22} p_{R,2}^\ast}{\mu_1} \biggr),$$
(5.31)

which, written in compact form, gives

$$\boxed{u_R^\ast= - \frac{1}{\mu_1} M p_{R}^\ast,}$$
(5.32)

where M is the matrix introduced in (5.16).
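As a quick sanity check, the compact formula (5.32) can be verified numerically. The following minimal Python sketch uses illustrative values for the matrix M of (5.16) and the costate p; it only checks that (5.32) annihilates the gradient (5.29b):

```python
import numpy as np

# Sanity check of (5.31)-(5.32): the costate-based controller annihilates
# the gradient of H with respect to u_R. M plays the role of the matrix
# (a_ij) from (5.16); all numerical values are illustrative.
mu1 = 1.0
M = np.array([[0.6, -0.8],
              [0.8,  0.6]])
p = np.array([0.4, -1.2])          # costate (p_{R,1}, p_{R,2})
u = -(M @ p) / mu1                 # compact formula (5.32)
grad_H = mu1 * u + M @ p           # gradient (5.29b), component-wise
```

Here `grad_H` vanishes identically, as required by the first-order optimality condition.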

Thanks to the relation (5.31) between $$u_{R}^{\ast}$$ and the costate $$p_{R}^{\ast}$$, we get

\begin{aligned} a_{11} u_{R,1}^\ast + a_{21} u_{R,2}^\ast & = - \frac{1}{\mu_1} \bigl( (a_{11})^2 + (a_{21})^2 \bigr) p_{R,1}^\ast - \frac{1}{\mu_1} (a_{11}a_{12} + a_{21}a_{22}) p_{R,2}^\ast \\ & =: f_3\bigl(p_{R,1}^\ast,p_{R,2}^\ast \bigr), \\ a_{12} u_{R,1}^\ast + a_{22} u_{R,2}^\ast & = - \frac{1}{\mu_1} (a_{11}a_{12} + a_{21}a_{22}) p_{R,1}^\ast - \frac{1}{\mu_1} \bigl( (a_{12})^2 + (a_{22})^2 \bigr) p_{R,2}^\ast \\ & =: f_4\bigl(p_{R,1}^\ast,p_{R,2}^\ast \bigr). \end{aligned}
(5.33)

Finally, the terminal condition (5.29c) leads to

$$p_{R,i}^\ast(T) = \mu_2 \bigl(z_{R,i}^\ast(T) - Y_i\bigr), \quad i = 1, 2.$$
(5.34)

By using the above relations, we can reformulate the set of necessary conditions (5.29a)–(5.29c) as the following boundary-value problem (BVP) to be satisfied by $$z_{R}^{\ast}$$ and $$p_{R}^{\ast}$$:

\begin{aligned} & \frac{\mathrm{d}z_1}{\mathrm{d}t} = \beta_1(\lambda) z_1 + \alpha z_1 z_2 + \alpha \alpha_1(\lambda) z_1 (z_2)^2 + \alpha\alpha_1(\lambda) \alpha _2(\lambda) z_1 (z_2)^3 + f_3(p_1, p_2), \\ & \frac{\mathrm{d}z_2}{\mathrm{d}t} = \beta_2(\lambda) z_2 - \alpha(z_1)^2 + 2 \alpha\alpha_1(\lambda) (z_1)^2z_2 + 2 \alpha\alpha_2( \lambda) (z_2)^3 + f_4(p_1, p_2), \\ & \frac{\mathrm{d}p_1}{\mathrm{d}t} = g_1(z, p), \\ & \frac{\mathrm{d}p_2}{\mathrm{d}t} = g_2(z, p), \end{aligned}
(5.35)

subject to the boundary conditions

\begin{aligned} &z_1(0) = \langle y_0, e_1 \rangle, \qquad z_2(0) = \langle y_0, e_2 \rangle, \\ &p_1(T) = \mu_2 \bigl(z_{1}(T) - Y_1\bigr), \qquad p_2(T) = \mu_2 \bigl(z_2(T) - Y_2\bigr), \end{aligned}
(5.36)

where f 3 and f 4 are given by (5.33), and g 1(z,p) and g 2(z,p) are given by (5.30).

Once this BVP is solved, the corresponding controller $$u_{R}^{\ast}$$ determined by (5.32) then constitutes a natural candidate to solve the $$h^{(1)}_{\lambda}$$-based reduced optimal control problem (5.19). For the problem at hand, since the cost functional (5.17) is quadratic in u R and the dependence on the controller is affine for the system of Eqs. (5.27), it is known that the controller $$u_{R}^{\ast}$$ so obtained is actually the unique optimal controller of the reduced problem (5.19); see e.g. [67, Sect. 5.3]. This observation also holds for the other reduced optimal control problems derived in later sections.

It is worth mentioning that the solution of the above BVP depends on the coefficient matrix M defined in (5.16), associated with the linear operator $$\mathfrak{C}$$, through the expressions of f 3 and f 4 given in (5.33). However, due to the specific form of f 3 and f 4, different choices of M can lead to the same solution of the BVP. More precisely, the solutions of (5.35)–(5.36) remain unchanged as long as M stays in the group O(2) of 2×2 orthogonal matrices. The following lemma summarizes this result.

### Lemma 3

The solution of (5.35)–(5.36) is the same for any $$M \in O(2)$$.

### Proof

The result follows trivially by noting that for any $$M \in O(2)$$, it holds that $$M^{\mathrm{tr}} M = I$$. In particular, the following basic identities hold:

$$(a_{11})^2 + (a_{21})^2 = (a_{12})^2 + (a_{22})^2 = 1, \qquad a_{11}a_{12} + a_{21}a_{22} = 0.$$

By using the above identities in (5.33), we obtain for any $$M \in O(2)$$ that

$$f_3\bigl(p_{R,1}^\ast,p_{R,2}^\ast \bigr) = - \frac{1}{\mu_1}p_{R,1}^\ast, \qquad f_4\bigl(p_{R,1}^\ast,p_{R,2}^\ast \bigr) = - \frac{1}{\mu_1}p_{R,2}^\ast,$$

which are independent of M. The desired result follows. □
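The identity underlying this proof is easy to verify numerically. A minimal Python sketch, with an arbitrary rotation standing in for $$M \in O(2)$$ and illustrative values for the costate:

```python
import numpy as np

# Numerical check of the identities behind Lemma 3: for any M in O(2),
# M^tr M = I, so f3 and f4 from (5.33) collapse to -p1/mu1 and -p2/mu1.
mu1 = 1.0
theta = 0.7
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, hence M in O(2)
p = np.array([0.4, -1.2])                          # illustrative costate
f34 = -(M.T @ M @ p) / mu1                         # (f3, f4)^tr written via M^tr M
```

Since `M.T @ M` is the identity, `f34` equals `-p / mu1` regardless of `theta`.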

In connection with the above lemma, let us finally make the following basic observation, which will be of some interest in the numerical experiments.

### Lemma 4

For any two bounded linear operators $$\mathfrak{C}_{i}: \mathcal{H} \rightarrow\mathcal{H}$$ (i=1,2) that leave invariant the subspaces $$\mathcal{H}^{\mathfrak{c}}$$ and $$\mathcal{H}^{\mathfrak{s}}$$ and whose actions on the low modes differ only by an orthogonal transformation, i.e.,

$$\mathfrak{C}_i \mathcal{H}^\mathfrak{c}\subset\mathcal {H}^\mathfrak{c}, \qquad\mathfrak {C}_i \mathcal{H}^\mathfrak{s} \subset\mathcal{H}^\mathfrak{s}, \qquad P_{\mathfrak{c}} \mathfrak {C}_1 = M P_{\mathfrak{c}} \mathfrak{C}_2 \quad\mathit{with}\ M \in O(2),$$

then the optimal pairs $$(z_{R}^{\ast}, u_{R}^{\ast})$$ and $$(\overline {z}_{R}^{\ast}, \overline{u}_{R}^{\ast})$$, corresponding to the reduced optimal control problem (5.19) with $$\mathfrak{C}$$ in (5.18) taken to be $$\mathfrak{C}_{1}$$ and $$\mathfrak{C}_{2}$$ respectively, satisfy the following relation:

$$z_R^\ast= \overline{z}_R^\ast, \qquad u_R^\ast= M^{-1} \overline {u}_R^\ast, \qquad J_R \bigl(z_R^\ast, u_R^\ast\bigr) = J_R\bigl(\overline {z}_R^\ast, \overline{u}_R^\ast\bigr).$$

If we assume furthermore that $$P_{\mathfrak{s}} \mathfrak{C}_{1}= P_{\mathfrak{s}} \mathfrak {C}_{2}$$, then analogous results hold for the original optimal control problem (5.9).

### Remark 2

The above result is not limited to the two-dimensional case of $$\mathcal{H}^{\mathfrak{c}}$$ given by (5.13); it generalizes to any dimension m, as long as $$\mathcal{H}^{\mathfrak{c}}$$ is spanned by the first m eigenmodes and M lives in O(m).

### Suboptimal Pair $$(y_{R}^{\ast},u_{R}^{\ast})$$ to (5.9) Based on $$h^{(1)}_{\lambda}$$: Numerical Aspects

Having clarified in the previous section the method used to solve the reduced optimal control problem (5.19), we now turn to the practical aspects of synthesizing an $$h^{(1)}_{\lambda}$$-based suboptimal pair $$(y_{R}^{\ast},u_{R}^{\ast})$$ for the optimal control problem (5.9) associated with the Burgers-type equation (5.1). This synthesis is organized in two steps. First, the BVP (5.35)–(5.36) is solved to get the $$h^{(1)}_{\lambda}$$-based suboptimal controller $$u_{R}^{\ast}$$ according to the costate-based explicit expression (5.32). Second, this suboptimal controller is used in (5.1) to get the suboptimal trajectory $$y_{R}^{\ast}$$ driven by $$\mathfrak{C} u_{R}^{\ast}$$. We explain below how these steps are carried out numerically.

Recall that the uncontrolled Burgers-type equation admits two locally stable steady states $$y^{\pm}$$ (emerging from a pitchfork bifurcation) when λ is above the critical value $$\lambda_{c} = \frac{\nu\pi^{2}}{l^{2}}$$ at which the leading eigenmode $$e_1$$ loses its linear stability. In the experiments below we take $$y^{+}$$ as the initial datum $$y_0$$; the target Y is specified in Sect. 5.5.

Shooting and collocation methods are commonly used to solve two-point boundary value problems [5, 19, 23, 65, 91]. A convenient collocation code is the Matlab built-in solver bvp4c.m, which is used to solve the aforementioned BVP (5.35)–(5.36) as well as the other BVPs encountered in later sections.
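For readers working outside Matlab, the same BVP can be treated with SciPy's collocation solver `solve_bvp`, which plays the role of bvp4c. The sketch below assembles (5.35)–(5.36) with $$M \in O(2)$$, so that $$f_3 = -p_1/\mu_1$$ and $$f_4 = -p_2/\mu_1$$ by Lemma 3; all parameter values, including the placeholders `alpha1` and `alpha2` for $$\alpha_1(\lambda)$$ and $$\alpha_2(\lambda)$$ of (5.23), are illustrative:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters (not the values used in the article's figures).
beta1, beta2 = 0.5, -1.0
alpha, alpha1, alpha2 = 1.0, -0.2, -0.1
mu1, mu2, T = 1.0, 1.0, 3.0
z0 = np.array([0.4, 0.3])        # <y0, e1>, <y0, e2>
Y = np.array([0.1, -0.2])        # low-mode target components

def rhs(t, w):
    """Right-hand side of the BVP (5.35): states (z1, z2), costates (p1, p2)."""
    z1, z2, p1, p2 = w
    dz1 = beta1*z1 + alpha*(z1*z2 + alpha1*z1*z2**2
                            + alpha1*alpha2*z1*z2**3) - p1/mu1
    dz2 = beta2*z2 + alpha*(-z1**2 + 2*alpha1*z1**2*z2
                            + 2*alpha2*z2**3) - p2/mu1
    # Costate dynamics g1, g2 of (5.30)
    dp1 = (-z1 - beta1*p1 - alpha*p1*z2 + 2*alpha*p2*z1
           - alpha*alpha1*p1*z2**2 - 4*alpha*alpha1*p2*z1*z2
           - alpha1**2*z1*z2**2 - alpha*alpha1*alpha2*p1*z2**3)
    dp2 = (-z2 - beta2*p2 - alpha*p1*z1 - 2*alpha*alpha1*p1*z1*z2
           - 2*alpha*alpha1*p2*z1**2 - 6*alpha*alpha2*p2*z2**2
           - alpha1**2*z1**2*z2 - 3*alpha*alpha1*alpha2*p1*z1*z2**2
           - 2*alpha2**2*z2**3)
    return np.vstack([dz1, dz2, dp1, dp2])

def bc(wa, wb):
    """Boundary conditions (5.36): z fixed at t=0, p terminal via (5.34)."""
    return np.array([wa[0] - z0[0], wa[1] - z0[1],
                     wb[2] - mu2*(wb[0] - Y[0]),
                     wb[3] - mu2*(wb[1] - Y[1])])

t = np.linspace(0.0, T, 201)
sol = solve_bvp(rhs, bc, t, np.zeros((4, t.size)), tol=1e-6, max_nodes=20000)
u_star = -sol.sol(t)[2:] / mu1   # suboptimal controller via (5.32), M = I
```

Like bvp4c, `solve_bvp` adaptively refines its mesh, so the returned controller generally lives on a non-uniform grid.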

The simulation of the Burgers equation (5.1) as driven by the 2D suboptimal controller $$u_{R}^{\ast}$$ is then performed by means of a semi-implicit Euler scheme where, at each time step, the nonlinear term $$y y_x = \frac{1}{2}(y^2)_x$$ and the controller $$u_{R}^{\ast}(x,t)$$ are treated explicitly, while the linear term is treated implicitly. The Laplacian operator is discretized using a standard second-order central difference approximation. The resulting semi-implicit scheme reads as follows:

$$y_{j}^{n+1} - y_{j}^{n} = \biggl( \nu\varDelta_d y_{j}^{n+1} + \lambda y_{j}^{n+1} - \frac{\gamma }{2} \nabla_d \bigl(\bigl(y_{j}^{n} \bigr)^2 \bigr) + u^{R,n}_j \biggr)\delta t, \quad j \in\{1, \ldots, N_x - 1\},$$
(5.37)

where $$y_{j}^{n}$$ denotes the discrete approximation of $$y(j\delta x, n\delta t)$$; $$u^{R,n}_{j}$$, the discrete approximation of $$u_{R}^{\ast}(j\delta x, n\delta t)$$; $$\delta x$$, the mesh size of the spatial discretization; $$\delta t$$, the time step; while $$\varDelta_d$$ and $$\nabla_d$$ denote the discrete Laplacian and the discrete first-order derivative, given respectively by

$$\varDelta_d y_{j}^{n}= \frac{y_{j-1}^{n} - 2y_{j}^{n} + y_{j+1}^{n}}{(\delta x)^2}; \qquad\nabla_d \bigl( \bigl(y_j^{n} \bigr)^2 \bigr) = \frac{ (y_{j+1}^{n})^2 - (y_j^{n})^2 }{\delta x}, \quad j \in\{1, \ldots, N_x - 1\}.$$

The Dirichlet boundary condition (5.2) becomes

$$y_0^n=y_{N_x}^n=0,$$

where $$N_x+1$$ is the number of grid points used for the discretization of the spatial domain [0,l].

The time-dependent $$(N_x-1)$$-dimensional vector solution to (5.37) is denoted by $$\mathbf{Y}^n$$, and is intended to be an approximation of the suboptimal trajectory $$y_{R}^{\ast}$$ at time $$t=n\delta t$$. Let us also denote by $$\mathbf{U}^n$$ the spatial discretization of $$u_{R}^{\ast}(x,n\delta t)$$ for $$x\in[\delta x, l-\delta x]$$, given by

$$\mathbf{U}^n := \bigl( u_{R}^\ast(\delta x,n \delta t), \ldots, u_{R}^\ast \bigl((N_x-1) \delta x,n \delta t\bigr) \bigr)^{\mathrm{tr}}.$$

Then after rearranging the terms, Eq. (5.37) can be rewritten into the following algebraic system:

$$\bigl( (1- \lambda\delta t) \mathbf{I} - \nu\delta t \mathbf{A} \bigr)\mathbf{Y}^{n+1} = \mathbf{Y}^{n} - \frac{\gamma}{2} \delta t \mathbf {B}\bigl[\mathbf{S}\bigl( \mathbf{Y}^{n}\bigr)\bigr] + \delta t \mathbf{U}^{n},$$
(5.38)

where $$\mathbf{I}$$ is the $$(N_x-1)\times(N_x-1)$$ identity matrix, $$\mathbf{A}$$ is the tridiagonal matrix associated with the discrete Laplacian $$\varDelta_d$$, $$\mathbf{B}$$ is the matrix associated with the discrete spatial derivative $$\nabla_d$$, and $$\mathbf{S}(\mathbf{Y}^{n})$$ denotes the vector whose entries are the squares of the corresponding entries of $$\mathbf{Y}^{n}$$.

Since the eigenvalues of $$\mathbf{A}$$ are given by $$\frac{2}{(\delta x)^{2}} ( \cos( \frac{j \pi\delta x}{l}) - 1 )$$ ($$j=1,\ldots,N_x-1$$) and the corresponding eigenvectors are the discretized versions of the first $$N_x-1$$ sine modes $$e_{1}, \ldots, e_{N_{x}-1}$$ given in (5.12), the eigenvalues of the matrix $$\mathbf{M}:=(1-\lambda\delta t)\mathbf{I}-\nu\delta t \mathbf{A}$$ on the LHS of (5.38) are obtained easily, and the corresponding eigenvectors are still the discretized sine functions. At each time step, the algebraic system (5.38) can thus be solved efficiently using the discrete sine transform: we first compute the discrete sine transform of the RHS, divide the elements of the transformed vector by the eigenvalues of $$\mathbf{M}$$, and apply the inverse discrete sine transform to find $$\mathbf{Y}^{n+1}$$; see e.g. [42, Sect. 3.2] for more details. In the numerical results that follow, the discrete sine transform has been handled by using the Matlab built-in function dst.m.
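In Python, one step of this DST-based solve can be sketched with SciPy's `dst`/`idst` (type I, whose basis vectors are exactly the discretized sine modes). Parameter values follow the simulations reported below; the initial profile and the forcing vector `u` are illustrative placeholders:

```python
import numpy as np
from scipy.fft import dst, idst

# One semi-implicit Euler step (5.38) for the discretized Burgers-type
# equation, solved in O(Nx log Nx) via the discrete sine transform.
nu, lam, gamma, l = 1.0, 1.78, 2.5, 1.3*np.pi
Nx, dt = 251, 1e-3
dx = l / Nx
j = np.arange(1, Nx)                               # interior grid indices
eig_A = 2.0/dx**2 * (np.cos(j*np.pi/Nx) - 1.0)     # eigenvalues of Delta_d
eig_M = (1.0 - lam*dt) - nu*dt*eig_A               # eigenvalues of the LHS matrix

def step(y, u):
    """Advance the interior values y (length Nx-1) by one time step."""
    y_pad = np.concatenate(([0.0], y, [0.0]))      # Dirichlet boundaries (5.2)
    # Explicit nonlinearity: grad_d(y^2) as in the scheme above
    nonlin = (y_pad[2:]**2 - y_pad[1:-1]**2) / dx
    rhs = y - 0.5*gamma*dt*nonlin + dt*u
    # Diagonalize the LHS with DST-I (its basis vectors are the sine modes)
    return idst(dst(rhs, type=1, norm='ortho') / eig_M, type=1, norm='ortho')

y = np.sin(np.pi*j*dx/l)        # illustrative initial interior profile
u = np.zeros(Nx - 1)            # placeholder for the discretized controller
for _ in range(100):
    y = step(y, u)
```

With `norm='ortho'`, `idst` is the exact inverse of `dst`, so no extra scaling factor is needed when dividing by the eigenvalues of $$\mathbf{M}$$.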

Finally, it is worth mentioning that we used a uniform time mesh for the integration of the PDE, whereas $$u_{R}^{\ast}$$ is defined on a non-uniform mesh due to the adaptive mesh feature of the bvp4c solver. This discrepancy is resolved by linear interpolation of $$u_{R}^{\ast}$$ onto the uniform mesh used in the PDE scheme.
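This resampling step amounts to a one-line interpolation; a minimal sketch with illustrative meshes and controller samples:

```python
import numpy as np

# Resample the controller from the solver's adaptive mesh onto the
# uniform PDE time grid by linear interpolation (illustrative arrays).
t_bvp = np.array([0.0, 0.05, 0.2, 0.5, 1.0])   # adaptive BVP mesh
u_bvp = np.array([1.0, 0.9, 0.6, 0.3, 0.0])    # controller samples on it
t_pde = np.arange(0.0, 1.0 + 1e-12, 0.25)      # uniform PDE time mesh
u_pde = np.interp(t_pde, t_bvp, u_bvp)         # values on the uniform mesh
```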

For the sake of comparison, the synthesis of a suboptimal controller based on a two-mode Galerkin approximation has been carried out following the same steps and the same numerical treatment described above. The corresponding suboptimal controller $$u_{G}^{\ast}$$ associated with the 2D Galerkin-based reduced optimal problem (A.5) is also obtained via a PMP approach which leads to solving a BVP described in Appendix A.1; see (A.7). The same procedure is applied to higher-dimensional Galerkin-based reduced optimal control problems (A.10) derived in Appendix A.2.

### 2D-Suboptimal Controller Synthesis Based on $$h^{(1)}_{\lambda}$$, and Control Performances: Numerical Results

We assess in this section the control performances achieved by the $$h^{(1)}_{\lambda}$$-based suboptimal pair $$(y_{R}^{\ast},u_{R}^{\ast})$$ of the optimal control problem (5.9) such as synthesized according to the procedure described above. These performances are compared with those achieved by a suboptimal solution computed from the 2D Galerkin-based reduced optimal control problem (A.5). In that respect, the cost (5.4) evaluated at the suboptimal pair $$(y(\cdot; y_{0}, u_{R}^{\ast}), u_{R}^{\ast})$$ will be compared with the cost evaluated at the suboptimal pair $$(y(\cdot; y_{0}, u_{G}^{\ast}), u_{G}^{\ast})$$, where $$u_{G}^{\ast}$$ is the suboptimal controller synthesized from (A.5).

We also set the coefficient μ 2 weighting the terminal payoff part of the cost functional (5.4) sufficiently large so that comparing the solution profile of (5.37) at the final time T—driven by the corresponding synthesized controller—with the prescribed target profile Y provides a way to visualize the performance of the synthesized suboptimal controller.

The simulations reported below are performed for δt=0.001 and $$N_x=251$$ with l=1.3π, so that δx≈0.02. The system parameters are taken to be ν=1, γ=2.5, and $$\lambda=3\lambda_c \approx 1.78$$. The parameters $$\mu_1$$ and $$\mu_2$$ in the cost functional (5.4) are taken to be $$\mu_1=1$$ and $$\mu_2=20$$. For all the simulations conducted in this article, the relative tolerance of bvp4c has been set to 10−8 and the BVP mesh size parameter has been set to 1.6×104. The linear operator $$\mathfrak{C}: \mathcal{H} \rightarrow\mathcal{H}$$ is taken to be the identity mapping for the sake of simplicity. According to Lemma 4, any operator $$\mathfrak{C}$$ such that $$P_{\mathfrak{c}} \mathfrak{C} \in O(2)$$ and $$P_{\mathfrak{s}}\mathfrak{C} = \mathrm{Id}_{\mathcal {H}^{\mathfrak{s}}}$$ can be reduced to this case.

The numerical results at the final time T=3 are reported in Fig. 1. The left panel of this figure presents, for this final time, the solution profiles of (5.37) as driven by $$u_{R}^{\ast}$$ and $$u_{G}^{\ast}$$, respectively. For these simulations, the target profile has been chosen to be

$$Y = -0.1\bigl\langle y^-, e_1\bigr\rangle e_1 + 1.6 \bigl\langle y^-, e_2 \bigr\rangle e_2.$$
(5.39)

The right panel of Fig. 1 shows the two components of the synthesized suboptimal controllers $$u_{R}^{\ast}$$ and $$u_{G}^{\ast}$$.

As can be observed, the (approximate) PDE final state $$y(T; u_{R} ^{\ast})$$ associated with the controller $$u_{R}^{\ast}$$ captures the main qualitative features of the target, while $$y(T; u_{G}^{\ast})$$ associated with the controller $$u_{G}^{\ast}$$ fails in this task. At a more quantitative level, the relative $$L^2$$-errors between the respective driven PDE final states and the target Y are given by

$$\frac{\|y(T; y_0, u_R^\ast) - Y\|}{\|Y\|} = 22.81~\%, \quad\mbox{and}\quad \frac {\|y(T; y_0, u_G^\ast) - Y\|}{\|Y\|} = 76.28~\%.$$

This discrepancy in the control performance, as revealed by the above relative $$L^2$$-errors, comes with a noticeable discrepancy between the respective numerical values of the cost, namely

$$J\bigl(y\bigl(\cdot; y_0, u_R^\ast\bigr), u_R^\ast\bigr) = 9.75, \quad\mbox{and}\quad J\bigl(y\bigl(\cdot; y_0, u_G^\ast\bigr), u_G^\ast \bigr) = 30.77.$$

These preliminary results clearly indicate that, given a decomposition $$\mathcal{H}^{\mathfrak{c}}\oplus\mathcal{H}^{\mathfrak{s}}$$ of $$\mathcal{H}$$, the slaving relationships between the $$\mathcal{H}^{\mathfrak{s}}$$-modes and the $$\mathcal{H}^{\mathfrak{c}}$$-modes, as parameterized by $$h^{(1)}_{\lambda}$$, help improve the control performance over suboptimal solutions synthesized from a reduced system involving only the (partial) interactions between the $$\mathcal{H}^{\mathfrak{c}}$$-modes, as modeled by a low-dimensional Galerkin approximation.

To better assess the control performance achieved by the $$h^{(1)}_{\lambda}$$-based suboptimal pair $$(y_{R}^{\ast},u_{R}^{\ast})$$, we compared it with the performance achieved by a (suboptimal) solution to (5.9) based on a high-dimensional Galerkin approximation of (5.1). In that respect, we checked that the cost associated with a suboptimal pair $$(y(\cdot; y_{0}, \widetilde{u}_{G}^{\ast}), \widetilde {u}_{G}^{\ast})$$, where $$\widetilde{u}_{G}^{\ast}$$ is a controller synthesized by solving the BVP (A.13a)–(A.13c) associated with an m-dimensional Galerkin-based reduced optimal problem (A.10), can serve as a good estimate of the cost associated with the (genuine) optimal solution to the problem (5.9), provided that m is sufficiently large. We indeed observed that increasing the dimension beyond m=16 does not result in a significant change of the cost value (up to six significant digits); we thus retained the results obtained for m=16 as the reference approximation of the optimal solution to (5.9). For m=16, the corresponding values of the cost (5.4) and the relative $$L^2$$-error for the final-time solution profile are given by

$$J\bigl(y\bigl(\cdot; y_0, \widetilde{u}_G^\ast \bigr), \widetilde{u}_G^\ast\bigr) = 8.41, \quad\mbox{and}\quad \frac{\|y(T; y_0, \widetilde{u}_G^\ast) - Y\|}{\|Y\|} = 13.75~\%.$$

When compared with those obtained for the two-dimensional $$h^{(1)}_{\lambda}$$-based reduced problem (5.19), these values indicate that the two-dimensional controller $$u_{R}^{\ast}$$ already provides fairly good control performance at a much lower computational expense.

On the other hand, the quantitative discrepancy observed on the cost values and relative $$L^2$$-errors between the results based on (5.19) and those for the original optimal control problem (as indicated by the results based on the high-dimensional Galerkin reduced problem) can be attributed to two main factors, according to the theoretical results of Sect. 4; see Corollary 2 and in particular the error estimate (4.10). The first factor relates to the parameterization defect associated with the finite-horizon PM used here, namely $$h^{(1)}_{\lambda}$$; the second concerns the energy kept in the high modes of the solution, driven either by the suboptimal controller $$u_{R}^{\ast}$$ or by the optimal controller u itself.

For the remaining part of this section, we report detailed numerical results which further emphasize the practical relevance of the aforementioned theoretical results provided by Corollary 2. The numerical results shown in Figs. 2 and 3 are obtained by varying the final time T in the range [0.1,5] while keeping the other parameters the same as used in Fig. 1.

Panel (a) of Fig. 2 shows, as T is varied, the cost values associated with the suboptimal pairs $$(y_{R}^{\ast},u_{R}^{\ast})$$ on one hand, and with the suboptimal pairs $$(\widetilde{y}_{G}^{\ast}, \widetilde{u}_{G}^{\ast})$$ on the other. As one can observe, up to T=3 the suboptimal controllers $$u_{R}^{\ast}$$ synthesized from the $$h^{(1)}_{\lambda}$$-based reduced problem (5.19) give access to suboptimal solutions whose cost values are close to those achieved by the optimal ones. This good performance, however, starts to deteriorate noticeably as T increases beyond T=3.

The reasons for this deterioration are actually instructive, as we explain now. If the error estimate (4.10) is meaningful, analyzing its main constitutive elements should help us understand what causes this deterioration. In that respect, we computed (i) the parameterization defects associated with $$h^{(1)}_{\lambda}$$ and a given suboptimal controller $$u_{R}^{\ast}$$, and (ii) the energy contained in the high modes of the PDE solution, driven either by the suboptimal controller $$u_{R}^{\ast}$$ (leading to the suboptimal trajectory $$y_{R}^{\ast}$$) or by the (sub)optimal controller $$\widetilde{u}_{G}^{*}$$ (leading to the (sub)optimal trajectory $$\widetilde{y}_{G}^{\ast}$$).

As a first result, panels (b)–(f) of Fig. 2 show that $$h^{(1)}_{\lambda}$$ provides a finite-horizon PM for the whole range of T analyzed here. The parameterization defect of $$h^{(1)}_{\lambda}$$ is furthermore robust with respect to variations of T, reaching a (nearly) constant value of about 0.57 for T≥1. At the same time, a substantial growth of the energy contained in the high modes of the suboptimal trajectories $$y_{R}^{\ast}$$ (i.e. $$\|P_{\mathfrak{s}} y_{R}^{\ast}(t)\| _{H^{1}(0,l)}$$) is observed from T=3 to T=5, while $$\|P_{\mathfrak{s}} \widetilde{y}_{G}^{\ast}(t)\|_{H^{1}(0,l)}$$ does not change significantly; see Fig. 3. A closer look at the numbers reveals that

\begin{aligned} \left . \begin{array}{l@{\quad}l} Q(T, y_0; u_R^\ast) = 0.57, &\|P_{\mathfrak{s}} y_R^\ast\|_{L^2(0,T; H^1(0,l))} = 2.26, \\ Q(T, y_0; \widetilde{u}_G^\ast) = 0.63, &\|P_{\mathfrak{s}} \widetilde {y}_G^\ast\|_{L^2(0,T; H^1(0,l))} = 2.15, \\ \end{array} \right \} \quad\text{for } T = 3, \\ \left . \begin{array}{l@{\quad}l} Q(T, y_0; u_R^\ast) = 0.59, &\|P_{\mathfrak{s}} y_R^\ast\|_{L^2(0,T; H^1(0,l))} = 3.0, \\ Q(T, y_0; \widetilde{u}_G^\ast) = 0.57, &\|P_{\mathfrak{s}} \widetilde {y}_G^\ast\|_{L^2(0,T; H^1(0,l))} = 2.13, \end{array} \right \} \quad\text{for } T = 5, \end{aligned}

which clearly shows that the RHS of the error estimate (4.10) experiences a growth of about 15 % when T increases from T=3 to T=5. This growth of the RHS of (4.10) comes with a growth of about 10 % in the low-mode part of the LHS of (4.10), i.e. $$\| P_{\mathfrak{c}}(u_{R}^{\ast}-\widetilde{u}_{G}^{\ast})\| _{L^{2}(0,T;L^{2}(0,l))}^{2}$$. This deviation from $$\widetilde{u}_{G}^{\ast}$$, observed on its low-mode part, is consistent with the substantial growth observed in the cost value $$J(y_{R}^{\ast}, u_{R}^{\ast})$$ shown in Fig. 2(a).

To summarize, the error estimate (4.10) given in Corollary 2 provides useful (and computable) insights that can guide the design of PM-based suboptimal controllers with good control performance. In particular, it underscores the importance of constructing PMs with small parameterization defects on one hand, while keeping small the energy contained in the high modes on the other. While the latter factor can conceivably be alleviated by increasing the dimension of the reduced phase space $$\mathcal{H}^{\mathfrak{c}}$$, finite-horizon PMs with smaller parameterization defects than $$h^{(1)}_{\lambda}$$ can thus be expected to be even more useful for the design of low-dimensional suboptimal controllers with good performance. The next section addresses the construction of such finite-horizon PMs.

### Remark 3

We mention that the numerical results reported in Fig. 1 have been compared with those obtained by solving the reduced optimal control problem (5.19) with the BOCOP toolbox. For the parameters used, the relative error under the $$L^2$$-norm between the controllers obtained numerically by this toolbox and by our calculations has been observed to be within a margin of 0.1 %. For the sake of reproducibility of the results for (5.19), we provide the following numerical values of the components of Y used in (5.39): $$\langle Y, e_1 \rangle = 0.2561$$ and $$\langle Y, e_2 \rangle = -1.9193$$.

## 2D-Suboptimal Controller Synthesis Based on Higher-Order Finite-Horizon PMs

As illustrated in the previous section in the context of a Burgers-type equation, the finite-horizon PM $$h^{(1)}_{\lambda}$$, based on the simple one-layer backward–forward system (3.6a), (3.6b), can be used efficiently to obtain low-dimensional suboptimal controllers with relatively good performance in certain cases. Figures 2 and 3 indicate that this performance can degrade when the parameterization defect associated with $$h^{(1)}_{\lambda}$$ is not especially small while the energy contained in the high modes of the solution—driven either by the suboptimal controller $$u_{R}^{\ast}$$ or by the optimal controller u itself—gets large, in agreement with the theoretical predictions of Corollary 2. The error estimate (4.10) suggests that other finite-horizon PMs with smaller parameterization defects than $$h^{(1)}_{\lambda}$$ should help in the synthesis of suboptimal controllers with better performance. The main purpose of this section is to build such PMs effectively; in particular, they add higher-order terms to $$h^{(1)}_{\lambda}$$ (Theorem 2 below) which will turn out to play a crucial role in improving the performance of the $$h^{(1)}_{\lambda}$$-based suboptimal controllers encountered so far; see Remark 4 below.

### Higher-Order Finite-Horizon PMs Based on Two-Layer Backward–Forward System: Analytic Derivation

We follow [27, Chap. 7] and consider the following two-layer backward–forward system associated with the uncontrolled version of (5.1):

\begin{aligned} & \frac{\mathrm{d} y^{(1)}_{\mathfrak{c}}}{\mathrm{d} s} = L_\lambda^{\mathfrak{c}} y^{(1)}_{\mathfrak{c}}, \quad s \in[ -\tau, 0], \ y^{(1)}_{\mathfrak{c}}(s) \vert_{s=0} = \xi, \end{aligned}
(6.1a)
\begin{aligned} & \frac{\mathrm{d} y^{(2)}_{\mathfrak{c}}}{\mathrm{d} s} = L_\lambda^{\mathfrak{c}} y^{(2)}_{\mathfrak{c}} + P_{\mathfrak{c}} B\bigl( y^{(1)}_{\mathfrak{c}}, y^{(1)}_{\mathfrak{c}} \bigr), \quad s \in[ -\tau, 0], \ y^{(2)}_{\mathfrak{c}}(s) \vert_{s=0} = \xi, \end{aligned}
(6.1b)
\begin{aligned} & \frac{\mathrm{d} y^{(2)}_{\mathfrak{s}}}{\mathrm{d} s} = L_\lambda^{\mathfrak{s}} y^{(2)}_{\mathfrak{s}} + P_{\mathfrak{s}} B\bigl(y^{(2)}_{\mathfrak{c}}, y^{(2)}_{\mathfrak{c}} \bigr), \quad s \in[-\tau, 0], \ y^{(2)}_{\mathfrak{s}}(s) \vert_{s=-\tau }= 0, \end{aligned}
(6.1c)

where $$L_{\lambda}^{\mathfrak{c}} := P_{\mathfrak{c}} L_{\lambda}$$, $$L_{\lambda}^{\mathfrak{s}} := P_{\mathfrak{s}} L_{\lambda}$$, and $$\xi\in\mathcal{H}^{ \mathfrak{c}}$$.

Similar to the one-layer backward–forward system (3.6a), (3.6b), the above system is integrated using a two-step backward–forward integration procedure where Eqs. (6.1a), (6.1b) are integrated first backward, and Eq. (6.1c) is then integrated forward. We will emphasize the dependence on ξ of the high-mode component $$y_{\mathfrak{s}}^{(2)}$$ of this system as $$y_{\mathfrak {s}}^{(2)}[\xi]$$.

Theorem 2 below identifies non-resonance conditions (NR2) under which the pullback limit of $$y_{\mathfrak{s}}^{(2)}[\xi]$$ exists as τ→∞, and provides an analytical expression of this pullback limit. As will be supported by the numerical results of Sect. 6.2, this pullback limit turns out to give access to finite-horizon PMs for a broad class of targets.

### Theorem 2

Consider the two-layer backward–forward system (6.1a)(6.1c) associated with the uncontrolled Burgers-type equation (5.1), i.e. with $$\mathfrak{C} = 0$$. Let $$\mathcal {H}^{\mathfrak{c} }$$ be the subspace spanned by the first two eigenmodes e 1 and e 2 of the corresponding linear operator L λ defined in (5.7). Assume that the eigenvalues of L λ satisfy the following non-resonance conditions:

\begin{aligned} & \beta_1( \lambda) + \beta_2(\lambda) - \beta_3(\lambda) > 0, \qquad \beta_1(\lambda) + 2 \beta_2(\lambda) - \beta_3(\lambda) > 0, \\ &3 \beta_1(\lambda) - \beta_3(\lambda) > 0 , \qquad 3 \beta _1(\lambda ) + \beta_2(\lambda) - \beta_3(\lambda) > 0, \\ & 2 \beta_1(\lambda) + \beta_2(\lambda) - \beta_4(\lambda) > 0, \qquad 4 \beta_1(\lambda) - \beta_4(\lambda) > 0, \\ & 2 \beta_2(\lambda) - \beta_4(\lambda) > 0. \end{aligned}
(NR2)

Then the pullback limit of the solution $$y_{\mathfrak{s}}^{(2)}[\xi ]$$ to (6.1a)(6.1c) exists and is given by:

$$\boxed{h^{(2)}_\lambda(\xi) :=\lim _{\tau\rightarrow+\infty} y^{(2)}_{\mathfrak{s}}[\xi]{(-\tau, 0)}= \int _{-\infty}^0 e^{-\tau' L^{\mathfrak{s}}_\lambda} P_{\mathfrak {s}} B \bigl(y^{(2)}_{\mathfrak{c} }\bigl(\tau'\bigr), y^{(2)}_{\mathfrak{c}}\bigl(\tau'\bigr) \bigr) \,\mathrm{d} \tau', \quad \forall\xi\in \mathcal{H}^{\mathfrak{c}}.}$$
(6.2)

Under the above conditions, $$h^{(2)}_{\lambda}$$ has furthermore the following analytic expression:

$$h^{(2)}_\lambda( \xi_1e_1+\xi_2e_2) = h^{(2),3}_\lambda(\xi_1,\xi_2) e_3 + h^{(2),4}_\lambda(\xi_1, \xi_2) e_4, \quad(\xi_1, \xi_2) \in \mathbb{R}^{2},$$
(6.3)

where

\begin{aligned} h^{(2),3}_\lambda(\xi_1,\xi_2) & := \bigl\langle h^{(2)}_\lambda(\xi _1e_1+ \xi_2e_2), e_3 \bigr\rangle \\ &\phantom{:} = \boldsymbol{A} \xi_1 \xi_2 + \boldsymbol{B} ( \xi_1)^3 + \boldsymbol{C} \xi_1(\xi _2)^2 + \boldsymbol{D} (\xi_1)^3 \xi_2, \end{aligned}
(6.4a)
\begin{aligned} h^{(2),4}_\lambda(\xi_1,\xi_2) & := \bigl\langle h^{(2)}_\lambda(\xi _1e_1+ \xi_2e_2), e_4 \bigr\rangle = \boldsymbol{E} ( \xi_2)^2 + \boldsymbol{F} (\xi _1)^2 \xi_2 + \boldsymbol{G} (\xi_1)^4, \end{aligned}
(6.4b)

with

\begin{aligned} \boldsymbol{A} & = - \frac{3\alpha}{\beta_{1}(\lambda) + \beta _{2}(\lambda) -\beta_{3}(\lambda)}, \\ \boldsymbol{B} & = - \frac{3\alpha^2}{(3 \beta_{1}(\lambda) -\beta _{3}(\lambda)) (\beta_{1}(\lambda) + \beta_{2}(\lambda) -\beta _{3}(\lambda))}, \\ \boldsymbol{C} & = \frac{3\alpha}{ (\beta_{1}(\lambda) + 2 \beta _{2}(\lambda ) -\beta_{3}(\lambda) ) (\beta_{1}(\lambda) + \beta_{2}(\lambda) -\beta _{3}(\lambda) )}, \\ \boldsymbol{D} & = \frac{3\alpha^3}{ (3 \beta_{1}(\lambda) - \beta _{3}(\lambda)) (\beta_{1}(\lambda) + \beta_{2}(\lambda) -\beta _{3}(\lambda) ) (\beta_{1}(\lambda) + 2 \beta_{2}(\lambda) -\beta _{3}(\lambda))} \\ &\quad{} + \frac{3\alpha^3 }{ (3 \beta_{1}(\lambda) - \beta_{3}(\lambda )) ( 3\beta_{1}(\lambda) + \beta_{2}(\lambda) -\beta_{3}(\lambda) )(\beta _{1}(\lambda) + 2 \beta_{2}(\lambda) -\beta_{3}(\lambda))}, \\ \boldsymbol{E} & = - \frac{2 \alpha}{\beta_{2}(\lambda) -\beta _{4}(\lambda )}, \qquad \boldsymbol{F} = - \frac{4 \alpha^2}{ (2 \beta _{1}(\lambda) + \beta_{2}(\lambda) -\beta_{4}(\lambda) ) (2 \beta _{2}(\lambda) -\beta_{4}(\lambda))}, \\ \boldsymbol{G} & = - \frac{4 \alpha^3}{ (4 \beta_{1}(\lambda) - \beta _{4}(\lambda) ) (2\beta_{1}(\lambda) + \beta_{2}(\lambda) -\beta _{4}(\lambda)) (2 \beta_{2}(\lambda) -\beta_{4}(\lambda))}, \end{aligned}
(6.5)

and

\begin{aligned} \alpha= \frac{\gamma\pi}{\sqrt{2} l^{3/2}}. \end{aligned}
(6.6)
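For concreteness, the coefficients (6.5)–(6.6) are straightforward to evaluate once the eigenvalues are specified. The sketch below assumes the eigenvalues $$\beta_n(\lambda) = \lambda - \nu (n\pi/l)^2$$, consistent with (5.11) and (7.12); the values of ν, γ, l, and λ are illustrative (not those of the paper's experiments) and are chosen so that all (NR2) denominators are positive. The coefficients are entered as produced by the pullback integration: C scales as α², and $$2\beta_2(\lambda)-\beta_4(\lambda)$$ appears in E's denominator.

```python
import numpy as np

# Hedged sketch: evaluate the PM coefficients (6.5)-(6.6), assuming
# beta_n(lambda) = lambda - nu*(n*pi/l)**2 (consistent with (5.11) and
# (7.12)).  nu, gamma_, l, lam are illustrative values chosen so that
# the (NR2)-condition holds.
nu, gamma_, l, lam = 1.0, 1.0, 2.0*np.pi, 0.5

def beta(n):
    return lam - nu*(n*np.pi/l)**2

alpha = gamma_*np.pi/(np.sqrt(2.0)*l**1.5)          # (6.6)
b1, b2, b3, b4 = beta(1), beta(2), beta(3), beta(4)

A = -3*alpha/(b1 + b2 - b3)
B = -3*alpha**2/((3*b1 - b3)*(b1 + b2 - b3))
C = 3*alpha**2/((b1 + 2*b2 - b3)*(b1 + b2 - b3))
D = (3*alpha**3/((3*b1 - b3)*(b1 + b2 - b3)*(b1 + 2*b2 - b3))
     + 3*alpha**3/((3*b1 - b3)*(3*b1 + b2 - b3)*(b1 + 2*b2 - b3)))
E = -2*alpha/(2*b2 - b4)
F = -4*alpha**2/((2*b1 + b2 - b4)*(2*b2 - b4))
G = -4*alpha**3/((4*b1 - b4)*(2*b1 + b2 - b4)*(2*b2 - b4))

def h2_3(x1, x2):
    # h^{(2),3}_lambda of (6.4a)
    return A*x1*x2 + B*x1**3 + C*x1*x2**2 + D*x1**3*x2

def h2_4(x1, x2):
    # h^{(2),4}_lambda of (6.4b)
    return E*x2**2 + F*x1**2*x2 + G*x1**4
```

In this setting the leading coefficient A is exactly the pullback integral $$-3\alpha\int_{-\infty}^0 e^{(\beta_1+\beta_2-\beta_3)\tau'}\,\mathrm{d}\tau'$$ of the dominant term in (6.12).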

### Remark 4

Note that the analytic expression of $$h^{(2)}_{\lambda}$$ given in (6.3) can be written as the sum of $$h^{(1)}_{\lambda}$$ given by (5.22)Footnote 16, associated with the one-layer backward–forward system (3.6a), (3.6b), and higher-order terms. It is worth noting that the extra five terms contained in the expression of $$h^{(2)}_{\lambda}$$ result from the nonlinear self-interactions between the low modes, as brought by $$P_{\mathfrak {c}} B ( y^{(1)}_{\mathfrak{c}}, y^{(1)}_{\mathfrak{c}} )$$ in (6.1b). The numerical results of Sect. 6.2 below support the interpretation of these extra terms as corrective terms to $$h^{(1)}_{\lambda}$$. Indeed, as we will illustrate for the optimal control problem (5.9), these terms can help design low-dimensional suboptimal controllers of better performance than those built from the $$h^{(1)}_{\lambda}$$-based reduced system; the $$h^{(2)}_{\lambda}$$-based reduced system brings extra higher-order terms corresponding to “low-high” and “high-high” interactions absent from the $$h^{(1)}_{\lambda}$$-based reduced system. This last point can be observed by comparing (5.27) with (6.17) below, where both reduced systems are derived from the abstract formulation (4.2a), (4.2b) by setting the PM function h to be $$h^{(1)}_{\lambda}$$ or $$h^{(2)}_{\lambda}$$, respectively.

### Proof

A simple integration of (6.1a)–(6.1c) shows that for any τ>0 and $$\xi\in\mathcal{H}^{\mathfrak{c}}$$ the solution to the backward–forward system (6.1a)–(6.1c) is given by:

\begin{aligned} y^{(1)}_{\mathfrak{c}}(s) & = e^{s L_\lambda^{\mathfrak{c}}}\xi, \end{aligned}
(6.7a)
\begin{aligned} y^{(2)}_{\mathfrak{c}}(s) & = e^{s L_\lambda^{\mathfrak{c}}}\xi- \int _{s}^0 e^{(s-\tau') L_\lambda^{\mathfrak{c}}} P_{\mathfrak{c}} B \bigl(y^{(1)}_{\mathfrak{c}}\bigl(\tau'\bigr), y^{(1)}_{\mathfrak{c}}\bigl(\tau '\bigr) \bigr) \,\mathrm{d} \tau', \end{aligned}
(6.7b)
\begin{aligned} y_{\mathfrak{s}}^{(2)}[\xi]{(-\tau, s)} & = \int_{-\tau}^s e^{(s-\tau') L_\lambda^{\mathfrak{s}}} P_{\mathfrak{s}} B \bigl(y^{(2)}_{\mathfrak{c}}\bigl( \tau'\bigr), y^{(2)}_{\mathfrak{c}}\bigl(\tau '\bigr) \bigr) \,\mathrm{d}\tau', \end{aligned}
(6.7c)

for all s∈[−τ,0].

Due to (6.7c), the pullback limit of $$y_{\mathfrak {s}}^{(2)}[\xi](-\tau ,0)$$ takes the form given in (6.2), provided that the integral in question exists. We show below that the (NR2)-condition is necessary and sufficient for this integral to exist. In that respect, the fact that $$\mathcal{H}^{\mathfrak{c}}$$ is spanned by the first two eigenmodes facilitates some of the manipulations described below.

First, note that the projections of $$y^{(1)}_{\mathfrak{c}}$$ onto $$e_1$$ and $$e_2$$ give, respectively,

$$y^{(1)}_{1}(s) := \bigl\langle y^{(1)}_{\mathfrak{c}}(s), e_1 \bigr\rangle = e^{\beta _1(\lambda) s}\xi_1, \qquad y^{(1)}_{2}(s) := \bigl\langle y^{(1)}_{\mathfrak{c}}(s), e_2 \bigr\rangle = e^{\beta_2(\lambda) s}\xi_2,$$
(6.8)

where $$\xi_i := \langle \xi, e_i \rangle$$, i=1,2.

To determine the projections of $$y^{(2)}_{\mathfrak{c}}$$ onto $$e_1$$ and $$e_2$$, recall that the nonlinear interaction laws (5.20) give here

$$B_{11}^1 = 0, \qquad B_{12}^1 = 2 \alpha, \qquad B_{21}^1 = - \alpha, \qquad B_{11}^2 = - \alpha, \qquad B_{12}^2 = B_{21}^2 = 0,$$
(6.9)

\begin{aligned} \bigl\langle B\bigl(y^{(1)}_{\mathfrak{c}}, y^{(1)}_{\mathfrak{c}}\bigr), e_1 \bigr\rangle =& \bigl\langle B \bigl(y^{(1)}_{1}e_1 + y^{(1)}_{2} e_2, y^{(1)}_{1}e_1 + y^{(1)}_{2} e_2 \bigr), e_1 \bigr\rangle \\ =& y^{(1)}_{1} y^{(1)}_{2}B_{12}^1 + y^{(1)}_{1}y^{(1)}_{2} B_{21}^1 = \alpha y^{(1)}_{1}y^{(1)}_{2}, \\ \bigl\langle B\bigl(y^{(1)}_{\mathfrak{c}}, y^{(1)}_{\mathfrak{c}} \bigr), e_2 \bigr\rangle =& \bigl( y^{(1)}_{1} \bigr)^2 B_{11}^2 = - \alpha \bigl( y^{(1)}_{1} \bigr)^2. \end{aligned}

The projections of $$y^{(2)}_{\mathfrak{c}}$$ onto $$e_1$$ and $$e_2$$ are then given by

\begin{aligned} y^{(2)}_{1}(s) & := \bigl\langle y^{(2)}_{\mathfrak{c}}(s), e_1 \bigr\rangle = e^{\beta _1(\lambda) s}\xi_1 - \alpha\int_s^0 e^{\beta_1(\lambda)(s-\tau ')}y^{(1)}_{1}\bigl(\tau'\bigr) y^{(1)}_{2}\bigl(\tau'\bigr) \,\mathrm{d} \tau', \\ y^{(2)}_{2}(s) & := \bigl\langle y^{(2)}_{\mathfrak{c}}(s), e_2 \bigr\rangle = e^{\beta _2(\lambda) s}\xi_2 + \alpha\int _s^0 e^{\beta_2(\lambda)(s-\tau')} \bigl(y^{(1)}_{1} \bigl(\tau'\bigr)\bigr)^2 \,\mathrm{d}\tau'. \end{aligned}
(6.10)

Relying again on the nonlinear interaction laws (5.20), we have

\begin{aligned} B_{11}^3 &= 0, \qquad B_{12}^3 = - 2\alpha, \qquad B_{21}^3 = - \alpha , \qquad B_{22}^3 = 0, \\ B_{11}^4 &= B_{12}^4 = B_{21}^4 = 0, \qquad B_{22}^4 = - 2 \alpha, \\ B_{ij}^n &= 0, \quad \forall i, j \in\{1,2\}, \ n \ge5, \end{aligned}
(6.11)

\begin{aligned} y^{(2)}_{3}[\xi]{(-\tau, s)} & := \bigl\langle y^{(2)}_{\mathfrak {s}}[\xi]{(-\tau , s)} , e_3 \bigr\rangle = - 3 \alpha\int_{-\tau}^s e^{\beta_3(\lambda )(s-\tau')} y^{(2)}_{1}\bigl(\tau'\bigr) y^{(2)}_{2}\bigl(\tau'\bigr) \,\mathrm{d} \tau', \\ y^{(2)}_{4}[\xi]{(-\tau, s)} & := \bigl\langle y^{(2)}_{\mathfrak {s}}[\xi]{(-\tau , s)} , e_4 \bigr\rangle = - 2\alpha\int_{-\tau}^s e^{\beta_4(\lambda )(s-\tau')} \bigl(y^{(2)}_{2}\bigl(\tau'\bigr) \bigr)^2 \,\mathrm{d}\tau'. \end{aligned}
(6.12)

By using the expressions of $$y^{(2)}_{1}$$ and $$y^{(2)}_{2}$$ given in (6.10) (and using also (6.8)), it can be checked that the limit $$h^{(2),3}_{\lambda}:= \lim_{\tau\rightarrow+\infty} y^{(2)}_{3}[\xi]{(-\tau, 0)}$$ exists if and only if the first four inequalities in the (NR2)-condition hold, while $$h^{(2),3}_{\lambda}$$ is given by (6.4a) under these conditions. Similarly, the limit $$h^{(2),4}_{\lambda}:= \lim_{\tau\rightarrow +\infty} y^{(2)}_{4}[\xi]{(-\tau, 0)}$$ exists if and only if the last three inequalities in the (NR2)-condition hold, and $$h^{(2),4}_{\lambda}$$ is given by (6.4b) under these conditions. The theorem is proved. □
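The integrations carried out in the proof can be double-checked symbolically. The sketch below carries out (6.7a)–(6.7c) with sympy for sample rational eigenvalues satisfying the (NR2)-condition, takes the pullback limit τ→+∞ in (6.12), and compares the result with the polynomial form (6.4a)–(6.4b); the reference coefficients are those the integration produces (in particular, C scales as α² and E's denominator is $$2\beta_2(\lambda)-\beta_4(\lambda)$$).

```python
import sympy as sp

# Hedged sketch: re-derive h^{(2),3} and h^{(2),4} by carrying out the
# integrations (6.7a)-(6.7c) and the pullback limit tau -> +infinity.
# The eigenvalues are sample rationals chosen so that (NR2) holds.
x1, x2, al = sp.symbols('xi1 xi2 alpha', positive=True)
t, s = sp.symbols('t s', real=True)
b1, b2 = sp.Rational(1, 4), sp.Rational(-1, 2)
b3, b4 = sp.Rational(-7, 4), sp.Rational(-7, 2)

# (6.8) and (6.10): first- and second-layer low-mode components
y1_1 = sp.exp(b1*t)*x1
y1_2 = sp.exp(b2*t)*x2
y2_1 = sp.exp(b1*t)*x1 - al*sp.integrate(
    sp.exp(b1*(t - s))*y1_1.subs(t, s)*y1_2.subs(t, s), (s, t, 0))
y2_2 = sp.exp(b2*t)*x2 + al*sp.integrate(
    sp.exp(b2*(t - s))*y1_1.subs(t, s)**2, (s, t, 0))

# (6.12) at s = 0 with tau -> +infinity: the pullback limits (6.4a)-(6.4b)
h3 = sp.expand(sp.integrate(-3*al*sp.exp(-b3*t)*y2_1*y2_2, (t, -sp.oo, 0)))
h4 = sp.expand(sp.integrate(-2*al*sp.exp(-b4*t)*y2_2**2, (t, -sp.oo, 0)))

# Coefficients as produced by the integration (note C ~ alpha**2 and the
# denominator 2*b2 - b4 in E).
A = -3*al/(b1 + b2 - b3)
B = -3*al**2/((3*b1 - b3)*(b1 + b2 - b3))
C = 3*al**2/((b1 + 2*b2 - b3)*(b1 + b2 - b3))
D = (3*al**3/((3*b1 - b3)*(b1 + b2 - b3)*(b1 + 2*b2 - b3))
     + 3*al**3/((3*b1 - b3)*(3*b1 + b2 - b3)*(b1 + 2*b2 - b3)))
E = -2*al/(2*b2 - b4)
F = -4*al**2/((2*b1 + b2 - b4)*(2*b2 - b4))
G = -4*al**3/((4*b1 - b4)*(2*b1 + b2 - b4)*(2*b2 - b4))

ref3 = sp.expand(A*x1*x2 + B*x1**3 + C*x1*x2**2 + D*x1**3*x2)
ref4 = sp.expand(E*x2**2 + F*x1**2*x2 + G*x1**4)
```

Expanding `h3 - ref3` and `h4 - ref4` gives zero, confirming the closed forms for this sample choice of eigenvalues.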

### Controller Synthesis Based on $$h^{(2)}_{\lambda}$$, and Control Performances: Analytic Derivation and Numerical Results

Analytic derivation of the $$h^{(2)}_{\lambda}$$-based reduced optimal control problem. Following (4.2a), (4.2b), the $$h^{(2)}_{\lambda}$$-based reduced system intended to model the dynamics of the low modes $$P_{\mathfrak{c}}y$$ of (5.1) takes the following abstract form:

\begin{aligned} & \frac{\mathrm{d}z}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda z + P_{\mathfrak{c}} B \bigl(z + h^{(2)}_\lambda(z), z + h^{(2)}_\lambda(z) \bigr) + \mathfrak{C} u_{R}, \\ &\quad{t \in(0, T]},\ z(0) = P_{\mathfrak{c}} y_0 \in \mathcal{H}^{\mathfrak{c}}, \end{aligned}
(6.13)

where $$y_0$$ is the initial datum for the original PDE (5.1).

Analogous to (5.17), the cost functional associated with the reduced system (6.13) is given by

$$\widehat{J}_{R}(z, u_{R}) = \int _0^T \biggl( \frac{1}{2}\bigl\| z(t) + h^{(2)}_\lambda\bigl(z(t)\bigr)\bigr\| ^2 + \frac{\mu_1}{2} \bigl\| u_{R}(t)\bigr\| ^2 \biggr) \,\mathrm{d}t + C_T\bigl(z(T), P_{\mathfrak{c}}Y\bigr),$$
(6.14)

where $$C_{T}(z(T), P_{\mathfrak{c}} Y) := \frac{\mu_{2}}{2} \sum_{i=1}^{m} |z_{i}(T) - Y_{i}|^{2}$$ is the terminal payoff term as defined in (5.26), with Y being some prescribed target for (5.1).

By using the analytic expression of $$h^{(2)}_{\lambda}$$ given in (6.3)–(6.5), the cost functional (6.14) can be written into the following explicit form:

$$\widehat{J}_R(z, u_{R}) = \int _0^T \bigl[ \mathcal {G} \bigl(z(t)\bigr) + \mathcal{E}\bigl(u_{R}(t)\bigr) \bigr] \,\mathrm{d}t + C_T\bigl(z(T), P_{\mathfrak{c}} Y\bigr),$$
(6.15)

where

\begin{aligned} \mathcal{G}(z) &= \frac{1}{2} \bigl\| z + h^{(2)}_\lambda(z)\bigr\| ^2 = \frac {1}{2} \bigl[ (z_1)^2 + (z_2)^2 + \bigl(h^{(2),3}_\lambda(z_1,z_2) \bigr)^2 + \bigl(h^{(2),4}_\lambda(z_1, z_2)\bigr)^2 \bigr], \\ \mathcal{E}(u_{R}) & = \frac{\mu_1}{2} \|u_{R} \|^2 = \frac{\mu _1}{2} \bigl[(u_{R,1})^2 + (u_{R,2})^2\bigr], \end{aligned}
(6.16)

with $$z_i := \langle z, e_i \rangle$$ and $$u_{R,i} := \langle u_R, e_i \rangle$$, i=1,2.

Now, by using again the analytic expression

$$h^{(2)}_\lambda(\xi_1e_1+ \xi_2e_2) = h^{(2),3}_\lambda( \xi_1,\xi _2) e_3 + h^{(2),4}_\lambda( \xi_1,\xi_2) e_4$$

in (6.13) and projecting this equation onto $$e_1$$ and $$e_2$$, respectively, we obtain, after simplification using the nonlinear interaction laws (5.20), the following analytic formulation of the $$h^{(2)}_{\lambda}$$-based reduced system (6.13):

\boxed{ \begin{aligned} & \frac{\mathrm{d}z_1}{\mathrm{d}t} = \beta_1(\lambda) z_1 + \alpha \bigl( z_1z_2 + z_2 h^{(2),3}_\lambda(z_1,z_2) + h^{(2),3}_\lambda(z_1,z_2) h^{(2),4}_\lambda(z_1,z_2) \bigr)\\ &\phantom{\frac{\mathrm{d}z_1}{\mathrm{d}t} =}{} + a_{11}u_{R,1}(t) + a_{21}u_{R,2}(t), \\ & \frac{\mathrm{d}z_2}{\mathrm{d}t} = \beta_2(\lambda) z_2 - \alpha z_1^2 + 2 \alpha \bigl( z_1 h^{(2),3}_\lambda(z_1,z_2) + z_2 h^{(2),4}_\lambda(z_1,z_2) \bigr)\\ &\phantom{\frac{\mathrm{d}z_2}{\mathrm{d}t} =}{} + a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t), \end{aligned} }
(6.17)

with $$h^{(2),3}_{\lambda}(z_{1},z_{2})$$ and $$h^{(2),4}_{\lambda}(z_{1},z_{2})$$ given by (6.4a)–(6.4b)–(6.5).Footnote 17
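As a quick sanity check of the algebra, the right-hand side of (6.17) can be coded directly. In this sketch, `h3` and `h4` are passed in as callables standing for $$h^{(2),3}_{\lambda}$$ and $$h^{(2),4}_{\lambda}$$, and all numerical values used in the example are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch of the reduced vector field (6.17).  beta1, beta2, alpha
# and the 2x2 matrix a = (a_ij) are problem data; h3 and h4 stand for
# h^{(2),3}_lambda and h^{(2),4}_lambda of (6.4a)-(6.4b).
def reduced_rhs(z, uR, beta1, beta2, alpha, a, h3, h4):
    z1, z2 = z
    H3, H4 = h3(z1, z2), h4(z1, z2)
    dz1 = (beta1*z1 + alpha*(z1*z2 + z2*H3 + H3*H4)
           + a[0, 0]*uR[0] + a[1, 0]*uR[1])
    dz2 = (beta2*z2 - alpha*z1**2 + 2.0*alpha*(z1*H3 + z2*H4)
           + a[0, 1]*uR[0] + a[1, 1]*uR[1])
    return np.array([dz1, dz2])
```

Setting `h3 = h4 = 0` recovers the two-dimensional Galerkin truncation, which gives an easy spot check of the low-mode self-interaction terms.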

The resulting reduced optimal control problem based on $$h^{(2)}_{\lambda}$$ is thus:

$$\min \widehat{J}_R(z, u_{R}) \quad \text{s.t.}\quad (z, u_{R}) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}} \bigr) \times L^2\bigl(0,T; \mathcal {H}^{\mathfrak{c}}\bigr) \text{ solves (6.17)}.$$
(6.18)

By following arguments similar to those of Sect. 5.2 and applying the Pontryagin maximum principle, we can conclude that for a given pair

$$\bigl(\widehat{z}_R^\ast, \widehat{u}_{R}^\ast \bigr) \in L^2\bigl(0,T; \mathcal {H}^{\mathfrak{c} }\bigr) \times L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr)$$

to be optimal for the $$h^{(2)}_{\lambda}$$-based reduced optimal control problem (6.18), it is necessary and sufficientFootnote 18 that it satisfies the following set of conditions:

$$\boxed{\bigl(\widehat{u}^\ast_{R,1}, \widehat{u}^\ast_{R,2}\bigr) = - \biggl( \frac {a_{11} \widehat{p}_{R,1}^\ast+ a_{12} \widehat{p}_{R,2}^\ast}{\mu_1}, \frac{a_{21} \widehat{p}_{R,1}^\ast+ a_{22} \widehat{p}_{R,2}^\ast }{\mu _1} \biggr),}$$
(6.19)

where $$(\widehat{p}_{R,1}^{\ast}, \widehat{p}_{R,2}^{\ast})$$ is the costate associated with $$(\widehat{z}_{R,1}^{\ast}, \widehat{z}_{R,2}^{\ast})$$, both determined by solving the following BVP:

\begin{aligned} & \frac{\mathrm{d}z_1}{\mathrm{d}t} = \beta_1(\lambda) z_1 + \alpha \bigl( z_1z_2 + z_2 h^{(2),3}_\lambda(z_1,z_2) + h^{(2),3}_\lambda(z_1,z_2) h^{(2),4}_\lambda(z_1,z_2) \bigr) - \frac{1}{2} p_1, \\ & \frac{\mathrm{d}z_2}{\mathrm{d}t} = \beta_2(\lambda) z_2 - \alpha(z_1)^2 + 2 \alpha \bigl( z_1 h^{(2),3}_\lambda(z_1,z_2) + z_2 h^{(2),4}_\lambda (z_1,z_2) \bigr)- \frac{1}{2} p_2, \\ & \frac{\mathrm{d}p_1}{\mathrm{d}t} = g_1(z, p), \\ & \frac{\mathrm{d}p_2}{\mathrm{d}t} = g_2(z, p), \end{aligned}
(6.20)

subject to the boundary condition

\begin{aligned} &z_1(0) = \langle y_0, e_1 \rangle, \qquad z_2(0) = \langle y_0, e_2 \rangle, \\ &p_1(T) = \mu_2 \bigl(z_{1}(T) - Y_1\bigr), \qquad p_2(T) = \mu_2 \bigl(z_2(T) - Y_2\bigr), \end{aligned}
(6.21)

where

\begin{aligned} g_1(z, p) & := - z_1 - h^{(2),3}_\lambda(z_1,z_2) \frac{\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_1} - h^{(2),4}_\lambda(z_1,z_2) \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_1} \\ &\quad{} - p_1 \biggl( \beta_1(\lambda) + \alpha z_2 + \alpha z_2 \frac{\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_1} + \alpha \frac {\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_1} h^{(2),4}_\lambda (z_1,z_2) \\ & \quad{} + \alpha h^{(2),3}_\lambda(z_1,z_2) \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_1} \biggr) \\ & \quad{} - 2 \alpha p_2 \biggl( - z_1 + h^{(2),3}_\lambda (z_1,z_2) + z_1 \frac{\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_1} + z_2 \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_1} \biggr), \\ g_2(z, p) & := - z_2 - h^{(2),3}_\lambda(z_1,z_2) \frac{\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_2} - h^{(2),4}_\lambda(z_1,z_2) \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_2} \\ & \quad{} - \alpha p_1 \biggl( z_1 + h^{(2),3}_\lambda(z_1,z_2) + z_2 \frac{\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_2} + \frac {\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_2} h^{(2),4}_\lambda (z_1,z_2) \\ & \quad{} + h^{(2),3}_\lambda(z_1,z_2) \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_2} \biggr) \\ & \quad{} - p_2 \biggl( \beta_2(\lambda) + 2 \alpha z_1 \frac {\partial h^{(2),3}_\lambda(z_1,z_2)}{\partial z_2} + 2 \alpha h^{(2),4}_\lambda(z_1,z_2) + 2 \alpha z_2 \frac{\partial h^{(2),4}_\lambda(z_1,z_2)}{\partial z_2} \biggr). \end{aligned}

The vector field $$(g_1, g_2)$$ given above has been determined by evaluating $$-\nabla_{z} \widehat{H}(z,p,u)$$, with the following Hamiltonian $$\widehat{H}$$ formed by application of the PMP to (6.18):

$$\widehat{H}(z, p, u) := \mathcal{G}(z) + \mathcal{E}(u) + p_1 \widehat {f_1}(z, u) + p_2 \widehat{f_2}(z, u),$$

where $$(\widehat{f_{1}}, \widehat{f_{2}})$$ denotes the vector field constituting the RHS of the z-equations in (6.20).

Numerical results. The above BVP is solved again using bvp4c, and the resulting two-dimensional suboptimal controller $$\widehat{u}^{\ast}_{R}$$ is obtained according to (6.19). As before, the corresponding suboptimal trajectory $$\widehat{y}_{R}^{\ast}$$ of the PDE (5.1) is computed by driving (5.1) with $$\widehat{u}^{\ast}_{R}$$, following the numerical procedure described in Sect. 5.4.
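In Python, the same two-point BVP structure can be handled with SciPy's `solve_bvp`, the analogue of MATLAB's `bvp4c`. The sketch below keeps the structure of (6.20)–(6.21) but, for brevity, sets the PM terms $$h^{(2),3}_{\lambda}$$, $$h^{(2),4}_{\lambda}$$ to zero (i.e. a 2D Galerkin-type optimality system); it also assumes M equal to the identity and μ₁ = 2, one choice under which the control terms reduce to $$-p_i/2$$ as in (6.20). All numerical values are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hedged sketch of the optimality system (6.20)-(6.21) with the PM terms
# dropped (h = 0): state equations with initial conditions, costate
# equations with terminal conditions.  Illustrative data only.
beta1, beta2, alpha, mu2, T = 0.25, -0.5, 0.2, 1.0, 1.0
z0 = np.array([1.0, 0.5])
Y = np.array([0.2, -0.1])

def rhs(t, w):
    z1, z2, p1, p2 = w
    dz1 = beta1*z1 + alpha*z1*z2 - 0.5*p1
    dz2 = beta2*z2 - alpha*z1**2 - 0.5*p2
    dp1 = -z1 - p1*(beta1 + alpha*z2) + 2.0*alpha*p2*z1   # g1 with h = 0
    dp2 = -z2 - alpha*p1*z1 - beta2*p2                    # g2 with h = 0
    return np.vstack([dz1, dz2, dp1, dp2])

def bc(wa, wb):
    return np.array([wa[0] - z0[0], wa[1] - z0[1],        # z(0) = P_c y0
                     wb[2] - mu2*(wb[0] - Y[0]),          # p(T) = mu2 (z(T) - Y)
                     wb[3] - mu2*(wb[1] - Y[1])])

t = np.linspace(0.0, T, 41)
w = np.zeros((4, t.size))
w[0], w[1] = z0[0], z0[1]                 # constant initial guess for the state
sol = solve_bvp(rhs, bc, t, w)

# Suboptimal control recovered from the costate via (6.19); with the
# assumptions above (M = I, mu1 = 2) this is simply u = -p/2.
u1, u2 = -0.5*sol.y[2], -0.5*sol.y[3]
```

Reinstating the $$h^{(2),3}_{\lambda}$$, $$h^{(2),4}_{\lambda}$$ terms amounts to extending `rhs` with the polynomial expressions (6.4a)–(6.4b) and their partial derivatives.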

The corresponding control performance is shown in Fig. 4, where the performances of the suboptimal controllers $$u^{\ast}_{R}$$ and $$u^{\ast}_{G}$$, associated respectively with the two-dimensional $$h^{(1)}_{\lambda}$$-based reduced optimal control problem (5.19) and the two-dimensional Galerkin-based one (A.5), are also reported for comparison. In panel (a) of Fig. 4, we present the final-time PDE solution profiles $$y(T,\widehat{u}_{R}^{\ast})$$, $$y(T,u_{R}^{\ast})$$, and $$y(T,u_{G}^{\ast})$$, driven respectively by $$\widehat{u}_{R}^{\ast}$$, $$u_{R}^{\ast}$$, and $$u_{G}^{\ast}$$, for T=3. For these simulations, the target profile Y has again been chosen to be spanned by the first two eigenfunctions, but is given this time by

$$Y = -0.3\bigl\langle y^+, e_1\bigr\rangle e_1 - 0.1 \bigl\langle y^+, e_2 \bigr\rangle e_2;$$
(6.22)

the initial profile is taken to be the positive steady state $$y^+$$ of the uncontrolled PDE as used in Sect. 5.5; see panel (b). The two components of the synthesized suboptimal controllers are shown in panel (c), and the parameterization defects associated respectively with $$h^{(1)}_{\lambda}$$ and $$h^{(2)}_{\lambda}$$ are shown in panel (d). The corresponding cost values and final-time relative $$L^2$$-errors are given in Table 2 above.

The results of Fig. 4(a) and Table 2 illustrate that for a given reduced phase space—here the two-dimensional vector space $$\mathcal {H}^{\mathfrak{c}}$$—the slaving relationship of the high modes (not in $$\mathcal{H}^{\mathfrak{c}}$$) to the low modes (in $$\mathcal {H}^{\mathfrak{c}}$$) as parameterized by $$h^{(2)}_{\lambda}$$ can turn out to be superior to the one proposed by $$h^{(1)}_{\lambda}$$ for the synthesis of suboptimal solutions to (5.9), and clearly advantageous compared to suboptimal solutions for which no slaving relationship whatsoever is involved, such as those built from the 2D Galerkin-based reduced optimal control problem (A.5). Again, Corollary 2 and the error estimate (4.10) provide theoretical insights that help understand why improving the quality of such a slaving relationship helps improve the performance of a suboptimal controller. For instance, the improvement in getting closer to the prescribed target Y (Fig. 4(a))—accompanied by a noticeable reduction of the cost values (Table 2)—occurs when the PDE (5.1) is driven by the $$h^{(2)}_{\lambda}$$-based suboptimal controller $$\widehat{u}^{\ast}_{R}$$ instead of the $$h^{(1)}_{\lambda}$$-based one $$u_{R}^{\ast}$$, and goes with a parameterization defect that is (overall) smaller for $$h^{(2)}_{\lambda}$$ than for $$h^{(1)}_{\lambda}$$ (Fig. 4(d)). Interestingly, this reduction of the parameterization defect comes from the higher-order terms contained in $$h^{(2)}_{\lambda }$$ (see Theorem 2), which can thus reasonably be interpreted as correction terms to the parameterization proposed by $$h^{(1)}_{\lambda}$$; see also Remark 4.

However, such a statement has to be nuanced: an $$h^{(2)}_{\lambda }$$-based reduced system does not always lead to suboptimal solutions with advantages as significant as those illustrated in Fig. 4. The caveat lies in the fact that the parameterization defect associated with $$h^{(2)}_{\lambda}$$ also depends on the target profile. For instance, with the sign-changing target (5.39) used in the experiments of Sect. 5.5, the suboptimal solutions designed from (6.18) achieve performances comparable to those designed from (5.19).

These remarks motivate further analysis to determine whether the success achieved for the target prescribed in (6.22) is pathological or robust, to some extent. For that purpose, we considered deformations of the target (6.22) of the form

$$Y_{\sigma_1,\sigma_2}= - \sigma_1 \bigl\langle y^+, e_1 \bigr\rangle e_1 - \sigma _2 \bigl\langle y^+, e_2 \bigr\rangle e_2,$$
(6.23)

with $$\sigma_1 \in [0.2, 0.7]$$ and $$\sigma_2 \in [0.01, 0.5]$$, and we solved the corresponding $$h^{(2)}_{\lambda}$$-based (resp. $$h^{(1)}_{\lambda}$$-based) reduced optimal control problem to obtain the corresponding $$h^{(2)}_{\lambda}$$-based (resp. $$h^{(1)}_{\lambda }$$-based) suboptimal solutions. As a benchmark,Footnote 19 these solutions are compared with those obtained from the m-dimensional Galerkin-based reduced optimal control problem (A.10) with m=16. The results are reported in Figs. 5 and 6 above. Figure 5 shows, for each $$(\sigma_1,\sigma_2)$$, the relative $$L^2$$-errors between the final-time solution profiles and the target $$Y_{\sigma_{1},\sigma_{2}}$$; Fig. 6 shows the cost values associated with the suboptimal controllers $$u_{R}^{\ast}$$ and $$\widehat{u}_{R}^{\ast}$$, on one hand, and with $$\widetilde{u}_{G}^{\ast}$$ obtained from the m-dimensional Galerkin-based reduced problem, on the other.

Figures 5 and 6 show that the good performance achieved by the $$h^{(2)}_{\lambda}$$-based suboptimal controller in Fig. 4(a) is not isolated, and can even be further improved within a broad region of the $$(\sigma_1,\sigma_2)$$-parameter space as $$Y_{\sigma_{1},\sigma_{2}}$$ is changed accordingly. Compared to the poor performances observed in Fig. 5 (top panel) for the $$h^{(1)}_{\lambda}$$-based suboptimal controllers, these $$h^{(2)}_{\lambda}$$-based results provide strong evidence that the higher-order terms brought by $$h^{(2)}_{\lambda}$$ with respect to $$h^{(1)}_{\lambda}$$ act as corrective terms in the high-mode parameterization proposed by $$h^{(1)}_{\lambda}$$.

These numerical results, together with the theoretical results of Corollary 2, suggest that in order to design reduced problems whose solutions provide even better control performance than those reported here, one can try to construct finite-horizon PMs with smaller parameterization defects than those achieved by $$h^{(2)}_{\lambda}$$. In that respect, the discussions and results of [27, Sects. 4.3–4.5], presented in the context of asymptotic PMs, can be valuable. In connection with the discussion concerning Figs. 2 and 3 in Sect. 5.5, the search for better slaving relationships between the $$\mathcal{H}^{\mathfrak{s}}$$-modes and the $$\mathcal {H}^{\mathfrak{c}}$$-modes can be combined with the use of higher-dimensional reduced phase spaces $$\mathcal{H}^{\mathfrak{c}}$$, so that the energy kept in the high modes gets reduced. The next section shows that a moderate increase of $$\operatorname{dim}(\mathcal{H}^{\mathfrak{c}})$$ can already help improve the performances based on $$h^{(1)}_{\lambda}$$, in the case of locally distributed control laws.

## Synthesis of m-Dimensional Locally Distributed Suboptimal Controllers

In this last section, we consider the more challenging case of optimal locally distributed control problems associated with the Burgers-type equation (5.1). This situation corresponds to the case where the linear operator $$\mathfrak{C}$$ is associated with the characteristic function $$\chi_\varOmega$$ of a subdomain Ω⊂[0,l], so that for any $$u \in\mathcal{H} = L^{2}(0,l)$$, the action of $$\mathfrak{C}$$ on u is defined by:

$$\mathfrak{C} u(x) = \chi_\varOmega(x) u(x), \quad \forall x \in[0,l].$$
(7.1)

As in the fully distributed case of the previous sections, we consider, for some prescribed (time-independent) target Y, cost functionals of terminal payoff type:

$$J^{\mathrm{TP}}(y, u) = \int_0^T \biggl( \frac{1}{2} \bigl\| y(t; y_0, u)\bigr\| ^2 + \frac{\mu_1}{2}\bigl\| u(t)\bigr\| ^2 \biggr) \,\mathrm{d}t + \frac{\mu_2}{2} \bigl\| y(T; y_0, u) - Y\bigr\| ^2,$$
(7.2)

but also cost functionals of tracking type:

$$J^{\mathrm{track}}(y, u) = \int_0^T \biggl( \frac{1}{2} \bigl\| y(t; y_0, u) - Y\bigr\| ^2 + \frac{\mu_1}{2}\bigl\| u(t)\bigr\| ^2 \biggr) \,\mathrm{d}t,$$
(7.3)

where in both cases $$\mu_1$$ and $$\mu_2$$ are positive parameters.

The optimal control problem thus takes one of the following forms:

\begin{aligned} & \min J^{\mathrm{TP}}(y, u) \quad\text{with }J^{\mathrm{TP}}\mbox{ defined in (7.2)} \quad\text{s.t.} \\ &\quad (y, u) \in L^2(0,T; \mathcal{H}) \times L^2(0,T; \mathcal{H})\text{ solves the problem (5.1)--(5.3)}, \end{aligned}
(7.4)

or

\begin{aligned} &\min J^{\mathrm{track}}(y, u) \quad\text{with }J^{\mathrm{track}}\mbox{ defined in (7.3)} \quad\text{s.t.} \\ &\quad (y, u) \in L^2(0,T; \mathcal{H}) \times L^2(0,T; \mathcal{H}) \text{ solves the problem (5.1)--(5.3)}. \end{aligned}
(7.5)

The goal of this last section is to show that the PM approach introduced above provides an efficient way to design suboptimal solutions to such optimal control problems associated with locally distributed control laws. For simplicity, we will focus on the performance achieved by the $$h^{(1)}_{\lambda}$$-based reduced system for the design of such suboptimal solutions; that is, the following m-dimensional reduced system

$$\frac{\mathrm{d}z}{\mathrm{d}t} = L^{\mathfrak{c}}_\lambda z + P_{\mathfrak{c}} B \bigl(z + h^{(1)}_\lambda(z), z + h^{(1)}_\lambda(z) \bigr) + P_{\mathfrak{c}} \chi_{\varOmega} u_{R}(t), \quad{t \in(0, T],}$$
(7.6)

will be at the core of our synthesis of suboptimal controllers.

It is worthwhile to note that, in general, the choice of the reduced dimension m typically depends on the system parameters, such as the viscosity ν, the domain size l, and the control parameter λ; m is chosen so that the resolved modes explain a sufficiently large portion of the energy contained in the PDE solution. For the particular case of locally distributed control laws, the size and location of the subdomain Ω also play a determining role in choosing “a good” m. For instance, the smaller the subdomain Ω, the larger the dimension m needs to be in order to obtain a reduced system useful for the design of good suboptimal controllers. Intuitively, this is related to the fact that further eigenmodes are needed to obtain a reasonably good approximation of the characteristic function $$\chi_\varOmega$$ when the size of the support Ω is reduced. This intuition is numerically confirmed in Sect. 7.3 below, where a reduction of 40 percent of the control domain, compared to the globally distributed case analyzed in Sect. 5.5, led to the choice m=4 for the design of suboptimal controllers with performances comparable to those achieved in Sect. 5.5 from two-dimensional reduced systems.

We now describe the $$h^{(1)}_{\lambda}$$-based reduced optimal control problems that will serve to design the corresponding suboptimal controllers. First, note that the cost functional associated with (7.6) takes one of the following forms

$$J^{\mathrm{TP}}_R(z, u_{R}) = \int _0^T \biggl(\frac{1}{2} \bigl\| z + h^{(1)}_\lambda(z)\bigr\| ^2 + \frac{\mu_1}{2} \|u_{R}\|^2 \biggr) \,\mathrm{d}t + \frac {\mu_2}{2} \bigl\| z(T; z_0, u_{R}) - P_{\mathfrak{c}}Y\bigr\| ^2,$$
(7.7)

or

$$J^{\mathrm{track}}_R(z, u_{R}) = \int _0^T \biggl(\frac{1}{2} \bigl\| z + h^{(1)}_\lambda(z) - Y\bigr\| ^2 + \frac{\mu_1}{2} \|u_{R}\|^2 \biggr)\,\mathrm{d}t,$$
(7.8)

depending on whether (7.2) or (7.3) is considered.

The reduced optimal control problem for (7.4) reads then as follows:

$$\min J^{\mathrm{TP}}_R(z, u_{R}) \quad \text{s.t.}\quad (z, u_{R}) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}} \bigr) \times L^2\bigl(0,T; \mathcal {H}^{\mathfrak{c}}\bigr) \text{ solves (7.6)}.$$
(7.9)

Accordingly, the reduced optimal control problem for (7.5) reads:

$$\min J^{\mathrm{track}}_R(z, u_{R}) \quad \text{s.t.}\quad (z, u_{R}) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}} \bigr) \times L^2\bigl(0,T; \mathcal {H}^{\mathfrak{c}}\bigr) \text{ solves (7.6)}.$$
(7.10)

### Analytic Derivation of m-Dimensional $$h_{\lambda}^{(1)}$$-Based Reduced Systems for the Design of Suboptimal Controllers

In this subsection, we derive explicit forms of the reduced suboptimal control problems (7.9) and (7.10). Details are presented for (7.9), while the analogous derivation for (7.10) is left to the interested reader. For this purpose, let us first examine the existence of the finite-horizon PM candidate $$h^{(1)}_{\lambda}$$. We know from Sect. 3.2 that the pullback limit $$h^{(1)}_{\lambda}$$ associated with the backward–forward system (3.6a), (3.6b) exists when the (NR)-condition holds. For the Burgers equation considered here, due to the nonlinear interaction relations (5.20), the (NR)-condition reads as follows:

$$\forall n > m,\ \forall i \in\{1, \ldots, m\},\quad \bigl(n- i \in\{1, \ldots, m\} \bigr)\Longrightarrow \bigl(\beta_{i}(\lambda) + \beta _{n-i}(\lambda) - \beta_n(\lambda) > 0 \bigr).$$
(7.11)

By using the analytic expression of the eigenvalues as given in (5.11), we get

$$\beta_{i}(\lambda) + \beta_{n-i}(\lambda) - \beta_n(\lambda) = \lambda + \frac{\nu\pi^2 (n^2 - i^2 - (n-i)^2)}{l^2},$$
(7.12)

which is positive for all values of λ of interest here ($$\lambda> {\lambda_{c} := \frac{\nu\pi^{2}}{l^{2}}}$$), since $$n^{2} - i^{2} - (n-i)^{2} = 2i(n-i) > 0$$ for $$1 \le i \le n-1$$. Consequently, the pullback limit $$h^{(1)}_{\lambda}$$ always exists for such given λ, and its analytic form provided in (3.11) reads as follows for the problem considered here:

$$h^{(1)}_\lambda(\xi) = \sum _{n > m} h^{(1),n}_\lambda(\xi) e_n,$$
(7.13)

where

$$\boxed{h^{(1),n}_\lambda(\xi) = \sum _{ \substack{ i_1 + i_2 = n \\ 1\le i_1, i_2 \le m}} \frac{\xi_{i_1} \xi_{i_2}}{\beta_{i_1}(\lambda) + \beta _{i_2}(\lambda) - \beta_n(\lambda)} \bigl\langle B(e_{i_1}, e_{i_2}), e_n \bigr\rangle .}$$
(7.14)

From (7.14), it is clear that $$h^{(1),n}_{\lambda}= 0$$ for all n>2m. Note also that it follows from the nonlinear interaction laws (5.20) that

$$\bigl\langle B(e_{i}, e_{n -i}), e_n \bigr\rangle + \bigl\langle B(e_{n-i}, e_{i}), e_n \bigr\rangle = - n \alpha,$$

where $$\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}$$. By using this identity, we can rewrite $$h^{(1),n}_{\lambda}$$ for n=m+1,…,2m as follows:

$$h^{(1),n}_\lambda(\xi) = \begin{cases} {- n \alpha\sum_{i = n - m}^{(n-1)/2} \frac{\xi_{i} \xi _{n-i}}{\beta_{i}(\lambda) + \beta_{n-i}(\lambda) - \beta _n(\lambda)}}, & \text{if n is odd}, \\ {- \frac{n \alpha}{2} ( \sum_{i = n - m}^{(n-2)/2} \frac{2 \xi_{i} \xi_{n-i}}{\beta_{i}(\lambda) + \beta _{n-i}(\lambda) - \beta_n(\lambda)} + \frac{(\xi_{n/2})^2}{2\beta_{n/2}(\lambda) - \beta _n(\lambda)} ) } , & \text{if n is even}. \\ \end{cases}$$
(7.15)

where we adopt the convention that a sum is zero whenever the lower bound of the summation index exceeds its upper bound.
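The rewriting leading to (7.15) can be checked numerically: since the interaction coefficients enter (7.14) only through the symmetrized sums $$\langle B(e_{i}, e_{n -i}), e_n \rangle + \langle B(e_{n-i}, e_{i}), e_n \rangle = -n\alpha$$, one may split $$-n\alpha/2$$ evenly over each ordered pair in (7.14) for the purpose of the comparison. The sketch below assumes the eigenvalues $$\beta_n(\lambda) = \lambda - \nu(n\pi/l)^2$$ consistent with (5.11); all parameter values are illustrative.

```python
import numpy as np

# Hedged sketch: h^{(1),n} of (7.15) versus the ordered-pair sum (7.14),
# with each <B(e_i, e_j), e_n> replaced by -n*alpha/2 (only the
# symmetrized value -n*alpha matters).  Arrays are 1-based (index 0 unused).
nu, l, lam, alpha, m = 1.0, 2.0*np.pi, 0.5, 0.3, 4

def beta(n):
    return lam - nu*(n*np.pi/l)**2

def h1n(n, xi):
    # (7.15)
    if not (m < n <= 2*m):
        return 0.0
    if n % 2 == 1:
        return -n*alpha*sum(xi[i]*xi[n-i]/(beta(i) + beta(n-i) - beta(n))
                            for i in range(n - m, (n - 1)//2 + 1))
    return -0.5*n*alpha*(sum(2*xi[i]*xi[n-i]/(beta(i) + beta(n-i) - beta(n))
                             for i in range(n - m, (n - 2)//2 + 1))
                         + xi[n//2]**2/(2*beta(n//2) - beta(n)))

def h1n_direct(n, xi):
    # ordered-pair form of (7.14) with -n*alpha/2 per pair
    return sum(xi[i]*xi[n-i]*(-0.5*n*alpha)/(beta(i) + beta(n-i) - beta(n))
               for i in range(max(1, n - m), min(m, n - 1) + 1))
```

Evaluating both forms at random low-mode amplitudes for n = m+1, …, 2m gives matching values.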

Let us denote by M the matrix whose components are given by

$$M(i,j) := \langle{\chi_\varOmega} e_i, e_j \rangle,\quad1 \le i,j \le m.$$
(7.16)

Let us also introduce

$$v_{R}(t):=M^{\mathrm{tr}} u_{R}(t).$$
(7.17)
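The entries of M can be assembled by simple quadrature. The sketch below assumes the sine eigenbasis $$e_n(x) = \sqrt{2/l}\,\sin(n\pi x/l)$$ (consistent with the form of α in (6.6)) and an illustrative subdomain Ω = [0.2 l, 0.8 l]; both are assumptions for the example, not values taken from the paper.

```python
import numpy as np

# Hedged sketch: assemble M(i,j) = <chi_Omega e_i, e_j> of (7.16) by
# trapezoidal quadrature, assuming e_n(x) = sqrt(2/l)*sin(n*pi*x/l).
l, m = 2.0*np.pi, 4
x_a, x_b = 0.2*l, 0.8*l                        # illustrative Omega = [x_a, x_b]

x = np.linspace(0.0, l, 4001)
w = np.full(x.size, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
chi = ((x >= x_a) & (x <= x_b)).astype(float)  # characteristic function of Omega
e = np.array([np.sqrt(2.0/l)*np.sin(n*np.pi*x/l) for n in range(1, m + 1)])

M = (e*(chi*w)) @ e.T          # M[i-1, j-1] = <chi_Omega e_i, e_j>
```

With Ω = [0, l] this matrix reduces to the identity, recovering the globally distributed case; the resulting M is what enters $$v_R = M^{\mathrm{tr}} u_R$$ in (7.17).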

By rewriting the reduced system (7.6) as

$$\frac{\mathrm{d}z_i}{\mathrm{d}t} = \beta_i(\lambda) z_i + \bigl\langle B \bigl(z + h^{(1)}_\lambda(z), z + h^{(1)}_\lambda(z) \bigr), e_i \bigr\rangle + v_{R,i}(t), \quad{t \in(0,T ]}, \ i = 1, \ldots, m,$$
(7.18)

and by using the expansions

$$z = \sum_{i=1}^m z_i e_i, \qquad h^{(1)}_\lambda(z) = \sum _{n = m+1}^{2m} h^{(1),n}_\lambda(z)e_n,$$

along with the nonlinear interaction relations (5.20), the above system of equations becomes:

\begin{aligned} \boxed{ \begin{array}{rcl} \displaystyle \frac{\mathrm{d}z_i}{\mathrm{d}t} & =& \beta_i(\lambda) z_i + \overbrace{i \alpha \Biggl( - \sum_{j = 1}^{\lfloor i/2 \rfloor} \omega _{i,j} z_j z_{i-j} + \sum_{j = i+1}^m z_j z_{j-i} \Biggr)}^{(\mathbf{a})} + \overbrace{ i \alpha\sum_{j = m-i+1}^{m} z_j h_{\lambda }^{(1),j+i}(z)}^{(\mathbf{b})}\\ &&{} + \underbrace{i \alpha\sum_{n = m+1}^{2m-i} h_{\lambda}^{(1),n}(z) h_{\lambda}^{(1),n+i}(z)}_{(\mathbf{c})} + v_{R,i}(t), \quad {t \in(0,T]}, \ i = 1, \ldots, m, \end{array} } \end{aligned}
(7.19)

where ⌊x⌋ denotes the largest integer less than or equal to x; $$h_{\lambda}^{(1),n}$$ is provided by (7.15); and the coefficients $$\omega_{i,j}$$ are given by

$$\omega_{i,j} := \begin{cases} 1, & \text{if i is odd, or if i is even and }j \neq i/2, \\ 1/2, & \text{if i is even and }j = i/2. \end{cases}$$

In the above system, the terms gathered in (a) correspond to the self-interactions between the low modes, $$\langle B(z,z), e_i \rangle$$; the terms gathered in (b) correspond to the cross-interactions between the low and (unresolved) high modes as parameterized by $$h_{\lambda}^{(1)}$$, $$\langle B(z, h^{(1)}_{\lambda}(z)), e_{i} \rangle+ \langle B(h^{(1)}_{\lambda}(z),z), e_{i} \rangle$$; and the terms gathered in (c) correspond to the self-interactions between the high modes (again as parameterized by $$h_{\lambda}^{(1)}$$) projected onto $$\mathcal{H}^{\mathfrak{c}}$$, $$\langle B(h^{(1)}_{\lambda}(z), h^{(1)}_{\lambda}(z)), e_{i} \rangle$$.

Note that in the case m=2, the system (7.19) takes the same functional form as the $$h^{(1)}_{\lambda}$$-based reduced system (5.27) derived in Sect. 5.2 for the globally distributed control case; only the matrices given in (5.16) and (7.16) differ. We refer again to Appendix B for an analysis of the Cauchy problem associated with (7.19), leaving the generalization to the m-dimensional case to the interested reader.
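To make the bookkeeping in (7.19) concrete, the sketch below codes the full right-hand side (terms (a), (b), and (c)) for arbitrary m, with $$h^{(1),n}_{\lambda}$$ given by (7.15) and eigenvalues $$\beta_n(\lambda) = \lambda - \nu(n\pi/l)^2$$ assumed from (5.11); all parameter values are illustrative. Arrays are kept 1-based (index 0 unused) to match the paper's indexing.

```python
import numpy as np

# Hedged sketch of the reduced vector field (7.19); illustrative data only.
nu, l, lam, alpha = 1.0, 2.0*np.pi, 0.5, 0.3

def beta(n):
    return lam - nu*(n*np.pi/l)**2

def h1n(n, z, m):
    # (7.15); z is 1-based (z[0] unused)
    if not (m < n <= 2*m):
        return 0.0
    if n % 2 == 1:
        return -n*alpha*sum(z[i]*z[n-i]/(beta(i) + beta(n-i) - beta(n))
                            for i in range(n - m, (n - 1)//2 + 1))
    return -0.5*n*alpha*(sum(2*z[i]*z[n-i]/(beta(i) + beta(n-i) - beta(n))
                             for i in range(n - m, (n - 2)//2 + 1))
                         + z[n//2]**2/(2*beta(n//2) - beta(n)))

def reduced_rhs(z, v, m):
    # z, v: 1-based arrays of length m+1; returns the 1-based dz/dt of (7.19)
    h = {n: h1n(n, z, m) for n in range(m + 1, 2*m + 1)}
    dz = np.zeros(m + 1)
    for i in range(1, m + 1):
        a_term = (-sum((0.5 if (i % 2 == 0 and j == i//2) else 1.0)*z[j]*z[i-j]
                       for j in range(1, i//2 + 1))                  # (a), 1st sum
                  + sum(z[j]*z[j-i] for j in range(i + 1, m + 1)))   # (a), 2nd sum
        b_term = sum(z[j]*h[j+i] for j in range(m - i + 1, m + 1))   # (b)
        c_term = sum(h[n]*h[n+i] for n in range(m + 1, 2*m - i + 1)) # (c)
        dz[i] = beta(i)*z[i] + i*alpha*(a_term + b_term + c_term) + v[i]
    return dz
```

For m = 2, this collapses to the two-dimensional form of the globally distributed case (dz₁/dt = β₁z₁ + α(z₁z₂ + z₂h³ + h³h⁴) + v₁, dz₂/dt = β₂z₂ − αz₁² + 2α(z₁h³ + z₂h⁴) + v₂ with h³ = h^{(1),3}, h⁴ = h^{(1),4}), which provides a direct cross-check of the index gymnastics.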

### Synthesis of m-Dimensional Locally Distributed Suboptimal Controllers

We apply once more the Pontryagin maximum principle to derive boundary value problems to be satisfied by an $$h^{(1)}_{\lambda}$$-based suboptimal controller. We focus again on the terminal-payoff case (7.9), and indicate the necessary changes for the tracking-type case (7.10) at the end of this subsection.

Let us denote the RHS of (7.19) by f(z,v R ). The Hamiltonian associated with the cost functional (7.7) reads then as follows:

$$H(z, p, u_{R}) := \frac{1}{2} \bigl\| z + h^{(1)}_\lambda(z)\bigr\| ^2 + \frac {\mu _1}{2} \|u_{R}\|^2 + p^{\mathrm{tr}} f(z, v_R)$$
(7.20)

where $$p := (p_1, \ldots, p_m)^{\mathrm{tr}}$$ is the costate and $$v_R = M^{\mathrm{tr}} u_R$$; see (7.17).

Recall also that the terminal payoff, denoted by $$C_{T}(z(T), P_{\mathfrak{c}}Y)$$, reads in this case:

$$C_T\bigl(z(T), P_{\mathfrak{c}} Y\bigr):= \frac{\mu_2}{2} \sum_{i=1}^m \bigl|z_i(T) - Y_i\bigr|^2.$$
(7.21)

It follows from the Pontryagin maximum principle that for a given pair

$$\bigl(z_R^\ast, v_R^\ast\bigr) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \times L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr)$$

to be optimal for the reduced problem (7.9), it must satisfy the following conditions for all i=1,…,m (see e.g. [67, Chap. 5]):

\begin{aligned} & \frac{\mathrm{d}z^\ast_{R}}{\mathrm{d}t} = \nabla_{p}H\bigl(z^\ast _{R}, p^\ast_{R}, {v^\ast_{R}} \bigr) = f\bigl(z^\ast_{R}, v^\ast_R \bigr), \end{aligned}
(7.22a)
\begin{aligned} & \frac{\mathrm{d}p^\ast_{R}}{\mathrm{d}t} = - \nabla_{z}H\bigl(z^\ast _{R}, p^\ast_{R}, {v^\ast_{R}} \bigr)= g\bigl(z^\ast_{R}, p^\ast_{R} \bigr), \end{aligned}
(7.22b)
\begin{aligned} & \nabla_{u_R} H\bigl(z^\ast_{R}, p^\ast_{R}, {v^\ast_{R}}\bigr) = 0, \end{aligned}
(7.22c)
\begin{aligned} & p_{R}^\ast(T) = \nabla_z C_T \bigl(z_R^\ast(T), P_{\mathfrak{c}}Y\bigr), \end{aligned}
(7.22d)

where $$v_{R}^{\ast}=M^{\mathrm{tr}} u_{R}^{\ast}$$; $$p_{R}^{\ast}=\sum_{i=1}^{m} p_{R,i}^{\ast}e_{i}$$ denotes the costate associated with $$z_{R}^{\ast}$$; and the vector field $$(g_1, \ldots, g_m)^{\mathrm{tr}}$$ is defined by

\begin{aligned} &g_i(z, p) := - { \frac{\partial H (z,p,v_R)}{\partial z_i}}= - z_i - \sum_{n=m+1}^{2m} h_\lambda^{(1),n}(z)\frac{\partial h_\lambda ^{(1),n}(z)}{\partial z_i} - \sum _{j = 1}^m p_j \frac{\partial f_j(z,v_R)}{\partial z_i}, \\ &\quad i = 1, \ldots, m. \end{aligned}
(7.23)

Here the partial derivatives $$\frac{\partial h_{\lambda}^{(1),n}(z)}{\partial z_{i}}$$ can be obtained by using the expression of $$h_{\lambda}^{(1),n}$$ given in (7.15) which leads to

$$\frac{\partial h_\lambda^{(1),n}(z)}{\partial z_i} = \begin{cases} \frac{-j\alpha z_{n-i}}{\displaystyle\beta_i(\lambda) + \beta_{n-i}(\lambda) - \beta_n(\lambda)}, & \text{if }n \in\{m+1, \ldots, 2m\}\mbox{ and }i \in\{n-m, \ldots, m\}, \\ 0, & \text{otherwise}. \end{cases}$$
(7.24)

The formula for $$\frac{\partial f_{j}(z,v_{R})}{\partial z_{i}}$$ is obtained by taking the corresponding partial derivative of the RHS of (7.19), from which we obtain after simplifications

$$\frac{\partial f_j(z,v_R)}{\partial z_i} = \beta_{j}(\lambda) \delta _{ij} + j \alpha\bigl( I_{j,i}^a + I_{j,i}^b + I_{j,i}^c \bigr),$$
(7.25)

where $$\delta_{ij}$$ denotes the Kronecker delta, and $$I_{j,i}^{a}$$, $$I_{j,i}^{b}$$ and $$I_{j,i}^{c}$$ are given by

\begin{aligned} I^a_{j,i} =& \frac{\partial}{\partial z_i} \Biggl( - \sum _{k = 1}^{\lfloor j/2 \rfloor} \omega_{j,k} z_k z_{j-k} + \sum_{k = j+1}^m z_k z_{k-j} \Biggr) = \begin{cases} z_{i-j}, & \text{if }i > j, \\ z_{i+j}, & \text{if }i = j\mbox{ and }i+j\le m, \\ z_{i+j} - z_{j-i}, & \text{if }i < j\mbox{ and }i+j\le m, \\ -z_{j-i}, & \text{if }i < j\mbox{ and }i+j > m, \\ 0, & \text{otherwise}; \end{cases} \end{aligned}
(7.26)
\begin{aligned} I^b_{j,i} =& \frac{\partial}{\partial z_i} \Biggl( \sum _{k = m-j+1}^{m} z_k h_{\lambda}^{(1),k+j}(z) \Biggr) = \begin{cases} h_\lambda^{(1),i+j} + \sum_{k = m-j+1}^{m} z_k \frac{\partial h_{\lambda }^{(1),k+j}(z)}{\partial z_i}, & \text{if }i + j > m,\\ \sum_{k = m-j+1}^{m} z_k \frac{\partial h_{\lambda }^{(1),k+j}(z)}{\partial z_i}, & \text{if }i + j \le m; \end{cases} \end{aligned}
(7.27)

and

\begin{aligned} I^c_{j,i} =& \frac{\partial}{\partial z_i} \Biggl( \sum _{n = m+1}^{2m-j} h_{\lambda}^{(1),n}(z) h_{\lambda}^{(1),n+j}(z) \Biggr) \\ =& \sum_{n = m+1}^{2m-j} \biggl( \frac{\partial h_{\lambda }^{(1),n}(z)}{\partial z_i} h_{\lambda}^{(1),n+j}(z) + h_{\lambda}^{(1),n}(z) \frac{\partial h_{\lambda }^{(1),n+j}(z)}{\partial z_i} \biggr). \end{aligned}
(7.28)
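
Analytic Jacobian formulas such as (7.24)–(7.28) are tedious to derive and easy to get wrong in an implementation; a standard safeguard is to compare the analytic derivatives against central finite differences at a few test points. A minimal sketch of this check, using a small quadratic vector field as a hypothetical stand-in for the RHS of (7.19):

```python
import numpy as np

def fd_check(f, jac, z, eps=1e-6):
    """Max entrywise gap between the analytic Jacobian jac(z) of
    f: R^m -> R^m and a central finite-difference approximation at z."""
    m = z.size
    J_fd = np.zeros((m, m))
    for i in range(m):
        dz = np.zeros(m)
        dz[i] = eps
        J_fd[:, i] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return np.max(np.abs(jac(z) - J_fd))

# Illustrative quadratic vector field (not the actual reduced system):
# f_j(z) = beta_j z_j + z_j z_{(j+1) mod m}
beta = np.array([-0.5, -1.0, -2.0])
f = lambda z: beta * z + z * np.roll(z, -1)

def jac(z):
    # Analytic Jacobian of f above, written by hand as in (7.25)
    m = z.size
    J = np.diag(beta + np.roll(z, -1))
    for j in range(m):
        J[j, (j + 1) % m] += z[j]
    return J

err = fd_check(f, jac, np.array([0.3, -0.2, 0.1]))
```

For a quadratic vector field the central difference is exact up to rounding, so `err` should be at machine-precision level; for the actual formulas (7.25)–(7.28) the same check applies at any test point z.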

We derive next a relation between $$u_{R}^{\ast}$$ and $$p_{R}^{\ast}$$, which when used in (7.22a)–(7.22d) leads to a BVP for $$(z_{R}^{\ast}, p_{R}^{\ast})$$ to be solved in order to find $$u_{R}^{\ast}$$. To this end, note that from the expression of the Hamiltonian H given in (7.20), we obtain the following expression of $$\nabla _{u_{R}} H(z^{\ast}_{R}, p^{\ast}_{R}, u^{\ast}_{R})$$, which written component-wise, gives:

\begin{aligned} \frac{\partial H}{\partial u_{R,i}}\bigl(z^\ast_{R}, p^\ast_{R}, u^\ast_{R}\bigr) =& \mu_1 u_{R,i}^\ast+ \sum_{j=1}^m p^\ast_{R,j} \frac{\partial f_j}{\partial u_{R,i}}\bigl(z_R^\ast, M^{\mathrm{tr}}u_{R}^\ast \bigr) \\ =& \mu_1 u_{R,i}^\ast+ \sum _{j=1}^m p^\ast_{R,j} M(i,j), \quad i \in\{1, \ldots, m\}. \end{aligned}

The first-order optimality condition (7.22c) leads to

$$u_{R}^\ast= - \frac{1}{\mu_1} M p_R^\ast,$$
(7.29)

where M is given by (7.16).
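
Since the Hamiltonian (7.20) is quadratic in $$u_R$$ with $$\mu_1 > 0$$, the stationarity condition (7.22c) indeed selects the minimizer of the u-dependent part of H. This can be verified numerically; in the sketch below a random matrix stands in for the matrix M of (7.16):

```python
import numpy as np

rng = np.random.default_rng(0)
m, mu1 = 4, 1.0
M = rng.standard_normal((m, m))  # placeholder for the matrix M of (7.16)
p = rng.standard_normal(m)       # a fixed value of the costate

# u-dependent part of the Hamiltonian (7.20): mu1/2 |u|^2 + p . (M^tr u)
phi = lambda u: 0.5 * mu1 * (u @ u) + p @ (M.T @ u)

# Candidate minimizer from (7.29): grad phi = mu1 u + M p = 0
u_star = -(1.0 / mu1) * (M @ p)
grad_norm = np.linalg.norm(mu1 * u_star + M @ p)

# phi is strictly convex in u, so u_star beats any perturbation of it
vals = [phi(u_star + 1e-3 * rng.standard_normal(m)) for _ in range(100)]
```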

It follows then that the controller $$v_{R}^{\ast}$$ in (7.22a) takes the form:

$$v^\ast_R = M^{\mathrm{tr}} u_{R}^\ast= - \frac{1}{\mu_1} M^{\mathrm{tr}} M p_R^\ast.$$
(7.30)

To summarize, corresponding to the $$h^{(1)}_{\lambda}$$-based reduced optimal control problem (7.9), we have derived the following BVP to be satisfied by the optimal trajectory $$z_{R}^{\ast}$$ and its costate $$p_{R}^{\ast}$$:

\begin{aligned} & \frac{\mathrm{d}z^\ast_{R,i}}{\mathrm{d}t} = f_i \bigl( z^\ast _{R}, v^\ast_R \bigr), \quad{t \in(0, T]}, \end{aligned}
(7.31a)
\begin{aligned} & \frac{\mathrm{d}p^\ast_{R,i}}{\mathrm{d}t} = g_i\bigl(z^\ast_{R}, p^\ast_{R}\bigr), \quad {t \in(0, T]}, \end{aligned}
(7.31b)
\begin{aligned} & z^\ast_{R,i}(0) = y_{0,i}, \qquad p^\ast_{R,i}(T) = \mu_2 \bigl(z^\ast _{R_i}(T) - Y_{i}\bigr), \quad i = 1, \ldots, m, \end{aligned}
(7.31c)

where $$v^{\ast}_{R}$$ is given by (7.30), $$y_{0,i}$$ is the projection of the initial datum $$y_0$$ for the underlying PDE (5.1) onto $$e_i$$, and the boundary condition for $$p^{\ast}_{R}$$ is derived from the terminal condition (7.22d) by using the expression of the terminal payoff $$C_T$$ given in (7.21). Once (7.31a)–(7.31c) is solved, the m-dimensional controller $$u_{R}^{\ast}$$ given by (7.29) constitutes our $$h^{(1)}_{\lambda}$$-based suboptimal controller for the optimal control problem (7.4). Note that $$u_{R}^{\ast}$$ synthesized this way turns out to be the unique optimal controller for the reduced problem (7.9), for the same reasons pointed out in Sect. 5.3.
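
Two-point BVPs of the form (7.31a)–(7.31c) are typically solved by collocation; this article uses Matlab's bvp4c (cf. footnote 12). As an illustration of the state/costate structure only, and not of the actual m-dimensional nonlinear system (7.19), the following sketch solves a scalar linear-quadratic analogue with SciPy's solve_bvp, reusing the values $$\mu_1=1$$, $$\mu_2=20$$, T=3 from the experiments reported below; the toy dynamics dz/dt = az + v is an assumption for illustration:

```python
import numpy as np
from scipy.integrate import solve_bvp

a, mu1, mu2, T, z0, Y = -1.0, 1.0, 20.0, 3.0, 1.0, 0.2

def rhs(t, zp):
    z, p = zp
    u = -p / mu1                    # first-order condition, as in (7.29)
    return np.vstack((a * z + u,    # state equation, cf. (7.31a)
                      -z - a * p))  # costate equation, cf. (7.31b)

def bc(zp0, zpT):
    # z(0) = z0 and p(T) = mu2 (z(T) - Y), cf. (7.31c)
    return np.array([zp0[0] - z0, zpT[1] - mu2 * (zpT[0] - Y)])

t = np.linspace(0.0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
u_star = -sol.sol(t)[1] / mu1  # suboptimal control along the trajectory
```

The same indirect structure carries over to (7.31a)–(7.31c): the control is eliminated via (7.29), and the resulting coupled state/costate system is solved with the mixed initial/terminal boundary conditions.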

The corresponding BVP associated with the reduced optimal control problem (7.10) can be derived in the same fashion; and we indicate below the necessary changes. In this case, the Hamiltonian associated with the cost functional (7.8) reads:

$$\widetilde{H}(z, p, u_{R}) := \frac{1}{2} \bigl\| z + h^{(1)}_\lambda(z) - Y\bigr\| ^2 + \frac{\mu_1}{2} \|u_{R}\|^2 + p^{\mathrm{tr}} f(z, v_R).$$
(7.32)

The associated BVP reads:

\begin{aligned} & \frac{\mathrm{d}z^\ast_{R,i}}{\mathrm{d}t} = f_i \bigl( z^\ast _{R}, v^\ast_R \bigr), \quad{t \in(0, T]}, \end{aligned}
(7.33a)
\begin{aligned} & \frac{\mathrm{d}p^\ast_{R,i}}{\mathrm{d}t} = \widetilde {g}_i\bigl(z^\ast_{R}, p^\ast _{R}\bigr), \quad{t \in(0, T]}, \end{aligned}
(7.33b)
\begin{aligned} & z^\ast_{R,i}(0) = y_{0,i}, \qquad p^\ast_{R,i}(T) = 0, \quad i = 1, \ldots, m, \end{aligned}
(7.33c)

where $$f(z, v_R)$$ denotes the RHS of (7.19), $$v^{\ast}_{R}$$ is still given by (7.30), but in contrast to g i given by (5.30), the components $$\widetilde{g}_{i}$$ of the vector field involved in the RHS of the p-equations of (7.33a)–(7.33c) are now given by

\begin{aligned} &\widetilde{g}_i(z, p) := - \frac{\partial\widetilde{H}}{\partial z_{i}} = - (z_i - Y_i) - \sum _{n=m+1}^{2m} \bigl(h_\lambda^{(1),n}(z) - Y_n\bigr) \frac{\partial h_\lambda^{(1),n}(z)}{\partial z_i} - \sum_{j = 1}^m p_j \frac{\partial f_j(z,v_R)}{\partial z_i}, \\ & \quad i = 1, \ldots, m. \end{aligned}
(7.34)

Once the above BVP (7.33a)–(7.33c) is solved, we take $$u_{R}^{\ast}$$ given by (7.29) with $$p^{\ast}_{R}$$ obtained from (7.33a)–(7.33c) as the $$h^{(1)}_{\lambda}$$-based suboptimal controller for the optimal control problem (7.5).

### Control Performances: Numerical Results

To assess the ability of the $$h^{(1)}_{\lambda}$$-based reduced optimal control problems (7.9) and (7.10) to synthesize suboptimal controllers of good performance for the optimal control problems (7.4) and (7.5), respectively, we consider the case where the characteristic function $$\chi_\Omega$$ is supported on the subdomain $$\Omega=[0.2l, 0.8l]$$, and the target is taken to be the profile Y used in (5.39) for the experiments of Sect. 5.5. As pointed out prior to Sect. 7.1, to achieve performances comparable to those of Sect. 5.5, it turned out that four-dimensional $$h^{(1)}_{\lambda}$$-based reduced systems were required for the design of suboptimal controllers, instead of the two-dimensional reduced systems of Sect. 5.5. As explained above, this increase of the dimension of the resolved subspace $$\mathcal{H}^{\mathfrak{c}}$$ results from the spatial localization of the controller dealt with here.

Figures 7 and 8 show the performances achieved by the resulting four-dimensional $$h^{(1)}_{\lambda}$$-based suboptimal controllers, corresponding to the cost functional of terminal-payoff type (7.2). The left panel of Fig. 7 shows the PDE solution field driven by the corresponding suboptimal controller field shown on the right panel of the same figure. The left panel of Fig. 8 shows the final-time solution profile, while the right panel shows the corresponding parameterization defect associated with $$h^{(1)}_{\lambda}$$. The corresponding cost value and relative $$L^2$$-error of the final-time solution profile compared with the target are given by

$$J^{\mathrm{TP}}\bigl(y\bigl(\cdot; y_0, u_R^\ast \bigr), u_R^\ast\bigr) = 1.49, \qquad \frac {\|y(T; y_0, u_R^\ast) - Y\|}{\|Y\|} = 9.52~\%.$$
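
For reference, relative $$L^2$$-errors such as the one above are computed from grid values of the final-time profile. A minimal sketch with hypothetical grid data (the profiles below are illustrative placeholders, not the actual PDE solution or target):

```python
import numpy as np

l = 1.3 * np.pi
x = np.linspace(0.0, l, 129)
dx = x[1] - x[0]

# Hypothetical final-time profile and target on the grid
s = np.sin(np.pi * x / l)
yT, Y = 0.95 * s, s

l2 = lambda v: np.sqrt(dx * np.sum(v * v))  # discrete L^2 norm on [0, l]
rel_err = l2(yT - Y) / l2(Y)                # 0.05 for these profiles
```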

As a comparison, using an m-dimensional Galerkin-based reduced system with m=16 to design suboptimal solutions to (7.4), the corresponding cost value and relative $$L^2$$-error are given by

$$J^{\mathrm{TP}}\bigl(y\bigl(\cdot; y_0, \widetilde{u}_G^\ast \bigr), \widetilde {u}_G^\ast\bigr) = 1.37, \qquad \frac{\|y(T; y_0, \widetilde{u}_G^\ast) - Y\| }{\|Y\|} = 6.68~\%.$$

The above numerical results thus indicate that the 4-dimensional $$h^{(1)}_{\lambda}$$-based reduced problem (7.9) can be used to design a very good suboptimal controller (for the prescribed target Y given by (5.39)) for the optimal control problem (7.4), with performance comparable to that of the (more standard) higher-dimensional Galerkin-based reduced systems. This success goes with the relatively small parameterization defect, as well as with the relatively small energy kept in the high modes (not shown); see the right panel of Fig. 8. Note that for these experiments, the system parameters are chosen to be $$l=1.3\pi$$, $$\lambda=7\lambda_c$$, $$\nu=0.25$$, $$\gamma=2.5$$, while the final time is taken to be T=3. The parameters $$\mu_1$$ and $$\mu_2$$ in the cost functional (7.2) are taken to be $$\mu_1=1$$ and $$\mu_2=20$$. The initial datum is a scaled version of the corresponding positive steady state $$y_+$$ of the uncontrolled PDE, namely $$y_0=0.5y_+$$.

The performances of the 4-dimensional $$h^{(1)}_{\lambda}$$-based suboptimal controller for (7.10), associated with the cost functional of tracking type (7.3), are illustrated in Figs. 9 and 10. The experimental conditions are here chosen to be: $$l=1.3\pi$$, $$\lambda=3\lambda_c$$, $$\nu=0.2$$, $$\gamma=2.5$$, while the final time is still taken to be T=3. The parameter $$\mu_1$$ in the cost functional (7.3) is taken to be $$\mu_1=0.02$$ and the initial datum is $$y_0=0.8y_+$$.

For these experiments, the corresponding cost value and relative $$L^2$$-error are given by

$$J^{\mathrm{track}}\bigl(y\bigl(\cdot; y_0, u_R^\ast \bigr), u_R^\ast\bigr) = 0.032, \qquad \frac{\|y(T; y_0, u_R^\ast) - Y\|}{\|Y\|} = 12.32~\%.$$

For a high-dimensional Galerkin-based reduced problem with m=16, the corresponding cost value and relative $$L^2$$-error are given by

$$J^{\mathrm{track}}\bigl(y\bigl(\cdot; y_0, \widetilde{u}_G^\ast \bigr), \widetilde {u}_G^\ast\bigr) = 0.025, \qquad \frac{\|y(T; y_0, \widetilde{u}_G^\ast) - Y\|}{\|Y\|} = 10.86~\%.$$

Here again, a fairly good performance of the suboptimal controller (see Footnote 20), as synthesized by solving the 4-dimensional $$h^{(1)}_{\lambda}$$-based reduced problem (7.10), is achieved. Due to the deterioration of the parameterization defect of $$h^{(1)}_{\lambda}$$, which can be observed by comparing the right panel of Fig. 10 with the right panel of Fig. 8, the error estimate (4.10) suggests that such a success must come with a noticeable reduction of the energy contained in the high modes of the PDE solution driven by the suboptimal controller synthesized for (7.10), compared to the PDE solution driven by the suboptimal controller synthesized for (7.9). This theoretical prediction, based on Corollary 2, is indeed confirmed empirically by the numerical values of these high-mode energies (not shown).

Finally, it is worth mentioning that, similar to the globally distributed case, the performances of the $$h^{(1)}_{\lambda}$$-based reduced systems and the associated parameterization defects of $$h^{(1)}_{\lambda}$$ depend on the target and the length of the time horizon; cf. Figs. 2, 5 and 6. The dependence on the PDE initial datum also turned out to be an important factor. In particular, it has been observed for both problems (7.4) and (7.5) that the parameterization defects deteriorate as the scaling factor δ used in the construction of the initial datum $$y_0=\delta y_+$$ increases. Based on the results of Sect. 6 for the globally distributed case, it can reasonably be expected that PM functions such as $$h^{(2)}_{\lambda}$$, which bring higher-order terms compared to $$h^{(1)}_{\lambda}$$ (cf. Theorem 2), can achieve better performance for a broader range of initial data and target profiles, the parameterization defects being reasonably expected to become smaller.

## Notes

1. See also [79, Chap. 5] and  for the use of singular perturbation techniques for optimal control of PDEs.

2. Depending on the problem at hand; see e.g. .

3. In particular, nonlinearities including a loss of regularity compared to the ambient space $$\mathcal{H}$$ are allowed; see e.g. Sect. 5 below.

4. We refer to Sects. 5–7 for other types of cost functionals including a terminal cost.

5. Mainly in a stochastic context; see however [27, Sect. 4.5] for the deterministic setting.

6. In particular, the reduction techniques developed in this article should not be confused with the reduction techniques based on slow manifold theory, which have been used to deal with the reduction of optimal control problems arising in slow-fast systems, where the separation of the dynamics holds in time rather than in space; see e.g. [69, 77, 87]. Furthermore, unlike slow manifolds, the finite-horizon PMs considered in this article are not invariant for the dynamics. On the contrary, they correspond to manifolds about which the dynamics wanders, within some margin whose size (in a mean-square sense) is strictly smaller than the energy unexplained by the $$\mathcal{H}^{\mathfrak{c}}$$-modes.

7. Over the time interval [0,t].

8. Equation (1.2) corresponds to a deterministic situation dealt with in  by setting the noise amplitude to zero.

9. So that $$h(z_{R}^{\ast})$$ is a good approximation of the high-mode projection $$P_{\mathfrak{s}}y^{\ast}$$.

10. Note that in practice, although the second-order optimality condition (4.4) is difficult to check, error estimates such as (4.10) still demonstrate their relevance for the performance analysis; see Sect. 5.5.

11. In the sense recalled in (5.10) below.

12. See  for more details about bvp4c. We also mention that all the numerical experiments performed in this article have been carried out using Matlab version 7.13.0.564 (R2011b).

13. As approximated from the 16-dimensional Galerkin-based reduced optimal problem (A.10).

14. Note that, given a suboptimal controller, the computation of the parameterization defects here and in later sections has been performed by integrating the discrete form (5.37) of (5.1), and by using the formula (3.5), where the $$H^1$$-norm has been used in place of the $$\|\cdot\|_{\alpha}$$-norm; see Definition 1 and Sect. 5.1 for the functional spaces defined in (5.6).

15. In contrast to the indirect method adopted above, BOCOP uses a direct method combining discretization and interior-point methods to solve the reduced optimal control problem (5.19), as implemented in the solver IPOPT; see the webpage http://bocop.org for more information.

16. Using the symbols introduced here, $$h^{(1)}_{\lambda}(\xi_{1},\xi_{2}) = \boldsymbol{A} \xi_{1} \xi_{2} e_{3} + \boldsymbol{E} (\xi_{2})^{2} e_{4}$$ from (5.22).

17. Using this analytic formulation, we mention that the Cauchy problem for (6.17) can be dealt with by carrying out similar (but more tedious) energy estimates as presented in Appendix B for the two-dimensional $$h^{(1)}_{\lambda}$$-based reduced system (5.27).

18. The sufficient part is again due to the fact that the cost functional (6.14) is quadratic in $$u_R$$ and the dependence on the controller is affine for the system of Eqs. (6.17); see e.g. [67, Sect. 5.3] and .

19. Here, 4 significant digits of the cost J are ensured with m=16 by comparing with cost values associated with higher-dimensional suboptimal controllers synthesized from (A.10).

20. For the optimal control problem (7.5).

21. For any T>0, a given continuous function $$\mathbf{z}: [0, T] \rightarrow \mathbb{R}^{2}$$ is called a mild solution to the reduced system (5.27) if it satisfies the corresponding integral form of the system: $$\mathbf{z}(t) = \mathbf{z}(0) + \int_{0}^{t} \mathbf{F}(s,\mathbf{z}(s))\, \mathrm{d}s$$ for all t∈[0,T], where $$\mathbf{z}:=(z_1,z_2)^{\mathrm{tr}}$$ and F denotes the RHS of (5.27).

## References

1. Abergel, F., Temam, R.: On some control problems in fluid mechanics. Theor. Comput. Fluid Dyn. 1, 303–325 (1990)

2. Amann, H.: Ordinary Differential Equations: An Introduction to Nonlinear Analysis. De Gruyter Studies in Mathematics, vol. 13. Walter de Gruyter & Co., Berlin (1990)

3. Armaou, A., Christofides, P.D.: Feedback control of the Kuramoto–Sivashinsky equation. Physica D 137(1-2), 49–61 (2000)

4. Armaou, A., Christofides, P.D.: Dynamic optimization of dissipative PDE systems using nonlinear order reduction. Chem. Eng. Sci. 57(24), 5083–5114 (2002)

5. Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Classics in Applied Mathematics, vol. 13. SIAM, Philadelphia (1995)

6. Atwell, J.A., King, B.B.: Proper orthogonal decomposition for reduced basis feedback controllers for parabolic equations. Math. Comput. Model. 33, 1–19 (2001)

7. Baker, J., Armaou, A., Christofides, P.D.: Nonlinear control of incompressible fluid flow: application to Burgers’ equation and 2D channel flow. J. Math. Anal. Appl. 252, 230–255 (2000)

8. Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Springer, Berlin (2008)

9. Beeler, S.C., Tran, H.T., Banks, H.T.: Feedback control methodologies for nonlinear systems. J. Optim. Theory Appl. 107(1), 1–33 (2000)

10. Bensoussan, A., Da Prato, G., Delfour, M.C., Mitter, S.K.: Representation and Control of Infinite Dimensional Systems. Springer, Berlin (2007)

11. Berestycki, H., Kamin, S., Sivashinsky, G.: Metastability in a flame front evolution equation. Interfaces Free Bound. 3(4), 361–392 (2001)

12. Bergmann, M., Cordier, L.: Optimal control of the cylinder wake in the laminar regime by trust-region methods and POD reduced-order models. J. Comput. Phys. 227(16), 7813–7840 (2008)

13. Betts, J.T.: Survey of numerical methods for trajectory optimization. J. Guid. Control Dyn. 21(2), 193–207 (1998)

14. Betts, J.T.: Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, 2nd edn. Advances in Design and Control, vol. 19. SIAM, Philadelphia (2010)

15. Bewley, T.R., Moin, P., Temam, R.: DNS-based predictive control of turbulence: an optimal benchmark for feedback algorithms. J. Fluid Mech. 447, 179–225 (2001)

16. Bewley, T.R., Temam, R., Ziane, M.: A general framework for robust control in fluid mechanics. Physica D 138(3), 360–392 (2000)

17. Bonnans, F.J., Martinon, P., Grélard, V.: Bocop—a collection of examples. Tech. Rep. RR-8053, INRIA (2012). http://hal.inria.fr/hal-00726992

18. Bonnard, B., Chyba, M.: Singular Trajectories and Their Role in Control Theory. Mathématiques & Applications (Berlin), vol. 40. Springer, Berlin (2003)

19. Bonnard, B., Faubourg, L., Trélat, E.: Mécanique Céleste et Contrôle des Véhicules Spatiaux. Mathématiques & Applications (Berlin), vol. 51. Springer, Berlin (2006)

20. Boscain, U., Piccoli, B.: Optimal Syntheses for Control Systems on 2-D Manifolds. Mathématiques & Applications (Berlin), vol. 43. Springer, Berlin (2004)

21. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2011)

22. Brunovský, P.: Controlling the dynamics of scalar reaction diffusion equations by finite-dimensional controllers. In: Modelling and Inverse Problems of Control for Distributed Parameter Systems, Laxenburg, 1989. Lecture Notes in Control and Inform. Sci., vol. 154, pp. 22–27. Springer, Berlin (1991)

23. Bryson, A.E. Jr., Ho, Y.C.: Applied Optimal Control. Hemisphere Publishing Corp., Washington (1975)

24. Cannarsa, P., Tessitore, M.E.: Infinite-dimensional Hamilton–Jacobi equations and Dirichlet boundary control problems of parabolic type. SIAM J. Control Optim. 34(6), 1831–1847 (1996)

25. Carvalho, A.N., Langa, J.A., Robinson, J.C.: Attractors for Infinite-Dimensional Non-autonomous Dynamical Systems. Applied Mathematical Sciences, vol. 182. Springer, New York (2013)

26. Chekroun, M.D., Liu, H., Wang, S.: Approximation of Invariant Manifolds: Stochastic Manifolds for Nonlinear SPDEs I. Springer Briefs in Mathematics. Springer, New York (2014). To appear

27. Chekroun, M.D., Liu, H., Wang, S.: Stochastic Parameterizing Manifolds and Non-Markovian Reduced Equations: Stochastic Manifolds for Nonlinear SPDEs II. Springer Briefs in Mathematics. Springer, New York (2014). To appear

28. Chekroun, M.D., Simonnet, E., Ghil, M.: Stochastic climate dynamics: random attractors and time-dependent invariant measures. Physica D 240(21), 1685–1700 (2011)

29. Chen, C.C., Chang, H.C.: Accelerated disturbance damping of an unknown distributed system by nonlinear feedback. AIChE J. 38(9), 1461–1476 (1992)

30. Choi, H., Temam, R., Moin, P., Kim, J.: Feedback control for unsteady flow and its application to the stochastic Burgers equation. J. Fluid Mech. 253, 509–543 (1993)

31. Christofides, P.D., Armaou, A., Lou, Y., Varshney, A.: Control and Optimization of Multiscale Process Systems. Springer, Berlin (2008)

32. Christofides, P.D., Daoutidis, P.: Nonlinear control of diffusion-convection-reaction processes. Comput. Chem. Eng. 20, S1071–S1076 (1996)

33. Christofides, P.D., Daoutidis, P.: Finite-dimensional control of parabolic PDE systems using approximate inertial manifolds. J. Math. Anal. Appl. 216(2), 398–420 (1997)

34. Constantin, P., Foias, C., Nicolaenko, B., Temam, R.: Integral Manifolds and Inertial Manifolds for Dissipative Partial Differential Equations. Applied Mathematical Sciences, vol. 70. Springer, New York (1989)

35. Crandall, M.G., Ishii, H., Lions, P.L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27(1), 1–67 (1992)

36. Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Burgers equation. Ann. Mat. Pura Appl. 178(1), 143–174 (2000)

37. Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Navier–Stokes equations. Modél. Math. Anal. Numér. 34, 459–475 (2000)

38. Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces, vol. 293. Cambridge University Press, Cambridge (2002)

39. Dacorogna, B.: Direct Methods in the Calculus of Variations, vol. 78. Springer, Berlin (2007)

40. Dedè, L.: Reduced basis method and a posteriori error estimation for parametrized linear-quadratic optimal control problems. SIAM J. Sci. Comput. 32, 997–1019 (2010)

41. Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics, vol. 19. American Mathematical Society, Providence (2010)

42. Eyre, D.J.: Unconditionally gradient stable time marching the Cahn–Hilliard equation. Mater. Res. Soc. Symp. Proc. 529, 39–46 (1998)

43. Fattorini, H.O.: Boundary control systems. SIAM J. Control 6(3), 349–385 (1968)

44. Fattorini, H.O.: Infinite Dimensional Optimization and Control Theory. Encyclopedia of Mathematics and Its Applications, vol. 62. Cambridge University Press, Cambridge (1999)

45. Flandoli, F.: Riccati equation arising in a boundary control problem with distributed parameters. SIAM J. Control Optim. 22(1), 76–86 (1984)

46. Foias, C., Manley, O., Temam, R.: Modelling of the interaction of small and large eddies in two-dimensional turbulent flows. RAIRO. Anal. Numér. 22(1), 93–118 (1988)

47. Foias, C., Sell, G.R., Temam, R.: Inertial manifolds for nonlinear evolutionary equations. J. Differ. Equ. 73(2), 309–353 (1988)

48. Franke, T., Hoppe, R.H.W., Linsenmann, C., Wixforth, A.: Projection based model reduction for optimal design of the time-dependent Stokes system. In: Constrained Optimization and Optimal Control for Partial Differential Equations, pp. 75–98. Springer, Berlin (2012)

49. Fursikov, A.V.: Optimal Control of Distributed Systems: Theory and Applications. Translations of Mathematical Monographs, vol. 187. Am. Math. Soc., Providence (2000)

50. Grepl, M.A., Kärcher, M.: Reduced basis a posteriori error bounds for parametrized linear-quadratic elliptic optimal control problems. C. R. Acad. Sci., Ser. 1 Math. 349(15), 873–877 (2011)

51. Gunzburger, M.: Adjoint equation-based methods for control problems in incompressible, viscous flows. Flow Turbul. Combust. 65(3-4), 249–272 (2000)

52. Gunzburger, M.D.: Sensitivities, adjoints and flow optimization. Int. J. Numer. Methods Fluids 31(1), 53–78 (1999)

53. Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer, Berlin (1981)

54. Hinze, M., Kunisch, K.: On suboptimal control strategies for the Navier–Stokes equations. In: ESAIM: Proceedings, vol. 4, pp. 181–198 (1998)

55. Hinze, M., Kunisch, K.: Three control methods for time-dependent fluid flow. Flow Turbul. Combust. 65, 273–298 (2000)

56. Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE constraints. In: Mathematical Modelling: Theory and Applications, vol. 23. Springer, Berlin (2009)

57. Hinze, M., Volkwein, S.: Proper orthogonal decomposition surrogate models for nonlinear dynamical systems: error estimates and suboptimal control. In: Dimension Reduction of Large-Scale Systems. Lect. Notes Comput. Sci. Eng., vol. 45, pp. 261–306. Springer, Berlin (2005)

58. Holmes, P., Lumley, J.L., Berkooz, G., Rowley, C.W.: Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd edn. Cambridge University Press, Cambridge (2012)

59. Hsia, C.H., Wang, X.: On a Burgers’ type equation. Discrete Contin. Dyn. Syst., Ser. B 6(5), 1121–1139 (2006)

60. Ito, K., Kunisch, K.: Lagrange Multiplier Approach to Variational Problems and Applications, vol. 15. SIAM, Philadelphia (2008)

61. Ito, K., Kunisch, K.: Reduced-order optimal control based on approximate inertial manifolds for nonlinear dynamical systems. SIAM J. Numer. Anal. 46(6), 2867–2891 (2008)

62. Ito, K., Ravindran, S.: Optimal control of thermally convected fluid flows. SIAM J. Sci. Comput. 19(6), 1847–1869 (1998)

63. Ito, K., Ravindran, S.S.: Reduced basis method for optimal control of unsteady viscous flows. Int. J. Comput. Fluid Dyn. 15(2), 97–113 (2001)

64. Ito, K., Schroeter, J.D.: Reduced order feedback synthesis for viscous incompressible flows. Math. Comput. Model. 33, 173–192 (2001)

65. Keller, H.B.: Numerical Solution of Two Point Boundary Value Problems. Regional Conference Series in Applied Mathematics, vol. 24. SIAM, Philadelphia (1976)

66. Kierzenka, J., Shampine, L.F.: A BVP solver based on residual control and the Matlab PSE. ACM Trans. Math. Softw. 27(3), 299–316 (2001)

67. Kirk, D.E.: Optimal Control Theory: An Introduction. Dover, New York (2012)

68. Knowles, G.: An Introduction to Applied Optimal Control. Mathematics in Science and Engineering, vol. 159. Academic Press, New York (1981)

69. Kokotović, P., Khalil, H.K., O’Reilly, J.: Singular Perturbation Methods in Control: Analysis and Design. Classics in Applied Mathematics, vol. 25. SIAM, Philadelphia (1999)

70. Kokotovic, P., O’Malley, R. Jr., Sannuti, P.: Singular perturbations and order reduction in control theory—an overview. Automatica 12(2), 123–132 (1976)

71. Kokotovic, P.V.: Applications of singular perturbation techniques to control problems. SIAM Rev. 26(4), 501–550 (1984)

72. Kokotovic, P.V., Sannuti, P.: Singular perturbation method for reducing the model order in optimal control design. IEEE Trans. Autom. Control 13(4), 377–384 (1968)

73. Krstic, M., Magnis, L., Vazquez, R.: Nonlinear control of the viscous Burgers equation: trajectory generation, tracking, and observer design. J. Dyn. Syst. Meas. Control 131(2), 021012 (2009), 8 pp.

74. Kunisch, K., Volkwein, S.: Control of the Burgers’ equation by a reduced-order approach using proper orthogonal decomposition. J. Optim. Theory Appl. 102, 345–371 (1999)

75. Kunisch, K., Volkwein, S.: Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM J. Numer. Anal. 40, 492–515 (2002)

76. Kunisch, K., Volkwein, S., Xie, L.: HJB-POD-based feedback design for the optimal control of evolution problems. SIAM J. Appl. Dyn. Syst. 3(4), 701–722 (2004)

77. Lebiedz, D., Rehberg, M.: A numerical slow manifold approach to model reduction for optimal control of multiple time scale ODE (2013). ArXiv preprint arXiv:1302.1759

78. Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)

79. Lions, J.L.: Some Aspects of the Optimal Control of Distributed Parameter Systems. SIAM, Philadelphia (1972)

80. Lions, J.L.: Perturbations Singulières dans les Problèmes aux Limites et en Contrôle Optimal. Lecture Notes in Mathematics, vol. 323. Springer, Berlin (1973)

81. Lions, J.L.: Exact controllability, stabilization and perturbations for distributed systems. SIAM Rev. 30(1), 1–68 (1988)

82. Lunardi, A.: Analytic Semigroups and Optimal Regularity in Parabolic Problems. Birkhäuser, Basel (1995)

83. Ly, H.V., Tran, H.T.: Modeling and control of physical processes using proper orthogonal decomposition. Math. Comput. Model. 33, 223–236 (2001)

84. Ma, T., Wang, S.: Phase Transition Dynamics. Springer, Berlin (2014)

85. Medjo, T.T., Tebou, L.T.: Adjoint-based iterative method for robust control problems in fluid mechanics. SIAM J. Numer. Anal. 42(1), 302–325 (2004)

86. Medjo, T.T., Temam, R., Ziane, M.: Optimal and robust control of fluid flows: some theoretical and computational aspects. Appl. Mech. Rev. 61(1), 010802 (2008), 23 pp.

87. Motte, I., Campion, G.: A slow manifold approach for the control of mobile robots not satisfying the kinematic constraints. IEEE Trans. Robot. Autom. 16(6), 875–880 (2000)

88. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Macmillan & Co., New York (1964). Translated by D.E. Brown. A Pergamon Press Book

89. Ravindran, S.: A reduced-order approach for optimal control of fluids using proper orthogonal decomposition. Int. J. Numer. Methods Fluids 34(5), 425–448 (2000)

90. Ravindran, S.S.: Adaptive reduced-order controllers for a thermal flow system using proper orthogonal decomposition. SIAM J. Sci. Comput. 23(6), 1924–1942 (2002)

91. Roberts, S.M., Shipman, J.S.: Two-Point Boundary Value Problems: Shooting Methods. Am. Elsevier, New York (1972)

92. Rosa, R.: Exact finite dimensional feedback control via inertial manifold theory with application to the Chafee–Infante equation. J. Dyn. Differ. Equ. 15(1), 61–86 (2003)

93. Rosa, R., Temam, R.: Finite-dimensional feedback control of a scalar reaction-diffusion equation via inertial manifold theory. In: Foundations of Computational Mathematics, Rio de Janeiro, 1997, pp. 382–391. Springer, Berlin (1997)

94. Sano, H., Kunimatsu, N.: An application of inertial manifold theory to boundary stabilization of semilinear diffusion systems. J. Math. Anal. Appl. 196(1), 18–42 (1995)

95. Schättler, H., Ledzewicz, U.: Geometric Optimal Control: Theory, Methods and Examples. Interdisciplinary Applied Mathematics, vol. 38. Springer, New York (2012)

96. Shvartsman, S.Y., Kevrekidis, I.G.: Nonlinear model reduction for control of distributed systems: a computer-assisted study. AIChE J. 44(7), 1579–1595 (1998)

97. Temam, R.: Navier–Stokes Equations: Theory and Numerical Analysis. Am. Math. Soc., Providence (1984)

98. Temam, R.: Inertial manifolds. Math. Intell. 12(4), 68–74 (1990)

99. Trélat, E.: Optimal control and applications to aerospace: some results and challenges. J. Optim. Theory Appl. 154(3), 713–758 (2012)

100. Tröltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Graduate Studies in Mathematics, vol. 112. Am. Math. Soc., Providence (2010)

101. Tröltzsch, F., Volkwein, S.: POD a posteriori error estimates for linear-quadratic optimal control problems. Comput. Optim. Appl. 44, 83–115 (2009)

102. Volkwein, S.: Distributed control problems for the Burgers equation. Comput. Optim. Appl. 18(2), 115–140 (2001)

103. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006)

## Acknowledgements

We are grateful to Monique Chyba and to Bernard Bonnard for their interest in our work on parameterizing manifolds, which led the authors to propose this article. MDC is also grateful to Denis Rousseau and Michael Ghil for the unique environment they provided to complete this work at the CERES-ERTI, École Normale Supérieure, Paris. This work has been partly supported by the National Science Foundation grant DMS-1049253 and Office of Naval Research grant N00014-12-1-0911.

## Author information


### Corresponding author

Correspondence to Mickaël D. Chekroun.

## Appendices

### Appendix A: Suboptimal Controller Synthesis Based on Galerkin Projections and Pontryagin Maximum Principle

To assess the performance of the PM-based reduced systems considered in Sects. 5 and 6 in synthesizing suboptimal controllers in the context of a Burgers-type equation, we derive in this appendix suboptimal control problems associated with the globally distributed optimal control problem (5.9) based on Galerkin approximations. Section A.1 concerns a two-mode Galerkin approximation, and Sect. A.2 deals with the more general m-dimensional case. The former serves as a basis of comparison for analyzing the performance achieved by the PM-based approach, while the latter can in principle provide a good indication of the true optimal controller of the underlying optimal control problems when the dimension is taken sufficiently large. Results for the general m-dimensional case will also be used in Sect. 7 to derive Galerkin-based reduced systems for the locally distributed problems (7.4) and (7.5).

### A.1 Suboptimal Controller Based on a 2D Galerkin Reduced Optimal Problem

We first present the reduced optimal control problem based on a two-mode Galerkin approximation of the underlying PDE (5.1), which can be derived by simply setting $$h^{(1)}_{\lambda}$$ to zero in (5.17)–(5.18). The corresponding operational forms of the cost functional and of the reduced system for the low modes can be obtained from (5.24)–(5.27) by setting $$\alpha_1(\lambda)$$ and $$\alpha_2(\lambda)$$ to zero. The resulting cost functional reads:

$$J_G(v, u_G) = \int_0^T \bigl[ \mathcal{G}^G\bigl(v(t)\bigr) + \mathcal{E} \bigl(u_G(t)\bigr) \bigr] \,\mathrm{d}t + C_T\bigl(v(T), P_{\mathfrak{c}}Y\bigr),$$
(A.1)

where $$v = v_{1} e_{1} + v_{2} e_{2} \in L^{2}(0,T; \mathcal{H}^{\mathfrak {c}})$$ is the state variable, $$u_{G} = u_{G,1} e_{1} + u_{G,2} e_{2} \in L^{2}(0,T; \mathcal {H}^{\mathfrak{c}})$$ is the control, $$C_T$$ is the terminal payoff term defined by (5.26), and

$$\mathcal{G}^G(v ) := \frac{1}{2} \|v \|^2 = \frac{1}{2} \bigl[(v_1 )^2 + (v_2 )^2\bigr], \qquad\mathcal{E}(u_G) := \frac{\mu_1}{2}\|u_G\|^2 = \frac {\mu_1}{2} \bigl[(u_{G,1} )^2 + (u_{G,2})^2 \bigr].$$
(A.2)

The equations for $$v_1$$ and $$v_2$$ are given by:

\begin{aligned} & \frac{\mathrm{d}v_1}{\mathrm{d}t} = \beta_1(\lambda) v_1 + \alpha v_1 v_2 + a_{11} u_{G,1}(t) + a_{21} u_{G,2}(t), \\ & \frac{\mathrm{d}v_2}{\mathrm{d}t} = \beta_2(\lambda) v_2 - \alpha(v_1)^2 + a_{12} u_{G,1}(t) + a_{22} u_{G,2}(t), \end{aligned}
(A.3)

subject to the initial conditions:

$$v_1(0) = \langle y_0, e_1 \rangle, \qquad v_2(0) = \langle y_0, e_2 \rangle,$$
(A.4)

where $$\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}$$.
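For illustration, the forward dynamics (A.3)–(A.4) can be integrated once all coefficients are fixed. The sketch below uses scipy's `solve_ivp`; every numerical value (the $$\beta_i(\lambda)$$, $$\alpha$$, the matrix M, the initial data, and the zero default control) is a hypothetical placeholder rather than a value from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: time integration of the two-mode Galerkin system (A.3)-(A.4).
# All numerical values below (beta1, beta2, alpha, the matrix M = (a_ij),
# the initial data, and the zero control) are hypothetical placeholders.
beta1, beta2 = 0.5, -1.0             # beta_1(lambda), beta_2(lambda)
alpha = 1.2                          # alpha = gamma*pi / (sqrt(2) * l**(3/2))
M = np.array([[1.0, 0.0],            # a_ij = <C e_i, e_j>
              [0.0, 1.0]])

def rhs(t, v, u_G=lambda t: np.zeros(2)):
    """Right-hand side of (A.3); the control enters through M^tr u_G."""
    v1, v2 = v
    u = u_G(t)
    return [beta1 * v1 + alpha * v1 * v2 + M[0, 0] * u[0] + M[1, 0] * u[1],
            beta2 * v2 - alpha * v1**2 + M[0, 1] * u[0] + M[1, 1] * u[1]]

v0 = [0.1, 0.05]                     # <y0, e_1>, <y0, e_2> (assumed)
sol = solve_ivp(rhs, (0.0, 2.0), v0, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])
```

Any other control can be passed through the `u_G` argument; the zero control above merely exercises the uncontrolled vector field.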

The corresponding Galerkin-based reduced optimal control problem for (5.9) reads:

$$\min J_G(v, u_G) \quad\text{s.t.}\quad (v, u_G) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \times L^2\bigl(0,T; \mathcal {H}^{\mathfrak{c}}\bigr) \text{ solves (A.3)--(A.4)}.$$
(A.5)

It follows again from the Pontryagin maximum principle that for a given pair

$$\bigl(v_G^\ast, u_G^\ast\bigr) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \times L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr)$$

to be optimal for the problem (A.5), it must satisfy the following conditions:

\begin{aligned} & \frac{\mathrm{d}v_{G,1}^\ast}{\mathrm{d}t} = \beta_1(\lambda) v_{G,1}^\ast+ \alpha v_{G,1}^\ast v_{G,2}^\ast+ a_{11} u_{G,1}^\ast(t) + a_{21}u_{G,2}^\ast (t), \end{aligned}
(A.6a)
\begin{aligned} & \frac{\mathrm{d}v_{G,2}^\ast}{\mathrm{d}t} = \beta_2(\lambda) v_{G,2}^\ast- \alpha \bigl(v_{G,1}^\ast\bigr)^2 + a_{12} u_{G,1}^\ast(t) + a_{22}u_{G,2}^\ast(t), \end{aligned}
(A.6b)
\begin{aligned} & \frac{\mathrm{d}p_{G,1}^\ast}{\mathrm{d}t} = - v_{G,1}^\ast- \beta_1(\lambda) p_{G,1}^\ast- \alpha p_{G,1}^\ast v_{G,2}^\ast+ 2 \alpha p_{G,2}^\ast v_{G,1}^\ast, \end{aligned}
(A.6c)
\begin{aligned} & \frac{\mathrm{d}p_{G,2}^\ast}{\mathrm{d}t} = - v_{G,2}^\ast- \beta_2(\lambda) p_{G,2}^\ast- \alpha p_{G,1}^\ast v_{G,1}^\ast, \end{aligned}
(A.6d)
\begin{aligned} & \bigl(u_{G,1}^\ast, u_{G,2}^\ast \bigr)^{\mathrm{tr}} = - \biggl( \frac{a_{11} p_{G,1}^\ast(t) + a_{12} p_{G,2}^\ast(t)}{\mu_1}, \frac{a_{21} p_{G,1}^\ast(t) + a_{22} p_{G,2}^\ast(t)}{\mu_1} \biggr)^{\mathrm{tr}} = - \frac{1}{\mu_1} M p_G^\ast, \end{aligned}
(A.6e)

where $$v_{G,i}^{\ast}= \langle v_{G}^{\ast}, e_{i} \rangle$$, $$u_{G,i}^{\ast}= \langle u_{G}^{\ast}, e_{i} \rangle$$, $$i=1,2$$, and $$p_{G}^{\ast}= p_{G,1}^{\ast}e_{1} + p_{G,2}^{\ast}e_{2}$$ denotes the costate associated with $$v_{G}^{\ast}$$.

Thanks to (A.6e), we can express the controller $$u_{G,i}^{\ast}$$ in (A.6a)–(A.6b) in terms of the costate $$p_{G,i}^{\ast}$$, leading thus to the following BVP for $$v_{G}^{\ast}$$ and $$p_{G}^{\ast}$$:

\begin{aligned} \frac{\mathrm{d}v_1}{\mathrm{d}t} &= \beta_1(\lambda) v_1 + \alpha v_1v_2 + f_3(p_1,p_2), \\ \frac{\mathrm{d}v_2}{\mathrm{d}t} & = \beta_2(\lambda) v_2 - \alpha(v_1)^2 + f_4(p_1,p_2), \\ \frac{\mathrm{d}p_1}{\mathrm{d}t} &= - v_1 - \beta_1(\lambda) p_1 - \alpha p_1 v_2 + 2 \alpha p_2 v_1, \\ \frac{\mathrm{d}p_2}{\mathrm{d}t} & = - v_2 - \beta_2(\lambda) p_2 - \alpha p_1 v_1, \end{aligned}
(A.7)

subject to the boundary condition

\begin{aligned} &v_1(0) = \langle y_0, e_1 \rangle, \qquad v_2(0) = \langle y_0, e_2 \rangle, \\ &p_1(T) =\mu_2 \bigl(v_{1}(T) - Y_1\bigr), \qquad p_2(T) = \mu_2 \bigl(v_2(T) - Y_2\bigr), \end{aligned}
(A.8)

where $$f_3$$ and $$f_4$$ are defined by (5.33), and the boundary condition for the costate is derived in the same way as in (5.34), thanks to the Pontryagin maximum principle. Once this BVP is solved, the corresponding controller $$u_{G}^{\ast}$$ is determined by (A.6e); it provides the unique optimal controller for the Galerkin-based reduced optimal control problem (A.5), due again to the fact that the cost functional (A.1) is quadratic in $$u_G$$ and that the dependence on the controller is affine for the system of Eqs. (A.3); see e.g. [67, Sect. 5.3]. Note also that analogous results to those presented in Lemma 4 hold for the reduced optimal control problem (A.5) as well.
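The resulting two-point BVP can be solved numerically, e.g. by shooting methods; the minimal sketch below instead uses collocation (scipy's `solve_bvp`), with the controller eliminated through the componentwise form of (A.6e). All parameter values (the $$\beta_i(\lambda)$$, $$\alpha$$, M, $$\mu_1$$, $$\mu_2$$, the data, the target, and the horizon) are hypothetical placeholders.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Sketch: solving the optimality system (A.6a)-(A.6d), with the controller
# eliminated via the componentwise form of (A.6e), by collocation rather
# than shooting.  All parameter values are hypothetical placeholders.
beta1, beta2, alpha = 0.5, -1.0, 1.2
mu1, mu2, T = 1.0, 1.0, 1.0
M = np.eye(2)                        # a_ij = <C e_i, e_j> (assumed identity)
MtM = M.T @ M
v0 = np.array([0.1, 0.05])           # low-mode initial data <y0, e_i>
Y = np.array([0.0, 0.0])             # low-mode target state

def odes(t, y):
    v1, v2, p1, p2 = y
    ctrl = -(MtM @ np.vstack([p1, p2])) / mu1   # M^tr u* with u* = -(1/mu1) M p
    return np.vstack([
        beta1 * v1 + alpha * v1 * v2 + ctrl[0],
        beta2 * v2 - alpha * v1**2 + ctrl[1],
        -v1 - beta1 * p1 - alpha * p1 * v2 + 2 * alpha * p2 * v1,   # cf. (A.6c)
        -v2 - beta2 * p2 - alpha * p1 * v1,                         # cf. (A.6d)
    ])

def bc(ya, yb):
    # v(0) = v0 and the transversality condition p(T) = mu2 (v(T) - Y), cf. (A.8)
    return np.array([ya[0] - v0[0], ya[1] - v0[1],
                     yb[2] - mu2 * (yb[0] - Y[0]),
                     yb[3] - mu2 * (yb[1] - Y[1])])

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
u_star = -(M @ sol.sol(t)[2:]) / mu1            # recover u* from (A.6e)
print(sol.status, u_star.shape)
```

The zero initial guess suffices here because the placeholder problem is only mildly nonlinear; for stronger nonlinearities a continuation in the parameters may be needed.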

### A.2 Suboptimal Controller Based on an m-Dimensional Galerkin Reduced Optimal Problem

We derive now a more general reduced optimal control problem based on higher-dimensional Galerkin approximation, where the subspace $$\mathcal {H}^{\mathfrak{c}}$$ is taken to be spanned by the first m eigenmodes:

$$\mathcal{H}^{\mathfrak{c}} := \operatorname{span}\{e_1, \ldots, e_m\}.$$
(A.9)

The main interest is that, by choosing m sufficiently large, such a reduced problem can in principle provide a good estimate of the true optimal controllers of the globally distributed optimal control problem (5.9), which can then be taken as a benchmark for the numerical experiments reported in Sects. 5 and 6. Analogous reduced problems associated with the locally distributed cases (7.4) and (7.5) considered in Sect. 7 can be derived in the same way (and the corresponding results are actually the same as those presented in Sect. 7.2 with $$h^{(1)}_{\lambda}$$ therein set to zero).

The Galerkin-based reduced optimal control problem (A.5) when generalized to the case with m controlled modes reads:

$$\min\widetilde{J}_G(v, \widetilde{u}_G) \quad \text{s.t.}\quad (v, \widetilde{u}_G) \in L^2\bigl(0,T; \mathcal{H}^{\mathfrak {c}}\bigr) \times L^2\bigl(0,T; \mathcal{H}^{\mathfrak{c}}\bigr) \text{ solves (A.11)--(A.12) below},$$
(A.10)

where $$\mathcal{H}^{\mathfrak{c}}$$ is the m-dimensional reduced phase space defined in (A.9), and

$$\widetilde{J}_G(v, \widetilde{u}_G) = \int _0^T \Biggl[ \frac{1}{2} \sum _{i=1}^m(v_i)^2 + \frac{\mu_1}{2} \sum_{i=1}^m ( \widetilde{u}_{G,i} )^2 \Biggr] \,\mathrm{d}t + \frac{\mu_2}{2} \sum_{i=1}^m \bigl|v_i(T) - Y_i\bigr|^2.$$

The system of equations that $$v(\cdot; \widetilde{u}_{G})$$ satisfies is given by:

$$\frac{\mathrm{d}v_i}{\mathrm{d}t} = \beta_i(\lambda) v_i + \Biggl\langle B \Biggl( \sum _{j=1}^m v_j e_j, \sum_{k=1}^m v_k e_k \Biggr), e_i \Biggr\rangle + \bigl[M^{\mathrm{tr}}\widetilde{u}_{G}(t)\bigr]_i, \quad i = 1,\ldots, m,$$
(A.11)

subject to the initial conditions:

$$v_i(0) = \langle y_0, e_i \rangle, \quad i = 1,\ldots, m,$$
(A.12)

where the m×m matrix M is the representation of the linear operator $$P_{\mathfrak{c}}\mathfrak{C}$$ in the basis $$\{e_1, \ldots, e_m\}$$, i.e. the elements of M are given by $$a_{ij} = \langle\mathfrak {C}e_{i}, e_{j} \rangle$$ (see (5.16) for the case m=2), and $$[M^{\mathrm{tr}}\widetilde{u}_{G}(t)]_{i}$$ denotes the ith component of the vector $$M^{\mathrm{tr}}\widetilde{u}_{G}(t)$$.

As before, by using the Pontryagin maximum principle, we can derive the following BVP to be satisfied by any optimal pair $$(v_{G}^{\ast}, \widetilde {u}^{\ast}_{G})$$ of (A.10):

\begin{aligned} & \frac{\mathrm{d}v_i}{\mathrm{d}t} = \beta_i(\lambda) v_i + i \alpha \Biggl( - \sum_{j = 1}^{\lfloor i/2 \rfloor} \omega_{i,j} v_j v_{i-j} + \sum _{j = i+1}^m v_j v_{j-i} \Biggr) - \frac{1}{\mu_1} \bigl[M^{\mathrm{tr}} M p\bigr]_i, \quad i = 1, \ldots, m, \end{aligned}
(A.13a)
\begin{aligned} & \frac{\mathrm{d}p_i}{\mathrm{d}t} = - v_i - \sum_{j = 1}^m p_j \frac{\partial f_j(v, p)}{\partial v_i},\quad i = 1, \ldots, m, \end{aligned}
(A.13b)
\begin{aligned} & v_{i}(0) = y_{0,i}, \qquad p_{i}(T) = \mu_2 \bigl(v_{i}(T) - Y_{i}\bigr), \quad i = 1, \ldots, m, \end{aligned}
(A.13c)

where the optimal controller $$\widetilde{u}^{\ast}_{G}$$ is related to the corresponding costate $$p_{G}^{\ast}$$ by

$$\widetilde{u}^\ast_{G} = - \frac{1}{\mu_1} M p_G^\ast,$$
(A.14)

see (A.6e) for the case m=2. Here, $$f_i$$, $$i=1,\ldots,m$$, denote the RHS of (A.13a), and we have used the nonlinear interaction relations (5.20) to derive the quadratic parts of the $$f_i$$. The formula for $$\frac{\partial f_{j}(v, p)}{\partial v_{i}}$$ is given by:

$$\frac{\partial f_j(v,p)}{\partial v_i} = \beta_{j}(\lambda) \delta_{ij} + j \alpha I_{j,i},$$
(A.15)

where $$\delta_{ij}$$ denotes the Kronecker delta, and

$$I_{j,i} = \frac{\partial}{\partial v_i} \Biggl( - \sum _{k = 1}^{\lfloor j/2 \rfloor} \omega_{j,k} v_k v_{j-k} + \sum_{k = j+1}^m v_k v_{k-j} \Biggr) = \begin{cases} v_{i-j} + v_{i+j}, & \text{if }i > j\mbox{ and }i+j\le m, \\ v_{i-j}, & \text{if }i > j\mbox{ and }i+j > m, \\ v_{i+j}, & \text{if }i = j\mbox{ and }i+j\le m, \\ v_{i+j} - v_{j-i}, & \text{if }i < j\mbox{ and }i+j\le m, \\ -v_{j-i}, & \text{if }i < j\mbox{ and }i+j > m, \\ 0, & \text{otherwise}; \end{cases}$$
(A.16)

with ⌊x⌋ being the largest integer less than or equal to x, and the coefficients $$\omega_{i,j}$$ given by

$$\omega_{i,j} := \begin{cases} 1, & \text{if i is odd, or if i is even and }j \neq i/2, \\ 1/2, & \text{if i is even and }j = i/2. \end{cases}$$
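These Jacobian entries can be cross-checked numerically. The sketch below codes the quadratic part of (A.13a) together with its closed-form partial derivatives obtained by differentiating it directly, and verifies them against centered finite differences; the dimension m and the test point are arbitrary.

```python
import numpy as np

# Sanity check of the partial derivatives of the quadratic part of (A.13a):
# Q_j(v) = -sum_{k=1}^{floor(j/2)} omega_{j,k} v_k v_{j-k}
#          + sum_{k=j+1}^{m} v_k v_{k-j}        (1-based indices).
# I(j, i) codes dQ_j/dv_i obtained by direct differentiation; it is checked
# against centered finite differences at a random point.
m = 6
v = np.random.default_rng(0).standard_normal(m)

def omega(i, j):
    # omega_{i,j}: 1/2 iff i is even and j = i/2, else 1
    return 0.5 if (i % 2 == 0 and j == i // 2) else 1.0

def quad(j, w):
    s = -sum(omega(j, k) * w[k - 1] * w[j - k - 1] for k in range(1, j // 2 + 1))
    return s + sum(w[k - 1] * w[k - j - 1] for k in range(j + 1, m + 1))

def I(j, i):
    # dQ_j / dv_i (1-based indices)
    if i > j:
        return v[i - j - 1] + (v[i + j - 1] if i + j <= m else 0.0)
    if i == j:
        return v[i + j - 1] if i + j <= m else 0.0
    return (v[i + j - 1] if i + j <= m else 0.0) - v[j - i - 1]   # i < j

eps = 1e-6
for j in range(1, m + 1):
    for i in range(1, m + 1):
        vp, vm = v.copy(), v.copy()
        vp[i - 1] += eps
        vm[i - 1] -= eps
        fd = (quad(j, vp) - quad(j, vm)) / (2 * eps)
        assert abs(I(j, i) - fd) < 1e-6, (j, i)
print("closed-form Jacobian entries match finite differences")
```

Since Q_j is quadratic, the centered difference is exact up to roundoff, which makes this a sharp consistency test of the case analysis.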

### Appendix B: Global Well-posedness for the Two-dimensional $$h^{(1)}_{\lambda}$$-based Reduced System (5.27)

In this appendix, we show that for any given initial datum and any fixed T>0, the $$h^{(1)}_{\lambda}$$-based reduced system (5.27) admits a unique mild solution in the space $$C([0,T]; \mathbb{R}^{2})$$. The result follows from classical ODE theory once we can establish a priori bounds for the solution $$(z_1(t), z_2(t))$$. Similar (but more tedious) estimates can be used to deal with the Cauchy problem associated with the $$h^{(2)}_{\lambda}$$-based reduced system (6.17) derived in Sect. 6 and the more general m-dimensional $$h^{(1)}_{\lambda}$$-based reduced system (7.19) encountered in Sect. 7.

Let us first recall that the two-dimensional $$h^{(1)}_{\lambda}$$-based reduced system is given by:

\begin{aligned} & \frac{\mathrm{d}z_1}{\mathrm{d}t} = \beta _1(\lambda) z_1 + \alpha\bigl[ z_1z_2 + \alpha_1(\lambda) z_1z_2^2 + \alpha_1(\lambda) \alpha _2(\lambda ) z_1 z_2^3\bigr] + a_{11}u_{R,1}(t) + a_{21}u_{R,2}(t), \end{aligned}
(B.1a)
\begin{aligned} & \frac{\mathrm{d}z_2}{\mathrm{d}t} = \beta _2(\lambda) z_2 + \alpha \bigl[- z_1^2 + 2 \alpha_1(\lambda) z_1^2z_2 + 2 \alpha_2(\lambda) z_2^3\bigr] + a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t), \end{aligned}
(B.1b)

where $$u_{R}(\cdot):=u_{R,1}(\cdot)e_{1} + u_{R,2}(\cdot)e_{2} \in L^{2}(0,T; \mathcal{H}^{\mathfrak{c}})$$ with T>0 the fixed finite horizon, $$\alpha_1(\lambda)$$ and $$\alpha_2(\lambda)$$ are defined in (5.23), $$\alpha= \frac{\gamma\pi}{\sqrt{2}l^{3/2}}$$, and $$a_{ij}$$, $$1\le i,j\le 2$$, are the elements of the coefficient matrix M associated with the operator $$\mathfrak{C}$$; see (5.15)–(5.16).

We check below by energy estimates that no finite time blow-up can occur for solutions to the system (B.1a), (B.1b) emanating from any initial datum $$(z_{1,0}, z_{2,0}) \in\mathbb{R}^{2}$$. For this purpose, let us define

$$R := \max \biggl\{ |z_{2,0}|, \ \frac{\alpha}{|2\alpha\alpha _1(\lambda )|}, \ \sqrt{ \frac{|\beta_2(\lambda)|}{|2\alpha\alpha_2(\lambda)|}} \biggr\} \quad\mbox{and}\quad C := \int _0^T \bigl|a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t)\bigr| \,\mathrm{d}t.$$

We claim that

$$\bigl|z_2(t)\bigr| \le e^{C/R}R \quad \forall t \in[0, T].$$
(B.2)

It is clear that we only need to deal with those values of t for which $$|z_2(t)|>R$$; assume such time instances exist, otherwise we are done. Let us fix an arbitrary interval $$[t_\ast, t^\ast]\subset[0,T]$$ such that

$$\bigl|z_2(t)\bigr| \ge R \quad\forall t \in \bigl[t_\ast, t^\ast\bigr].$$
(B.3)

Since $$R\ge|z_{2,0}|$$ and $$z_2$$ depends continuously on t, we can reduce $$t_\ast$$ so that $$|z_2(t_\ast)|=R$$ while the condition (B.3) remains true.

Now, multiplying both sides of (B.1b) by $$z_2(t)$$, we obtain

$$\frac{1}{2} \frac{\mathrm{d}[(z_2)^2]}{\mathrm{d}t} = c(t) (z_2)^2, \quad \forall t \in\bigl[t_\ast, t^\ast\bigr],$$
(B.4)

where

$$c(t) := \biggl( \beta_2(\lambda) - \frac{\alpha(z_1)^2}{z_2} + 2\alpha \alpha_1(\lambda) (z_1)^2 + 2 \alpha \alpha_2(\lambda) (z_2)^2 + \frac {a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t)}{z_2} \biggr).$$

It follows then that

$$\bigl[z_2\bigl(t^\ast\bigr) \bigr]^2 = e^{2\int_{t_\ast}^{t^\ast} c(t) \mathrm{d}t}\bigl[z_2(t_\ast) \bigr]^2.$$
(B.5)

Since $$|z_2(t)|\ge R$$ for all $$t\in[t_\ast, t^\ast]$$ by the choices of $$t_\ast$$ and $$t^\ast$$, we get

\begin{aligned} \int_{t_\ast}^{t^\ast} c(t) \, \mathrm{d}t \le& \beta_2(\lambda) \bigl( t^\ast- t_\ast\bigr) + \int_{t_\ast}^{t^\ast} \biggl[ \frac{\alpha}{R} + 2 \alpha \alpha _1(\lambda)\biggr] (z_1)^2 \,\mathrm{d}t + 2 \alpha\alpha_2(\lambda) R^2 \bigl( t^\ast- t_\ast\bigr)\\ &{} + \frac{\int_{t_\ast}^{t^\ast} |a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t)| \,\mathrm{d}t}{R}, \end{aligned}

where we have used $$|- \frac{\alpha}{z_{2}}| \le\frac{\alpha}{R}$$ and $$2\alpha\alpha_2(\lambda)(z_2)^2\le 2\alpha\alpha_2(\lambda) R^2$$, which follow from the definition of R and the fact that $$\alpha>0$$ and $$\alpha_2(\lambda)<0$$.

Using again the definition of R and the facts that $$\alpha>0$$, $$\alpha_1(\lambda)<0$$ and $$\alpha_2(\lambda)<0$$, we get

$$\frac{\alpha}{R} + 2\alpha\alpha_1(\lambda) \le0 \quad\mbox{and} \quad\beta_2(\lambda) \bigl( t^\ast- t_\ast \bigr) + 2 \alpha\alpha _2(\lambda) R^2 \bigl( t^\ast- t_\ast\bigr) \le0.$$

We obtain then

$$\int_{t_\ast}^{t^\ast} c(t) \,\mathrm{d}t \le \frac{\int_{t_\ast }^{t^\ast} |a_{12}u_{R,1}(t) + a_{22}u_{R,2}(t)| \,\mathrm{d}t}{R} \le\frac{C}{R}.$$

By inserting the above estimate in (B.5) and using $$|z_2(t_\ast)|=R$$, we obtain

$$\bigl|z_2\bigl(t^\ast\bigr)\bigr| \le e^{C/R}\bigl|z_2(t_\ast)\bigr| = e^{C/R}R,$$

and (B.2) is thus proven.

Note also that, by multiplying both sides of (B.1a) by $$z_1(t)$$, we obtain, for any $$t\in[0,T]$$ at which $$z_1(t)\neq0$$, that

\begin{aligned} &\frac{1}{2} \frac{\mathrm{d}[(z_1)^2]}{\mathrm{d}t} \\ &\quad = (z_1)^2 \biggl( \beta_1(\lambda) + \alpha z_2 + \alpha \alpha_1(\lambda) (z_2)^2 + \alpha\alpha _1(\lambda) \alpha_2(\lambda) (z_2)^3 + \frac{a_{11}u_{R,1}(t) + a_{21}u_{R,2}(t)}{z_1} \biggr). \end{aligned}
(B.6)

It then follows from the boundedness of $$z_2$$ and (B.6) that $$z_1$$ can grow at most exponentially. Consequently, no finite-time blow-up can occur for the $$h^{(1)}_{\lambda}$$-based reduced system (B.1a), (B.1b).
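As a numerical illustration of the a priori bound (B.2), the sketch below integrates (B.1a)–(B.1b) and compares $$\max_t |z_2(t)|$$ with $$e^{C/R}R$$. The coefficients respect the sign conditions of the proof ($$\alpha>0$$, $$\alpha_1(\lambda)<0$$, $$\alpha_2(\lambda)<0$$) but their values, the control, and the initial data are otherwise arbitrary placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration of the a priori bound (B.2) for the reduced system
# (B.1a)-(B.1b).  Coefficient signs follow the proof (alpha > 0, alpha1 < 0,
# alpha2 < 0); all values, the control u_R, and the initial data are
# hypothetical placeholders.  R includes alpha/|2*alpha*alpha1| so that
# alpha/R + 2*alpha*alpha1 <= 0, as required in the estimates.
alpha, alpha1, alpha2 = 1.5, -0.8, -0.6
beta1, beta2 = 0.4, -0.3
a11, a21, a12, a22 = 1.0, 0.2, 0.1, 1.0
T = 5.0
u1 = lambda t: 0.5 * np.sin(t)       # u_{R,1}
u2 = lambda t: 0.3 * np.cos(2 * t)   # u_{R,2}

def rhs(t, z):
    z1, z2 = z
    dz1 = (beta1 * z1
           + alpha * (z1 * z2 + alpha1 * z1 * z2**2 + alpha1 * alpha2 * z1 * z2**3)
           + a11 * u1(t) + a21 * u2(t))
    dz2 = (beta2 * z2
           + alpha * (-z1**2 + 2 * alpha1 * z1**2 * z2 + 2 * alpha2 * z2**3)
           + a12 * u1(t) + a22 * u2(t))
    return [dz1, dz2]

z0 = [0.7, -0.4]
sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-9, atol=1e-12, dense_output=True)

# Constants of the proof: R, and C = int_0^T |a12 u1 + a22 u2| dt (trapezoid rule)
R = max(abs(z0[1]), alpha / abs(2 * alpha * alpha1),
        np.sqrt(abs(beta2) / abs(2 * alpha * alpha2)))
tt = np.linspace(0.0, T, 2001)
f = np.abs(a12 * u1(tt) + a22 * u2(tt))
C = float(np.sum(0.5 * (f[1:] + f[:-1])) * (tt[1] - tt[0]))
print(np.max(np.abs(sol.sol(tt)[1])), "<=", np.exp(C / R) * R)
```

In this placeholder configuration the bound holds with a wide margin; the estimate (B.2) is of course not sharp, its role being only to rule out finite-time blow-up.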
