
5.1 Introduction

In Chap. 4, we presented a DMPC architecture with one-directional communication for a very broad class of nonlinear systems. In this architecture, two separate controllers designed via LMPC were considered, in which one LMPC was used to guarantee the stability of the closed-loop system and the other LMPC was used to improve the closed-loop performance. In this chapter, we focus on DMPC of large-scale nonlinear systems in which several distinct sets of manipulated inputs are used to regulate the system. For each set of manipulated inputs, a different model predictive controller, which is able to communicate with the rest of the controllers in making its decisions, is used to compute the control actions. Specifically, under the assumption that feedback of the state of the process is available to all the distributed controllers at each sampling time and that a model of the plant is available, we present two different DMPC architectures designed via LMPC techniques. In the first architecture, the distributed controllers use a one-directional communication strategy, are evaluated in sequence and each controller is evaluated only once at each sampling time; in the second architecture, the distributed controllers utilize a bidirectional communication strategy, are evaluated in parallel and iterate to improve closed-loop performance. In order to ensure the stability of the closed-loop system, each model predictive controller in both architectures incorporates a stability constraint which is based on a suitable nonlinear control law which can stabilize the closed-loop system. We prove that the two DMPC architectures enforce practical stability in the closed-loop system while improving performance.

Moreover, the DMPC designs will also be extended to include nonlinear systems subject to asynchronous and delayed state feedback. In the case of asynchronous feedback, under the assumption that there is an upper bound on the maximum interval between two consecutive measurements, we first extend both DMPC architectures to explicitly take asynchronous feedback into account. Subsequently, we design a DMPC scheme using bidirectional communication for systems subject to asynchronous measurements that also involve time-delays, under the assumption that there exists an upper bound on the maximum feedback delay. Sufficient conditions under which the proposed distributed control designs guarantee that the states of the closed-loop system are ultimately bounded in regions that contain the origin are provided. The theoretical results are illustrated through a catalytic alkylation of benzene process example.

Finally, in this chapter, we will focus on a hierarchical-type DMPC and discuss how to handle communication disruptions (communication channel noise and data losses) between the distributed controllers. To handle communication disruptions, feasibility problems are incorporated in the DMPC architecture to determine whether the data transmitted through the communication channel is reliable or not. Based on the results of the feasibility problems, the transmitted information is accepted or rejected by the stabilizing MPC. In order to ensure the stability of the closed-loop system under communication disruptions, each model predictive controller utilizes a suitable Lyapunov-based stability constraint. The results of this chapter were first presented in [9, 30, 54, 55, 58].

5.2 System Description

In this chapter, we consider nonlinear systems described by the following state-space model:

$$ \dot{x}(t) = f\bigl(x(t)\bigr) + \sum_{i=1}^m g_i\bigl(x(t)\bigr)u_i(t) + k\bigl(x(t)\bigr)w(t),$$
(5.1)

where \(x(t)\in R^{n}\) denotes the vector of state variables, \(u_{i}(t) \in R^{m_{i}}\), i=1,…,m, are m sets of control (manipulated) inputs and \(w(t)\in R^{w}\) denotes the vector of disturbance variables. The m sets of inputs are restricted to be in m nonempty convex sets \(U_{i}\subseteq R^{m_{i}}\), i=1,…,m, which are defined as follows:

$$U_i:=\bigl\{u_i\in R^{m_i}: \|u_i\|\leq u_i^{\max}\bigr\},\quad i=1,\ldots,m,$$
(5.2)

where \(u_{i}^{\max}\), i=1,…,m, are the magnitudes of the input constraints. The disturbance vector is bounded, i.e., w(t)∈W where:

$$W:=\bigl\{w \in R^w: \|w\|\leq\theta,\theta>0\bigr\}$$
(5.3)

with θ being a known positive real number.

We assume that f, g_i, i=1,…,m, and k are locally Lipschitz vector, matrix and matrix functions, respectively, and that the origin is an equilibrium of the unforced nominal system (i.e., the system of Eq. 5.1 with u_i(t)=0, i=1,…,m, and w(t)=0 for all t), which implies that f(0)=0.
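
To fix ideas, the following minimal sketch encodes a system of the form of Eq. 5.1 in simulation code; the two-state example, the input bounds and the disturbance level are illustrative placeholders and are not taken from the chapter.

```python
import numpy as np

def clip_to_ball(v, bound):
    """Project a vector onto the Euclidean ball {v: ||v|| <= bound} (the sets U_i and W)."""
    n = np.linalg.norm(v)
    return v if n <= bound else v * (bound / n)

def xdot(x, u_list, w, f, g_list, k, u_max, theta):
    """Right-hand side of Eq. 5.1: f(x) + sum_i g_i(x) u_i + k(x) w,
    with each u_i restricted to U_i and w restricted to W."""
    u_list = [clip_to_ball(u, umax) for u, umax in zip(u_list, u_max)]
    w = clip_to_ball(w, theta)
    dx = f(x) + k(x) @ w
    for g_i, u_i in zip(g_list, u_list):
        dx = dx + g_i(x) @ u_i
    return dx

# Hypothetical two-state example with m = 2 scalar input sets.
f = lambda x: np.array([-x[0] + x[1] ** 2, -x[1]])
g_list = [lambda x: np.array([[1.0], [0.0]]), lambda x: np.array([[0.0], [1.0]])]
k = lambda x: np.eye(2)
print(xdot(np.array([0.5, -0.2]), [np.array([0.3]), np.array([-0.1])],
           np.array([0.01, 0.0]), f, g_list, k, u_max=[1.0, 1.0], theta=0.05))
```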

Remark 5.1

In this chapter, in order to account for DMPC designs in which the distributed controllers are evaluated in parallel, we consider nonlinear systems with control inputs entering the system dynamics in an affine fashion. We note that the results presented in Sects. 5.4.1 and 5.5.2 can be extended to more general nonlinear systems, for example, systems described by the following state-space model:

$$ \dot{x}(t) = f\bigl(x(t),u_1(t),\ldots,u_m(t),w(t)\bigr).$$
(5.4)

5.3 Lyapunov-Based Control

We assume that there exists a nonlinear control law \(h(x)=[h_{1}(x)^{T}\;\cdots\;h_{m}(x)^{T}]^{T}\) with u_i=h_i(x), i=1,…,m, which renders (under continuous state feedback) the origin of the nominal closed-loop system asymptotically stable while satisfying the input constraints for all states x inside a given stability region. Using converse Lyapunov theorems, this assumption implies that there exist functions \(\alpha_{i}(\cdot)\), i=1,2,3,4, of class \(\mathcal{K}\) and a continuously differentiable Lyapunov function V(x) for the nominal closed-loop system that satisfy the following inequalities:

$$\alpha_1\bigl(\|x\|\bigr)\leq V(x)\leq\alpha_2\bigl(\|x\|\bigr),$$
(5.5)
$$\frac{\partial V(x)}{\partial x}\Biggl(f(x)+\sum_{i=1}^m g_i(x)h_i(x)\Biggr)\leq-\alpha_3\bigl(\|x\|\bigr),$$
(5.6)
$$\biggl\|\frac{\partial V(x)}{\partial x}\biggr\|\leq\alpha_4\bigl(\|x\|\bigr),$$
(5.7)
$$h_i(x)\in U_i,\quad i=1,\ldots,m,$$
(5.8)

for all \(x \in O \subseteq R^{n_{x}}\), where O is an open neighborhood of the origin. We denote the region \(\varOmega_{\rho}\subseteq O\) (a level set of V, i.e., \(\varOmega_{\rho}:=\{x\in O: V(x)\leq\rho\}\)) as the stability region of the closed-loop system under the nonlinear control law h(x).

By continuity and the local Lipschitz property assumed for the vector fields f(x), g_i(x), i=1,…,m, and k(x), and taking into account that the manipulated inputs u_i, i=1,…,m, and the disturbance w are bounded in convex sets, there exist positive constants M, \(M_{g_{i}}\), \(L_{x}\), \(L_{u_{i}}\) and \(L_{w}\) (i=1,…,m) such that:

(5.9)
(5.10)
(5.11)
(5.12)
(5.13)

for all \(x,x'\in\varOmega_{\rho}\), \(u_i\in U_i\), i=1,…,m, and \(w\in W\). In addition, by the continuous differentiability of the Lyapunov function V(x), there exist positive constants \(L'_{x}\), \(L'_{u_{i}}\), i=1,…,m, and \(L'_{w}\) such that:

(5.14)
(5.15)
(5.16)

for all \(x,x'\in\varOmega_{\rho}\), \(u_i\in U_i\), i=1,…,m, and \(w\in W\).

5.4 Sequential and Iterative DMPC Designs with Synchronous Measurements

The objective of this section is to design DMPC architectures including multiple MPCs for large-scale nonlinear process systems with continuous, synchronous state feedback. Specifically, we will discuss two different DMPC architectures. The first DMPC architecture is a direct extension of the DMPC presented in Sect. 4.4: the distributed MPC controllers are evaluated in sequence, each controller is evaluated only once at each sampling time, and only one-directional communication between consecutive distributed controllers is required (i.e., the distributed controllers are connected by pairs). In the second architecture, the distributed MPCs are evaluated in parallel, once or more than once at each sampling time depending on the number of iterations, and bidirectional communication among all the distributed controllers (i.e., the distributed controllers are all interconnected) is used.

In each DMPC architecture, we will design m LMPCs to compute u_i, i=1,…,m, and refer to the LMPC computing the input trajectories of u_i as LMPC i. In addition, we assume that the state x of the system of Eq. 5.1 is sampled synchronously and that the time instants at which state measurements are available are indicated by the time sequence \(\{t_{k\geq0}\}\) with \(t_k=t_0+k\varDelta\), k=0,1,…, where t_0 is the initial time and Δ is the sampling time. The results will be extended to include systems subject to asynchronous and delayed measurements in Sects. 5.5 and 5.6.

5.4.1 Sequential DMPC

A schematic of the architecture considered in this subsection is shown in Fig. 5.1.

Fig. 5.1 Sequential DMPC architecture

5.4.1.1 Sequential DMPC Formulation

We first present the implementation strategy of this DMPC architecture and then design the corresponding LMPCs. The implementation strategy of this DMPC architecture is as follows:

  1. At t_k, all the LMPCs receive the state measurement x(t_k) from the sensors.

  2. For j=m to 1:

     2.1. LMPC j receives the entire future input trajectories of u_i, i=m,…,j+1, from LMPC j+1 and evaluates the future input trajectory of u_j based on x(t_k) and the received future input trajectories.

     2.2. LMPC j sends the first-step input value of u_j to its actuators and the entire future input trajectories of u_i, i=m,…,j, to LMPC j−1.

  3. When a new measurement is received (k←k+1), go to Step 1.

In this architecture, each LMPC only sends its future input trajectory and the future input trajectories it received to the next LMPC (i.e., LMPC j sends input trajectories to LMPC j−1). This implies that LMPC j, j=m,…,2, does not have any information about the values that u i , i=j−1,…,1 will take when the optimization problems of the LMPCs are designed. In order to make a decision, LMPC j, j=m,…,2 must assume trajectories for u i , i=j−1,…,1, along the prediction horizon. To this end, the nonlinear control law h(x) is used. In order to inherit the stability properties of the controller h(x), each control input u i , i=1,…,m must satisfy a constraint that guarantees a given minimum contribution to the decrease rate of the Lyapunov function V(x). Specifically, the design of LMPC j, j=1,…,m, is based on the following optimization problem:

(5.17)
(5.18)
(5.19)
(5.20)
(5.21)
(5.22)
(5.23)

In the optimization problem of Eqs. 5.17–5.23, \(u_{s,i}^{*}(t|t_{k})\) denotes the optimal future input trajectory of u_i obtained by LMPC i of the form of Eqs. 5.17–5.23 evaluated before LMPC j, and \(\tilde{x}\) is the predicted trajectory of the nominal system with u_i=u^*_{s,i}, i=j+1,…,m, with u_i, i=1,…,j−1, given by the corresponding elements of h(x) applied in a sample-and-hold fashion, and with u_j computed by LMPC j of Eqs. 5.17–5.23. The optimal solution to the optimization problem of Eqs. 5.17–5.23 is denoted as \(u_{s,j}^{*}(t|t_{k})\), which is defined for t∈[t_k,t_{k+N}).

The constraint of Eq. 5.18 is the nominal model of the system of Eq. 5.1, which is used to predict the future evolution of the system; the constraint of Eq. 5.19 defines the value of the inputs evaluated after u_j (i.e., u_i with i=1,…,j−1); the constraint of Eq. 5.20 defines the value of the inputs evaluated before u_j (i.e., u_i with i=j+1,…,m); the constraint of Eq. 5.21 is the constraint on the manipulated input u_j; the constraint of Eq. 5.22 sets the initial state for the optimization problem; and the constraint of Eq. 5.23 guarantees that the contribution of input u_j to the decrease rate of the time derivative of the Lyapunov function V(x) at the initial evaluation time (i.e., at t_k), if \(u_{j}=u^{*}_{s,j}(t_{k}|t_{k})\) is applied, is greater than or equal to the contribution obtained when u_j=h_j(x(t_k)) is applied. This constraint allows proving the closed-loop stability properties of this DMPC.

The manipulated inputs of the system of Eq. 5.1 under the DMPC are defined as follows:

$$ u_i(t) = u^*_{s,i}(t|t_k),\quad i=1,\ldots,m,\forall t\in[t_k,t_{k+1}).$$
(5.24)

In what follows, we refer to this DMPC architecture as the sequential DMPC.
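
A minimal sketch of the receive-evaluate-send loop described above is given below; `solve_lmpc_j` stands for a generic solver of the LMPC j problem of Eqs. 5.17–5.23 and, like `first_step` and the trajectory containers, is a hypothetical placeholder rather than part of the chapter's notation.

```python
def sequential_dmpc_step(x_k, m, solve_lmpc_j, first_step):
    """One sampling-time pass of the sequential DMPC (Sect. 5.4.1.1).

    Controllers are evaluated from LMPC m down to LMPC 1; each one receives the
    trajectories already computed by the controllers evaluated before it,
    computes its own trajectory, and passes the accumulated set downstream.
    """
    received = {}          # {i: optimal input trajectory of u_i}, i = m, ..., j+1
    applied = {}           # first-step input values sent to the actuators (Eq. 5.24)
    for j in range(m, 0, -1):
        traj_j = solve_lmpc_j(j, x_k, received)   # LMPC j uses x(t_k) and the upstream trajectories
        applied[j] = first_step(traj_j)           # only the first portion is implemented
        received[j] = traj_j                      # forwarded to LMPC j-1 (one-directional link)
    return applied
```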

Remark 5.2

Note that, in order to simplify the description of the implementation strategy presented above in this subsection, we do not distinguish LMPC m and LMPC 1 from the others. We note that LMPC m does not receive any information from the other controllers and LMPC 1 does not have to send information to any other controller.

Remark 5.3

Note also that the assumption that the full state x of the system is sampled synchronously is a widely used assumption in the control system design. The control system designs presented in this section can be extended to the case where only part of the state x is measurable by designing an observer to estimate the whole state vector from output measurements and by designing the control system based on the measured and estimated states. In this case, the stability properties of the resulting output feedback control systems are affected by the convergence of the observer and need to be carefully studied.

5.4.1.2 Stability Properties

The sequential DMPC of Eqs. 5.17–5.24 computes the inputs u_i, i=1,…,m, applied to the system of Eq. 5.1 in a way such that, in the closed-loop system, the values of the Lyapunov function at the sampling times (i.e., V(x(t_k)), k=0,1,…) form a decreasing sequence with a lower bound. Following Lyapunov arguments, this property guarantees practical stability of the closed-loop system. This is achieved due to the constraint of Eq. 5.23. This property is presented in Theorem 5.1 below.

Theorem 5.1

Consider the system of Eq. 5.1 in closed-loop under the sequential DMPC of Eqs. 5.17–5.24 based on a nonlinear control law h(x) that satisfies the conditions of Eqs. 5.5–5.8 with class \(\mathcal{K}\) functions α_i(⋅), i=1,2,3,4. Let ε_w>0, Δ>0 and ρ>ρ_s>0 satisfy the following constraint:

$$ -\alpha_3\bigl(\alpha^{-1}_2(\rho_s)\bigr)+L^*\leq-\varepsilon _w/\varDelta ,$$
(5.25)

where \(L^{*} = (L'_{x}+\sum_{i=1}^{m} L'_{u_{i}}u_{i}^{\max})M+L'_{w}\theta\) with M, \(L'_{x}\), \(L'_{u_{i}}\) (i=1,…,m) and \(L'_{w}\) defined in Eqs. 5.9–5.16. For any N≥1, if x(t_0)∈Ω_ρ and if ρ_min<ρ where:

$$ \rho_{\min}=\max\bigl\{V\bigl(x(t+\varDelta )\bigr):V\bigl(x(t)\bigr)\leq\rho_s\bigr\},$$
(5.26)

then the state x(t) of the closed-loop system is ultimately bounded in \(\varOmega _{\rho_{\min}}\).

Proof

The proof consists of two parts. We first prove that the optimization problem of Eqs. 5.17–5.23 is feasible for all j=1,…,m and x∈Ω_ρ. Then we prove that, under the DMPC of Eqs. 5.17–5.24, the state of the system of Eq. 5.1 is ultimately bounded in \(\varOmega _{\rho_{\min}}\). Note that the constraint of Eq. 5.23 of each distributed controller is independent of the decisions that the rest of the distributed controllers make.

Part 1: In order to prove the feasibility of the optimization problem of Eqs. 5.17–5.23, we only have to prove that there exists a u_j(t_k) which satisfies the input constraint of Eq. 5.21 and the constraint of Eq. 5.23. This is because the constraint of Eq. 5.23 is only enforced on the first prediction step of u_j(t) and does not depend on the values of the inputs chosen by the rest of the controllers (see Remark 5.9). Over the rest of the prediction horizon, t∈[t_{k+1},t_{k+N}), the input constraint of Eq. 5.21 can be easily satisfied with u_j(t) being any value in the convex set U_j.

We assume that x(t_k)∈Ω_ρ (the fact that x(t) is bounded in Ω_ρ will be proved in Part 2). It is easy to verify that the choice u_j(t_k)=h_j(x(t_k)) satisfies the input constraint of Eq. 5.21 (an assumed property of h(x) for x∈Ω_ρ) and the constraint of Eq. 5.23; thus, the feasibility of the optimization problem of LMPC j of Eqs. 5.17–5.23, j=1,…,m, is guaranteed.

Part 2: From the condition of Eq. 5.6 and the constraint of Eq. 5.23, if x(t k )∈Ω ρ , it follows that:

(5.27)

The time derivative of the Lyapunov function V along the actual state trajectory x(t) of the system of Eq. 5.1 in t∈[t k ,t k+1) is given by:

$$\dot{V}\bigl(x(t)\bigr) = \frac{\partial V(x(t))}{\partial x} \Biggl(f\bigl(x(t)\bigr)+\sum_{i=1}^m g_i\bigl(x(t)\bigr)u_{s,i}^*(t_k|t_k)+k\bigl(x(t)\bigr)w(t) \Biggr). $$
(5.28)

Adding and subtracting \(\frac{\partial V(x(t_{k}))}{\partial x}(f(x(t_{k}))+\sum_{i=1}^{m}g_{i}(x(t_{k}))u_{s,i}^{*}(t_{k}|t_{k}))\) and taking into account Eq. 5.27, we obtain the following inequality:

(5.29)

Taking into account Eqs. 5.5 and 5.9, the following inequality is obtained for all \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\) from Eq. 5.29:

$$ \dot{V}\bigl(x(t)\bigr) \leq-\alpha_3\bigl(\alpha_2^{-1}(\rho_s)\bigr) + \Biggl(L'_x+\sum_{i=1}^m L'_{u_i}\bigl\|u_{s,i}^*(t_k|t_k)\bigr\|\Biggr)\bigl\|x(t)-x(t_k)\bigr\|+L'_w\bigl\|w(t)\bigr\|.$$
(5.30)

Taking into account Eq. 5.9 and the continuity of x(t), the following bound can be written for all t∈[t k ,t k+1):

$$\bigl\|x(t)-x(t_k)\bigr\|\leq M\varDelta .$$
(5.31)

Using this expression, the bounds on the disturbance w(t) and the inputs u i , i=1,…,m, and Eq. 5.30, we obtain the following bound on the time derivative of the Lyapunov function for t∈[t k ,t k+1), for all initial states \(x(t_{k})\in \varOmega _{\rho }/\varOmega _{\rho_{s}}\):

$$ \dot{V}\bigl(x(t)\bigr) \leq-\alpha_3\bigl(\alpha_2^{-1}(\rho_s)\bigr) + \Biggl(L'_x+\sum_{i=1}^m L'_{u_i}u_{i}^{\max}\Biggr)M+L'_w\theta.$$
(5.32)

If the condition of Eq. 5.25 is satisfied, then there exists ε w >0 such that the following inequality holds for \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\):

$$ \dot{V}\bigl(x(t)\bigr)\leq-\varepsilon _w/\varDelta $$
(5.33)

for t∈[t k ,t k+1). Integrating the inequality of Eq. 5.33 on t∈[t k ,t k+1), we obtain that:

(5.34)
(5.35)

for all \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\). Using Eqs. 5.34 and 5.35 recursively, it can be proved that, if \(x(t_{0})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\), the state converges to \(\varOmega _{\rho_{s}}\) in a finite number of sampling times without leaving the stability region. Once the state converges to \(\varOmega _{\rho_{s}}\subseteq \varOmega _{\rho_{\min}}\), it remains inside \(\varOmega _{\rho_{\min}}\) for all times. This statement holds because of the definition of ρ_min. This proves that the closed-loop system under the sequential DMPC of Eqs. 5.17–5.24 is ultimately bounded in \(\varOmega _{\rho_{\min}}\). □

Remark 5.4

The sequential DMPC approach can be applied to more general nonlinear systems of the form of Eq. 5.4 (see Remark 5.1) by a proper redesign of the Lyapunov-based constraints of Eq. 5.23 (j=1,…,m), following the method used in the design of the constraints of Eqs. 4.15 and 4.21; see Sect. 4.4.1.

5.4.2 Iterative DMPC

An alternative architecture to the sequential DMPC architecture presented in the previous subsection is to evaluate all the distributed LMPCs in parallel and iterate to improve closed-loop performance. A schematic of this control architecture is shown in Fig. 5.2.

Fig. 5.2 Iterative DMPC architecture

5.4.2.1 Iterative DMPC Formulation

In this architecture, each distributed LMPC must be able to communicate with all the other controllers (i.e., the distributed controllers are all interconnected). More specifically, when a new state measurement is available at a sampling time, each distributed LMPC controller evaluates and obtains its future input trajectory; and then each LMPC controller broadcasts its latest obtained future input trajectory to all the other controllers. Based on the newly received input trajectories, each LMPC controller evaluates its future input trajectory again and this process is repeated until a certain termination condition is satisfied. Specifically, the implementation strategy is as follows:

  1. At t_k, all the LMPCs receive the state measurement x(t_k) from the sensors and then evaluate their future input trajectories in an iterative fashion with initial input guesses generated by h(⋅).

  2. At iteration c (c≥1):

     2.1. Each LMPC evaluates its own future input trajectory based on x(t_k) and the latest received input trajectories of all the other LMPCs (when c=1, initial input guesses generated by h(⋅) are used).

     2.2. The controllers exchange their future input trajectories. Based on all the input trajectories, each controller calculates and stores the value of the cost function.

  3. If a termination condition is satisfied, each controller sends its entire future input trajectory corresponding to the smallest value of the cost function to its actuators; if the termination condition is not satisfied, go to Step 2 (c←c+1).

  4. When a new measurement is received, go to Step 1 (k←k+1).

Note that at the initial iteration, all the LMPCs use h(x) to estimate the input trajectories of all the other controllers. Note also that the number of iterations c can be variable and does not affect the closed-loop stability of the DMPC architecture presented in this subsection; this point will be made clear below. For the iterations in this DMPC architecture, there are different choices of the termination condition. For example, the number of iterations c may be restricted to be smaller than a maximum iteration number c_max (i.e., c≤c_max), and/or the iterations may be terminated when the difference of the performance or the solution between two consecutive iterations is smaller than a threshold value, and/or the iterations may be terminated when a maximum computational time is reached.
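
A minimal sketch of this iterative implementation strategy is given below; `solve_lmpc_j`, `h_guess` and `closed_loop_cost` are hypothetical placeholders for, respectively, a solver of the LMPC j problem of Eqs. 5.38–5.43, the initial guess generated by h(⋅), and the cost of Eq. 5.38 evaluated for a combined set of input trajectories.

```python
def iterative_dmpc_step(x_k, m, solve_lmpc_j, h_guess, closed_loop_cost,
                        c_max=5, tol=1e-4):
    """One sampling-time pass of the iterative DMPC (Sect. 5.4.2.1).

    All LMPCs are evaluated in parallel (here sequentially for simplicity) against
    the trajectories broadcast at the previous iteration; the trajectories with the
    smallest recorded cost are the ones sent to the actuators.
    """
    trajs = {j: h_guess(j, x_k) for j in range(1, m + 1)}  # iteration 0: guesses from h(.)
    best_cost, best_trajs = float("inf"), dict(trajs)
    prev_cost = None
    for c in range(1, c_max + 1):
        # Each LMPC j solves its problem using the other controllers' latest trajectories.
        new_trajs = {j: solve_lmpc_j(j, x_k, trajs) for j in range(1, m + 1)}
        cost = closed_loop_cost(x_k, new_trajs)            # cost for the combined inputs
        if cost < best_cost:
            best_cost, best_trajs = cost, new_trajs
        # Termination: maximum iteration number and/or small improvement between iterations.
        if prev_cost is not None and abs(prev_cost - cost) < tol:
            break
        prev_cost, trajs = cost, new_trajs
    return best_trajs
```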

In order to proceed, we define \(\hat{x}(t|t_{k})\) for t∈[t k ,t k+N ) as the nominal sampled trajectory of the system of Eq. 5.1 associated with the feedback control law h(x) and sampling time Δ starting from x(t k ). This nominal sampled trajectory is obtained by integrating recursively the following differential equation:

(5.36)

Based on \(\hat{x}(t|t_{k})\), we can define the following variable:

(5.37)

which will be used as the initial guess of the trajectory of u j .
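
A minimal sketch of how \(\hat{x}(t|t_{k})\) and the guesses \(u_{n,j}(t|t_{k})\) can be generated is given below, assuming (consistently with the description above) that h(⋅) is applied in a sample-and-hold fashion along the nominal model; the explicit Euler integration and the function names are illustrative only.

```python
import numpy as np

def nominal_guess(x_k, h, f, g_list, N, Delta, substeps=20):
    """Forward-simulate the nominal model under h(.) applied in sample-and-hold
    (cf. Eq. 5.36) and record the corresponding piecewise-constant input guesses
    u_{n,j}(t|t_k) (cf. Eq. 5.37)."""
    x_hat = np.array(x_k, dtype=float)
    x_traj, u_guess = [x_hat.copy()], []
    dt = Delta / substeps
    for _ in range(N):
        u_hold = [h_j(x_hat) for h_j in h]         # h_j held constant over the sampling period
        u_guess.append(u_hold)
        for _ in range(substeps):                  # crude explicit Euler integration of the nominal model
            dx = f(x_hat)
            for g_j, u_j in zip(g_list, u_hold):
                dx = dx + g_j(x_hat) @ np.atleast_1d(u_j)
            x_hat = x_hat + dt * dx
        x_traj.append(x_hat.copy())
    return x_traj, u_guess
```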

The design of the LMPC j, j=1,…,m, at iteration c is based on the following optimization problem:

(5.38)
(5.39)
(5.40)
(5.41)
(5.42)
(5.43)

where \(\tilde{x}\) is the predicted trajectory of the nominal system with u_j, the input trajectory computed by LMPC j of Eqs. 5.38–5.43, and with all the other inputs given by the optimal input trajectories at iteration c−1 of the rest of the distributed controllers (i.e., \(u_{p,i}^{*,c-1}(t|t_{k})\) for i≠j). The optimal solution to the optimization problem of Eqs. 5.38–5.43 is denoted as \(u_{p,j}^{*,c}(t|t_{k})\), which is defined for t∈[t_k,t_{k+N}). Accordingly, we define the final optimal input trajectory of LMPC j (that is, the optimal trajectory computed at the last iteration) as \(u_{p,j}^{*}(t|t_{k})\), which is also defined for t∈[t_k,t_{k+N}).

Note that in the first iteration of each distributed LMPC, the input trajectory defined in Eq. 5.37 is used as the initial input trajectory guess; that is, \(u^{*,0}_{p,j}(t|t_{k})=u_{n,j}(t|t_{k})\) with j=1,…,m.

The manipulated inputs of the system of Eq. 5.1 under this DMPC design with LMPCs of Eqs. 5.38–5.43 are defined as follows:

$$ u_i(t) = u^*_{p,i}(t|t_k),\quad i=1,\ldots,m,\forall t\in[t_k,t_{k+1}).$$
(5.44)

In what follows, we refer to this DMPC architecture as the iterative DMPC. The stability properties of the iterative DMPC are stated in the following Theorem 5.2.

Remark 5.5

In general, there is no guaranteed convergence of the optimal cost or solution of an iterated DMPC to the optimal cost or solution of a centralized MPC for general nonlinear constrained systems. This is because of the nonconvexity of the MPC optimization problems and because the DMPC does not solve the centralized LMPC in a distributed fashion, owing to the way the Lyapunov-based constraint of the centralized LMPC is broken down into constraints imposed on the individual LMPCs; please also see Remark 5.12 below. However, with the implementation strategy of the iterative DMPC presented in this section, it is guaranteed that the optimal cost of the distributed optimization of Eqs. 5.38–5.43 is upper bounded by the cost of the Lyapunov-based controller h(⋅) at each sampling time.

Remark 5.6

Note that in the case of linear systems, the constraint of Eq. 5.43 is linear with respect to u_j and it can be verified that the optimization problem of Eqs. 5.38–5.43 is convex. The input given by LMPC j of Eqs. 5.38–5.43 at each iteration may be defined as a convex combination of the current optimal input solution and the previous one, for example,

$$ u^c_{p,j}(t|t_k) =\sum_{i=1}^{m,i\neq j}w_i u_{p,j}^{c-1}(t|t_k) +w_ju_{p,j}^{*,c}(t|t_k),$$
(5.45)

where \(\sum_{i=1}^{m}w_{i}=1\) with 0<w_i<1, \(u_{p,j}^{*,c}\) is the current solution given by the optimization problem of Eqs. 5.38–5.43 and \(u_{p,j}^{c-1}\) is the convex combination of the solutions obtained at iteration c−1. By doing this, it is possible to prove that the optimal cost of the distributed LMPC of Eqs. 5.38–5.43 converges to that of the corresponding centralized control system [5, 98]. This property is summarized in Corollary 5.1 in Sect. 5.4.2.2. We also note that in the case of linear systems, the convexity of the distributed optimization problem also holds for all the other DMPC designs presented in this chapter and Chap. 6. In addition to Corollary 5.1, the reader may also refer to [5, 8, 93, 98] for more discussion on the conditions under which convergence of the solution of a distributed linear or convex MPC design to the solution of a centralized MPC or a Pareto optimal solution is ensured in the context of linear systems.
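
A minimal sketch of the convex-combination update of Eq. 5.45 is given below; the weights in the usage example are illustrative.

```python
import numpy as np

def convex_combination_update(u_prev_j, u_new_j, w, j):
    """Eq. 5.45: blend the previous (already blended) trajectory of u_j with the
    latest optimizer output, using weights w_i > 0 that sum to one.
    Since sum_{i != j} w_i = 1 - w_j, the update reduces to a two-term blend."""
    w = np.asarray(w, dtype=float)
    assert np.all(w > 0) and np.isclose(w.sum(), 1.0)
    alpha = w[j]                      # weight on the newly computed solution u_{p,j}^{*,c}
    return (1.0 - alpha) * np.asarray(u_prev_j) + alpha * np.asarray(u_new_j)

# Example: three controllers with equal weights; controller j = 0 updates its trajectory.
u_blended = convex_combination_update(u_prev_j=[0.2, 0.1], u_new_j=[0.4, 0.0],
                                      w=[1/3, 1/3, 1/3], j=0)
```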

5.4.2.2 Stability Properties

Theorem 5.2

Consider the system of Eq. 5.1 in closed-loop under the iterative DMPC of Eqs. 5.38–5.44 based on a nonlinear control law h(x) that satisfies the conditions of Eqs. 5.5–5.8 with class \(\mathcal{K}\) functions α_i(⋅), i=1,2,3,4. Let ε_w>0, Δ>0 and ρ>ρ_s>0 satisfy the constraint of Eq. 5.25. For any N≥1 and c≥1, if x(t_0)∈Ω_ρ and if ρ_min<ρ where ρ_min is defined as in Eq. 5.26, then the state x(t) of the closed-loop system is ultimately bounded in \(\varOmega _{\rho_{\min}}\).

Proof

Similar to the proof of Theorem 5.1, the proof of Theorem 5.2 consists of two parts. We first prove that the optimization problem of Eqs. 5.38–5.43 is feasible for each iteration c and x∈Ω_ρ. Then we prove that, under the DMPC architecture of Eqs. 5.38–5.44, the state of the system of Eq. 5.1 is ultimately bounded in \(\varOmega _{\rho_{\min}}\).

Part 1: In order to prove the feasibility of the optimization problem of Eqs. 5.38–5.43, we only have to prove that there exists a u_j(t_k) which satisfies the input constraint of Eq. 5.41 and the constraint of Eq. 5.43. This is because the constraint of Eq. 5.43 is only enforced on the first prediction step of u_j, and over the rest of the prediction horizon, t∈[t_{k+1},t_{k+N}), the input constraint of Eq. 5.41 can be easily satisfied with u_j(t) being any value in the convex set U_j.

We assume that x(t_k)∈Ω_ρ (the fact that x(t) is bounded in Ω_ρ will be proved in Part 2). It is easy to verify that the choice u_j(t_k)=h_j(x(t_k)) satisfies the input constraint of Eq. 5.41 (an assumed property of h(x) for x∈Ω_ρ) and the constraint of Eq. 5.43 for all possible c; thus, the feasibility of LMPC j of Eqs. 5.38–5.43, j=1,…,m, is guaranteed.

Part 2: By adding the constraint of Eq. 5.43 of each LMPC together, we have:

$$\sum_{j=1}^{m}\frac{\partial V(x(t_k))}{\partial x} g_j\bigl(x(t_k)\bigr)u_{p,j}^{*,c}(t_k|t_k) \leq\sum _{j=1}^{m}\frac{\partial V(x(t_k))}{\partial x}g_j\bigl(x(t_k)\bigr)h_j\bigl(x(t_k)\bigr).$$
(5.46)

It follows from the above inequality and condition of Eq. 5.5 that:

(5.47)

Following the same approach as in the proof of Theorem 5.1, we know that if the condition of Eq. 5.25 is satisfied, then the state of the closed-loop system can be proved to be maintained in \(\varOmega _{\rho_{\min}}\) under the iterative DMPC architecture of Eqs. 5.38–5.44. □

Corollary 5.1

Consider a class of linear time-invariant systems:

$$ \dot{x}(t) = Ax(t) + \sum_{i=1}^mB_iu_i(t),$$
(5.48)

where A and B_i are constant matrices with appropriate dimensions. If we define the inputs of the distributed LMPC of Eqs. 5.38–5.43 at iteration c as in Eq. 5.45, then at a sampling time t_k, as the iteration number c→∞, the optimal cost of the distributed optimization problem of Eqs. 5.38–5.43 converges to the optimal cost of the corresponding centralized control system.

Proof

Taking into account that x(t k ) and h(x(t k )) are known at t k , the constraint of Eq. 5.43 can be written in the following linear form:

$$C\bigl(x(t_k)\bigr)u_j(t_k)\leq D\bigl(x(t_k)\bigr),$$
(5.49)

where C(x(t_k)) and D(x(t_k)) are constants at each t_k and only depend on x(t_k). This implies that the constraint of Eq. 5.43 is linear with respect to u_j. For a linear system, it is also easy to verify that the constraints of Eqs. 5.38–5.42 are convex. Therefore, the optimization problem of Eqs. 5.38–5.43 is convex. If the inputs of the distributed controllers at each iteration c are defined as in Eq. 5.45, then the convergence of the cost given by the distributed optimization problem of Eqs. 5.38–5.43 to that of the corresponding centralized control system can be proved following similar strategies to those used in [5, 98] for time t_k. □

Remark 5.7

Note that the DMPC designs have the same stability region Ω ρ as the one of the nonlinear control law h(x). When the stability of the nonlinear control law h(x) is global (i.e., the stability region is the entire state space), then the stability of the DMPC designs is also global. Note also that for any initial condition in Ω ρ , the DMPC designs are proved to be feasible.

Remark 5.8

The choice of the horizon of the DMPC designs does not affect the stability of the closed-loop system. For any horizon length N≥1, the closed-loop stability is guaranteed by the constraints of Eqs. 5.23 and 5.43. However, the choice of the horizon does affect the performance of the DMPC designs.

Remark 5.9

Note that because the manipulated inputs enter the dynamics of the system of Eq. 5.1 in an affine manner, the constraints designed in the LMPC optimization problems of Eqs. 5.17–5.23 and 5.38–5.43 to guarantee closed-loop stability can be decoupled for the different distributed controllers, as in Eqs. 5.23 and 5.43.

Remark 5.10

In the sequential DMPC architecture presented in Sect. 5.4.1, the distributed controllers are evaluated in sequence, which implies that the minimal time to obtain a set of solutions to all the LMPCs is the sum of the evaluation times of all the LMPCs; whereas in the iterative DMPC architecture presented in Sect. 5.4.2, the distributed controllers are evaluated in parallel, which implies that the minimal time to obtain a set of solutions to all the LMPCs in each iteration is the largest evaluation time among all the LMPCs.

Remark 5.11

An alternative to the DMPC designs is to design a centralized MPC to compute all the inputs. A centralized LMPC design for the system of Eq. 5.1 based on the nonlinear control law h(x) is as follows (please also see Sect. 2.6):

(5.50)
(5.51)
(5.52)
(5.53)
(5.54)

where \(\tilde{x}\) is the predicted trajectory of the nominal system with u i , i=1,…,m, the input trajectory computed by this centralized LMPC. The optimal solution to this optimization problem is denoted by \(u_{ci}^{*}(t|t_{k})\), i=1,…,m, which is defined for t∈[t k ,t k+N ). The manipulated inputs of the closed-loop system of Eq. 5.1 under this centralized LMPC are defined as follows:

$$u_i(t) = u^*_{ci}(t|t_k),\quad i=1,\ldots,m, \forall t\in[t_k,t_{k+1}).$$
(5.55)

In what follows, we refer to this controller as the centralized LMPC.

Remark 5.12

Note that the sequential (or iterative) DMPC is not a direct decomposition of the centralized LMPC because the set of constraints of Eq. 5.23 (or Eq. 5.43) for j=1,…,m in the DMPC formulation of Eqs. 5.17–5.23 (or Eqs. 5.38–5.43) imposes a different feasibility region from the one of the centralized LMPC of Eqs. 5.50–5.54, which has a single constraint (Eq. 5.54).

Remark 5.13

Note also that for general nonlinear systems, there is no guarantee that the closed-loop performance of one (centralized or distributed) MPC architecture discussed in this section is superior to the others: the solutions provided by these MPC architectures are proved to be feasible and stabilizing, but the superiority of the performance of one MPC architecture over another is not established. This is because the MPC designs are implemented in a receding horizon scheme with a finite prediction horizon, because the different MPC designs are not equivalent as we discussed in Remark 5.12, and because of the nonconvexity property discussed in Remark 5.5. In applications of these MPC architectures, especially in chemical process control where nonconvex problems are very common, simulations should be conducted before deciding which architecture should be used.

5.4.3 Application to an Alkylation of Benzene Process

The process of alkylation of benzene with ethylene to produce ethylbenzene is widely used in the petrochemical industry. Dehydration of the product produces styrene, which is the precursor to polystyrene and many copolymers. Over the last two decades, several methods and simulation results for the alkylation of benzene with catalysts have been reported in the literature. The process model developed in this section is based on these references [23, 44, 83, 117]. More specifically, the process considered in this work consists of four CSTRs and a flash tank separator, as shown in Fig. 5.3. CSTR-1, CSTR-2 and CSTR-3 are in series and involve the alkylation of benzene with ethylene. Pure benzene is fed from stream F_1 and pure ethylene is fed from streams F_2, F_4 and F_6. Two catalytic reactions take place in CSTR-1, CSTR-2 and CSTR-3. Benzene (A) reacts with ethylene (B) and produces the desired product ethylbenzene (C) (reaction 1); ethylbenzene can further react with ethylene to form 1,3-diethylbenzene (D) (reaction 2), which is the byproduct. The effluent of CSTR-3, including the products and leftover reactants, is fed to a flash tank separator, in which most of the benzene is separated overhead by vaporization and condensation techniques and recycled back to the plant, and the bottom product stream is removed. A portion of the recycle stream, F_r2, is fed back to CSTR-1 and another portion of the recycle stream, F_r1, is fed to CSTR-4 together with an additional feed stream F_10 which contains 1,3-diethylbenzene from a further distillation process that we do not consider in this example. In CSTR-4, reaction 2 and a catalyzed transalkylation reaction in which 1,3-diethylbenzene reacts with benzene to produce ethylbenzene (reaction 3) take place. All chemicals leaving CSTR-4 eventually pass into the separator. All the materials in the reactions are in the liquid phase due to the high pressure, and their molar volumes are assumed to be constant. The dynamic equations describing the behavior of the process, obtained through material and energy balances under standard modeling assumptions, are given below:

(5.56)
(5.57)
(5.58)
(5.59)
(5.60)
(5.61)
(5.62)
(5.63)
(5.64)
(5.65)
(5.66)
(5.67)
(5.68)
(5.69)
(5.70)
(5.71)
(5.72)
(5.73)
(5.74)
(5.75)
(5.76)
(5.77)
(5.78)
(5.79)
(5.80)

where r 1,r 2 and r 3 are the reaction rates of reactions 1, 2 and 3, respectively and H i , i=A,B,C,D, are the enthalpies of the reactants. The reaction rates are related to the concentrations of the reactants and the temperature in each reactor as follows:

(5.81)
(5.82)
(5.83)

where:

(5.84)
(5.85)
Fig. 5.3 Process flow diagram of alkylation of benzene

The heat capacities of the species are assumed to be constants and the molar enthalpies have a linear dependence on temperature as follows:

$$H_i(T) = H_{\mathit{iref}} + C_{\mathit{pi}}(T-T_{\mathit{ref}}),\quad i =A,B,C,D,$$
(5.86)

where C pi , i=A,B,C,D are heat capacities.

The model of the flash tank separator is developed under the assumption that the relative volatility of each species has a linear correlation with the temperature of the vessel within the operating temperature range of the flash tank, as shown below:

(5.87)
(5.88)
(5.89)
(5.90)

where α i , i=A,B,C,D, represent the relative volatilities. It has also been assumed that there is a negligible amount of reaction taking place in the separator and a fraction of the total condensed overhead flow is recycled back to the reactors. The following algebraic equations model the composition of the overhead stream relative to the composition of the liquid holdup in the flash tank:

$$M_i = k\frac{\alpha_i (F_7 C_{i3} + F_9 C_{i5})\sum_j^{A,B,C,D}(F_7 C_{j3} + F_9 C_{j5})}{\sum_j^{A,B,C,D}\alpha_j (F_7 C_{j3} + F_9 C_{j5})},\quad i = A,B,C,D,$$
(5.91)

where M i , i=A,B,C,D are the molar flow rates of the overhead reactants and k is the fraction of condensed overhead flow recycled to the reactors. Based on M i , i=A,B,C,D, we can calculate the concentration of the reactants in the recycle streams as follows:

$$C_{ir} = \frac{M_i}{\sum_j^{A,B,C,D} M_j/C_{j0}},\quad i=A,B,C,D,$$
(5.92)

where C_{j0}, j=A,B,C,D, are the mole densities of the pure reactants. The condensation of vapor takes place overhead, and a portion of the condensed liquid is purged back to the separator to keep the flow rate of the recycle stream at a fixed value. The temperature of the condensed liquid is assumed to be the same as the temperature of the vessel.
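
A short sketch of the flash-separator algebra of Eqs. 5.91 and 5.92 is given below; the variable names are illustrative and the code simply evaluates the two equations as written above.

```python
import numpy as np

def recycle_composition(alpha, C3, C5, C0, F7, F9, k_frac):
    """Eqs. 5.91-5.92: overhead molar flow rates M_i from the relative volatilities,
    and recycle concentrations C_ir from M_i and the pure-component mole densities C_i0."""
    alpha, C3, C5, C0 = map(np.asarray, (alpha, C3, C5, C0))
    feed = F7 * C3 + F9 * C5                                        # per-species molar feed to the flash tank
    M = k_frac * alpha * feed * feed.sum() / (alpha * feed).sum()   # Eq. 5.91
    C_r = M / np.sum(M / C0)                                        # Eq. 5.92 (denominator: recycle volumetric flow)
    return M, C_r
```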

The definitions for the variables used in the above model can be found in Table 5.1, with the parameter values given in Table 5.2.

Table 5.1 Process variables of the alkylation of benzene process of Eqs. 5.56–5.80
Table 5.2 Parameter values of the alkylation of benzene process of Eqs. 5.56–5.80

Each of the tanks has an external heat/coolant input. The manipulated inputs to the process are the heat injected to or removed from the five vessels, Q 1, Q 2, Q 3, Q 4 and Q 5, and the feed stream flow rates to CSTR-2 and CSTR-3, F 4 and F 6.

The states of the process consist of the concentrations of A, B, C, D in each of the five vessels and the temperatures of the vessels. The state of the process is assumed to be available continuously to the controllers. We consider a stable steady state (operating point), x s , of the process which is defined by the steady-state inputs Q 1s , Q 2s , Q 3s , Q 4s , Q 5s , F 4s and F 6s which are shown in Table 5.3 with corresponding steady-state values shown in Table 5.4.

Table 5.3 Steady-state input values for x_s of the alkylation of benzene process of Eqs. 5.56–5.80
Table 5.4 Steady-state values for x_s of the alkylation of benzene process of Eqs. 5.56–5.80

The control objective is to regulate the system from an initial state to the steady state. The initial state values are shown in Table 5.6.

The first distributed controller (LMPC 1) will be designed to decide the values of Q_1, Q_2 and Q_3, the second distributed controller (LMPC 2) will be designed to decide the values of Q_4 and Q_5, and the third distributed controller (LMPC 3) will be designed to decide the values of F_4 and F_6. Taking this into account, the process model of Eqs. 5.56–5.80 belongs to the following class of nonlinear systems:

$$\dot{x}(t) = f(x) + g_1(x)u_1(t) + g_2(x)u_2(t)+g_3(x)u_3(t),$$
(5.93)

where the state x is the deviation of the state of the process from the steady state, \(u_{1}^{T}=[u_{11}\ u_{12}\ u_{13}]=[Q_{1}-Q_{1s}\ Q_{2}-Q_{2s}\ Q_{3}-Q_{3s}]\), \(u_{2}^{T}=[u_{21}\ u_{22}]= {[Q_{4}-Q_{4s}\ Q_{5}-Q_{5s}]}\) and \(u_{3}^{T}=[u_{31}\ u_{32}]=[F_{4}-F_{4s}\ F_{6}-F_{6s}]\) are the manipulated inputs which are subject to the constraints shown in Table 5.5.

Table 5.5 Manipulated input constraints of the alkylation of benzene process of Eqs. 5.56–5.80

In the control of the process, u 1 and u 2 are necessary to keep the stability of the closed-loop system, while u 3 can be used as an extra manipulated input to improve the closed-loop performance. To illustrate the theoretical results, we first design the nonlinear control law h(x)=[h 1(x) h 2(x) h 3(x)]T. Specifically, h 1(x) and h 2(x) are designed as follows [97]:

$$ h_{i}(x)=\left\{ \begin{array}{l@{\quad}l}-\frac{L_fV+\sqrt {(L_fV)^2+(L_{g_i}V)^4}}{(L_{g_i}V)^2}L_{g_i}V & \mbox{if}\ L_{g_i}V \neq0, \\0 & \mbox{if}\ L_{g_i}V = 0,\end{array} \right.$$
(5.94)

where i=1,2, \(L_{f}V=\frac{\partial V}{\partial x}f(x)\) and \(L_{g_{i}}V=\frac{\partial V}{\partial x}g_{i}(x)\) denote the Lie derivatives of the scalar function V with respect to f and g i (i=1,2), respectively. The controller h 3(x) is chosen to be h 3(x)=[0 0]T because the input set u 3 is not needed to stabilize the process. We consider a Lyapunov function V(x)=x T Px with P being the following weight matrix:

$$P=\mathit{diag}\bigl([1 \ 1 \ 1 \ 1 \ 10 \ 1 \ 1 \ 1 \ 1 \ 10 \ 1 \ 1\ 1 \ 1 \ 10 \ 1 \ 1 \ 1 \ 1 \ 10 \ 1 \ 1 \ 1 \ 1 \ 10]\bigr).$$
(5.95)

The weights in P are chosen by a trial-and-error procedure. The basic idea behind this procedure is that more weight should be put on the temperatures of the five vessels because the temperatures have a more significant effect on the overall control performance, and the controller h(x) should be able to stabilize the closed-loop system asymptotically with continuous feedback and actuation.
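
A minimal sketch of the controller of Eq. 5.94 with the quadratic Lyapunov function V(x)=x^T Px is given below; writing the formula in terms of the Euclidean norm of L_{g_i}V is one way to handle vector-valued input sets and is an implementation choice, not something prescribed in the text.

```python
import numpy as np

def sontag_h_i(x, f, g_i, P):
    """Lyapunov-based control law of Eq. 5.94 for input set u_i, with V(x) = x' P x."""
    dVdx = 2.0 * P @ x                       # gradient of the quadratic Lyapunov function
    LfV = dVdx @ f(x)                        # Lie derivative of V along f
    LgV = dVdx @ g_i(x)                      # Lie derivative of V along each column of g_i
    norm2 = float(np.dot(LgV, LgV))
    if norm2 == 0.0:
        return np.zeros_like(LgV)            # second branch of Eq. 5.94
    return -(LfV + np.sqrt(LfV ** 2 + norm2 ** 2)) / norm2 * LgV
```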

Based on h(x), we design the centralized LMPC of Eqs. 5.50–5.54, the sequential DMPC of Eqs. 5.17–5.23 and the iterative DMPC of Eqs. 5.38–5.43. The sampling time used is Δ=30 s and the weight matrices are:

$$Q_c=\mathit{diag}\bigl(\bigl[1 \ 1 \ 1 \ 1 \ 10^3 \ 1 \ 1 \ 1 \ 1 \ 10^3 \ 10 \ 10 \ 10 \ 10 \ 10^4 \ 1 \ 1 \ 1 \ 1 \ 10^3 \ 1 \ 1 \ 1\ 1 \ 10^3\bigr]\bigr),$$
(5.96)

and R c1=diag([10−8 10−8 10−8]), R c2=diag([10−8 10−8]) and \(R_{c3}= \mathit{diag}([1\ 1])\).

First, we carried out a set of simulations which demonstrate that the nonlinear control law h(x) and the different schemes of LMPCs can all stabilize the closed-loop system asymptotically. Figure 5.4 shows the trajectories of the Lyapunov function V(x) under the different control schemes. Note that because of the constraints of Eqs. 5.54, 5.23 and 5.43, the trajectories of the Lyapunov function of the closed-loop system under the centralized LMPC, the sequential DMPC and the iterative DMPC are guaranteed to be bounded by the corresponding Lyapunov function trajectory under the controller h(x) implemented in a sample-and-hold fashion with the sampling time Δ until V(x) converges to a small region around the origin (i.e., \(\varOmega _{\rho_{\min}}\)). This point is also illustrated in Fig. 5.4.

Fig. 5.4 Trajectories of the Lyapunov function V(x) of the alkylation of benzene process of Eqs. 5.56–5.80 under the controller h(x) of Eq. 5.94 implemented in a sample-and-hold fashion (solid line), the centralized LMPC of Eqs. 5.50–5.54 (dashed line), the sequential DMPC of Eqs. 5.17–5.23 (dash-dotted line) and the iterative DMPC of Eqs. 5.38–5.43 with c=1 (dotted line)

Next, we compare the mean evaluation times of the centralized LMPC optimization problem and the sequential and iterative DMPC optimization problems. Each LMPC optimization problem was evaluated 100 times at different conditions. Different prediction horizons were considered in this set of simulations. The simulations were carried out using JAVA™ programming language in a PENTIUM® 3.20 GHz computer. The optimization problems were solved using the open source interior point optimizer Ipopt [109]. The results are shown in Table 5.7. From Table 5.7, we can see that in all cases, the time needed to solve the centralized LMPC is much larger than the time needed to solve the sequential or iterative DMPCs. This is because the centralized LMPC has to solve a much larger (in terms of decision variables) optimization problem than the DMPCs. We can also see that the evaluation time of the centralized LMPC is even larger than the sum of evaluation times of LMPC 1, LMPC 2 and LMPC 3 in the sequential DMPC, and the times needed to solve the DMPCs in both sequential and iterative distributed schemes are of the same order of magnitude.

In the following set of simulations, we compare the centralized LMPC and the two DMPC schemes from a performance index point of view. In this set of simulations, the prediction horizon is N=1. To carry out this comparison, the same initial condition and parameters were used for the different control schemes and the total cost under each control scheme was computed as follows:

$$J=\int_{t_0}^{t_M} \bigl[\bigl\|x(\tau)\bigr\|_{Q_c} + \bigl\| u_1(\tau)\bigr\|_{R_{c1}}+\bigl\|u_2(\tau)\bigr\|_{R_{c2}}+\bigl\|u_3(\tau)\bigr\|_{R_{c3}}\bigr]\,d\tau,$$
(5.97)

where t_0=0 is the initial time of the simulations and t_M=1000 s is the end of the simulations. Table 5.8 shows the total cost along the closed-loop system trajectories (trajectories I) under the different control schemes. For the iterative DMPC design, different maximum numbers of iterations, c_max, are used. From Table 5.8, we can see that in this set of simulations, the centralized LMPC gives the lowest performance cost, and the sequential DMPC gives a lower cost than the iterative DMPC when there is no iteration (c_max=1). However, as the iteration number c increases, the performance cost given by the iterative DMPC decreases and converges to that of the centralized LMPC. This point is also shown in Fig. 5.5.
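
The total cost of Eq. 5.97 can be approximated from logged closed-loop trajectories by numerical quadrature; a minimal sketch is given below, interpreting the weighted norms in Eq. 5.97 as quadratic forms and assuming a hypothetical logging format.

```python
import numpy as np

def quad(v, W):
    """Quadratic term v' W v (one common interpretation of the weighted norms in Eq. 5.97)."""
    v = np.asarray(v, dtype=float)
    return float(v @ W @ v)

def total_cost(t, x_log, u_logs, Qc, R_list):
    """Approximate the performance index J of Eq. 5.97 by the trapezoidal rule,
    given logged samples x(t_i) and u_j(t_i) of the closed-loop trajectories."""
    vals = np.array([quad(x_log[i], Qc)
                     + sum(quad(u_log[i], R) for u_log, R in zip(u_logs, R_list))
                     for i in range(len(t))])
    dt = np.diff(np.asarray(t, dtype=float))
    return float(np.sum(0.5 * dt * (vals[1:] + vals[:-1])))
```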

Fig. 5.5 Total performance costs along the closed-loop trajectories of the alkylation of benzene process of Eqs. 5.56–5.80 under the centralized LMPC of Eqs. 5.50–5.54 (dashed line), the sequential DMPC of Eqs. 5.17–5.23 (dash-dotted line) and the iterative DMPC of Eqs. 5.38–5.43 (solid line)

Note that the above set of simulations only represents one case of many possible cases. As we discussed in Remarks 5.5 and 5.13, there is no guaranteed convergence of the performance of a distributed MPC to the performance of a centralized MPC, and there is also no guaranteed superiority of the performance of one DMPC scheme over the others. In the following, we show two sets of simulations to illustrate these points. In both sets of simulations, we chose different matrices R_c1 and R_c2, and all the other parameters (Q_c, R_c3, Δ, N) remained the same as in the previous set of simulations. In the first set of simulations, we picked \(R_{c1}=\mathit{diag}([5\times 10^{-5}\;5\times10^{-5}\;5\times10^{-5}])\), \(R_{c2}=\mathit{diag}([5\times 10^{-5}\;5\times10^{-5}])\). The total performance cost along the closed-loop system trajectories (trajectories II) under this simulation setting is shown in Table 5.9. From Table 5.9, we can see that the centralized LMPC provides a much lower cost than both the sequential and iterative distributed LMPCs. We can also see that, as the number of iterations increases, the iterative distributed LMPC converges to a value which is different from the one obtained by the centralized LMPC. In the second set of simulations, we picked R_c1=diag([1×10−4 1×10−4 1×10−4]), R_c2=diag([1×10−4 1×10−4]); the total performance cost along the closed-loop system trajectories (trajectories III) is shown in Table 5.10, from which we can see that the centralized LMPC provides a higher cost than both distributed LMPCs.

5.5 Sequential and Iterative DMPC Designs with Asynchronous Measurements

In this section, we design sequential and iterative DMPC schemes, taking into account asynchronous measurements explicitly in their designs, that provide deterministic closed-loop stability properties. Similarly, in each DMPC architecture, we will design m LMPCs to compute u i , i=1,…,m, and refer to the LMPC computing the input trajectories of u i as LMPC i. Schematic diagrams of the sequential and iterative DMPC designs for systems subject to asynchronous measurements are shown in Figs. 5.6 and 5.7.

Fig. 5.6 Sequential DMPC for nonlinear systems subject to asynchronous measurements

Fig. 5.7 Iterative DMPC for nonlinear systems subject to asynchronous measurements

5.5.1 Modeling of Asynchronous Measurements

We assume that the state of the system of Eq. 5.1, x(t), is available asynchronously at time instants t_a, where \(\{t_{a\geq0}\}\) is a random increasing sequence of times. We also assume that there exists an upper bound T_m on the interval between two successive measurements, that is, the sequence satisfies the condition of Eq. 2.22.
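
In simulations, an asynchronous measurement sequence with this bounded-interval property can be generated as in the following sketch; the uniform distribution is illustrative, since the analysis only uses the upper bound T_m.

```python
import numpy as np

def async_sampling_times(t0, t_end, T_m, rng=None):
    """Random increasing sequence {t_a} whose consecutive gaps never exceed T_m (cf. Eq. 2.22)."""
    rng = np.random.default_rng(rng)
    times = [t0]
    while times[-1] < t_end:
        times.append(times[-1] + rng.uniform(0.0, T_m))  # gap never exceeds T_m
    return np.array(times)
```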

5.5.2 Sequential DMPC with Asynchronous Measurements

5.5.2.1 Sequential DMPC Formulation

For the design of the sequential DMPC for systems subject to asynchronous measurements (see Fig. 5.6), we take advantage of the nominal model to update the control inputs based on a state prediction when feedback is lost, and we have the control actuators store and implement the last computed optimal input trajectories. Specifically, the implementation strategy is as follows:

  1. When a new measurement is available at t_a, all the LMPCs receive the state measurement x(t_a) from the sensors.

  2. For j=m to 1:

     2.1. LMPC j receives the entire future input trajectories of u_i, i=m,…,j+1, from LMPC j+1 and evaluates the future input trajectory of u_j based on x(t_a) and the received future input trajectories.

     2.2. LMPC j sends the entire input trajectory of u_j to its actuators and the entire input trajectories of u_i, i=m,…,j, to LMPC j−1.

  3. When a new measurement is received (a←a+1), go to Step 1.

In order to make a decision, LMPC j, j=m,…,2 must assume trajectories for u i , i=j−1,…,1, along the prediction horizon since the communication is one-directional. To this end, the controller h(x) is used. In order to inherit the stability properties of the controller h(x), each control input u i , i=1,…,m must satisfy a set of constraints that guarantee a given minimum contribution to the decrease rate of the Lyapunov function V(x) in the case of asynchronous measurements. To this end, the input trajectories, u n,i (t|t a ) (i=1,…,m), defined in Eq. 5.37 are used.

Specifically, the design of LMPC j, j=1,…,m, is based on the following optimization problem:

(5.98)
(5.99)
(5.100)
(5.101)
(5.102)
(5.103)
(5.104)
(5.105)

where N_R is the smallest integer satisfying T_m≤N_RΔ. The vector \(\tilde{x}^{j}\) is the predicted trajectory of the nominal system with u_j computed by the above optimization problem (i.e., LMPC j) and the other control inputs defined by Eqs. 5.101–5.102. The vector \(\hat{x}^{j}\) is the predicted trajectory of the nominal system with u_j=u_{n,j}(t|t_a) and the other control inputs defined by Eqs. 5.101–5.102. In order to fully take advantage of the prediction, we choose N≥N_R. The optimal solution to this optimization problem is denoted \(u_{s,j}^{a,*}(t|t_{a})\) and is defined for t∈[t_a,t_a+NΔ).

The constraint of Eq. 5.99 is the nominal model of the system, which is used to generate the trajectory \(\tilde{x}^{j}\); the constraint of Eq. 5.100 defines a reference trajectory of the nominal system (i.e., \(\hat{x}^{j}\)) when the input u_j is defined by u_{n,j}(t|t_a); the constraint of Eq. 5.101 defines the value of the inputs evaluated after u_j (i.e., u_i with i=1,…,j−1); the constraint of Eq. 5.102 defines the value of the inputs evaluated before u_j (i.e., u_i with i=j+1,…,m); the constraint of Eq. 5.103 is the constraint on the manipulated input u_j; the constraint of Eq. 5.104 sets the initial state for the optimization problem; and the constraint of Eq. 5.105 guarantees that the contribution of input u_j to the decrease rate of the time derivative of the Lyapunov function from t_a to t_a+N_RΔ, if \(u_{j}=u^{a,*}_{s,j}(t|t_{a})\), t∈[t_a,t_a+N_RΔ), is applied, is greater than or equal to the contribution obtained when u_j=u_{n,j}(t|t_a), t∈[t_a,t_a+N_RΔ), is applied. This constraint guarantees that the sequential DMPC design of Eqs. 5.98–5.105 maintains the stability of the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation in the presence of asynchronous measurements.

The manipulated inputs of the closed-loop system under the above sequential DMPC are defined as follows:

$$ u_i(t) = u^{a,*}_{s,i}(t|t_a),\quad i=1,\ldots,m, \forall t\in[t_a,t_{a+1}).$$
(5.106)
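
A minimal sketch of the actuator-side logic implied by this implementation strategy is given below: each actuator stores the last trajectory received from its LMPC and plays it back, piecewise constant over the sampling periods, until a new trajectory arrives; the class and method names are illustrative.

```python
class TrajectoryActuator:
    """Stores the last trajectory received from its LMPC and plays it back
    between asynchronous measurement instants (Sect. 5.5.2.1)."""

    def __init__(self, Delta):
        self.Delta = Delta
        self.t_a = None          # time of the last measurement/trajectory update
        self.trajectory = None   # list of input values, one per sampling period

    def update(self, t_a, trajectory):
        """Called when LMPC j sends a new optimal input trajectory at t_a."""
        self.t_a, self.trajectory = t_a, list(trajectory)

    def output(self, t):
        """Input applied at time t: the stored piece covering [t_a + q*Delta, t_a + (q+1)*Delta)."""
        q = int((t - self.t_a) // self.Delta)
        q = min(q, len(self.trajectory) - 1)   # keep the last piece if the horizon is exceeded
        return self.trajectory[q]
```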

5.5.2.2 Stability Properties

The sequential DMPC design of Eqs. 5.98–5.105 maintains the closed-loop stability properties of the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation in the presence of asynchronous measurements. This property is presented in Theorem 5.3 below. To state this theorem, we need the following corollaries.

From Proposition 2.1, we can obtain the following corollary for systems with m sets of control inputs, entering the dynamics of the system in an affine fashion, which ensures that if the nominal system of Eq. 5.1 under the control u i =h i (x) (i=1,…,m) implemented in a sample-and-hold fashion with state feedback every sampling time starts in Ω ρ , then it is ultimately bounded in \(\varOmega _{\rho_{\min}}\).

Corollary 5.2

Consider the nominal sampled trajectory \(\hat{x}\) of the system of Eq. 5.1 in closed-loop with a nonlinear control law u_i=h_i(x) (i=1,…,m), satisfying the conditions of Eqs. 5.5–5.8 and applied in a sample-and-hold fashion, obtained by solving recursively the following equation:

$$ \dot{\hat{x}}(t) = f\bigl(\hat{x}(t)\bigr) +\sum_{i=1}^m g_i\bigl(\hat{x}(t)\bigr)h_i\bigl(\hat{x}(t_k)\bigr),\quad t\in[t_k,t_{k+1}),$$
(5.107)

where \(t_k=t_0+k\varDelta\), k=0,1,… . Let Δ, ε_s>0 and ρ>ρ_s>0 satisfy:

$$ -\alpha_3\bigl(\alpha_2^{-1}(\rho_s)\bigr) +L'M \leq-\varepsilon _s/\varDelta $$
(5.108)

with \(L'=L'_{x}+\sum^{m}_{i=1}L'_{u_{i}}u_{i}^{\max}\). Then, if ρ_min<ρ where ρ_min is defined as in Eq. 5.26 and \(\hat{x}(0)\in \varOmega _{\rho}\), the following inequality holds:

(5.109)
(5.110)

Proof

Following the definition of \(\hat{x}(t)\) in Eq. 5.107, the time derivative of the Lyapunov function V(x) along the trajectory \(\hat{x}(t)\) of the system 5.1 in t∈[t k ,t k+1) is given by:

$$\dot{V}\bigl(\hat{x}(t)\bigr) = \frac{\partial V(\hat {x}(t))}{\partial x} \Biggl(f\bigl(\hat{x}(t)\bigr)+\sum_{i=1}^mg_i\bigl(\hat{x}(t)\bigr)h_i\bigl(\hat{x}(t_k)\bigr)\Biggr).$$
(5.111)

Adding and subtracting \(\frac{\partial V(\hat{x}(t_{k}))}{\partial x}(f(\hat{x}(t_{k}))+\sum_{i=1}^{m}g_{i}(\hat{x}(t_{k}))h_{i}(\hat{x}(t_{k})))\) and taking into account Eq. 5.6, we obtain:

(5.112)

From the Lipschitz properties of Eqs. 5.14–5.15, the fact that the control inputs are bounded in convex sets and the above inequality of Eq. 5.112, we have that:

$$ \dot{V}\bigl(\hat{x}(t)\bigr) \leq-\alpha_3\bigl(\alpha^{-1}_2(\rho_s)\bigr)+ \Biggl(L'_x+\sum_{i=1}^mL'_{u_i}u_i^{\max}\Biggr)\bigl\|\hat{x}(t)-\hat{x}(t_k) \bigr\|$$
(5.113)

for all \(\hat{x}(t_{k}) \in \varOmega _{\rho}/\varOmega _{\rho_{s}}\). Taking into account the Lipschitz property of Eq. 5.9 and the continuity of \(\hat{x}(t)\), the following bound can be written for all t∈[t k ,t k+1):

$$ \bigl\|\hat{x}(t)-\hat{x}(t_k)\bigr\|\leq M\varDelta .$$
(5.114)

Using the expression of Eq. 5.114, we obtain the following bound on the time derivative of the Lyapunov function for t∈[t k ,t k+1), for all initial states \(\hat{x}(t_{k}) \in \varOmega _{\rho}/\varOmega _{\rho_{s}}\):

$$\dot{V}\bigl(\hat{x}(t)\bigr) \leq-\alpha_3\bigl(\alpha_2^{-1}(\rho_s)\bigr) + L' M\varDelta ,$$
(5.115)

where \(L'=L'_{x}+\sum_{i=1}^{m}L'_{u_{i}}u_{i}^{\max}\). If the condition of Eq. 5.108 is satisfied, then \(\dot{V}(\hat{x}(t))\leq -\varepsilon _{s}/\varDelta \). Integrating this bound on t∈[t k ,t k+1) we obtain that the inequality of Eq. 5.109 holds. Using Eq. 5.109 recursively, it is proved that, if \(x(t_{0})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\), the state converges to \(\varOmega _{\rho_{s}}\) in a finite number of sampling times without leaving the stability region. Once the state converges to \(\varOmega _{\rho_{s}}\subseteq \varOmega _{\rho_{\min}}\), it remains inside \(\varOmega _{\rho_{\min}}\) for all times. This statement holds because of the definition of ρ min  as in Eq. 5.26. □

From Proposition 2.2, we obtain the following Corollary 5.3, which provides an upper bound on the deviation of the state trajectory obtained using the nominal model of Eq. 5.1 from the actual state trajectory when the same control actions are applied, for systems with m sets of control inputs entering the dynamics of the system in an affine fashion.

Corollary 5.3

Consider the systems:

(5.116)
(5.117)

where the initial states x_a(t_0),x_b(t_0)∈Ω_ρ with x_b(t_0)=x_a(t_0)+n_x and ‖n_x‖≤θ_x. There exists a function f_W(⋅,⋅) such that:

$$ \bigl\|x_a(t)-x_b(t)\bigr\|\leq f_W(\theta_x,t-t_0),$$
(5.118)

for all x a (t),x b (t)∈Ω ρ and all w(t)∈W with:

$$ f_W(\theta_x,\tau)=\biggl(\frac{L_w\theta}{L''}+\theta_x\biggr) \bigl(e^{L''\tau}-1\bigr),$$
(5.119)

where \(L''=L_{x}+\sum_{i=1}^{m}L_{u_{i}}u^{\max}_{i}\).

Proof

Define the error vector as e(t)=x a (t)−x b (t). The time derivative of the error is given by:

$$\dot{e}(t)=f\bigl(x_a(t)\bigr) - f\bigl(x_b(t)\bigr) + \sum _{i=1}^{m}\bigl(g_i\bigl(x_a(t)\bigr)-g_i\bigl(x_b(t)\bigr)\bigr)u_i(t) + k\bigl(x_a(t)\bigr)w(t).$$
(5.120)

From the Lipschitz property of Eqs. 5.11–5.13 and the fact that the control inputs are bounded in convex sets, the following inequality holds:

(5.121)

for all x a (t),x b (t)∈Ω ρ and w(t)∈W. Integrating \(\|\dot{e}(t)\|\) with initial condition ‖e(t 0)‖=‖n x ‖ and taking into account that ‖n x ‖≤θ x , the following bound on the norm of the error vector is obtained:

$$\bigl\|e(t)\bigr\|\leq\biggl(\frac{L_w\theta}{L''}+\theta_x\biggr)\bigl(e^{L'' (t-t_0)}-1\bigr),$$
(5.122)

where \(L''=L_{x}+\sum_{i=1}^{m}L_{u_{i}}u^{\max}_{i}\). This implies that the inequality of Eq. 5.118 holds for:

$$f_W(\theta_x,\tau)=\biggl(\frac{L_w\theta}{L''}+\theta_x\biggr) \bigl(e^{L'' \tau}-1\bigr),$$
(5.123)

which proves this corollary. □
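
To make the bound of Eq. 5.119 concrete, the short sketch below evaluates f W (θ x ,τ) for a given set of Lipschitz constants, input bounds and disturbance bound; all numerical values are illustrative assumptions and are not taken from the chapter.

```python
# Minimal numerical illustration of the bound of Eq. 5.119 (Corollary 5.3).
# All Lipschitz constants, input bounds and the disturbance bound below are
# illustrative assumptions, not values from the chapter.
import math

def f_W(theta_x, tau, L_x, L_u, u_max, L_w, theta):
    """Bound on ||x_a(t) - x_b(t)|| after an elapsed time tau = t - t_0."""
    L_pp = L_x + sum(Li * ui for Li, ui in zip(L_u, u_max))   # L'' of Eq. 5.119
    return (L_w * theta / L_pp + theta_x) * (math.exp(L_pp * tau) - 1.0)

# Example: two input sets, no initial mismatch (theta_x = 0), a 30 s interval
print(f_W(0.0, 30.0, L_x=0.01, L_u=[0.001, 0.002], u_max=[1.0, 1.0],
          L_w=0.05, theta=0.1))
```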

In Theorem 5.3 below, we provide sufficient conditions under which the DMPC of Eqs. 5.98–5.106 guarantees that the state of the closed-loop system is ultimately bounded in a region that contains the origin.

Theorem 5.3

Consider the system of Eq. 5.1 in closed-loop with x available at asynchronous sampling time instants {t a≥0}, satisfying the condition of Eq. 2.22, under the DMPC design of Eqs. 5.98–5.106 based on a control law h(x) that satisfies the conditions of Eqs. 5.5–5.8. Let Δ,ε s >0, ρ>ρ min >0, ρ>ρ s >0 and N≥N R ≥1 satisfy the condition of Eq. 5.108 and the following inequality:

$$ -N_R\varepsilon _s+f_V\bigl(f_W(0,N_R\varDelta )\bigr)<0$$
(5.124)

with f V defined in Eq. 2.49 and f W defined in Eq. 5.119, and N R being the smallest integer satisfying N R Δ≥T m . If the initial state of the closed-loop system x(t 0)∈Ω ρ , then x(t) is ultimately bounded in \(\varOmega _{\rho_{a}}\subseteq \varOmega _{\rho}\) where:

$$\rho_a = \rho_{\min} + f_V\bigl(f_W(0,N_R\varDelta )\bigr)$$
(5.125)

with ρ min  defined in Eq. 5.26.

Proof

In order to prove that the state of the closed-loop system is ultimately bounded in a region that contains the origin, we prove that V(x(t a )) is a decreasing sequence of values with a lower bound. Specifically, we focus on the time interval t∈[t a ,t a+1) and prove that V(x(t a+1)) is reduced compared with V(x(t a )) or is maintained in an invariant set containing the origin.

To simplify the notation, we assume that all the signals used in this proof refer to the different optimization problems solved at t a with the initial condition x(t a ), and that the trajectory \(\tilde{x}^{j}(t)\), j=1,…,m, corresponds to the optimal input \(u_{s,j+1}^{a,*}(t|t_{a})\). We also note that the predicted trajectories \(\tilde{x}^{j+1}(t)\) and \(\hat{x}^{j}(t)\) generated in the optimization problems of LMPC j+1 and LMPC j are identical. This property will be used in the proof.

Part 1: In this part, we prove that the stability results stated in Theorem 5.3 hold in the case that t a+1 −t a =T m for all a and T m =N R Δ. This case corresponds to the worst situation in the sense that the controllers need to operate in open-loop for the maximum possible amount of time. By Corollary 5.2 and the fact that t a+1 =t a +N R Δ, the following inequality is obtained:

$$ V\bigl(\hat{x}(t_{a+1})\bigr) \leq\max\bigl\{V\bigl(\hat{x}(t_a)\bigr)-N_R\varepsilon _s,\rho_{\min}\bigr\}.$$
(5.126)

From the constraints of Eq. 5.105 in the LMPCs, the following inequality can be written:

$$ V\bigl(\tilde{x}^{j}(t)\bigr) \leq V\bigl(\hat{x}^j(t)\bigr),\quad j=1,\ldots,m, \forall t \in [t_a,t_a+N_R\varDelta ).$$
(5.127)

By the fact that \(\tilde{x}^{j+1}(t)\) and \(\hat{x}^{j}(t)\) are identical, the following equations can be written:

$$ V\bigl(\hat{x}^{j}(t)\bigr) = V\bigl(\tilde{x}^{j+1}(t)\bigr),\quad j=1,\ldots,m-1, \forall t \in[t_a,t_a+N_R\varDelta ).$$
(5.128)

From the inequalities of Eqs. 5.127 and 5.128, the following inequalities are obtained:

$$ V\bigl(\tilde{x}^1(t)\bigr)\leq\cdots\leq V\bigl(\tilde{x}^j(t)\bigr)\leq\cdots\leq V\bigl(\tilde{x}^m(t)\bigr)\leq V\bigl(\hat{x}^m(t)\bigr),\quad \forall t \in[t_a,t_a+N_R\varDelta ).$$
(5.129)

Note that the trajectory \(\tilde{x}^{1}\) is the nominal trajectory (i.e., \(\tilde{x}\)) of the closed-loop system under the control of the sequential DMPC of Eqs. 5.98–5.106. Note also that the trajectory \(\hat{x}^{m}\) is the nominal sampled trajectory (i.e., \(\hat{x}\)) of the closed-loop system defined in Eq. 5.107. Therefore, the following inequality can be written:

$$ V\bigl(\tilde{x}(t)\bigr)\leq V\bigl(\hat{x}(t)\bigr),\quad \forall t\in[t_a,t_a+N_R\varDelta ).$$
(5.130)

From the inequalities of Eqs. 5.126 and 5.130 and the fact that \(\hat{x}(t_{a})=x(t_{a})\), the following inequality is obtained:

$$ V\bigl(\tilde{x}(t_{a+1})\bigr) \leq\max \bigl \{V\bigl(x(t_a)\bigr)-N_R\varepsilon _s,\rho_{\min}\bigr\}.$$
(5.131)

When x(t)∈Ω ρ for all times (this point will be proved below), we can apply Proposition 2.3 to obtain the following inequality:

$$ V\bigl(x(t_{a+1})\bigr) \leq V\bigl(\tilde{x}(t_{a+1})\bigr)+f_V\bigl(\bigl\|\tilde{x}(t_{a+1})-x(t_{a+1})\bigr\|\bigr).$$
(5.132)

Applying Corollary 5.3, we obtain the following upper bound on the deviation of \(\tilde{x}(t)\) from x(t):

$$ \bigl\|x(t_{a+1})-\tilde{x}(t_{a+1})\bigr\|\leq f_W(0,N_R\varDelta ).$$
(5.133)

From the inequalities of Eqs. 5.132 and 5.133, the following upper bound on V(x(t a+1)) can be written:

$$ V\bigl(x(t_{a+1})\bigr) \leq V\bigl(\tilde{x}(t_{a+1})\bigr)+f_V\bigl(f_W(0,N_R\varDelta )\bigr).$$
(5.134)

Using the inequality of Eq. 5.131, we can rewrite the inequality of Eq. 5.134 as follows:

$$ V\bigl(x(t_{a+1})\bigr) \leq\max\bigl\{V\bigl(x(t_a)\bigr)-N_R\varepsilon _s, \rho_{\min}\bigr\}+f_V\bigl(f_W(0,N_R\varDelta )\bigr).$$
(5.135)

If the condition of Eq. 5.124 is satisfied, from the inequality of Eq. 5.135, we know that there exists ε w >0 such that the following inequality holds:

$$ V\bigl(x(t_{a+1})\bigr)\leq\max\bigl\{V\bigl(x(t_a)\bigr)-\varepsilon _w, \rho_a\bigr\}$$
(5.136)

which implies that if \(x(t_{a})\in \varOmega _{\rho}/\varOmega _{\rho_{a}}\), then V(x(t a+1))<V(x(t a )), and if \(x(t_{a})\in \varOmega _{\rho_{a}}\), then V(x(t a+1))≤ρ a .

Because the upper bound on the difference between the Lyapunov function of the actual trajectory x and the nominal trajectory \(\tilde{x}\) is a strictly increasing function of time (see Corollary 5.3 and Proposition 2.3 for the expressions of f V (⋅) and f W (⋅)), the inequality of Eq. 5.136 also implies that

$$ V\bigl(x(t)\bigr)\leq\max\bigl\{V\bigl(x(t_a)\bigr),\rho_a\bigr\},\quad \forall t\in[t_a,t_{a+1}).$$
(5.137)

Using the inequality of Eq. 5.137 recursively, it can be proved that if x(t 0)∈Ω ρ , then the closed-loop trajectories of the system of Eq. 5.1 under the sequential DMPC of Eqs. 5.98–5.106 stay in Ω ρ for all times (i.e., x(t)∈Ω ρ , ∀t). Moreover, using the inequality of Eq. 5.137 recursively, it can be proved that if x(t 0)∈Ω ρ , the closed-loop trajectories of the system of Eq. 5.1 under the sequential DMPC of Eqs. 5.98–5.106 satisfy

$$\limsup_{t\rightarrow\infty} V\bigl(x(t)\bigr) \leq\rho_a.$$
(5.138)

This proves that x(t)∈Ω ρ for all times and x(t) is ultimately bounded in \(\varOmega _{\rho_{a}}\) for the case when t a+1 −t a =T m for all a and T m =N R Δ.

Part 2: In this part, we extend the results proved in Part 1 to the general case, that is, t a+1 −t a ≤T m for all a and T m ≤N R Δ, which implies that t a+1 −t a ≤N R Δ. Because f V (⋅) and f W (⋅) are strictly increasing functions of time and f V (⋅) is convex, following similar steps as in Part 1, it can be shown that the inequality of Eq. 5.135 still holds. This proves that the stability results stated in Theorem 5.3 hold. □

Remark 5.14

Note that the stability results stated in Theorem 5.3 also hold when the sequential DMPC of Eqs. 5.98–5.105 is applied to a nonlinear system described by Eq. 5.4.

5.5.3 Iterative DMPC with Asynchronous Measurements

5.5.3.1 Iterative DMPC Formulation

In contrast to the one-directional communication of the sequential DMPC architecture, the iterative DMPC architecture utilizes a bidirectional communication strategy in which all the distributed controllers are able to share their future input trajectory information after each iteration. In the presence of asynchronous measurements, the iterative DMPC of Eqs. 5.38–5.44 presented in Sect. 5.4.2 cannot guarantee closed-loop stability. In this subsection, we modify the implementation strategy and the formulation of the distributed controllers to take into account asynchronous measurements (see Fig. 5.7). The implementation strategy is as follows (a schematic sketch of this loop is given after the list):

  1. When a new measurement is available at t a , all the LMPCs receive the state measurement x(t a ) from the sensors and then evaluate their future input trajectories in an iterative fashion with initial input guesses generated by h(⋅).

  2. At iteration c (c≥1):

     2.1. Each LMPC evaluates its own future input trajectory based on x(t a ) and the latest received input trajectories of all the other LMPCs (when c=1, initial input guesses generated by h(⋅) are used).

     2.2. The controllers exchange their future input trajectories. Based on all the input trajectories, each controller calculates and stores the value of the cost function.

  3. If a termination condition is satisfied, each LMPC sends its entire future input trajectory corresponding to the smallest value of the cost function to its actuators; if the termination condition is not satisfied, go to Step 2 (c←c+1).

  4. When a new measurement is received (a←a+1), go to Step 1.
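
The following sketch summarizes Steps 1–4 above, assuming that a maximum number of iterations c max is used as the termination condition (as done later in Sect. 5.5.4). The callables h_initial_guess, lmpc_solve and plant_cost are hypothetical placeholders for the initial guesses generated by h(⋅), the LMPC optimization of Eqs. 5.139–5.145 and the cost function, respectively; this is a sketch of the implementation strategy, not the authors' code.

```python
# Sketch of the iterative DMPC evaluation at a measurement instant t_a.
# h_initial_guess, lmpc_solve and plant_cost are hypothetical placeholders.

def iterative_dmpc_step(x_ta, m, c_max, h_initial_guess, lmpc_solve, plant_cost):
    # Step 1: initial input trajectory guesses generated by h(.)
    trajectories = [h_initial_guess(x_ta, j) for j in range(m)]
    best_cost = plant_cost(x_ta, trajectories)
    best_trajectories = list(trajectories)

    for c in range(1, c_max + 1):                       # Step 2: iterate
        # Step 2.1: each LMPC optimizes its own input given the others' latest ones
        new_trajectories = [lmpc_solve(j, x_ta, trajectories) for j in range(m)]
        # Step 2.2: exchange trajectories and evaluate/store the common cost
        cost = plant_cost(x_ta, new_trajectories)
        if cost < best_cost:
            best_cost, best_trajectories = cost, list(new_trajectories)
        trajectories = new_trajectories                 # used at iteration c + 1

    # Step 3: implement the trajectories with the smallest recorded cost
    return best_trajectories, best_cost
```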

The design of LMPC j, j=1,…,m, at iteration c is based on the following optimization problem:

(5.139)
(5.140)
(5.141)
(5.142)
(5.143)
(5.144)
(5.145)

where \(\tilde{x}^{j}\) is the predicted trajectory of the nominal system of Eq. 5.1 with u j computed by this LMPC and all the other inputs set to the optimal input trajectories at iteration c−1 of the rest of the distributed controllers; \(\hat{x}(t|t_{a})\) and u n,i (t|t a ) (i=1,…,m) are defined in Eqs. 5.36 and 5.37, respectively. The optimal solution to this optimization problem is denoted \(u_{p,j}^{a,c}(t|t_{a})\), which is defined for t∈[t a ,t a +NΔ). Accordingly, we define the final optimal input trajectory of LMPC j of Eqs. 5.139–5.145 as \(u_{p,j}^{a,*}(t|t_{a})\), which is also defined for t∈[t a ,t a +NΔ).

Similar to the iterative DMPC with continuous measurements, for the first iteration of each distributed LMPC, the input trajectories defined in Eq. 5.37 based on the trajectory generated in Eq. 5.36 are used as the initial input trajectory guesses; that is, \(u_{p,i}^{a,0}=u_{n,i}\) with i=1,…,m.

The constraint of Eq. 5.142 puts a limit on the input change between two consecutive iterations. This constraint allows LMPC j of Eqs. 5.139–5.145 to take advantage of the input trajectories received in the last iteration (i.e., \(u_{p,i}^{a,c-1}\), ∀i≠j) to predict the future evolution of the system state without introducing large errors. For LMPC j (i.e., u j ), the magnitude of the input change between two consecutive iterations is restricted to be smaller than a positive constant Δu j . Note that this constraint does not restrict the input to a small region; as the iteration number increases, the final optimal input could be quite different from the initial guess. The constraint of Eq. 5.145 is used to guarantee closed-loop stability.

The manipulated inputs of the closed-loop system under the above iterative DMPC are defined as follows:

$$ u_i(t) = u^{a,*}_{p,i}(t|t_a),\quad i=1,\ldots,m,\ \forall t\in[t_a,t_{a+1}).$$
(5.146)

5.5.3.2 Stability Properties

The iterative DMPC design of Eqs. 5.139–5.146 takes into account asynchronous measurements explicitly in the controller design and the implementation strategy. It maintains the closed-loop stability properties of the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation. This property is presented in Theorem 5.4. To state this theorem, we need another proposition.

Proposition 5.1

Consider the systems:

(5.147)
(5.148)

with initial states x a (t 0),x b (t 0)∈Ω ρ such that x b (t 0)=x a (t 0)+n x andn x ‖≤θ x . There exists a function f X,j (⋅,⋅) such that:

$$ \bigl\|x_a(t)-x_b(t)\bigr\|\leq f_{X,j}(\theta_x,t-t_0)$$
(5.149)

for all x a (t), x b (t)∈Ω ρ , and \(u^{c}_{i}(t)\), \(u^{c-1}_{i}\in U_{i}\) and \(\|u^{c}_{i}(t)-u^{c-1}_{i}(t)\|\leq \varDelta u_{i}\) (i=1,…,m) with:

$$f_{X,j}(\theta_x,\tau)=\biggl(\frac{C_{2,j}}{C_{1,j}}+\theta_x\biggr)\bigl(e^{C_{1,j}\tau}-1\bigr),$$
(5.150)

where \(C_{1,j}=L_{x}+\sum_{i=1}^{m,~i\neq j}L_{g_{i}}u_{i}^{\max}\) and \(C_{2,j}=\sum_{i=1}^{m,~i\neq j}M_{g_{i}}\varDelta u_{i}\).

Proof

Define the error vector as e(t)=x a (t)−x b (t). The time derivative of the error is:

$$\dot{e}(t) = f\bigl(x_a(t)\bigr) - f\bigl(x_b(t)\bigr) + \sum_{i=1}^{m,~i\neq j}g_i\bigl(x_a(t)\bigr)u^c_i(t) - \sum_{i=1}^{m,~i\neq j} g_i\bigl(x_b(t)\bigr)u^{c-1}_{i}(t).$$
(5.151)

Adding and subtracting \(\sum_{i=1}^{m,~i\neq j}g_{i}(x_{b}(t))u^{c}_{i}(t)\) to/from the right-hand side of the above equation, we obtain the following equation:

(5.152)

From the Lipschitz properties of Eqs. 5.10–5.12, the fact that the manipulated inputs are bounded in convex sets and the fact that the difference between \(u^{c}_{i}(t)\) and \(u^{c-1}_{i}(t)\) is bounded, the following inequality can be obtained:

(5.153)

Denoting \(C_{1,j}=L_{x}+\sum_{i=1}^{m,~i\neq j}L_{g_{i}}u_{i}^{\max}\) and \(C_{2,j}=\sum_{i=1}^{m,~i\neq j}M_{g_{i}}\varDelta u_{i}\), we can obtain:

$$\bigl\|\dot{e}(t)\bigr\|\leq C_{1,j}\bigl\|e(t)\bigr\|+C_{2,j}.$$
(5.154)

Integrating \(\|\dot{e}(t) \|\) with initial condition ‖e(t 0)‖=‖n x ‖ (recall that x b (t 0)=x a (t 0)+n x ) and taking into account that ‖n x ‖≤θ x , the following bound on the norm of the error vector is obtained:

$$\bigl\|e(t)\bigr\|\leq\biggl(\frac{C_{2,j}}{C_{1,j}}+\theta_x \biggr) \bigl(e^{C_{1,j}(t-t_0)}-1\bigr).$$
(5.155)

This implies that Eq. 5.149 holds for:

$$f_{X,j}(\theta_x,\tau)=\biggl(\frac{C_{2,j}}{C_{1,j}}+\theta _x\biggr)\bigl(e^{C_{1,j}\tau}-1\bigr).$$
(5.156)

 □

Proposition 5.1 bounds the difference between the nominal state trajectory under the optimized control inputs and the predicted nominal state trajectory generated in each LMPC optimization problem. To simplify the proof of Theorem 5.4, we define a new function f X (τ) based on f X,i , i=1,…,m, as follows:

$$ f_X(\tau)= \sum_{i=1}^m\biggl(\frac {1}{m}L'_x+L'_{u_i}u_i^{\max}\biggr)\biggl(\frac {1}{C_{1,i}}f_{X,i}(0,\tau)-\frac{C_{2,i}}{C_{1,i}}\tau \biggr).$$
(5.157)

It is easy to verify that f X (τ) is a strictly increasing and convex function of its argument. In Theorem 5.4 below, we provide sufficient conditions under which the iterative DMPC of Eqs. 5.139–5.146 guarantees that the state of the closed-loop system is ultimately bounded in a region that contains the origin.

Theorem 5.4

Consider the system of Eq. 5.1 in closed-loop with x available at asynchronous sampling time instants {t a≥0}, satisfying the condition of Eq. 2.22, under the DMPC design of Eqs. 5.139–5.146 based on a control law h(x) that satisfies the conditions of Eqs. 5.5–5.8. Let Δ,ε s >0, ρ>ρ min >0, ρ>ρ s >0 and N≥N R ≥1 satisfy the condition of Eq. 5.108 and the following inequality:

$$ -N_R\varepsilon _s+f_X(N_R\varDelta )+f_V\bigl(f_W(0,N_R\varDelta )\bigr)<0$$
(5.158)

with f X defined in Eq. 5.157, f V defined in Eq. 2.49, f W defined in Eq. 5.119, and N R being the smallest integer satisfying N R Δ≥T m . If the initial state of the closed-loop system x(t 0)∈Ω ρ , then x(t) is ultimately bounded in \(\varOmega _{\rho_{b}}\subseteq \varOmega _{\rho}\) where:

$$\rho_b = \rho_{\min} + f_X(N_R\varDelta )+f_V\bigl(f_W(0,N_R\varDelta )\bigr)$$
(5.159)

with ρ min  defined in Eq. 5.26.

Proof

We follow a similar strategy to the one in the proof of Theorem 5.3. In order to simplify the notation, we assume that all the signals used in this proof refer to the different optimization variables of the problems solved at t a with the initial condition x(t a ). This proof also includes two parts.

Part 1: In this part, we prove that the stability results stated in Theorem 5.4 hold in the case that t a+1 −t a =T m for all a and T m =N R Δ. The derivative of the Lyapunov function of the nominal system of Eq. 5.1 under the control of the iterative DMPC of Eqs. 5.139–5.146 from t a to t a+1 is expressed as follows:

$$\dot{V}\bigl(\tilde{x}(t)\bigr) = \frac{\partial V(\tilde{x}(t))}{\partial x} \Biggl(f\bigl(\tilde{x}(t)\bigr)+\sum_{i=1}^m g_i\bigl(\tilde{x}(t)\bigr)u_{p,i}^{a,*}(t|t_a)\Biggr),\quad \forall t\in[t_a,t_a+N_R\varDelta ).$$
(5.160)

Adding the above equation and the constraints of Eq. 5.145 in each LMPC together, we can obtain the following inequality for t∈[t a ,t a +N R Δ):

(5.161)

Reworking the above inequality, the following inequality can be obtained for t∈[t a ,t a +N R Δ):

(5.162)
(5.163)

By the continuity and locally Lipschitz properties of Eqs. 5.14–5.15, the following inequality can be obtained for t∈[t a ,t a +N R Δ):

(5.164)

Applying Proposition 5.1 to the above inequality of Eq. 5.164, we obtain the following inequality:

(5.165)

Integrating the inequality of Eq. 5.165 from t=t a to t=t a +N R Δ and taking into account that \(\tilde{x}(t_{a})=\hat{x}(t_{a})\) and t a+1 −t a =N R Δ, the following inequality can be obtained:

(5.166)

From the definition of f X (⋅), we have

$$V\bigl(\tilde{x}(t_{a+1})\bigr) \leq V\bigl(\hat{x}(t_{a+1})\bigr) + f_X(N_R\varDelta ).$$
(5.167)

By Corollaries 5.2 and 5.3 and following similar calculations to the ones in the proof of Theorem 5.3, we obtain the following inequality

$$ V\bigl(x(t_{a+1})\bigr) \leq\max\bigl\{V\bigl(x(t_a)\bigr)-N_R\varepsilon _s,\rho_{\min}\bigr\} + f_X(N_R\varDelta ) + f_V\bigl(f_W(0,N_R\varDelta )\bigr).$$
(5.168)

If the condition of Eq. 5.158 is satisfied, we know that there exists ε w >0 such that the following inequality holds:

$$ V\bigl(x(t_{a+1})\bigr)\leq\max\bigl\{V\bigl(x(t_a)\bigr)-\varepsilon _w,\rho_b\bigr\},$$
(5.169)

which implies that if \(x(t_{a})\in \varOmega _{\rho}/\varOmega _{\rho_{b}}\), then V(x(t a+1))<V(x(t a )), and if \(x(t_{a})\in \varOmega _{\rho_{b}}\), then V(x(t a+1))≤ρ b .

Because the upper bound on the difference between the Lyapunov function of the actual trajectory x and the nominal trajectory \(\tilde{x}\) is a strictly increasing function of time, the inequality of Eq. 5.169 also implies that:

$$ V\bigl(x(t)\bigr)\leq\max\bigl\{V\bigl(x(t_a)\bigr)-\varepsilon _w,\rho_b\bigr\},\quad \forall t\in[t_a,t_{a+1}].$$
(5.170)

Using the inequality of Eq. 5.170 recursively, it can be proved that if x(t 0)∈Ω ρ , then the closed-loop trajectories of the system of Eq. 5.1 under the iterative DMPC design stay in Ω ρ for all times (i.e., x(t)∈Ω ρ for all t). Moreover, if x(t 0)∈Ω ρ , the closed-loop trajectories of the system of Eq. 5.1 under the iterative DMPC design satisfy:

$$\limsup_{t\rightarrow\infty} V\bigl(x(t)\bigr) \leq\rho_b.$$
(5.171)

This proves that x(t)∈Ω ρ for all times and x(t) is ultimately bounded in \(\varOmega _{\rho_{b}}\) for the case when t a+1 −t a =T m for all a and T m =N R Δ.

Part 2: In this part, we extend the results proved in Part 1 to the general case, that is, t a+1 −t a ≤T m for all a and T m ≤N R Δ, which implies that t a+1 −t a ≤N R Δ. Because f V , f W and f X are strictly increasing functions of time and f X , f V are convex, following similar steps as in Part 1, it can be shown that the inequality of Eq. 5.168 still holds. This proves that the stability results stated in Theorem 5.4 hold. □

Remark 5.15

Referring to the design of the LMPC of Eqs. 5.139–5.145, the constraint of Eq. 5.142 ensures that the deviation of the predicted future state evolution (using input trajectories obtained in the last iteration) from the actual system state evolution is bounded. It also ensures that the results stated in Theorem 5.4 do not depend on the iteration number c, which means that the iterations of the DMPC can be terminated at any iteration and the stability properties stated in Theorem 5.4 continue to hold. The constraint of Eq. 5.142 can also be imposed as the termination condition of the iterative DMPC; that is, the DMPC stops iterating when \(\|u_{p,i}^{a,c}(t|t_{a})-u_{p,i}^{a,c-1}(t|t_{a})\|\leq \varDelta u_{i}\), i=1,…,m, for all t∈[t a ,t a +N R Δ). In this case, however, the stability properties stated in Theorem 5.4 depend on the iteration number c in the sense that they hold only after the termination condition of Eq. 5.142 is satisfied.

5.5.4 Application to an Alkylation of Benzene Process

Consider the alkylation of benzene with ethylene process of Eqs. 5.56–5.80 described in Sect. 5.4.3. The control objective is still to drive the system from the initial condition as shown in Table 5.6 to the desired steady state as shown in Table 5.4. The manipulated inputs are the heat injected to or removed from the five vessels, Q 1, Q 2, Q 3, Q 4 and Q 5, and the feed stream flow rates to CSTR-2 and CSTR-3, F 4 and F 6, whose steady-state input values are shown in Table 5.3. We design three distributed LMPCs to manipulate the seven inputs. Similarly to Sect. 5.4.3, the first distributed controller (LMPC 1) will be designed to decide the values of Q 1, Q 2 and Q 3, the second distributed controller (LMPC 2) will be designed to decide the values of Q 4 and Q 5, and the third distributed controller (LMPC 3) will be designed to decide the values of F 4 and F 6. The deviations of these inputs from their corresponding steady-state values are subject to the constraints shown in Table 5.5. We use the same design of h(x) as in Sect. 5.4.3 with a quadratic Lyapunov function V(x)=x T Px with P being the following weight matrix:

$$P=\mathit{diag}\bigl([1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1 \ 1 \ 1 \ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1 \ 1 \ 1 \ 1 \ 1]\bigr).$$
(5.172)

Based on h(x), we design the sequential DMPC of Eqs. 5.98–5.106 and the iterative DMPC of Eqs. 5.139–5.146 with the following weighting matrices:

$$Q_c=\mathit{diag}\bigl(\bigl[1 \ 1 \ 1 \ 1 \ 10^{3} \ 1 \ 1 \ 1 \ 1 \ 10^{3} \ 10 \ 10 \ 10 \ 10 \ 3000 \ 1 \ 1 \ 1 \ 1\ 10^3\ 1 \ 1 \ 1\ 1 \ 10^{3}\bigr]\bigr),$$
(5.173)

and \(R_{c1}=\mathit{diag}([1\times10^{-8}\;1\times10^{-8}\;1\times 10^{-8}])\), \(R_{c2}=\mathit{diag}([1\times10^{-8}\;1\times10^{-8}])\) and \(R_{c3}=\mathit{diag}([10\;10])\). The sampling time of the LMPCs is chosen to be Δ=30 s. For the iterative DMPC of Eqs. 5.139–5.146, Δu i is chosen to be \(0.25u_{i}^{\max}\) for all the distributed LMPCs and maximum iteration numbers (i.e., c≤c max ) are applied as the termination conditions. In all the simulations, bounded process noise is added to the right-hand side of the ordinary differential equations of the process model to simulate disturbances/model uncertainty.

Table 5.6 Initial state values of the alkylation of benzene process of Eqs. 5.56–5.80
Table 5.7 Mean evaluation time of different LMPC optimization problems for 100 evaluations
Table 5.8 Total performance costs along the closed-loop trajectories I of the alkylation of benzene process of Eqs. 5.56–5.80
Table 5.9 Total performance costs along the closed-loop trajectories II of the alkylation of benzene process of Eqs. 5.56–5.80
Table 5.10 Total performance costs along the closed-loop trajectories III of the alkylation of benzene process of Eqs. 5.56–5.80

We consider that the state of the process of Eqs. 5.56–5.80 is sampled asynchronously and that the maximum interval between two consecutive measurements is T m =75 s. The asynchronous nature of the measurements is introduced by the measurement difficulties of the full state given the presence of several species concentration measurements. We will compare the sequential and iterative DMPC for systems subject to asynchronous measurements with a centralized LMPC which takes into account asynchronous measurements explicitly as presented in Sect. 2.7. The centralized LMPC uses the same weighting matrices, sampling time and prediction horizon as used in the DMPCs. To model the time sequence {t a≥0}, we apply an upper bounded random Poisson process. The Poisson process is defined by the number of events per unit time W. The interval between two successive state sampling times is given by Δ a =min {−ln χ/W,T m }, where χ is a random variable with uniform probability distribution between 0 and 1. This generation ensures that max  a {t a+1 −t a }≤T m . In the simulations, W is chosen to be 30 and the time sequence generated by this bounded Poisson process is shown in Fig. 5.8. For this set of simulations, we choose the prediction horizon of all the LMPCs to be N=3 and choose N R =N so that N R Δ≥T m .
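
As an illustration, the following sketch generates such a bounded sampling-time sequence by drawing Δ a =min {−ln χ/W,T m }; the time unit associated with W, the final time and the random seed are assumptions of the sketch.

```python
# Sketch of the upper bounded random Poisson process used to generate the
# asynchronous measurement times {t_a}: Delta_a = min{-ln(chi)/W, T_m} with
# chi uniformly distributed in (0, 1]. The time unit of W is an assumption.
import math
import random

def sampling_times(W=30.0, T_m=75.0, t_end=1500.0, seed=1):
    rng = random.Random(seed)
    times, t = [0.0], 0.0
    while t < t_end:
        chi = 1.0 - rng.random()                 # uniform on (0, 1]
        delta_a = min(-math.log(chi) / W, T_m)   # guarantees t_{a+1} - t_a <= T_m
        t += delta_a
        times.append(t)
    return times
```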

Fig. 5.8
figure 8

Asynchronous measurement sampling times {t a≥0} with T m =75 s: the x-axis indicates {t a≥0} and the y-axis indicates the size of the interval between t a and t a−1

We first compare the DMPC designs for systems subject to asynchronous measurements with the centralized LMPC from a stability point of view. Figure 5.9 shows the trajectory of the Lyapunov function V(x) under these control designs. From Fig. 5.9, we see that the DMPC designs as well as the centralized LMPC design are able to drive the system state to a region very close to the desired steady state. From Fig. 5.9, we can also see that the sequential DMPC, the centralized LMPC and the iterative DMPC with c max =5 give very similar trajectories of V(x). Another important aspect we can see from Fig. 5.9(b) is that at the early stage of the closed-loop simulation, because of the strong driving force related to the difference between the set-point and the initial condition, the process noise/disturbance has a small influence on the process dynamics, even though the controllers have to operate in the presence of asynchronous measurements. When the states get close to the set-point, the Lyapunov function starts to fluctuate because the noise/disturbance dominates the vanishing driving force. However, the DMPC designs are able to maintain practical stability of the closed-loop system and keep the trajectory of the Lyapunov function in a bounded region (V(x)≤250) very close to the steady state.

Fig. 5.9
figure 9

Trajectories of the Lyapunov function of the alkylation of benzene process of Eqs. 5.56–5.80 under the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation, the iterative DMPC of Eqs. 5.139–5.146 with c max =1 and c max =5, the sequential DMPC of Eqs. 5.98–5.106 and the centralized LMPC accounting for asynchronous measurements: (a) V(x); (b) log(V(x))

Next, we compare the evaluation times of the LMPCs in these control designs. The simulations are carried out in the JAVA™ programming language on a PENTIUM® 3.20 GHz computer. The optimization problems are solved by the open source interior point optimizer Ipopt [109]. We evaluate the LMPC optimization problems for 100 runs. The mean evaluation time of the centralized LMPC is about 23.7 s. The mean evaluation time for the sequential DMPC scheme, which is the sum of the evaluation times (1.9 s, 3.6 s and 3.2 s) of the three LMPCs, is about 8.7 s. The mean evaluation time of the iterative DMPC scheme with one iteration is 6.3 s, which is the largest evaluation time among the evaluation times (1.6 s, 6.3 s and 4.3 s) of the three LMPCs. The mean evaluation time of the iterative DMPC architecture with four iterations is 18.7 s with the evaluation times of the three LMPCs being 6.9 s, 18.7 s and 14.0 s. From this set of simulations, we see that the DMPC designs lead to a significant reduction in the controller evaluation time compared with the centralized LMPC design while providing very similar performance.

5.6 Iterative DMPC Design with Delayed Measurements

In this section, we consider the design of DMPC for systems subject to delayed measurements. In Chap. 4, we pointed out that in order to obtain a good estimate of the current system state from a delayed state measurement, a DMPC design should have bi-directional communication among the distributed controllers. Consequently, we focus on the design of DMPC for nonlinear systems subject to delayed measurements in an iterative DMPC framework.

5.6.1 Modeling of Delayed Measurements

We assume that the state of the system of Eq. 5.1 is received by the controllers at asynchronous time instants t a where {t a≥0} is a random increasing sequence of times and that there exists an upper bound T m on the interval between two successive measurements. We also assume that there are delays in the measurements received by the controllers due to delays in the sampling process and data transmission. In order to model delays in measurements, another auxiliary variable d a is introduced to indicate the delay corresponding to the measurement received at time t a ; that is, at time t a , the measurement x(t a −d a ) is received. In order to study the stability properties in a deterministic framework, we assume that the delays associated with the measurements are smaller than an upper bound D.

5.6.2 Iterative DMPC Formulation

As in the DMPC designs for systems subject to asynchronous measurements, we take advantage of the system model both to estimate the current system state from a delayed measurement and to control the system in open-loop when new information is not available. To this end, when a delayed measurement is received, the distributed controllers use the system model and the input trajectories that have been applied to the system to get an estimate of the current state; then, based on this estimate, the MPC optimization problems are solved to compute the optimal future input trajectories that will be applied until new measurements are received. A schematic of the iterative DMPC for systems subject to delayed measurements is shown in Fig. 5.10. The implementation strategy for the iterative DMPC design is as follows:

  1. When a measurement x(t a −d a ) is available at t a , all the distributed controllers receive the state measurement and check whether the measurement provides new information. If t a −d a >max  l<a t l −d l , go to Step 2. Else the measurement does not contain new information and is discarded; go to Step 3.

  2. All the distributed controllers estimate the current state of the system x e(t a ) and then evaluate their future input trajectories in an iterative fashion with initial input guesses generated by h(⋅).

  3. At iteration c (c≥1):

     3.1. Each controller evaluates its own future input trajectory based on x e(t a ) and the latest received input trajectories of all the other distributed controllers (when c=1, initial input guesses generated by h(⋅) are used).

     3.2. The controllers exchange their future input trajectories. Based on all the input trajectories, each controller calculates and stores the value of the cost function.

  4. If a termination condition is satisfied, each controller sends its entire future input trajectory corresponding to the smallest value of the cost function to its actuators; if the termination condition is not satisfied, go to Step 3 (c←c+1).

  5. When a new measurement is received (a←a+1), go to Step 1.

Fig. 5.10
figure 10

Iterative DMPC for nonlinear systems subject to delayed measurements

In order to estimate the current system state x e(t a ) based on a delayed measurement x(t a −d a ), the distributed controllers take advantage of the input trajectories that have been applied to the system from t a −d a to t a and the system model of Eq. 5.1. Let us denote the input trajectories that have been applied to the system as \(u_{d,i}^{*}(t)\), i=1,…,m. Therefore, x e(t a ) is evaluated by integrating the following equation:

$$ \dot{x}^e(t) = f\bigl(x^e(t)\bigr)+\sum_{i=1}^m g_i\bigl(x^e(t)\bigr)u^*_{d,i}(t),\quad \forall t\in[t_a-d_a,t_a)$$
(5.174)

with x e(t a −d a )=x(t a −d a ).
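
A minimal sketch of this open-loop state estimation step is given below; it propagates the delayed measurement through the nominal model with the stored inputs using an explicit Euler scheme. The callables f, g_list and u_applied are hypothetical placeholders for the vector fields of Eq. 5.1 and the piecewise-constant input trajectories \(u_{d,i}^{*}\) already implemented on the plant.

```python
# Sketch of the state estimate of Eq. 5.174: propagate x(t_a - d_a) forward
# through the nominal model xdot = f(x) + sum_i g_i(x) u_i using the inputs
# already applied to the system. f, g_list and u_applied are placeholders.
import numpy as np

def estimate_current_state(x_delayed, t_meas, t_now, f, g_list, u_applied, dt=0.1):
    """Integrate the nominal model from t_meas = t_a - d_a up to t_now = t_a."""
    x = np.array(x_delayed, dtype=float)
    t = t_meas
    while t < t_now:
        h = min(dt, t_now - t)
        u = u_applied(t)                          # the m inputs applied at time t
        xdot = f(x) + sum(g(x) * ui for g, ui in zip(g_list, u))
        x = x + h * xdot                          # explicit Euler step (illustrative)
        t += h
    return x
```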

Before proceeding to the design of the iterative DMPC, we need to define another nominal sampled trajectory \(\check{x}(t|t_{a})\) for t∈[t a ,t a +NΔ), which is obtained by replacing \(\hat{x}(t|t_{a})\) with \(\check{x}(t|t_{a})\) in Eq. 5.36 and then integrating the equation with \(\check{x}(t_{a}|t_{a}) = x^{e}(t_{a})\). Based on \(\check{x}(t|t_{a})\), we define a new input trajectory as follows:

(5.175)

which will be used in the design of the LMPC to construct the stability constraint and used as the initial input guess for iteration 1 (i.e., \(u_{d,i}^{*,0}=u_{n,i}^{e}\) for i=1,…,m).

Specifically, the design of LMPC j, j=1,…,m, at iteration c is based on the following optimization problem:

(5.176)
(5.177)
(5.178)
(5.179)
(5.180)
(5.181)
(5.182)

where N D,a is the smallest integer satisfying N D,a Δ≥T m +D−d a . The optimal solution to this optimization problem is denoted \(u_{d,j}^{*,c}(t|t_{a})\), which is defined for t∈[t a ,t a +NΔ). Accordingly, we define the final optimal input trajectory of LMPC j of Eqs. 5.176–5.182 as \(u_{d,j}^{*}(t|t_{a})\), which is also defined for t∈[t a ,t a +NΔ). Note again that the length of the constraint N D,a depends on the current delay d a , so it may have different values at different time instants and has to be updated before solving the optimization problems.

The manipulated inputs of the closed-loop system under the above iterative DMPC for systems subject to delayed measurements are defined as follows:

$$ u_i(t) = u^*_{d,i}(t|t_a),\quad i=1,\ldots,m, \forall t\in[t_a,t_{a+q})$$
(5.183)

for all t a such that t a −d a >max  l<a t l −d l , and for a given t a , the variable q denotes the smallest integer that satisfies t a+q −d a+q >t a −d a .

5.6.3 Stability Properties

The stability properties of the iterative DMPC of Eqs. 5.176–5.183 are stated in the following theorem.

Theorem 5.5

Consider the system of Eq. 5.1 in closed-loop with x available at asynchronous sampling time instants {t a≥0} involving time-varying delays such that d a ≤D for all a≥0, satisfying the condition of Eq. 2.22, under the iterative DMPC of Eqs. 5.176–5.183 based on a control law u=h(x) that satisfies the conditions of Eqs. 5.5–5.8. Let Δ,ε s >0, ρ>ρ min >0, ρ>ρ s >0, N≥1 and D≥0 satisfy the condition of Eq. 5.108 and the following inequality:

$$ -N_R\varepsilon _s+f_X(N_D\varDelta )+f_V\bigl(f_W(0,N_D\varDelta )\bigr)+f_V\bigl(f_W(0,D)\bigr)<0$$
(5.184)

with f V defined in Eq. 2.49, f W defined in Eq. 5.119, N D being the smallest integer satisfying N D Δ≥T m +D and N R being the smallest integer satisfying N R Δ≥T m . If the initial state of the closed-loop system x(t 0)∈Ω ρ , N≥N D and d 0=0, then x(t) is ultimately bounded in \(\varOmega _{\rho_{d}}\subseteq \varOmega _{\rho}\) where:

$$\rho_d = \rho_{\min} + f_X(N_D\varDelta )+f_V\bigl(f_W(0,N_D\varDelta )\bigr) +f_V\bigl(f_W(0,D)\bigr)$$
(5.185)

with ρ min  defined in Eq. 5.26.

Proof

We assume that at t a , a delayed measurement x(t a −d a ) containing new information is received, and that the next measurement with new state information is not received until t a+i . This implies that t a+i −d a+i >t a −d a and that the iterative DMPC of Eqs. 5.176–5.183 is solved at t a and the optimal input trajectories \(u_{d,i}^{*}(t|t_{a})\), i=1,…,m, are applied from t a to t a+i . In this proof, we will refer to \(\tilde{x}(t)\) for t∈[t a ,t a+i ) as the state trajectory of the nominal system of Eq. 5.1 under the control of the iterative DMPC of Eqs. 5.176–5.183 with \(\tilde{x}(t_{a})=x^{e}(t_{a})\).

Part 1: In this part, we prove that the stability results stated in Theorem 5.5 hold for t a+i −t a =N D,a Δ and all d a ≤D. By Corollary 5.2 and taking into account that \(\check{x}(t_{a})=x^{e}(t_{a})\), the following inequality can be obtained:

$$ V\bigl(\check{x}(t_{a+i})\bigr)\leq\max\bigl\{V\bigl(x^e(t_a)\bigr)-N_{D,a}\varepsilon _s,\rho _{\min}\bigr\}.$$
(5.186)

By Corollary 5.3 and taking into account that x e(t a −d a )=x(t a −d a ), \(\tilde{x}(t_{a})=x^{e}(t_{a})\) and N D Δ≥N D,a Δ+d a , the following inequalities can be obtained:

(5.187)
(5.188)

When x(t)∈Ω ρ for all times (this point will be proved below), we can apply Proposition 2.3 to obtain the following inequalities:

(5.189)

From Eqs. 5.186 and 5.189, the following inequality is obtained:

$$ V\bigl(\check{x}(t_{a+i})\bigr) \leq\max\bigl\{V\bigl(x(t_a)\bigr)-N_{D,a}\varepsilon _s,\rho _{\min}\bigr\}+f_V\bigl(f_W(0,d_a)\bigr).$$
(5.190)

By Proposition 5.1 and following similar steps as in the proof of Theorem 5.4, the following inequality can be obtained:

$$ V\bigl(\tilde{x}(t_{a+i})\bigr)\leq V\bigl(\check{x}(t_{a+i})\bigr)+f_X(N_{D,a}\varDelta ).$$
(5.191)

From Eqs. 5.189, 5.190 and 5.191, the following inequality is obtained:

(5.192)

In order to prove that the Lyapunov function is decreasing between two consecutive new measurements, the following inequality must hold:

$$ N_{D,a}\varepsilon _s > f_V\bigl(f_W(0,d_a)\bigr)+f_V\bigl(f_W(0,N_{D}\varDelta )\bigr)+f_X(N_{D,a}\varDelta )$$
(5.193)

for all possible 0≤d a ≤D. Taking into account that f W , f V and f X are strictly increasing functions of time, that N D,a is a decreasing function of the delay d a and that if d a =D then N D,a =N R , it follows that if the condition of Eq. 5.184 is satisfied, the condition of Eq. 5.193 holds for all possible d a and there exists ε w >0 such that the following inequality holds:

$$ V\bigl(x(t_{a+i})\bigr) \leq\max\bigl\{V\bigl(x(t_a)\bigr)-\varepsilon _w, \rho_d\bigr\},$$
(5.194)

which implies that if \(x(t_{a})\in \varOmega _{\rho}/\varOmega _{\rho_{d}}\), then V(x(t a+i ))<V(x(t a )), and if \(x(t_{a})\in \varOmega _{\rho_{d}}\), then V(x(t a+i ))≤ρ d .

Because the upper bound on the difference between the Lyapunov function of the actual trajectory x and the nominal trajectory \(\tilde{x}\) is a strictly increasing function of time, the inequality of Eq. 5.194 also implies that:

$$ V\bigl(x(t)\bigr)\leq\max\bigl\{V\bigl(x(t_a)\bigr),\rho_{d}\bigr\},\quad \forall t\in[t_a,t_{a+i}).$$
(5.195)

Using the inequality of Eq. 5.195 recursively, it can be proved that if x(t 0)∈Ω ρ , then the closed-loop trajectories of the system of Eq. 5.1 under the iterative DMPC of Eqs. 5.176–5.183 stay in Ω ρ for all times (i.e., x(t)∈Ω ρ ,∀t). Moreover, using the inequality of Eq. 5.195 recursively, it can be proved that if x(t 0)∈Ω ρ , the closed-loop trajectories of the system of Eq. 5.1 under the iterative DMPC of Eqs. 5.176–5.183 satisfy:

$$\limsup_{t\rightarrow\infty} V\bigl(x(t)\bigr) \leq\rho_d.$$
(5.196)

This proves that x(t)∈Ω ρ for all times and x(t) is ultimately bounded in \(\varOmega _{\rho_{d}}\) when t a+i −t a =N D,a Δ.

Part 2: In this part, we extend the results proved in Part 1 to the general case, that is, t a+i −t a ≤N D,a Δ. Taking into account that f V , f W and f X are strictly increasing functions of time and following similar steps as in Part 1, it can be readily proved that the inequality of Eq. 5.193 holds for all possible d a ≤D and t a+i −t a ≤N D,a Δ. Using this inequality and following the same line of argument as in the previous part, the stability results stated in Theorem 5.5 can be proved. □

5.6.4 Application to an Alkylation of Benzene Process

Consider the alkylation of benzene with ethylene process of Eqs. 5.56–5.80 described in Sect. 5.4.3. We set up the simulations as described in Sect. 5.5.4.

We consider that the state of the process of Eqs. 5.56–5.80 is sampled at asynchronous time instants {t a≥0} with an upper bound T m =50 s on the interval between two successive measurements. Moreover, we consider that there are delays involved in the measurement sampling and the upper bound on the maximum delay is D=40 s. The delays in measurements can naturally arise in the context of species concentration measurements. We will compare the iterative DMPC of Eqs. 5.176–5.183 with a centralized LMPC which takes into account delayed measurements explicitly as presented in Sect. 2.8. The centralized LMPC uses the same weighting matrices, sampling time and prediction horizon as used in the DMPC. In order to model the sampling time instants, the same Poisson process as used in Sect. 5.5.4 is used to generate {t a≥0} with W=30 and T m =50 s and another random process is used to generate the associated delay sequence {d a≥0} with D=40 s. For this set of simulations, we also choose the prediction horizon of all the LMPCs to be N=3 so that the horizon covers the maximum possible open-loop operation interval. Figure 5.11 shows the time instants when new state measurements are received and the associated delay sizes. Note that for all the control designs considered in this subsection, the same state estimation strategy shown in Eq. 5.174 is used.

Fig. 5.11
figure 11

Asynchronous time sequence {t a≥0} and corresponding delay sequence {d a≥0} with T m =50 s and D=40 s: the x-axis indicates {t a≥0} and the y-axis indicates the size of d a

Figure 5.12 shows the trajectory of the Lyapunov function V(x) under different control designs. From Fig. 5.12, we see that both the iterative DMPC for systems subject to delayed measurements and the centralized LMPC accounting for delays are able to drive the system state to a region very close to the desired steady state (V(x)≤250); the trajectories of V(x) generated by the iterative DMPC design are bounded by the corresponding trajectory of V(x) under the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation. From Fig. 5.12, we can also see that the centralized LMPC and the iterative DMPC with c max =5 give very similar trajectories of V(x).

Fig. 5.12
figure 12

Trajectories of the Lyapunov function of the alkylation of benzene process of Eqs. 5.56–5.80 under the nonlinear control law h(x) implemented in a sample-and-hold fashion and with open-loop state estimation, the iterative DMPC of Eqs. 5.176–5.183 with c max =1 and c max =5 and the centralized LMPC accounting for delays: (a) V(x); (b) log(V(x))

In the final set of simulations, we compare the centralized LMPC and the iterative DMPC from a performance index point of view. To carry out this comparison, the same initial condition and parameters were used for the different control schemes and the total cost under each control scheme was computed as follows:

$$J=\int_{0}^{t_f} \bigl[\bigl\|x(\tau)\bigr\|_{Q_c} + \bigl\|u_1(\tau)\bigr\| _{R_{c1}}+\bigl\|u_2(\tau)\bigr\|_{R_{c2}}+\bigl\|u_3(\tau)\bigr\|_{R_{c3}} \bigr]\,d\tau,$$
(5.197)

where t f =1500 s is the final simulation time. Figure 5.13 shows the total cost along the closed-loop system trajectories under the iterative DMPC of Eqs. 5.176–5.183 and the centralized LMPC accounting for delays. For the iterative DMPC design, different maximum numbers of iterations, c max , are used. From Fig. 5.13, we can see that as the iteration number c increases, the performance cost given by the iterative DMPC design decreases and converges to a value which is very close to the cost corresponding to the centralized LMPC. However, we note that there is no guaranteed convergence of the performance of the iterative DMPC design to the performance of a centralized MPC because of the nonconvexity of the LMPC optimization problems and the different stability constraints imposed in the centralized LMPC and the iterative DMPC design.
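
For reference, a sketch of how the index of Eq. 5.197 can be evaluated from logged closed-loop data is given below; the weighted norm ‖v‖ Q is taken here as v T Qv, which is an assumption about the notation, and a trapezoidal rule replaces the integral.

```python
# Sketch of the total performance cost of Eq. 5.197 computed from logged data.
# The interpretation ||v||_Q = v' Q v is an assumption of this sketch.
import numpy as np

def total_cost(t, x, u1, u2, u3, Qc, Rc1, Rc2, Rc3):
    """t: (K,) time grid; x, u1, u2, u3: (K, .) logged state/input trajectories."""
    def wnorm(v, W):
        return float(v @ W @ v)
    integrand = [wnorm(x[k], Qc) + wnorm(u1[k], Rc1)
                 + wnorm(u2[k], Rc2) + wnorm(u3[k], Rc3) for k in range(len(t))]
    return float(np.trapz(np.array(integrand), np.asarray(t)))
```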

Fig. 5.13
figure 13

Total performance costs along the closed-loop trajectories of the alkylation of benzene process of Eqs. 5.56–5.80 under the centralized LMPC accounting for delays (dashed line) and the iterative DMPC of Eqs. 5.176–5.183 (solid line)

5.7 Handling Communication Disruptions in DMPC

In this section, we focus on a hierarchical type DMPC (see Remark 4.12) for the system of Eq. 5.1 and discuss how to handle communication disruptions—communication channel noise and data losses—between the distributed controllers. In the sequel, we design m LMPCs to calculate the m sets of control inputs, respectively, and refer to the controller that calculates u i (i=1,…,m) as LMPC i. In this approach, LMPC 1 communicates with the rest of LMPCs (i.e., LMPC 2 to LMPC m) using one-directional communication and cooperates with them to maintain the closed-loop stability.

In the proposed design, to handle communication channel noise between the distributed controllers, feasibility problems are incorporated in the DMPC architecture to determine if the data transmitted through the communication channel is reliable or not. Based on the results of the feasibility problems, the transmitted information is accepted or rejected by LMPC 1. When there are communication data losses between the distributed controllers, the closed-loop system under the proposed DMPC is guaranteed to be practically stable because of the stability constraints incorporated in the LMPC designs. A schematic diagram of the DMPC design for systems subject to communication disruptions between distributed controllers is depicted in Fig. 5.14.

Fig. 5.14
figure 14

Hierarchical type distributed LMPC control architecture (F means solving a feasibility problem)

5.7.1 Model of the Communication Channel

We consider data losses and channel noise in communication between the m distributed controllers. For a given input rR m to the communication channel, the output \(\tilde{r}\in R^{m}\) is characterized as:

$$\tilde{r}=lr+n,$$
(5.198)

where l is a Bernoulli random variable with parameter α and n∈R m is a vector whose elements are white Gaussian noise with zero mean and the same variance σ 2. The random variable l is used to model data losses in the communication channel. The white noise, n, is used to model channel noise, quantization error or any other error in the transmitted signal, and it is independent of the data losses in a probabilistic sense. If the receiver determines that a successful transmission is made, then l=1; otherwise l=0. Furthermore, in order to obtain deterministic stability results, we assume that, when a successful transmission is made, the noise, n, attached to the input signal, r, is bounded by θ c (that is, ‖n‖≤θ c ) as shown in Fig. 5.15. Both assumptions are meaningful from a practical standpoint. We further assume that the capacity of the communication channel [15] is high enough so that we can transmit data through it at a high rate.
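
A sketch of this channel model is given below; here α is taken as the probability of a successful transmission, and the values of α, σ and θ c are illustrative assumptions.

```python
# Sketch of the channel model of Eq. 5.198: the vector r is either lost
# (l = 0) or received as r + n with zero-mean Gaussian noise n that, on a
# successful transmission, satisfies ||n|| <= theta_c. Parameter values are
# illustrative assumptions.
import numpy as np

def channel(r, alpha=0.9, sigma=0.01, theta_c=0.05, rng=np.random.default_rng(0)):
    if rng.random() > alpha:                  # data loss: l = 0
        return None                           # receiver detects the loss (Remark 5.16)
    n = rng.normal(0.0, sigma, size=np.shape(r))
    norm = float(np.linalg.norm(n))
    if norm > theta_c:                        # enforce the bounded-noise assumption
        n *= theta_c / norm
    return np.asarray(r, dtype=float) + n     # successful transmission: l = 1
```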

Fig. 5.15
figure 15

Bounded communication channel noise

Remark 5.16

Note that there are a variety of approaches to detect whether data loss has happened at the receiver side of a communication channel. One common approach is to measure the power of the received signal and compare it with a pre-configured signal transmission power level. If the power of the received signal is much smaller than the pre-configured signal transmission power level, then data loss is declared; if the power of the received signal is close to the pre-configured signal transmission power level, then the transmission is assumed to be successful.

5.7.2 DMPC with Communication Disruptions

The implementation strategy for the DMPC with feasibility problems is as follows (a schematic sketch of one sampling time is given after the list):

  1. At t k , all controllers receive the sensor measurements x(t k ).

  2. For i=2,…,m:

     2.1. LMPC i evaluates the optimal input trajectory of u i based on x(t k ) and sends the first step input values of u i to its corresponding actuators.

     2.2. LMPC i sends the entire optimal input trajectory of u i to LMPC 1 through a communication channel.

  3. LMPC 1 solves a feasibility problem for each input trajectory it received to determine if the trajectory should be accepted or rejected.

  4. LMPC 1 evaluates the future input trajectory of u 1 based on x(t k ) and the results of the feasibility problems for the trajectories it received from LMPC i with i=2,…,m.

  5. LMPC 1 sends the first step input value of u 1 to its corresponding actuators.

  6. When a new measurement is received (k←k+1), go to Step 1.
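
The sketch below summarizes one sampling time of this strategy; lmpc_j_solve, lmpc_1_solve, channel, accept_trajectory and u_nominal are hypothetical placeholders for the optimizations of Eqs. 5.199–5.204 and 5.210–5.215, the communication channel of Eq. 5.198, the feasibility-based accept/reject test of Eqs. 5.206–5.209 and the trajectories u n,i (t|t k ), respectively.

```python
# Sketch of one sampling time of the hierarchical DMPC with feasibility checks.
# lmpc_j_solve, lmpc_1_solve, channel, accept_trajectory and u_nominal are
# hypothetical placeholders.

def hierarchical_dmpc_step(x_tk, m, lmpc_j_solve, lmpc_1_solve,
                           channel, accept_trajectory, u_nominal):
    used = {}
    for j in range(2, m + 1):                    # Step 2: LMPC 2, ..., LMPC m
        u_j_star = lmpc_j_solve(j, x_tk)         # 2.1: solve; first move goes to actuators
        received = channel(u_j_star)             # 2.2: transmit to LMPC 1 (may be corrupted/lost)
        # Step 3: accept the received trajectory only if no loss occurred and the
        # feasibility problem of Eqs. 5.206-5.209 is infeasible; otherwise fall back
        if received is not None and accept_trajectory(j, x_tk, received):
            used[j] = received
        else:
            used[j] = u_nominal(j, x_tk)         # fall back to u_{n,j}(t|t_k)
    u_1_star = lmpc_1_solve(x_tk, used)          # Step 4: LMPC 1 uses the accepted data
    return u_1_star, used                        # Step 5: first move of u_1 to actuators
```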

In the sequel, we describe the design of LMPC j (j=2,…,m) and its corresponding feasibility problem and the design of LMPC 1. In the formulations, the input trajectories u n,i (t|t k ) defined in Eq. 5.37 based on the sampled state trajectory defined in Eq. 5.36 will be used.

Upon receiving the sensor measurement x(t k ), LMPC j obtains its optimal input trajectory by solving the following optimization problem:

(5.199)
(5.200)
(5.201)
(5.202)
(5.203)
(5.204)

where q=0,…,N−1, \(\tilde{x}^{j}\) is the predicted trajectory of the nominal system with u j being the input trajectory computed by LMPC j and u i (i≠j) determined by u n,i (t|t k ).

Let \(u_{j}^{*}(t|t_{k})\) denote the optimal solution of the optimization problem of Eqs. 5.199–5.204. LMPC j sends the first step value of \(u_{j}^{*}(t|t_{k})\) to its actuators and transmits the whole optimal input trajectory through the communication channel to LMPC 1. LMPC 1 receives a corrupted version of \(u_{j}^{*}(t|t_{k})\) which can be expressed as:

$$\tilde{u}_{j}(t|t_{k})=l u_{j}^{*}(t|t_{k})+n.$$
(5.205)

If data losses occur during the transmission of the control input trajectory from LMPC j to LMPC 1, LMPC 1 assumes that LMPC j applies h j (x) (i.e., u j =h j (x)). Note that we do not consider explicitly the step of determining whether data losses occur or not in the transmission of input trajectories; see Remark 5.16 for approaches to detecting transmission data losses.

When a transmission of the input trajectory \(u_{j}^{*}(t|t_{k})\) is successful, LMPC 1 receives \(\tilde{u}_{j}(t|t_{k})\) which is a noise-corrupted version of \(u^{*}_{j}(t|t_{k})\). To determine the reliability of the received information, LMPC 1 solves a feasibility problem. Based on the result of the feasibility problem, LMPC 1 determines if the received information should be accepted or rejected. The feasibility problem for the information received from LMPC j is as follows:

(5.206)
(5.207)
(5.208)
(5.209)

According to the bounded noise value and the received signal from the communication channel, LMPC 1 considers all the possible effects of the noise on the optimal trajectory of LMPC j (i.e., the constraint of Eq. 5.207) and checks whether in these cases the input received from LMPC j still satisfies the stability constraint of Eq. 5.204 (i.e., the constraint of Eq. 5.209). Note that when the optimization problem of Eqs. 5.206–5.209 is not feasible, it is guaranteed that the original signal \(u_{j}^{*}(t|t_{k})\) after transmission through the channel still satisfies the stability constraint of Eq. 5.204. The feasibility of this problem is used to test whether there exists any possible value of the noise that could (due to corruption) end up making the implemented control action cause an increase in the Lyapunov function derivative, i.e., that \(\frac{\partial V(x(t_{k}))}{\partial x}g_{j}(x(t_{k}))u_{j}(0)> \frac{\partial V(x(t_{k}))}{\partial x}g_{j}(x(t_{k}))h_{j}(x(t_{k}))\). If the problem is infeasible, it is guaranteed that the noise cannot make the control action destabilizing, and hence, the control action is accepted. On the other hand, if the problem is feasible, it opens up the possibility of the noise rendering the control action destabilizing, and hence, it is discarded. We also note that there is no requirement that θ c be sufficiently small; however, larger values of θ c increase the range of z(t) and influence the feasibility of the problem of Eqs. 5.206–5.209.

If the optimization problem of Eqs. 5.206–5.209 is not feasible, then the trajectory information received by LMPC 1 (i.e., \(\tilde{u}_{j}(t|t_{k})\)) is used in the evaluation of LMPC 1; and if the optimization problem of Eqs. 5.206–5.209 is feasible, then \(\tilde{u}_{j}(t|t_{k})\) is discarded and the input trajectory u n,j (t|t k ) will be used in the evaluation of LMPC 1. If we define the trajectory of u j that is used in the evaluation of LMPC 1 as \(\tilde{u}_{j}^{*}(t|t_{k})\), then it is defined as follows:

$$\tilde{u}_{j}^{*}(t|t_{k})= \left\{ \begin{array}{l@{\quad}l}\tilde{u}_{j}(t|t_{k}) & \mbox{if the problem of Eqs.~5.206--5.209 is not feasible} \\ & \mbox{and there is no data loss},\\u_{n,j}(t|t_k) & \mbox{if the problem of Eqs.~5.206--5.209 is feasible or} \\ & \mbox{there exists data loss}.\end{array} \right.$$

Note that when data loss in the communication channel occurs, the input trajectory u n,j (t|t k ) is also used in the evaluation of LMPC 1. Note also that the above strategy on the use of the corrupted communication information is just one of many possible options to handle communication disruptions in the DMPC architecture.
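
A minimal sketch of this accept/reject rule is shown below; feasibility_problem_is_feasible is a hypothetical placeholder for a solver call on the problem of Eqs. 5.206–5.209, and a received value of None stands for a detected data loss.

```python
# Sketch of the selection of u_j used by LMPC 1: accept the received (possibly
# noise-corrupted) trajectory only if no data loss occurred and the feasibility
# problem of Eqs. 5.206-5.209 is infeasible; otherwise use u_{n,j}(t|t_k).
# feasibility_problem_is_feasible is a hypothetical placeholder.

def select_trajectory(u_tilde_j, u_n_j, feasibility_problem_is_feasible):
    data_loss = u_tilde_j is None                 # loss detected at the receiver
    if not data_loss and not feasibility_problem_is_feasible(u_tilde_j):
        return u_tilde_j                          # corrupted but provably safe: accept
    return u_n_j                                  # otherwise fall back to u_{n,j}(t|t_k)
```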

Employing \(\tilde{u}_{j}^{*}\), j=2,…,m, LMPC 1 obtains its optimal trajectory according to the following optimization problem:

(5.210)
(5.211)
(5.212)
(5.213)
(5.214)
(5.215)

In the LMPC 1 formulation of Eqs. 5.210–5.215, LMPC 1 takes advantage of the results of the m−1 feasibility problems (i.e., \(\tilde{u}_{j}^{*}\), j=2,…,m) to predict the future evolution of the system \(\tilde{x}^{1}\). Let \(u_{1}^{*}(t|t_{k})\) denote the optimal solution of the optimization problem of Eqs. 5.210–5.215.

Based on the solutions of the m LMPC optimization problems, the manipulated inputs of the DMPC design are defined as follows:

$$u_i(t) = u^*_{i}(t|t_k),\quad \forall t\in[t_k,t_{k+1}), i=1,\ldots ,m. $$
(5.216)

Remark 5.17

Note that the white Gaussian noise considered in this section is the accumulation of thermal effects and quantization errors. We do not consider the effects of multi-path transmission, terrain blocking, interference, etc. Furthermore, we assume that when packet loss happens, all of the information that should be transmitted is lost; however, without loss of generality, the method presented in this section can be extended to the case in which data loss happens only in some packets of information, following a methodology similar to that of Eqs. 5.206–5.209. The interested reader may refer to [15, 87] for more details on communication channel modeling.

5.7.3 Stability Properties

As will be proved in Theorem 5.6 below, the DMPC framework takes advantage of the constraints of Eqs. 5.204 and 5.215 to compute the optimal trajectories u 1,…,u m such that the Lyapunov function value V(x(t k )) is a decreasing sequence with a lower bound, thereby achieving closed-loop stability of the system.

Theorem 5.6

Consider the system of Eq. 5.1 in closed-loop under the DMPC design of Eqs. 5.199–5.216 based on a control law h(x) that satisfies the conditions of Eqs. 5.5–5.8. Let ε w >0, Δ>0 and ρ>ρ s >0 satisfy the following constraint:

$$ -\alpha_3\bigl(\alpha^{-1}_2(\rho_s)\bigr)+\Biggl(L'_x+\sum _{i=1}^{m}L'_{u_i}u_{i}^{\max}\Biggr)M\varDelta +L'_w\theta\leq -\varepsilon _w/\varDelta .$$
(5.217)

If x(t 0)∈Ω ρ and if ρ min ρ where ρ min =max {V(x(t+Δ)):V(x(t))≤ρ s }, then the state x(t) of the closed-loop system is ultimately bounded in \(\varOmega _{\rho_{\min}}\).

Proof

The proof consists of two parts. We first prove that the optimization problems of Eqs. 5.199–5.204 and 5.210–5.215 are feasible for all states x∈Ω ρ . Subsequently, we prove that, under the DMPC design of Eqs. 5.199–5.216, the state of the system of Eq. 5.1 is ultimately bounded in a region that contains the origin.

Part 1: First, we consider the feasibility of LMPC j of Eqs. 5.199–5.204 (j=2,…,m) and of LMPC 1 of Eqs. 5.210–5.215. All input trajectories u j (t) (j=1,…,m) such that u j (t)=u n,j (t|t k ), ∀t∈[t k ,t k+N ), satisfy all the constraints (including the input constraints of Eqs. 5.203 and 5.212 and the constraints of Eqs. 5.204 and 5.215) of the corresponding LMPC; thus, the feasibility of LMPC j as well as LMPC 1 is obtained.

Part 2: Considering the inequality of Eq. 5.6, the addition of the inequalities of Eq. 5.204 for j=2,…,m and of Eq. 5.215 implies that if x(t k )∈Ω ρ , the following inequality holds:

(5.218)

The time derivative of the Lyapunov function along the state trajectory x(t) of the system of Eq. 5.1 in t∈[t k ,t k+1) is given by:

$$\dot{V}\bigl(x(t)\bigr)=\frac{\partial V(x)}{\partial x}\Biggl(f\bigl(x(t)\bigr)+\sum _{i=1}^{m}g_{i}\bigl(x(t)\bigr)u^{*}_{i}(t_k|t_k)+k\bigl(x(t)\bigr)w(t)\Biggr). $$
(5.219)

Adding and subtracting \(\frac{\partial V(x(t_{k}))}{\partial x}(f(x(t_{k}))+\sum_{i=1}^{m}g_{i}(x(t_{k}))u^{*}_{i}(t_{k}|t_{k}))\) to the right-hand side of Eq. 5.219 and taking Eq. 5.218 into account, we obtain the following inequality:

(5.220)

From Eq. 5.5, Eqs. 5.14–5.16 and the inequality of Eq. 5.220, the following inequality is obtained for all \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\):

$$\dot{V}\bigl(x(t)\bigr)\leq-\alpha_3\bigl(\alpha^{-1}_2(\rho_s)\bigr)+L'_w\bigl\|w(t)\bigr\|+\Biggl(L'_{x}+\sum _{i=1}^{m}L'_{u_i}u^{*}_{i}(t_k|t_k)\Biggr)\bigl\|x(t)-x(t_k)\bigr\|.$$
(5.221)

Taking into account Eq. 5.9 and the continuity of x(t), the following bound can be written for all t∈[t k ,t k+1): ‖x(t)−x(t k )‖≤MΔ. Using this expression, we obtain the following bound on the time derivative of the Lyapunov function for t∈[t k ,t k+1), for all initial states \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\):

$$\dot{V}\bigl(x(t)\bigr)\leq -\alpha_3\bigl(\alpha^{-1}_2(\rho_s)\bigr)+\Biggl(L'_{x}+\sum _{i=1}^{m}L'_{u_i}u_{i}^{\max}\Biggr)M\varDelta +L'_w\theta.$$
(5.222)

If the condition of Eq. 5.217 is satisfied, then there exists ε w >0 such that the following inequality holds for \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\):

$$\dot{V}\bigl(x(t)\bigr)\leq-\varepsilon _w/\varDelta , \quad \forall t\in[t_k,t_{k+1}).$$
(5.223)

Integrating this bound on t∈[t k ,t k+1), we obtain that:

(5.224)
(5.225)

for all \(x(t_{k})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\). Using Eqs. 5.224–5.225 recursively, it is proved that, if \(x(t_{0})\in \varOmega _{\rho}/\varOmega _{\rho_{s}}\), the state converges to \(\varOmega _{\rho_{s}}\) in a finite number of sampling times without leaving the stability region. Once the state converges to \(\varOmega _{\rho_{s}}\subseteq \varOmega _{\rho_{\min}}\), it remains inside \(\varOmega _{\rho_{\min}}\) for all times. This statement holds because of the definition of ρ min . This proves that the state of the closed-loop system under the DMPC of Eqs. 5.199–5.216 is ultimately bounded in \(\varOmega _{\rho_{\min}}\). □

Remark 5.18

Note that the use of the corrupted input trajectory information of u j (i.e., \(\tilde{u}_{j}\)), j=2,…,m, does not affect the feasibility of the optimization problems of Eqs. 5.199–5.204 and 5.210–5.215 or the stability of the closed-loop system; however, it does affect the closed-loop system performance. This is the reason for the introduction of the feasibility problem of Eqs. 5.206–5.209, which is used to decide whether the corrupted information can be used or not to improve the closed-loop performance. An application of the DMPC architecture of Eqs. 5.199–5.216 with the feasibility problem to a chemical process can be found in [30].

5.8 Conclusions

In this chapter, we designed sequential and iterative DMPC schemes for large-scale nonlinear systems. In the sequential DMPC architecture, the distributed controllers adopt a one-directional communication strategy and are evaluated in sequence and only once at each sampling time; in the iterative DMPC architecture, the distributed controllers utilize a bidirectional communication strategy, are evaluated in parallel and iterate to improve closed-loop performance. We considered three cases for the design of the sequential and iterative DMPC schemes: systems with continuous, synchronous state measurements, systems with asynchronous measurements and systems with delayed measurements. For all three cases, appropriate implementation strategies, suitable Lyapunov-based stability constraints and sufficient conditions under which practical closed-loop stability is ensured were provided. Extensive simulations using a catalytic alkylation of benzene process example were carried out to compare the DMPC architectures with existing centralized LMPC algorithms from computational time and closed-loop performance points of view. Moreover, we focused on a hierarchical type DMPC and discussed how to handle communication disruptions in the communication between the distributed controllers by incorporating feasibility problems to decide the reliability of the transmitted information.