Journal of Control, Automation and Electrical Systems, Volume 24, Issue 4, pp 420–429

LMI-Based Multi-model Predictive Control of an Industrial C3/C4 Splitter

Authors

  • Bruno Didier Olivier Capron, Department of Chemical Engineering, University of São Paulo
  • Darci Odloak, Department of Chemical Engineering, University of São Paulo
Article

DOI: 10.1007/s40313-013-0050-1

Cite this article as:
Capron, B.D.O. & Odloak, D. J Control Autom Electr Syst (2013) 24: 420. doi:10.1007/s40313-013-0050-1

Abstract

In this paper, the robust Model Predictive Control (MPC) of systems with model uncertainty is addressed. The robust approach usually involves the inclusion of nonlinear constraints in the optimization problem upon which the controller is based. At each time step, the sequence of control actions is then calculated through the solution of a Nonlinear Programming (NLP) problem, which can be too computationally demanding for high-dimension systems. Here, the conventional Multi-model Predictive Control (MMPC) problem is recast as an LMI-based problem that can be solved at a lower computational cost. The performances and computational costs of the conventional and LMI-based robust controllers are compared through simulations of the control of an industrial C3/C4 splitter.

Keywords

Model predictive control · Linear matrix inequality · Multi-model uncertainty · Robust control

1 Introduction

Model Predictive Control (MPC) has undergone tremendous development over the past decades. Originally designed and developed for power plants and oil refineries, this advanced control technology can now be found in many sectors such as the chemical, food processing, automotive and aerospace industries (Qin and Badgwell 2003), and in medical research (Lee and Bequette 2009). Based on a model representation of the system to be controlled, MPC basically consists of calculating, at every time step, the sequence of inputs that optimizes the predicted behavior of the system subject to restrictions on the inputs and the outputs. The first calculated control move is implemented and the optimization problem is solved again at the next time step.

MPC is usually based on a single linear model of the system. However, chemical systems often exhibit highly nonlinear behavior. Consequently, as the system commonly works at different operating points, a controller based on a single linear model may not control the plant efficiently. One way to circumvent this problem is to adopt Multi-model Predictive Control (MMPC) (Porfírio et al. 2003), in which a discrete set of plant models corresponding to different operating points of the system is considered. In that case, an objective function is defined for each model of the set, and the multi-model predictive controller results from minimizing the worst-case objective function.

Such a controller has already been implemented on a real industrial C3/C4 splitter in Porfírio et al. (2003), where it showed a much better performance than the standard MPC formulation. A disadvantage of the MMPC compared to the conventional MPC is the computational burden involved in the solution of the optimization problem that defines the MMPC, which can be prohibitive for high-dimension systems. In this case, the LMI techniques that have been developed over the past decades may be of interest, as they allow a significant reduction of the computational complexity of the optimization problem (Gahinet et al. 1995). These techniques were first applied to MPC by Kothare et al. (1996), whose work opened the way to several developments over the years, including the recent works of Cuzzola et al. (2002), Alamo et al. (2008), Cychoswski and O’mahony (2010), Ding (2010), Li and Xi (2010), and Falugi et al. (2010a, b).

In these works, the robust model predictive controller includes the calculation of a state feedback control law \(u(k)=F(k)x(k)\) that minimizes an upper bound of the cost functions associated with the different possible models describing the system. The resulting robust controller can incorporate constraints, and closed-loop stability can be achieved for stable and unstable systems. Nevertheless, this control strategy presents the following drawbacks:
  • The number of variables of the control problem is much larger than in the conventional MPC. If the control problem is solved on-line, the computational effort may be prohibitive for moderate to large systems.

  • The way that the input constraints are implemented tends to be conservative, which impacts the controller’s domain of attraction and performance.

In this work, the conventional MMPC problem developed for the industrial C3/C4 splitter system in Porfírio et al. (2003) is recast as an LMI-based problem. Then, the performances and computational costs of the conventional NLP-based MMPC and the LMI-based MMPC are compared in order to quantify the gain that the LMI techniques can provide in terms of computational time for the free control moves strategy. The so-called zone control strategy (Maciejowski 2002) is adopted, assuming that a Real-Time Optimization (RTO) algorithm lies at the top of the control structure and defines optimum targets for some of the inputs of the system, and that the outputs have to remain inside predefined boundaries instead of tracking fixed targets.

2 System Representation

Consider a system with \(ny\) controlled outputs and \(nu\) manipulated inputs, and let \(\theta _{i,j} \) be the time delay between input \(u_j \) and output \(y_i \). Then, define \(p\) such that \(p>{\mathop {\text{ max }}\limits _{i,j}} \left( {{\theta _{i,j}}\big / T}\right) +m\), where \(T\) is the sampling period and \(m\) is the input horizon. The state space model adopted in this work is the Output Prediction-Oriented Model (OPOM) extended to time-delayed systems (González and Odloak 2011; Odloak 2004). To build this model, assume that the Laplace transfer function relating input \(u_j \) and output \(y_i \) is given by:
$$\begin{aligned} G_{i,j} (s)=\frac{B_{i,j} (s)e^{-\theta _{i,j} s}}{A_{i,j} (s)} \end{aligned}$$
(1)
where
$$\begin{aligned} B_{i,j} (s)&= b_{i,j,0} +b_{i,j,1} s+b_{i,j,2} s^{2}+\ldots +b_{i,j,nb} s^{nb} \\ A_{i,j} (s)&= 1+a_{i,j,1} s+a_{i,j,2} s^{2}+\ldots +a_{i,j,na} s^{na} \end{aligned}$$
The step response of the system defined in (1) can be represented as follows:
$$\begin{aligned} S_{i,j} (k)=d_{i,j}^0 +\sum _{l=1}^{na} {d_{i,j,l}^d e^{r_{i,j,l} (kT-\theta _{i,j} )}} \end{aligned}$$
(2)
where \(d_{i,j}^0,d_{i,j,1}^d,\ldots ,d_{i,j,na}^d \) are obtained by partial fraction expansion of (1) and \(r_{i,j,1},\ldots ,r_{i,j,na} \) are the poles of \(A_{i,j} (s)\), which are assumed to be distinct. Note that only non-integrating modes are considered in this work, i.e., all poles are assumed to be different from zero.
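As an illustration of (1)–(2), the following sketch (Python/SciPy, not part of the original paper) computes the coefficients \(d_{i,j}^0, d_{i,j,l}^d\) and the poles \(r_{i,j,l}\) for a single transfer function by expanding \(G_{i,j}(s)/s\) in partial fractions. The ascending-power coefficient ordering and the example numbers, taken from the \(\Theta_1\) column of Table 1 for the pair \((i,j)=(2,2)\), are assumptions of this sketch.

```python
# Minimal sketch (not from the paper): step-response coefficients of
# G(s) = B(s) e^{-theta s} / A(s) via partial-fraction expansion of G(s)/s.
import numpy as np
from scipy.signal import residue

def step_response_coeffs(b_asc, a_asc):
    """b_asc, a_asc: coefficients of B(s) and A(s) in ascending powers of s,
    as in Eq. (1).  Returns (d0, d_d, r): the constant term, the coefficients
    of the exponential modes and the (assumed distinct, nonzero) poles."""
    b_desc = np.array(b_asc[::-1], dtype=float)   # SciPy expects descending powers
    a_desc = np.array(a_asc[::-1], dtype=float)
    den = np.polymul(a_desc, [1.0, 0.0])          # A(s)*s: the step input adds a pole at 0
    res, poles, _ = residue(b_desc, den)
    at_zero = np.isclose(poles, 0.0)
    d0 = float(np.real(res[at_zero].sum()))       # residue at s = 0 (steady-state gain)
    return d0, res[~at_zero], poles[~at_zero]

def step_response(d0, d_d, r, k, T=1.0, theta=0.0):
    """Evaluate S(k) of Eq. (2) at sampling instant k (after the delay)."""
    t = k * T - theta
    return d0 + float(np.real(np.sum(d_d * np.exp(r * t))))

# Example with the second-order structure of Eq. (12), coefficients of Theta_1, (i,j)=(2,2)
d0, d_d, r = step_response_coeffs([0.0070, 0.0013], [1.0, 2.2605, 0.1366])
print(d0, [step_response(d0, d_d, r, k, theta=1.0) for k in range(2, 6)])
```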
The extended OPOM is based on the step responses defined in (2) and can be expressed as follows:
$$\begin{aligned}&x({k+1})=Ax(k)+B\Delta u(k) \nonumber \\&y(k)=Cx(k) \end{aligned}$$
(3)
where \(x(k)\!=\!\big [ {y(k)^{T}} {y(k\!+\!1)^{T}} \cdots {y(k\!+\!p)^{T}} {x^{s}(k)^{T}} x^{d} (k)^{T} \big ]^{T}\), \(\Delta u(k)\) is the column vector of the manipulated input moves at time step \(k\),
$$\begin{aligned} A&= \left[ {{\begin{array}{ccccccc} 0&{} {I_{ny} }&{} 0&{} \cdots &{} 0&{} 0&{} 0 \\ 0&{} 0&{} {I_{ny} }&{} \cdots &{} 0&{} 0&{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots &{} \vdots \\ 0&{} 0&{} 0&{} \cdots &{} {I_{ny} }&{} 0&{} 0 \\ 0&{} 0&{} 0&{} \cdots &{} 0&{} {I_{ny} }&{} {\Psi ((p+1)T)} \\ 0&{} 0&{} 0&{} \cdots &{} 0&{} {I_{ny} }&{} 0 \\ 0&{} 0&{} 0&{} \cdots &{} 0&{} 0&{} F \\ \end{array} }} \right] ,\\ B&= \left[ {{\begin{array}{l} {S_1 } \\ {S_2 } \\ \vdots \\ {S_p } \\ {S_{p+1} } \\ {B^{s} } \\ {B^{d}} \\ \end{array} }} \right] ,C=\left[ {{\begin{array}{cccc} {I_{ny} }&{} 0&{} \cdots &{} 0 \\ \end{array} }} \right] \end{aligned}$$
where the first \(p+1\) components of \(x(k)\) are associated with the output predictions at future time steps, \(x^{s}\in \mathfrak R ^{ny}\) corresponds to the predicted output at steady state, and \(x^{d}\in C^{nd}\) contains the states corresponding to the stable modes of the system, which tend to zero as the system approaches steady state, \(nd\) being the number of stable modes of the system, \(\Psi \in \mathfrak R ^{ny \times nd}\), \(I_{ny} ={\mathrm{diag}}\left[ {\left( {1\ldots 1} \right) } \right] \in \mathfrak R ^{ny\times ny}\)
$$\begin{aligned} B^{s}&= \left[ {{\begin{array}{cccc} {d_{1,1}^0 }&{} {d_{1,2}^0 }&{} \cdots &{} {d_{1,nu}^0 } \\ {d_{2,1}^0 }&{} {d_{2,2}^0 }&{} \cdots &{} {d_{2,nu}^0 } \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {d_{ny,1}^0 }&{} {d_{ny,2}^0 }&{} \cdots &{} {d_{ny,nu}^0 } \\ \end{array} }} \right] , \\ F&= {\mathrm{diag}}(e^{r_{1,1,1} T},\ldots ,e^{r_{1,1,na} T},e^{r_{1,2,1} T},\ldots ,e^{r_{1,2,na} T},\\&\quad \ldots ,e^{r_{1,nu,1} T},\ldots ,e^{r_{1,nu,na} T},e^{r_{2,1,1} T},\ldots ,e^{r_{2,1,na} T}, \\&\quad \!\ldots \!,e^{r_{ny,1,1} T},\ldots ,e^{r_{ny,1,na} T},e^{r_{ny,nu,1} T},\ldots ,e^{r_{ny,nu,na} T})\\&\quad \in C^{nd\times nd}, \\ B^{d}&= D^{d}FN \end{aligned}$$
with
$$\begin{aligned} D^{d}&= {\mathrm{diag}}(d_{1,1,1}^d,\ldots ,d_{1,1,na}^d,d_{1,2,1}^d,\ldots ,d_{1,2,na}^d,\\&\quad \ldots ,d_{1,nu,1}^d,\ldots ,d_{1,nu,na}^d ,d_{2,1,1}^d,\ldots ,d_{2,1,na}^d, \\&\quad \ldots ,d_{ny,1,1}^d ,\ldots ,d_{ny,1,na}^d,d_{ny,nu,1}^d,\ldots ,d_{ny,nu,na}^d ),\\&N=\left. {\left[ {{\begin{array}{l} J \\ \vdots \\ J \\ \end{array} }} \right] } \right\} ny,\quad J=\left[ {{\begin{array}{ccccc} 1&{} 0&{} 0&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1&{} 0&{} 0&{} \cdots &{} 0 \\ 0&{} 1&{} 0&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0&{} 1&{} 0&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0&{} 0&{} 0&{} \cdots &{} 1 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0&{} 0&{} 0&{} \cdots &{} 1 \\ \end{array} }} \right] . \end{aligned}$$
\(S_1,\ldots ,S_{p+1} \) are the step response coefficients of the system. Matrix \(\Psi \) is defined as follows:
$$\begin{aligned} \Psi \left( t \right) =\left[ {{\begin{array}{cccc} {\phi _1 \left( t \right) }&{} 0&{} \cdots &{} 0 \\ 0&{} {\phi _2 \left( t \right) }&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0&{} 0&{} \cdots &{} {\phi _{ny} \left( t \right) } \\ \end{array} }} \right] ,\Psi \left( t \right) \in \mathfrak R ^{ny \times nd} \end{aligned}$$
where
$$\begin{aligned} \phi _i \left( t \right)&= \left[ {e^{r_{i,1,1} \left( {t-\theta _{i,1} } \right) }}\quad \ldots \quad {e^{r_{i,1,na} \left( {t-\theta _{i,1} } \right) }}\right. \\&\left. \quad \ldots \quad {e^{r_{i,nu,1} \left( {t-\theta _{i,nu} } \right) }}\quad \ldots \quad {e^{r_{i,nu,na} \left( {t-\theta _{i,nu} } \right) }}\right] . \end{aligned}$$
Note that, in the system studied in this work, some measured disturbances can enter the system. However, as the inclusion of the measured disturbance inputs in the state space model does not bring any additional theoretical insight, they were deliberately omitted from the system representation defined in (3) for the sake of clarity and simplicity.
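The block structure of the matrices \(A\) and \(C\) above can be assembled programmatically. The sketch below (Python/NumPy; an illustration rather than the authors' code) builds them for given \(ny\), \(p\), \(F\) and \(\Psi((p+1)T)\); the matrix \(B\), which additionally requires the step-response coefficients \(S_1,\ldots,S_{p+1}\), \(B^{s}\) and \(B^{d}\), is omitted for brevity.

```python
# Minimal sketch (assumption: dense NumPy blocks) of the A and C matrices of
# the extended OPOM in Eq. (3), given ny, p, F and Psi((p+1)T).
import numpy as np

def opom_A_C(ny, p, F, Psi_p1):
    """F: nd x nd diagonal matrix of e^{r T}; Psi_p1: ny x nd matrix Psi((p+1)T)."""
    nd = F.shape[0]
    nx = (p + 2) * ny + nd                       # x = [y(k) ... y(k+p), x^s, x^d]
    A = np.zeros((nx, nx), dtype=complex)
    I = np.eye(ny)
    for j in range(p):                           # shift of the stored output predictions
        A[j * ny:(j + 1) * ny, (j + 1) * ny:(j + 2) * ny] = I
    A[p * ny:(p + 1) * ny, (p + 1) * ny:(p + 2) * ny] = I        # y(k+p+1) = x^s + Psi((p+1)T) x^d
    A[p * ny:(p + 1) * ny, (p + 2) * ny:] = Psi_p1
    A[(p + 1) * ny:(p + 2) * ny, (p + 1) * ny:(p + 2) * ny] = I  # x^s is kept (plus the B^s term)
    A[(p + 2) * ny:, (p + 2) * ny:] = F                          # x^d(k+1) = F x^d(k) + ...
    C = np.zeros((ny, nx), dtype=complex)
    C[:, :ny] = I                                                # y(k) is the first block of x(k)
    return A, C
```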

In the system studied in this work, the uncertainties are concentrated in the matrices \(B\), \(F\) and \(\Psi\), so that the discrete set \(\Omega \) of possible plants can be defined as \(\Omega =\left\{ {\Theta _1,\ldots ,\Theta _L } \right\} \), where each \(\Theta _n \) corresponds to a particular plant: \(\Theta _{n} =\left( {B,F,\Psi } \right) _{n},\,n=1,\ldots ,L\). Let us also assume that the true plant is designated as \(\Theta _T \) and that the current estimated state corresponds to the true plant state.

3 Conventional NLP-Based Multi-model Predictive Control with Zone Control and Input Target

It is assumed in this work that a Real-Time Optimization (RTO) algorithm lies at the top of the control structure and defines optimum targets for some inputs and outputs of the system while the remaining inputs and outputs have to remain inside predefined boundaries, characterizing the so-called zone control strategy (Maciejowski 2002).

For each model of the set \(\Omega \) the following cost function is defined (González and Odloak 2011; Porfírio et al. 2003):
$$\begin{aligned} V_k (\Theta _n )&= \sum _{j=0}^p {\left( {y(k+j|k)-y_{sp,k} (\Theta _n)} \right) ^{T}} \nonumber \\&\quad Q_y \left( {y(k+j|k)-y_{sp,k} (\Theta _n )} \right) \nonumber \\&\quad +\sum _{j=p+1}^{np} {\left( {y(k+j|k)-y_{sp,k} (\Theta _n )} \right) ^{T}}\nonumber \\&\quad \quad Q_y \left( {y(k+j|k)-y_{sp,k} (\Theta _n )} \right) \nonumber \\&\quad +\sum _{j=0}^{m-1} {\left( {u(k+j|k)-u_{des,k} } \right) ^{T}} Q_u \left( {u(k+j|k)-u_{des,k} } \right) \nonumber \\&\quad +\sum _{j=0}^{m-1} {\Delta u(k+j|k)^{T}} R\Delta u(k+j|k) \end{aligned}$$
(4)
where \(np\) is the output prediction horizon, \(u_{des,k} \) represents the input target, and \(Q_y\), \(Q_u\) and \(R\) are positive definite weighting matrices.

Let us note that the third term of the right hand side of (4) is included to force the inputs to reach their corresponding targets.

It can be shown that the first term of the right hand side of (4) can be written as follows:
$$\begin{aligned} V_{k,1} (\Theta _n )&= \left( {N_x x(k)+\tilde{S}(\Theta _n )\Delta u_k -\tilde{I}_y y_{sp,k} (\Theta _n )} \right) ^{T}\nonumber \\&\quad Q_1 \left( {N_x x(k)+\tilde{S}(\Theta _n )\Delta u_k -\tilde{I}_y y_{sp,k} (\Theta _n )} \right) \nonumber \\ \end{aligned}$$
(5)
where \(N_x =\left[ {{\begin{array}{ll} {I_{(p+1)ny} }&{} 0 \\ \end{array} }} \right] \in \mathfrak R ^{(p+1)ny\times nx}\), \(\tilde{S}(\Theta _n )=\left[ {{{\begin{array}{cccc} 0&{} 0&{} \cdots &{} 0 \\ {S_1 (\Theta _n )}&{} 0&{} \cdots &{} 0 \\ {S_2 (\Theta _n )}&{} {S_1 (\Theta _n )}&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {S_m (\Theta _n )}&{} {S_{m-1} (\Theta _n )}&{} \cdots &{} {S_1 (\Theta _n )} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {S_p (\Theta _n )}&{} {S_{p-1} (\Theta _n )}&{} \cdots &{} {S_{p-m+1} (\Theta _n )} \\ \end{array}} }} \right] \), \(\tilde{I}_y =\left[ {\underbrace{{{\begin{array}{lll} {I_{ny} }&{} \cdots &{} {I_{ny} } \\ \end{array}} }}_{p+1}} \right] ^{T},\,\, Q_1 ={\mathrm{diag}}\left( {\underbrace{{\begin{array}{lll} {Q_y }&{} \cdots &{} {Q_y } \\ \end{array} }}_{p+1}} \right) \).
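For completeness, a short sketch (Python/NumPy, an illustration under the structure stated above) of how the block lower-triangular matrix \(\tilde{S}(\Theta_n)\) can be assembled from the step-response coefficients \(S_1,\ldots,S_p\):

```python
# Sketch (not from the paper): assemble the block-Toeplitz prediction matrix
# S_tilde of Eq. (5) from step-response coefficients S[1], ..., S[p].
import numpy as np

def build_S_tilde(S, ny, nu, p, m):
    """S: dict with S[l] an ny x nu array, l = 1..p."""
    S_tilde = np.zeros(((p + 1) * ny, m * nu))
    for row in range(1, p + 1):              # first block row (prediction at j = 0) is zero
        for col in range(m):
            l = row - col                    # coefficient index S_{row-col}
            if l >= 1:
                S_tilde[row * ny:(row + 1) * ny, col * nu:(col + 1) * nu] = S[l]
    return S_tilde
```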
Now, in the second term of the right hand side of (4), the output error can be written as follows:
$$\begin{aligned} y(k\!+\!j|k)\!-\!y_{sp,k} (\Theta _n )&= x^{s}(k)+\Psi (jT,\Theta _n)x^{d}(k)\nonumber \\&\quad +\,\bar{{B}}^{s}(\Theta _n )\Delta u_k \nonumber \\&\quad +\,\Psi (jT,\Theta _n )\bar{{F}}(\Theta _n)\bar{{D}}^{d}(\Theta _n )\Delta u_k\nonumber \\&\quad -\,y_{sp,k} (\Theta _n ) \end{aligned}$$
(6)
where
$$\begin{aligned} \bar{{B}}^{s}(\Theta _n )&= \left[ {\underbrace{{\begin{array}{cccc} {B^{s}(\Theta _n )}&{} {B^{s}(\Theta _n )}&{} \cdots &{} {B^{s}(\Theta _n )} \\ \end{array} }}_m} \right] ,\\ \bar{{F}}(\Theta _n )&= \left[ {{\begin{array}{cccc} I&{} {F(\Theta _n )^{-1}}&{} \cdots &{} {F(\Theta _n )^{-(m-1)}} \\ \end{array} }} \right] ,\\ \bar{{D}}^{d}(\Theta _n )&= \left[ {{\begin{array}{cccc} {D^{d}(\Theta _n )N}&{} 0&{} \cdots &{} 0 \\ 0&{} {D^{d}(\Theta _n )N}&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0&{} 0&{} 0&{} {D^{d}(\Theta _n )N} \\ \end{array} }} \right] . \end{aligned}$$
Using (6), the second term of the right hand side of (4) can then be written:
$$\begin{aligned} V_{k,2} (\Theta _n )&= \left[ \bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)+\bar{{\bar{{B}}}}^{s}(\Theta _n )\Delta u_k\right. \nonumber \\&\quad \left. +\bar{{\Psi }}(\Theta _n)\bar{{F}}(\Theta _n)\bar{{D}}^{d}(\Theta _n)\Delta u_k -\bar{{I}}_y y_{sp,k} (\Theta _n)\right] ^{T}Q_2 \nonumber \\&\quad \left[ \bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)+\bar{{\bar{{B}}}}^{s}(\Theta _n )\Delta u_k\right. \nonumber \\&\quad \left. +\bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )\Delta u_k -\bar{{I}}_y y_{sp,k} (\Theta _n ) \right] \end{aligned}$$
(7)
where \(\bar{{\Psi }}(\Theta _n )\!=\!\left[ {{{\begin{array}{c} {\Psi \left( {(p+1)T,\Theta _n } \right) } \\ {\Psi \left( {(p+2)T,\Theta _n } \right) } \\ \vdots \\ {\Psi \left( {np.T,\Theta _n } \right) } \\ \end{array}} }} \right] \), \(\bar{{\bar{{B}}}}^{s}(\Theta _n )\!=\!\left[ {{{\begin{array}{c} {\bar{{B}}^{s}(\Theta _n )} \\ {\bar{{B}}^{s}(\Theta _n )} \\ \vdots \\ {\bar{{B}}^{s}(\Theta _n )} \\ \end{array} }}} \right] \), \(\bar{{I}}_y =\left[ {\underbrace{{\begin{array}{lll} {I_{ny} }&{} \cdots &{} {I_{ny} } \\ \end{array} }}_{np-p}} \right] ^{T}\),
$$\begin{aligned} Q_2 ={\mathrm{diag}}\left( {\underbrace{{\begin{array}{lll} {Q_y }&{} \cdots &{} {Q_y } \\ \end{array} }}_{np-p}} \right) \end{aligned}$$
Finally using (5) and (7) it can be shown that (4) can be written as follows:
$$\begin{aligned} V_k (\Theta _n )&= \left( {N_x x(k)+\tilde{S}(\Theta _n )\Delta u_k -\tilde{I}_y y_{sp,k} (\Theta _n )} \right) ^{T}\nonumber \\&\quad Q_1 \left( {N_x x(k)+\tilde{S}(\Theta _n )\Delta u_k -\tilde{I}_y y_{sp,k} (\Theta _n )} \right) \\&\quad +\left[ \bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)+\bar{{\bar{{B}}}}^{s}(\Theta _n )\Delta u_k\right. \nonumber \\&\quad +\left. \bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )\Delta u_k -\bar{{I}}_y y_{sp,k} (\Theta _n ) \right] ^{T}Q_2 \\&\quad \left[ \bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)+\bar{{\bar{{B}}}}^{s}(\Theta _n )\Delta u_k\right. \nonumber \\&\quad +\left. \bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )\Delta u_k -\bar{{I}}_y y_{sp,k} (\Theta _n ) \right] \\&\quad +\left( {\tilde{I}_u u(k-1)+M\Delta u_k -\tilde{I}_u u_{des,k} } \right) ^{T}\nonumber \\&\quad \tilde{Q}_u \left( {\tilde{I}_u u(k-1)+M\Delta u_k -\tilde{I}_u u_{des,k} } \right) \\&\quad +\Delta u_k ^{T}\tilde{R}\Delta u_k \end{aligned}$$
where
$$\begin{aligned} \tilde{I}_u^T&= \left[ {\underbrace{{\begin{array}{lll} {I_{nu} }&{} \ldots &{} {I_{nu} } \\ \end{array} }}_m} \right] ,M=\left[ {{\begin{array}{cccc} {I_{nu} }&{} 0&{} \cdots &{} 0 \\ {I_{nu} }&{} {I_{nu} }&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {I_{nu} }&{} {I_{nu} }&{} \cdots &{} {I_{nu} } \\ \end{array} }} \right] \nonumber \\&\quad M\in \mathfrak R ^{(nu.m)\times (nu.m)},\\ \tilde{Q}_u&= {\mathrm{diag}}\left( {\underbrace{{\begin{array}{lll} {Q_u }&{} \cdots &{} {Q_u } \\ \end{array} }}_m} \right) ,\tilde{R}={\mathrm{diag}}\left( {\underbrace{{\begin{array}{lll} R&{} \cdots &{} R \\ \end{array} }}_m} \right) . \end{aligned}$$
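These input-related matrices have a simple Kronecker structure; a minimal sketch (again Python/NumPy, with \(nu\), \(m\), \(Q_u\) and \(R\) passed in as assumptions of the example) is:

```python
# Sketch: the lower block-triangular matrix M that accumulates the input moves
# (u(k+j) = u(k-1) + sum of moves) and the stacked weights of Eq. (4).
import numpy as np

def input_matrices(nu, m, Qu, R):
    I = np.eye(nu)
    M = np.kron(np.tril(np.ones((m, m))), I)   # (nu*m) x (nu*m) lower block triangular
    I_u_tilde = np.tile(I, (m, 1))             # m identity blocks stacked vertically
    Qu_tilde = np.kron(np.eye(m), Qu)          # block-diagonal weights
    R_tilde = np.kron(np.eye(m), R)
    return M, I_u_tilde, Qu_tilde, R_tilde
```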
It is clear that the control cost can finally be reduced to the following quadratic form:
$$\begin{aligned} V_k (\Theta _n )=X_k ^{T}H(\Theta _n )X_k +2C_f (\Theta _n )X_k +c(\Theta _n ) \end{aligned}$$
where:
$$\begin{aligned} X_k&= \left[ {{\begin{array}{llll} {\Delta u_k ^{T}}&{} {y_{sp,k} (\Theta _1 )^{T}}&{} \ldots &{} {y_{sp,k} (\Theta _L )^{T}} \\ \end{array} }} \right] ^{T}, \\ H(\Theta _n )&= \left[ {{\begin{array}{ll} {H_{11} (\Theta _n )}&{} {H_{12} (\Theta _n )} \\ {H_{21} (\Theta _n )}&{} {H_{22} (\Theta _n )} \\ \end{array} }} \right] , \\ H_{11} (\Theta _n )&= \tilde{S}(\Theta _n )^{T}Q_1 \tilde{S}(\Theta _n )+M^{T}\tilde{Q}_u M+\tilde{R} \\&\quad +\left[ {\bar{{\bar{{B}}}}^{s}(\Theta _n )+\bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )} \right] ^{T}\nonumber \\&\quad Q_2 \left[ {\bar{{\bar{{B}}}}^{s}(\Theta _n )+\bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )}\right] , \\ H_{12} (\Theta _n)&= H_{21} (\Theta _n)^{T}=-\tilde{S}(\Theta _n)^{T}Q_1 \tilde{I}_y\\&\quad -\left[ {\bar{{\bar{{B}}}}^{s}(\Theta _n)\!+\!\bar{{\Psi }}(\Theta _n)\bar{{F}}(\Theta _n)\bar{{D}}^{d}(\Theta _n)} \right] ^{T}Q_2 \bar{{I}}_y, \\ H_{22} (\Theta _n )&= \tilde{I}_y ^{T}Q_1 \tilde{I}_y +\bar{{I}}_y ^{T}Q_2 \bar{{I}}_y, \\ C_f (\Theta _n )&= \left[ {{\begin{array}{ll} {C_{f,1} (\Theta _n )}&{} {C_{f,2} (\Theta _n )} \\ \end{array} }} \right] , \\ C_{f,1} (\Theta _n )&= \left[ {N_x x(k)} \right] ^{T}Q_1 \tilde{S}(\Theta _n )\\&\quad +\left[ {u_{mv} (k-1)-u_{des,k} } \right] ^{T}\tilde{I}_u ^{T}\tilde{Q}_u M\\&\quad +\left[ {\bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)} \right] ^{T}\\&\quad Q_2 \left[ {\bar{{\bar{{B}}}}^{s}(\Theta _n )+\bar{{\Psi }}(\Theta _n )\bar{{F}}(\Theta _n )\bar{{D}}^{d}(\Theta _n )} \right] , \\ C_{f,2} (\Theta _n )&= -\left[ {N_x x(k)+\tilde{S}_{dv} (\Theta _n )\Delta u_{dv} (k)} \right] ^{T}Q_1 \tilde{I}_y\\&\quad -\left[ {\bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)} \right] ^{T}Q_2 \bar{{I}}_y \\ c(\Theta _n )&= \left[ {N_x x(k)} \right] ^{T}Q_1 \left[ {N_x x(k)} \right] \\&\quad +\left[ {\bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)} \right] ^{T}\\&\quad Q_2 \left[ {\bar{{I}}_y x^{s}(k)+\bar{{\Psi }}(\Theta _n )x^{d}(k)} \right] \\&\quad +\left[ {u_{mv} (k-1)-u_{des,k} } \right] ^{T}\\&\quad \tilde{I}_u ^{T}\tilde{Q}_u \tilde{I}_u \left[ {u_{mv} (k-1)-u_{des,k} } \right] . \end{aligned}$$
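Once \(H(\Theta_n)\), \(C_f(\Theta_n)\) and \(c(\Theta_n)\) have been assembled for every model of \(\Omega\), evaluating the multi-model cost reduces to a few matrix products. A hedged sketch follows; the containers H, Cf and c, indexed by model, are an assumption of the example.

```python
# Sketch: evaluate the quadratic cost V_k(Theta_n) = X^T H X + 2 Cf X + c
# for each model and take the worst case, as required by the min-max MMPC.
import numpy as np

def cost(X, H_n, Cf_n, c_n):
    return float(X @ H_n @ X + 2.0 * Cf_n @ X + c_n)

def worst_case_cost(X, H, Cf, c, models):
    return max(cost(X, H[n], Cf[n], c[n]) for n in models)
```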
Then, the conventional multi-model predictive controller for the zone control strategy results from the solution of the following min–max optimization problem:
$$\begin{aligned} {\mathop {\min }\limits _{\varDelta u_k,y_{sp,k} (\varTheta _n )}} {\mathop {\max }\limits _{\varTheta _n}}\;V_k (\varTheta _n ) \end{aligned}$$
Subject to
$$\begin{aligned}&\Delta u_{\min } <\Delta u(k+j|k)<\Delta u_{\max } \quad \quad j=0,1,\ldots ,m-1 \nonumber \\ \end{aligned}$$
(8)
$$\begin{aligned}&u_{\min } <u(k+j|k)<u_{\max } \quad \quad \quad \quad \quad j=0,1,\ldots ,m-1\nonumber \\ \end{aligned}$$
(9)
$$\begin{aligned}&y_{\min } \le y_{sp,k} (\Theta _n )\le y_{\max } \quad \quad \quad \quad \quad \quad n=1,\ldots ,L \end{aligned}$$
(10)
It can be shown (Porfírio et al. 2003) that the problem defined above is equivalent to the following optimization problem:
Problem P1
$$\begin{aligned} \mathop {\min }\limits _{\varDelta u_k,\gamma _k,y_{sp,k} (\varTheta _n),n=1,...,L} \gamma _k \end{aligned}$$
subject to (8), (9), (10) and
$$\begin{aligned} V_k \left( {\Theta _n } \right) <\gamma _k \quad \quad \quad n=1,\ldots ,L \end{aligned}$$
(11)
One can observe that, in this optimization problem, \(y_{sp,k} \left( {\Theta _n } \right) \), \(n=1,\ldots,L\), is a set of additional decision variables. The constraints represented in (10) tend to force the outputs to remain inside the boundaries defined by \(y_{\min }\) and \(y_{\max }\). The inequality constraints represented by (11) are nonlinear and turn problem P1 into a Nonlinear Programming (NLP) optimization problem, which can be highly computationally expensive for high-dimension systems. In the next section, problem P1 is recast as an LMI problem that, as will be shown later, has a lower computational burden.
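To make the structure of Problem P1 concrete, the sketch below sets it up with a generic NLP solver (SciPy's SLSQP). It is only an illustrative skeleton, not the fmincon implementation used later in Sect. 5: the decision vector stacks \(\Delta u_k\), the \(L\) set-point vectors \(y_{sp,k}(\Theta_n)\) and \(\gamma_k\), and the data H, Cf, c, M and \(\tilde{I}_u\) are the hypothetical objects of the previous sketches.

```python
# Sketch (illustration only): Problem P1 as an NLP solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

def solve_P1(H, Cf, c, models, nu, m, ny, L,
             du_min, du_max, u_min, u_max, y_min, y_max,
             M, I_u_tilde, u_prev):
    n_du, n_sp = nu * m, ny * L

    def split(z):                      # z = [du_k, y_sp(Theta_1..L), gamma_k]
        return z[:n_du], z[n_du:n_du + n_sp], z[-1]

    def V(z, n):                       # quadratic cost of model Theta_n
        du, ysp, _ = split(z)
        X = np.concatenate([du, ysp])
        return float(X @ H[n] @ X + 2.0 * Cf[n] @ X + c[n])

    cons = [{'type': 'ineq', 'fun': (lambda z, n=n: split(z)[2] - V(z, n))}
            for n in models]           # gamma_k - V_k(Theta_n) >= 0, Eq. (11)
    cons += [{'type': 'ineq',          # u_min <= u(k-1) + M du <= u_max, Eq. (9)
              'fun': lambda z: I_u_tilde @ u_prev + M @ split(z)[0] - np.tile(u_min, m)},
             {'type': 'ineq',
              'fun': lambda z: np.tile(u_max, m) - I_u_tilde @ u_prev - M @ split(z)[0]}]
    bounds = ([(lo, hi) for lo, hi in zip(np.tile(du_min, m), np.tile(du_max, m))]   # Eq. (8)
              + [(lo, hi) for lo, hi in zip(np.tile(y_min, L), np.tile(y_max, L))]   # Eq. (10)
              + [(None, None)])        # gamma_k is free
    z0 = np.concatenate([np.zeros(n_du), np.tile((y_min + y_max) / 2, L), [0.0]])
    res = minimize(lambda z: split(z)[2], z0, method='SLSQP',
                   bounds=bounds, constraints=cons)
    return split(res.x)
```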

4 LMI-Based Multi-model Predictive Control for Zone Control and Input Target Strategy

Applying the Schur complement to the nonlinear inequality constraints represented by (11), problem P1 can be easily turned into the following LMI optimization problem:

Problem P2
$$\begin{aligned} \mathop {\min }\limits _{\varDelta u_k,\gamma _k,y_{sp,k} (\varTheta _n ),n=1,\ldots ,L} \gamma _k \end{aligned}$$
subject to
$$\begin{aligned}&\left[ {{\begin{array}{cc} I&{} {\sqrt{H(\Theta _n )} X_k } \\ {X_k ^{T}\sqrt{H(\Theta _n )}}&{} {\gamma _k -2C_f \left( {\Theta _n } \right) X_k -c(\Theta _n )} \\ \end{array} }} \right] >0\\&\quad n=1,\ldots ,L \\&\Delta u_i (k+j|k)-\Delta u_{i,\min } >0\quad \quad i=1,\ldots ,nu \\&\quad j=0,1,\ldots ,m-1 \\&\Delta u_{i,\max } -\Delta u_i (k+j|k)>0\quad \quad i=1,\ldots ,nu \\&\quad j=0,1,\ldots ,m-1 \\&u_i (k+j|k)-u_{i,\min } >0\quad \quad i=1,\ldots ,nu\\&\quad j=0,1,\ldots ,m-1 \\&u_{i,\max } -u_i (k+j|k)>0\quad \quad i=1,\ldots ,nu\\&\quad j=0,1,\ldots ,m-1 \\&y_{j,sp,k} (\Theta _n )-y_{j,\min }>0\quad \quad n=1,\ldots ,L \quad j=1,\ldots ,ny \\&y_{j,\max } -y_{j,sp,k} (\Theta _n)>0\quad \quad n=1,\ldots ,L \quad j=1,\ldots ,ny \end{aligned}$$
Defined as above, problem P2 can be solved with available LMI solvers such as the MATLAB LMI Toolbox.

One should note that, by the Schur complement, problems P1 and P2 are equivalent. As a result, the controllers based on the two problems are expected to have similar performances.
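A compact way to prototype Problem P2 is with a generic semidefinite programming tool. The sketch below uses CVXPY (an assumption of this illustration; the paper itself relies on the MATLAB LMI Toolbox) and writes the Schur-complement constraint together with the element-wise bounds; H, Cf, c and the replicated bound vectors are the hypothetical data of the earlier sketches.

```python
# Sketch (illustration only, not the authors' MATLAB/mincx implementation):
# Problem P2 written as a semidefinite program with CVXPY.  H[n] is assumed
# symmetric positive semidefinite so that its square root exists.
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

def solve_P2(H, Cf, c, models, n_du, n_sp, du_lo, du_hi, ysp_lo, ysp_hi):
    nX = n_du + n_sp
    X = cp.Variable((nX, 1))                     # X_k = [du_k; y_sp,k(Theta_1..L)]
    gamma = cp.Variable()
    cons = []
    for n in models:                             # Schur complement of V_k(Theta_n) < gamma_k
        Hs = np.real(sqrtm(H[n]))                # H(Theta_n)^(1/2)
        v = Hs @ X
        corner = gamma - 2.0 * (Cf[n].reshape(1, -1) @ X) - float(c[n])   # 1 x 1 block
        M_lmi = cp.bmat([[np.eye(nX), v],
                         [v.T,        corner]])
        cons.append((M_lmi + M_lmi.T) / 2 >> 0)  # symmetrised PSD constraint
    du, ysp = X[:n_du], X[n_du:]
    cons += [du >= du_lo.reshape(-1, 1), du <= du_hi.reshape(-1, 1),
             ysp >= ysp_lo.reshape(-1, 1), ysp <= ysp_hi.reshape(-1, 1)]
    prob = cp.Problem(cp.Minimize(gamma), cons)
    prob.solve(solver=cp.SCS)                    # any SDP-capable solver works here
    return np.asarray(X.value).ravel(), gamma.value
```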

5 Application of the LMI-Based MMPC to the C3/C4 Splitter

The system considered in this work is the C3/C4 splitter studied in Porfírio et al. (2003) where more details can be found. The controlled outputs are the percentage of propane (C3) in the bottom stream \((y_1 )\) and the temperature of the first stage of the distillation column top section (\(y_2 )\). The manipulated inputs are the reflux flow rate to the top of the column (\(u_1 )\) and the flow rate of hot oil to the reboiler (\(u_2 )\). The feed flow rate (\(u_3 )\) and the temperature of the hot oil stream (\(u_4 )\) are two measured disturbances of the system.

The discrete set \(\Omega \) of possible models for the plant is composed of six models, each corresponding to a particular operating point of the distillation column. Each model can be represented by a second-order transfer function with time delay as follows:
$$\begin{aligned} G_{i,j} (s)=\frac{\left( {b_{i,j,0} +b_{i,j,1} s} \right) e^{-\theta _{i,j} s}}{1+a_{i,j,1} s+a_{i,j,2} s^{2}} \end{aligned}$$
(12)
The coefficients of the transfer functions defined in (12) for each of the models considered in Porfírio et al. (2003) are given in Table 1.
Table 1  Transfer function model coefficients of the C3/C4 splitter

| Coefficient | \(\Theta _{1}\) | \(\Theta _{2}\) | \(\Theta _{3}\) | \(\Theta _{4}\) | \(\Theta _{5}\) | \(\Theta _{6}\) |
| --- | --- | --- | --- | --- | --- | --- |
| \(b_{1,1,0}\) | 0.1094e-4 | 0.4220e-3 | 0.1532e-2 | 0.4884e-3 | 0.5647e-3 | 0.5656e-3 |
| \(b_{1,2,0}\) | -0.3824e-4 | -1.4050e-4 | -0.7811e-3 | -0.1862e-3 | -0.4780e-3 | -0.1452e-2 |
| \(b_{1,3,0}\) | 0.5668e-2 | 0.0521e-3 | 0.7698e-3 | 0.2710e-3 | 0.6786e-3 | 0.1125e-3 |
| \(b_{1,4,0}\) | -0.2174e-3 | -0.4740e-3 | -0.7701e-2 | -0.1781e-2 | -0.6481e-2 | -0.1642e-1 |
| \(b_{2,1,0}\) | -0.1116e-3 | -0.0063 | -0.0008 | -0.0025 | -0.0021 | -0.001235 |
| \(b_{2,2,0}\) | 0.0070 | 0.0045 | 0.0089 | 0.0029 | 0.0081 | 0.0020 |
| \(b_{2,3,0}\) | -0.0015 | -0.0012 | -0.0009 | -0.0018 | -0.7616e-3 | -0.0003829 |
| \(b_{2,4,0}\) | 0.1044 | 0.0575 | 0.0877 | 0.0281 | 0.0766 | 0.006970 |
| \(b_{1,1,1}\) | 0.4227e-4 | -0.2722e-3 | -0.0860e-2 | -0.1107e-3 | -0.3536e-3 | -0.2218e-3 |
| \(b_{1,2,1}\) | -1.2055e-4 | -2.1828e-4 | -0.3770e-3 | -0.1763e-3 | -0.1427e-3 | 0.7413e-4 |
| \(b_{1,3,1}\) | -0.0945e-2 | 0.150e-3 | 0.8929e-3 | 0.1854e-3 | -0.1149e-3 | -0.1138e-3 |
| \(b_{1,4,1}\) | -0.6856e-3 | -0.7365e-3 | -0.3716e-2 | -0.1686e-2 | -0.1935e-3 | -0.01324 |
| \(b_{2,1,1}\) | -0.0873e-3 | -0.0034 | -0.0034 | -0.0039 | -0.0019 | -0.001135 |
| \(b_{2,2,1}\) | 0.0013 | 0.0002 | 0.0064 | 0.0055 | 0.0053 | -0.0003 |
| \(b_{2,3,1}\) | -0.0004 | -0.0012 | -0.0012 | -0.0004 | -0.5323e-3 | 0.0001715 |
| \(b_{2,4,1}\) | 0.0194 | 0.0020 | 0.0634 | 0.0527 | 0.0725 | 0.001998 |
| \(a_{1,1,1}\) | 0.01090 | 1.6602 | 1.1913 | 0.9881 | 0.8165 | 3.4948 |
| \(a_{1,2,1}\) | 0.1342 | 0.1322 | 0.3402 | 0.2605 | 0.3417 | 2.6987 |
| \(a_{1,3,1}\) | 0.3975 | 0.2097 | 0.5633 | 0.5111 | 0.9003 | 0.4559 |
| \(a_{1,4,1}\) | 0.1342 | 0.1322 | 0.3402 | 0.2605 | 0.3417 | 1.4380 |
| \(a_{2,1,1}\) | 0.1317 | 2.0724 | 0.4017 | 0.8868 | 1.1676 | 1.6280 |
| \(a_{2,2,1}\) | 2.2605 | 0.8352 | 1.8959 | 0.8602 | 2.4190 | 2.4298 |
| \(a_{2,3,1}\) | 1.9247 | 0.8328 | 0.8844 | 1.8877 | 0.9818 | 0.2295 |
| \(a_{2,4,1}\) | 2.2605 | 0.8352 | 1.8959 | 0.8602 | 1.6731 | 0.1731 |
| \(a_{1,1,2}\) | 0.0243 | 0.2525 | 0.0912 | 0.0646 | 0.0809 | 0.5902 |
| \(a_{1,2,2}\) | 0.0111 | 0.0117 | 0.0181 | 0.0091 | 0.0259 | 0.4023 |
| \(a_{1,3,2}\) | 0.0850 | 0.0279 | 0.0359 | 0.0265 | 0.0712 | 0.03227 |
| \(a_{1,4,2}\) | 0.0111 | 0.0117 | 0.0181 | 0.0091 | 0.0259 | 0.1840 |
| \(a_{2,1,2}\) | 0.0073 | 0.2428 | 0.0365 | 0.0840 | 0.1069 | 0.09852 |
| \(a_{2,2,2}\) | 0.1366 | 0.0812 | 0.1946 | 0.0392 | 0.1761 | 0.06510 |
| \(a_{2,3,2}\) | 0.1353 | 0.0693 | 0.0799 | 0.0669 | 0.0549 | 0.02632 |
| \(a_{2,4,2}\) | 0.1366 | 0.0812 | 0.1946 | 0.0392 | 0.1242 | 0.01045 |

As in Porfírio et al. (2003), the time delays \(\theta _{i,j}(\Theta _n)\) are all equal to 1.
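To relate Table 1 to the model of Sect. 2, the sketch below writes the \(\Theta_1\) column as the \(2\times 4\) array of transfer functions of (12), with numerator and denominator coefficients in ascending powers of \(s\) and \(\theta_{i,j}=1\); the helper step_response_coeffs is the hypothetical one sketched in Sect. 2.

```python
# Sketch: the Theta_1 column of Table 1 written as the 2 x 4 array of transfer
# functions of Eq. (12), (numerator, denominator) in ascending powers of s.
theta_1 = {
    (1, 1): ([0.1094e-4, 0.4227e-4],   [1.0, 0.01090, 0.0243]),
    (1, 2): ([-0.3824e-4, -1.2055e-4], [1.0, 0.1342, 0.0111]),
    (1, 3): ([0.5668e-2, -0.0945e-2],  [1.0, 0.3975, 0.0850]),
    (1, 4): ([-0.2174e-3, -0.6856e-3], [1.0, 0.1342, 0.0111]),
    (2, 1): ([-0.1116e-3, -0.0873e-3], [1.0, 0.1317, 0.0073]),
    (2, 2): ([0.0070, 0.0013],         [1.0, 2.2605, 0.1366]),
    (2, 3): ([-0.0015, -0.0004],       [1.0, 1.9247, 0.1353]),
    (2, 4): ([0.1044, 0.0194],         [1.0, 2.2605, 0.1366]),
}
delay = 1.0  # theta_ij = 1 for all i, j, as stated above

# e.g. step-response coefficients of G_{2,2}(s) using the earlier sketch:
# d0, d_d, r = step_response_coeffs(*theta_1[(2, 2)])
```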

In the three simulations shown in this section, the MMPC defined in the previous section is implemented for the C3/C4 splitter system defined above. The model corresponding to \(\Theta _6 \) is considered to be the true plant model \(\Theta _T \) and the following tuning parameters were adopted for the controller:
$$\begin{aligned}&T=1\min ,\,m=3,\,np=60, Q_y ={\mathrm{diag}}(50,1),\\&\qquad R={\mathrm{diag}}(1,1)\times 10^{-5},\,Q_u ={\mathrm{diag}}(0,10^{-2}), \\&\Delta u_{\max } =\left[ {{\begin{array}{ll} {50}&{} {25} \\ \end{array} }} \right] ,u_{\min } =\left[ {{\begin{array}{ll} {2{,}000}&{} {1{,}200} \\ \end{array} }} \right] ,\\&\qquad \qquad \quad u_{\max } =\left[ {{\begin{array}{ll} {4{,}100}&{} {2{,}200} \\ \end{array} }} \right] . \end{aligned}$$
In the three cases considered here, \(u_1 \) does not have an optimizing target, while \(u_2 \) has a target set equal to 1850. The output zone limits are initially assumed to be \(y_{\min } =\left[ {{\begin{array}{ll} {0.85}&{} {48} \\ \end{array} }} \right] ^{T}\) and \(y_{\max } =\left[ {{\begin{array}{ll} {0.95}&{} {50} \\ \end{array} }} \right] ^{T}\), and are then moved to \(y_{\min } =\left[ {{\begin{array}{ll} {0.80}&{} {48} \\ \end{array} }} \right] ^{T}\) and \(y_{\max } =\left[ {{\begin{array}{ll} {0.85}&{} {50} \\ \end{array} }} \right] ^{T}\) at time step \(k=50\). The column starts from the initial operating point defined by \(u(0)=\left[ {{\begin{array}{ll} {3250}&{} {1950} \\ \end{array} }} \right] ^{T}\) and \(y(0)=\left[ {{\begin{array}{ll} {1.25}&{} {47.5} \\ \end{array} }} \right] ^{T}\).

For the implementation simulated here, the routine “mincx” of the MATLAB LMI Toolbox was used to solve Problem P2, and the routine “fmincon” of the MATLAB Optimization Toolbox, with the active-set algorithm, was used to solve Problem P1.

In order to provide a fair comparison of the performance and computational cost of the two controllers, the convergence parameters of routines mincx and fmincon were selected in such a way that the stopping criterion reached first is always the relative accuracy of the optimal objective, which was set to \(10^{-10}\).
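For reference, the kind of timing comparison reported below can be reproduced by accumulating the wall-clock time spent in the controller over the closed-loop run; a minimal sketch, with a hypothetical controller_step standing for one solution of Problem P1 or Problem P2, is:

```python
# Sketch: accumulate the wall-clock time spent in each controller over a
# closed-loop run of n_steps sampling periods (hypothetical solver handle).
import time

def timed_run(controller_step, n_steps):
    total = 0.0
    for k in range(n_steps):
        t0 = time.perf_counter()
        controller_step(k)        # one solution of Problem P1 or Problem P2
        total += time.perf_counter() - t0
    return total
```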

In the first simulation, it was considered that no disturbances enter the system, which means that \(\Delta u_3 =\Delta u_4 =0\) throughout the simulation time. Figures 1 and 2 show that the responses of the two controllers are not exactly the same, mainly in the early stages of the simulation period, but both controllers produce a satisfactory control of the system, as they efficiently bring the outputs inside their respective predefined control zones while \(u_2 \) rapidly reaches its target. On the other hand, the LMI-based controller has a significantly lower computational cost. Indeed, while it takes 131.0 s to run the simulation with the NLP-based controller defined through problem P1, it takes only 25.0 s to perform the same simulation with the LMI-based controller.
Fig. 1

Outputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line). Calculated set-points with LMI–MPC (blue coloured broken lines) and NLP–MPC (red coloured broken lines) and control zones (dashed lines) (Color figure online)

Fig. 2

Manipulated inputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line) and targets (dashed lines) (Color figure online)

In the second simulation case, disturbances corresponding to steps on inputs \(u_3\) and \(u_4 \) are added to the system along the simulation horizon. The interval between two consecutive steps and the amplitudes of the steps are generated randomly. The corresponding disturbances for this simulation case are presented in Fig. 3 and are the same for both controllers. The initial values of these disturbances are arbitrarily set to 0.
Fig. 3

Measured disturbances

Again, Figs. 4 and 5 show that, although the closed-loop responses are not the same, the performances of the two controllers are quite similar and both provide a satisfactory control of the system. Regarding the computational time, while it remained very close to that of the previous simulation for the LMI-based controller (27.9 s), it increased significantly for the NLP-based controller, reaching a total of 187.2 s to run the simulation.
Fig. 4

Outputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line). Calculated set-points with LMI–MPC (blue coloured broken lines) and NLP–MPC (red coloured broken lines) and bounds (dashed lines) (Color figure online)

Fig. 5

Manipulated inputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line) and targets (dashed lines) (Color figure online)

In the last simulation, correlated noise-type disturbances are added to the system. This kind of disturbance is representative of the disturbances of the real system. The corresponding disturbances for this simulation are presented in Fig. 6.
Fig. 6

Measured disturbances

In this simulation case too, Figs. 7 and 8 show that the responses of both controllers follow the same patterns and that the two controllers reach the control objectives with a satisfactory performance. Regarding the computational time, while it remains nearly constant for the LMI-based controller (below 27.6 s), it can increase significantly (to 421.5 s) for the NLP-based controller, depending on the level of the disturbances that affect the system. As a result, one can conclude that the more the system is disturbed, the larger the gap between the computational times of the two controllers and the more advantageous the LMI-based MMPC becomes.
Fig. 7

Outputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line). Calculated set-points with LMI–MPC (blue coloured broken lines) and NLP–MPC (red coloured broken lines) and bounds (dashed lines) (Color figure online)

Fig. 8

Manipulated inputs with LMI–MPC (blue coloured continuous line) and NLP–MPC (red coloured continuous line) and targets (dashed lines) (Color figure online)

6 Conclusion

In this work, an LMI formulation of the multi-model predictive control (MMPC) of an industrial C3/C4 splitter was presented. Compared to the classical NLP formulation proposed in Porfírio et al. (2003), the LMI approach allows a significant reduction in the computational time without penalizing the controller performance. For high-dimension systems, this method may therefore constitute a very interesting alternative to the conventional NLP-based MMPC.

Copyright information

© Brazilian Society for Automatics – SBA 2013