Abstract
This paper is concerned with controller design using complementary state and state-derivative feedback (CSSDF), as an extension of the work presented in Rossi et al. (in: Proceedings of XXI Congresso Brasileiro de Automática, Vitória, Brazil, pp 828–833, 2016). The novel approach is more general, in that it enables the direct design of a CSSDF controller, dispensing with the need for a preliminary state feedback design. For this purpose, a discrete-time design model is derived to describe the plant dynamics in terms of the state and state-derivative combinations available for feedback. The resulting model can be used with existing discrete-time state-space methods for the design of linear or nonlinear control laws. Simulation examples are presented to illustrate the proposed design method within a model predictive control formulation.
References
Abdelaziz, T. H. S. (2012). Parametric eigenstructure assignment using state-derivative feedback for linear systems. Journal of Vibration and Control, 18(12), 1809–1827.
Abdelaziz, T. H. S., & Valášek, M. (2004). Pole-placement for SISO linear systems by state derivative feedback. IEE Proceedings: Control Theory and Applications, 151(4), 377–385.
Assunção, E., Teixeira, M. C. M., Faria, F. A., Silva, N. A. P., & Cardim, R. (2007). Robust state derivative feedback LMI-based designs for multivariable linear systems. International Journal of Control, 80(8), 1260–1270.
Cardim, R., Teixeira, M. C. M., Faria, F. A., & Assunção, E. (2009). LMI-based digital redesign of linear time invariant systems with state derivative feedback. In Proceedings of IEEE multi-conference of systems and control, Saint Petersburg, Russia, 8–10 July 2009 (pp. 745–749).
Chua, L. O., Desoer, C. A., & Kuh, E. S. (1987). Linear and nonlinear circuits. New York: McGraw-Hill.
Duan, Y. F., Ni, Y. Q., & Ko, J. M. (2005). State-derivative feedback control of cable vibration using semiactive magnetorheological dampers. Computer-Aided Civil and Infrastructure Engineering, 20(6), 431–449.
Fallah, S., Khajepour, A., Fidan, B., Chen, S.-K., & Litkouhi, B. (2013). Vehicle optimal torque vectoring using state derivative feedback and linear matrix inequalities. IEEE Transactions on Vehicular Technology, 62(4), 1540–1552.
Faria, F. A., Assunção, E., Teixeira, M. C. M., Cardim, R., & Silva, N. A. P. (2009). Robust state derivative pole placement LMI-based designs for linear systems. International Journal of Control, 82(1), 1–12.
Fleming, J., Kouvaritakis, B., & Cannon, M. (2015). Robust tube MPC for linear systems with multiplicative uncertainty. IEEE Transaction on Automatic Control, 60(4), 1087–1092.
Franklin, G. F., Powell, J. D., & Workman, M. L. (1998). Digital control of dynamic systems (3rd ed.). Menlo Park, California: Addison Wesley.
Gautam, A., & Soh, Y. C. (2014). Constraint-softening in model predictive control with off-line-optimized admissible sets for systems with additive and multiplicative disturbances. Systems & Control Letters, 69, 65–72.
Gilbert, E. G., & Tan, K. T. (1991). Linear systems with state and control constraints: The theory and application of maximal output admissible sets. IEEE Transactions on Automatic Control, 36(9), 1008–1020.
Herceg, M., Kvasnica, M., Jones, C. N., & Morari, M. (2013). Multi-parametric toolbox 3.0. In Proceedings of the European control conference, Zürich, Switzerland, 17–19 July 2013 (pp. 502–510).
Kwon, W. H., & Han, S. (2005). Receding horizon control. London: Springer.
Lewis, F. L., & Syrmos, V. L. (1995). Optimal control (2nd ed.). New York: Wiley.
Maciejowski, J. M. (2002). Predictive control with constraints. Harlow: Prentice Hall.
Olalla, C., Leyva, R., Aroudi, A. E., & Queinnec, I. (2009). Robust LQR control for PWM converters: An LMI approach. IEEE Transaction on Industrial Electronics, 56(7), 2548–2558.
Quanser (2003). Vibration control: Active mass damper—One floor (AMD-1). Student manual (4th ed.). Ontario, Canada: Quanser Consulting Inc.
Rakovic, S. V., Kouvaritakis, B., Findeisen, R., & Cannon, M. (2012). Homothetic tube model predictive control. Automatica, 48(8), 1631–1638.
Reithmeier, E., & Leitmann, G. (2003). Robust vibration control of dynamical systems based on the derivative of the state. Archive of Applied Mechanics, 72(11), 856–864.
Rossi, F. Q. (2018). Discrete-time design of control laws using state derivative feedback. Ph.D. thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP.
Rossi, F. Q., Galvão, R. K. H., Teixeira, M. C. M., & Assunção, E. (2016). Discrete-time control design with complementary state and state-derivative feedback. In Proceedings of XXI Congresso Brasileiro de Automática, Vitória, Brazil, 03–07 October 2016 (pp. 828–833).
Rossi, F. Q., Galvão, R. K. H., Teixeira, M. C. M., & Assunção, E. (2018). Direct discrete time design of robust state derivative feedback control laws. International Journal of Control, 91(1), 70–84.
Rossi, F. Q., Teixeira, M. C. M., Galvão, R. K. H., & Assunção, E. (2013). Discrete time design of state derivative feedback control laws. In Proceedings of conference on control and fault-tolerant systems, Nice, France, 09–11 October 2013 (pp. 808–813).
Rossiter, J. A. (2003). Model-based predictive control: A practical approach. Boca Raton: CRC Press.
Silva, E. R. P., Assunção, E., Teixeira, M. C. M., & Buzachero, L. F. S. (2012). Less conservative control design for linear systems with polytopic uncertainties via state derivative feedback. Mathematical Problems in Engineering, 2012, 315049.
Silva, E. R. P., Assunção, E., Teixeira, M. C. M., & Cardim, R. (2013). Robust controller implementation via state derivative feedback in an active suspension system subject to fault. In Proceedings of conference on control and fault-tolerant systems, Nice, France, 9–11 October 2013 (pp. 752–757).
Tseng, Y. W., & Hsieh, J. G. (2013). Optimal control for a family of systems in novel state derivative space form with experiment in a double inverted pendulum system. Abstract and Applied Analysis, 2013, Article ID 715026.
Zeilinger, M. N., Morari, M., & Jones, C. N. (2014). Soft constrained model predictive control with robust stability guarantees. IEEE Transactions on Automatic Control, 59(5), 1190–1202.
Zhang, J., Ouyang, H., & Yang, J. (2014). Partial eigenstructure assignment for undamped vibration systems using acceleration and displacement feedback. Journal of Sound and Vibration, 333(1), 1–12.
Zhang, J., Ye, J., & Ouyang, H. (2016). Static output feedback for partial eigenstructure assignment of undamped vibration systems. Mechanical Systems and Signal Processing, 68–69, 555–561.
Acknowledgements
The main theoretical results presented in this paper were developed as part of the first author’s Ph.D. research (Rossi 2018), which was funded by CNPq under Doctoral Scholarship 140585/2014-1. The remaining authors acknowledge the support of FAPESP under Grant 2011/17610-0 and CNPq under Grants 303714/2014-0, 310798/2014-0, 301227/2017-9 (Research Fellowships).
Appendices
Appendix A: State-Derivative Feedback Design
Consider a vibration suppression system represented by a continuous-time state-space equation of the form (1) with
and
where \(m_1 = 100\) kg and \(m_2 = 10\) kg are two masses coupled by a spring-damper device with stiffness \(k_2 = 36\) kN/m and damping \(b_2 = 50\) Ns/m, with an additional spring-damper device between \(m_1\) and the ground with stiffness \(k_1 = 360\) kN/m and damping \(b_1 = 70\) Ns/m, as described in Abdelaziz and Valášek (2004) and adopted in Rossi et al. (2018). The states \(x_1\) and \(x_2\) represent the vertical displacements of \(m_1\) and \(m_2\), and \({\dot{x}}_1\) and \({\dot{x}}_2\) represent the corresponding velocities. The control input u is the force provided by an actuator between \(m_1\) and \(m_2\).
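As a sketch, the plant matrices can be assembled from the physical description above; the sign convention adopted for the actuator force (acting with opposite signs on \(m_1\) and \(m_2\)) is an assumption:

```python
import numpy as np

# Physical parameters stated in the text (Abdelaziz & Valasek 2004 example)
m1, m2 = 100.0, 10.0      # masses [kg]
k1, k2 = 360e3, 36e3      # stiffnesses [N/m]
b1, b2 = 70.0, 50.0       # damping coefficients [N s/m]

# State vector: x = [x1, x2, x1_dot, x2_dot]^T
A = np.array([
    [0.0,            0.0,     1.0,            0.0],
    [0.0,            0.0,     0.0,            1.0],
    [-(k1 + k2)/m1,  k2/m1,  -(b1 + b2)/m1,   b2/m1],
    [k2/m2,         -k2/m2,   b2/m2,         -b2/m2],
])
# Actuator force u between m1 and m2 (equal and opposite reaction assumed)
B = np.array([[0.0], [0.0], [-1.0/m1], [1.0/m2]])
```

Since both the stiffness and damping matrices of this two-mass arrangement are positive definite, the open-loop model is asymptotically stable.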
In Rossi et al. (2018), a feedback control example was presented employing the state derivative \({\dot{x}}\), which comprises the vertical velocities \({\dot{x}}_1\), \({\dot{x}}_2\) and accelerations \(\ddot{x}_1\), \(\ddot{x}_2\). More specifically, an optimal control was designed as
with \(\xi \) given by
and a gain matrix F obtained by minimizing a quadratic cost function J of the form
with positive-definite weight matrices S, R.
Herein, this problem can be solved by using the representation (10)–(11) proposed in Theorem 1, with \(w(kT) = {\dot{x}}(kT)\). In this case, the identity (6) holds with matrix \(\mathcal {H}\) given by
The cost function (45) can be rewritten as
and, thus, the optimal control is given by
with the gain F calculated as (Lewis and Syrmos 1995)
where P is the positive-definite solution of the following Riccati equation
As in Rossi et al. (2018), the cost function weights were set to \(S = \mathrm{diag}(1, 1, 1, 1, 0.01)\) and \(R = 0.01\) and two sampling periods (\(T = 0.01\) s and \(T = 0.04\) s) were employed in the discretization formula (4).
By using the MATLAB® Control System Toolbox™ to solve the Riccati equation (50) and calculate the gain F as in (49), the following gain matrices F were obtained for \(T = 0.01\) s and \(T = 0.04\) s, respectively:
which match the gain matrices F shown in Eqs. (33) and (34) of Rossi et al. (2018).
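A minimal sketch of this Riccati-based gain computation, using SciPy in place of the MATLAB toolbox, is given below. Note that it discretizes the plain four-state plant under zero-order hold rather than the augmented five-dimensional representation used in the paper, so the resulting gain will not coincide with the matrices reported above; the weights here are illustrative assumptions:

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_are

# Plant of Appendix A, x = [x1, x2, x1_dot, x2_dot]^T (sign of B assumed)
m1, m2, k1, k2, b1, b2 = 100.0, 10.0, 360e3, 36e3, 70.0, 50.0
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-(k1 + k2)/m1, k2/m1, -(b1 + b2)/m1, b2/m1],
              [k2/m2, -k2/m2, b2/m2, -b2/m2]])
B = np.array([[0.0], [0.0], [-1.0/m1], [1.0/m2]])

T = 0.01                                     # sampling period [s]
Phi, Gamma, *_ = cont2discrete((A, B, np.eye(4), np.zeros((4, 1))), T)

# Plain discrete LQR weights (assumed; the paper weights a 5-dim xi)
S = np.eye(4)
R = np.array([[0.01]])
P = solve_discrete_are(Phi, Gamma, S, R)     # Riccati solution
F = np.linalg.solve(R + Gamma.T @ P @ Gamma, Gamma.T @ P @ Phi)
```

The gain F obtained this way places the closed-loop eigenvalues of \(\varPhi - \varGamma F\) strictly inside the unit circle.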
Appendix B: Model Predictive Control Formulation
Consider a discrete-time state-space model of the form:
where \(x(kT) \in {\mathbb {R}}^{n}\), \(u(kT) \in {\mathbb {R}}^{m}\) are the state and input vectors at time kT, and \(\varPhi \in {\mathbb {R}}^{n \times n}\), \(\varGamma \in {\mathbb {R}}^{n \times m}\) are known matrices.
At each sampling time kT, the MPC formulation adopted herein involves the minimization of a cost function J of the form
subject to
where N is the prediction horizon, \(S, R, P_f\) are positive-definite weight matrices and \({\mathbb {X}}_f\) is a terminal constraint set. Following the usual notation adopted in MPC, the hat symbol is employed in \({\hat{x}}((k+i)T|kT)\) to represent the predicted value of \(x((k+i)T)\), calculated on the basis of the current value x(kT) and a sequence of future control actions \({\hat{u}}(kT|kT)^+, {\hat{u}}((k+1)T|kT)^+, \ldots , {\hat{u}}((k+i-1)T|kT)^+\), which is to be optimized. The first element of the optimized control sequence is applied to the plant, i.e. \(u(kT)^+ = {\hat{u}}^*(kT|kT)^+\), and the optimization procedure is repeated at the next sampling period, in a receding horizon manner.
The constraints (57) and (58) impose bounds on the control input u and on a constrained variable \(z = P x\), with the vector P defined so as to select the state variable on which the constraint is to be imposed. The recursive feasibility of the optimization problem throughout the control task and the asymptotic stability of the closed-loop system can be ensured by a suitable choice of the terminal set \({\mathbb {X}}_f\) and the terminal weight matrix \(P_f\). A possible choice consists of taking \({\mathbb {X}}_f\) as the maximal admissible set with respect to the input and state constraints under a terminal control law of the form \(u(kT)^+ = - K x(kT)\). If K is chosen such that the eigenvalues of \((\varPhi - \varGamma K)\) lie inside the unit circle, \({\mathbb {X}}_f\) will be defined by a finite number of linear inequalities (Gilbert and Tan 1991). For a given K, the terminal weight matrix \(P_f\) should be taken as the positive-definite solution of the following Lyapunov equation:
where \({\bar{\varPhi }} = \varPhi - \varGamma K\). In the present work, the stabilizing gain K was obtained as the solution of an infinite-horizon linear-quadratic regulator problem with cost weights S and R for the state and control.
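The computation of the stabilizing gain K and the terminal weight \(P_f\) can be sketched as follows, here on a hypothetical double-integrator model (plant, sampling period and weights are assumptions for illustration only):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Hypothetical double-integrator model, sampled with T = 0.1 s
T = 0.1
Phi = np.array([[1.0, T], [0.0, 1.0]])
Gamma = np.array([[T**2 / 2], [T]])
S = np.eye(2)                 # state weight (assumed)
R = np.array([[0.1]])         # control weight (assumed)

# Stabilizing gain K from the infinite-horizon LQR problem
P = solve_discrete_are(Phi, Gamma, S, R)
K = np.linalg.solve(R + Gamma.T @ P @ Gamma, Gamma.T @ P @ Phi)
Phibar = Phi - Gamma @ K      # closed-loop matrix under the terminal law

# Terminal weight P_f from the discrete Lyapunov equation
#   Phibar' P_f Phibar - P_f + (S + K' R K) = 0
Pf = solve_discrete_lyapunov(Phibar.T, S + K.T @ R @ K)
```

With this choice, \(P_f\) is the cost-to-go of the terminal control law, which is what makes the cost-decrease argument below go through.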
Owing to the use of a terminal control law, this formulation is known as “dual-mode predictive control”. For details concerning its recursive feasibility and stability guarantees, the reader is referred to Rossiter (2003). In what follows, the main ideas involved in the proof of these guarantees are presented. For brevity of notation, the \(+\) superscript in the control values and the sampling period T will be omitted.
Let \({\hat{u}}^*(k+i-1|k)\), \({\hat{x}}^*(k+i|k)\), \(i = 1, 2, \ldots , N\), denote the control and state sequences obtained as solution of the optimization problem at time k. The associated cost will be denoted by
By applying \(u(k) = {\hat{u}}^*(k|k)\) to the plant, the state evolves to
Now, consider the following candidate solution \({\hat{u}}^c(k+i|k+1)\), \({\hat{x}}^c(k+i+1|k+1)\), \(i = 1, 2, \ldots , N\), for the optimization problem at time \(k+1\):
This candidate solution satisfies the control and state constraints for \(i = 1, 2, \ldots , N-1\) because \({\hat{u}}^*(k+i|k)\) and \({\hat{x}}^*(k+i+1|k)\) were obtained under such constraints. Moreover, \({\hat{x}}^c(k+N|k+1) = {\hat{x}}^*(k+N|k) \in {\mathbb {X}}_f\), because \({\hat{x}}^*(k+N|k)\) was obtained under the terminal constraint (59). It is worth recalling that \({\mathbb {X}}_f\) is the maximal admissible set with respect to the input and state constraints under the terminal control law \(u(k) = - K x(k)\). Therefore, the control \({\hat{u}}^c(k+N|k+1) = -K {\hat{x}}^c(k+N|k+1)\) also satisfies the input constraint. Furthermore, since the maximal admissible set is invariant (Gilbert and Tan 1991), the state \({\hat{x}}^c(k+N+1|k+1) = (\varPhi - \varGamma K) {\hat{x}}^c(k+N|k+1)\) will lie in \({\mathbb {X}}_f\). Therefore, it can be concluded that the candidate solution is feasible, because it satisfies the control, state and terminal constraints.
Let \(J^c(k+1)\) denote the cost associated with the candidate solution, i.e.
By using (63), (64), the cost \(J^c(k+1)\) can be expressed as
Moreover, by using (65), (66) and noting that \({\hat{x}}^c(k+N|k+1) = {\hat{x}}^*(k+N|k)\), it follows that
where \({\bar{\varPhi }} = \varPhi - \varGamma K\).
After comparing (61) and (69), one can write
In view of (60), the identity (70) simplifies to
At this point, it is worth noting that the minimal cost \(J^*(k+1)\) will be smaller than or equal to the cost \(J^c(k+1)\) obtained with the candidate solution, i.e.
The inequality (72) shows that \(J^*(k)\) is a non-increasing sequence of cost values, which is lower-bounded by zero. Therefore, as noted in Kwon and Han (2005), it can be concluded that \(J^*(k)\) is a convergent sequence and thus
From (72) and (73), it can be concluded that
Since S and R are positive-definite matrices, it follows that
Finally, from (62) and (75), it can be concluded that x(k) converges to the origin as \(k \rightarrow \infty \).
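The monotone cost decrease established above can be checked numerically. The sketch below solves the unconstrained finite-horizon problem in closed form (no input, state or terminal-set constraints, so feasibility is trivial) on a hypothetical double-integrator plant; all numerical values are assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Hypothetical double-integrator plant
T = 0.1
Phi = np.array([[1.0, T], [0.0, 1.0]])
Gamma = np.array([[T**2 / 2], [T]])
n, m, N = 2, 1, 5
S = np.eye(n)
R = np.array([[0.1]])

# Terminal ingredients: LQR gain K and Lyapunov terminal weight P_f
P = solve_discrete_are(Phi, Gamma, S, R)
K = np.linalg.solve(R + Gamma.T @ P @ Gamma, Gamma.T @ P @ Phi)
Phibar = Phi - Gamma @ K
Pf = solve_discrete_lyapunov(Phibar.T, S + K.T @ R @ K)

# Batch prediction matrices: [x(1);...;x(N)] = Fb x0 + Gb U
Fb = np.vstack([np.linalg.matrix_power(Phi, i) for i in range(1, N + 1)])
Gb = np.zeros((n * N, m * N))
for i in range(N):
    for j in range(i + 1):
        Gb[n*i:n*(i+1), m*j:m*(j+1)] = np.linalg.matrix_power(Phi, i - j) @ Gamma
Qb = np.kron(np.eye(N), S)
Qb[-n:, -n:] = Pf              # terminal weight P_f replaces S at step N
Rb = np.kron(np.eye(N), R)

def solve_mpc(x0):
    """Unconstrained finite-horizon problem, solved in closed form."""
    H = Gb.T @ Qb @ Gb + Rb
    f = Gb.T @ Qb @ Fb @ x0
    U = -np.linalg.solve(H, f)
    X = Fb @ x0 + Gb @ U
    J = (x0.T @ S @ x0 + X.T @ Qb @ X + U.T @ Rb @ U).item()
    return J, U[:m]

x = np.array([[1.0], [0.0]])
costs = []
for _ in range(20):
    J, u = solve_mpc(x)
    costs.append(J)
    x = Phi @ x + Gamma @ u    # apply first control, receding horizon
```

Plotting or inspecting `costs` confirms that \(J^*(k)\) decreases monotonically along the closed-loop trajectory, in agreement with inequality (72).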
Appendix C: Design of Complementary State and State-Derivative Feedback Controller by Nominal Equivalence
Consider a system described by a continuous-time model of the form (1), with the control actions applied according to (2). Therefore, the state derivative of the system (1) at time \(t = kT\) is given by (5).
Assume that a gain matrix \(F \in {\mathbb {R}}^{m \times n}\) has been designed in order to obtain a suitable SF control law of the form
for system (5). Moreover, consider that the state vector x(kT) is partitioned as
where \(x_{1}(kT)\) and \({\dot{x}}_{2}(kT)\) are measured, with an associated partition of the \(\varPhi _\mathrm{c}\), \(\varGamma _\mathrm{c}\) matrices as:
Also, assume that matrix \(\varPhi _{c,22}\) is non-singular. Then, a control law implemented as (Rossi et al. 2016)
with
will generate control values that match those provided by the state feedback law (77).
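This nominal equivalence can be verified numerically. In the sketch below, the closed-form gain expressions of Rossi et al. (2016) are not reproduced; instead, \(x_2\) is reconstructed from the measured \(x_1\) and \({\dot{x}}_2\), and the resulting implicit dependence on u is solved directly. The random system, the sign convention \(u = -F x\) and the invertibility of the indicated matrices are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 2, 2, 1          # sizes of x1, x2 and u (assumed)
n = n1 + n2
Ac = rng.standard_normal((n, n))   # continuous-time plant matrix (Phi_c)
Bc = rng.standard_normal((n, m))   # input matrix (Gamma_c)
F = rng.standard_normal((m, n))    # previously designed SF gain

# Partitions, following the text: x = [x1; x2], with x1 and x2_dot measured
A21, A22 = Ac[n1:, :n1], Ac[n1:, n1:]
B2 = Bc[n1:, :]
F1, F2 = F[:, :n1], F[:, n1:]
A22inv = np.linalg.inv(A22)        # Phi_c22 assumed non-singular

# One consistent snapshot of the plant under the SF law u = -F x
x = rng.standard_normal((n, 1))
u_sf = -F @ x                      # reference SF control value
xdot = Ac @ x + Bc @ u_sf
x1, x2dot = x[:n1], xdot[n1:]

# CSSDF law: reconstruct x2 from (x1, x2_dot), solve the implicit loop in u
M = np.eye(m) - F2 @ A22inv @ B2
rhs = (F2 @ A22inv @ A21 - F1) @ x1 - F2 @ A22inv @ x2dot
u_cssdf = np.linalg.solve(M, rhs)
```

Substituting \(x_2 = \varPhi_{c,22}^{-1}({\dot{x}}_2 - \varPhi_{c,21} x_1 - \varGamma_{c,2} u)\) into \(u = -F_1 x_1 - F_2 x_2\) and collecting the u terms yields exactly the linear system solved in the last line, so `u_cssdf` coincides with `u_sf`.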
Cite this article
Rossi, F.Q., Galvão, R.K.H., Teixeira, M.C.M. et al. Direct Design of Controllers Using Complementary State and State-Derivative Feedback. J Control Autom Electr Syst 30, 181–193 (2019). https://doi.org/10.1007/s40313-018-00436-9