
Direct Design of Controllers Using Complementary State and State-Derivative Feedback


Abstract

This paper is concerned with controller design using complementary state and state-derivative feedback (CSSDF), as an extension of the work presented in Rossi et al. (in: Proceedings of XXI Congresso Brasileiro de Automática, Vitória, Brazil, pp 828–833, 2016). The novel approach is more general, in that it enables the direct design of a CSSDF controller, dispensing with the need for a preliminary state feedback design. For this purpose, a discrete-time design model is derived to describe the plant dynamics in terms of the state and state-derivative combinations available for feedback. The resulting model can be used with existing discrete-time state-space methods for the design of linear or nonlinear control laws. Simulation examples are presented to illustrate the proposed design method within a model predictive control formulation.


References

  • Abdelaziz, T. H. S. (2012). Parametric eigenstructure assignment using state-derivative feedback for linear systems. Journal of Vibration and Control, 18(12), 1809–1827.

  • Abdelaziz, T. H. S., & Valášek, M. (2004). Pole-placement for SISO linear systems by state derivative feedback. IEE Proceedings: Control Theory and Applications, 151(4), 377–385.

  • Assunção, E., Teixeira, M. C. M., Faria, F. A., Silva, N. A. P., & Cardim, R. (2007). Robust state derivative feedback LMI-based designs for multivariable linear systems. International Journal of Control, 80(8), 1260–1270.

  • Cardim, R., Teixeira, M. C. M., Faria, F. A., & Assunção, E. (2009). LMI-based digital redesign of linear time invariant systems with state derivative feedback. In Proceedings of the IEEE multi-conference on systems and control, Saint Petersburg, Russia, 8–10 July 2009 (pp. 745–749).

  • Chua, L. O., Desoer, C. A., & Kuh, E. S. (1987). Linear and nonlinear circuits. New York: McGraw-Hill.

  • Duan, Y. F., Ni, Y. Q., & Ko, J. M. (2005). State-derivative feedback control of cable vibration using semiactive magnetorheological dampers. Computer-Aided Civil and Infrastructure Engineering, 20(6), 431–449.

  • Fallah, S., Khajepour, A., Fidan, B., Chen, S.-K., & Litkouhi, B. (2013). Vehicle optimal torque vectoring using state derivative feedback and linear matrix inequalities. IEEE Transactions on Vehicular Technology, 62(4), 1540–1552.

  • Faria, F. A., Assunção, E., Teixeira, M. C. M., Cardim, R., & Silva, N. A. P. (2009). Robust state derivative pole placement LMI-based designs for linear systems. International Journal of Control, 82(1), 1–12.

  • Fleming, J., Kouvaritakis, B., & Cannon, M. (2015). Robust tube MPC for linear systems with multiplicative uncertainty. IEEE Transactions on Automatic Control, 60(4), 1087–1092.

  • Franklin, G. F., Powell, J. D., & Workman, M. L. (1998). Digital control of dynamic systems (3rd ed.). Menlo Park, CA: Addison-Wesley.

  • Gautam, A., & Soh, Y. C. (2014). Constraint-softening in model predictive control with off-line-optimized admissible sets for systems with additive and multiplicative disturbances. Systems & Control Letters, 69, 65–72.

  • Gilbert, E. G., & Tan, K. T. (1991). Linear systems with state and control constraints: The theory and application of maximal output admissible sets. IEEE Transactions on Automatic Control, 36(9), 1008–1020.

  • Herceg, M., Kvasnica, M., Jones, C. N., & Morari, M. (2013). Multi-parametric toolbox 3.0. In Proceedings of the European control conference, Zürich, Switzerland, 17–19 July 2013 (pp. 502–510).

  • Kwon, W. H., & Han, S. (2005). Receding horizon control. London: Springer.

  • Lewis, F. L., & Syrmos, V. L. (1995). Optimal control (2nd ed.). New York: Wiley.

  • Maciejowski, J. M. (2002). Predictive control with constraints. Harlow: Prentice Hall.

  • Olalla, C., Leyva, R., Aroudi, A. E., & Queinnec, I. (2009). Robust LQR control for PWM converters: An LMI approach. IEEE Transactions on Industrial Electronics, 56(7), 2548–2558.

  • Quanser (2003). Vibration control: Active mass damper—One floor (AMD-1). Student manual (4th ed.). Ontario, Canada: Quanser Consulting Inc.

  • Rakovic, S. V., Kouvaritakis, B., Findeisen, R., & Cannon, M. (2012). Homothetic tube model predictive control. Automatica, 48(8), 1631–1638.

  • Reithmeier, E., & Leitmann, G. (2003). Robust vibration control of dynamical systems based on the derivative of the state. Archive of Applied Mechanics, 72(11), 856–864.

  • Rossi, F. Q. (2018). Discrete-time design of control laws using state derivative feedback. Ph.D. thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, Brazil.

  • Rossi, F. Q., Galvão, R. K. H., Teixeira, M. C. M., & Assunção, E. (2016). Discrete-time control design with complementary state and state-derivative feedback. In Proceedings of XXI Congresso Brasileiro de Automática, Vitória, Brazil, 03–07 October 2016 (pp. 828–833).

  • Rossi, F. Q., Galvão, R. K. H., Teixeira, M. C. M., & Assunção, E. (2018). Direct discrete time design of robust state derivative feedback control laws. International Journal of Control, 91(1), 70–84.

  • Rossi, F. Q., Teixeira, M. C. M., Galvão, R. K. H., & Assunção, E. (2013). Discrete time design of state derivative feedback control laws. In Proceedings of the conference on control and fault-tolerant systems, Nice, France, 09–11 October 2013 (pp. 808–813).

  • Rossiter, J. A. (2003). Model-based predictive control: A practical approach. Boca Raton: CRC Press.

  • Silva, E. R. P., Assunção, E., Teixeira, M. C. M., & Buzachero, L. F. S. (2012). Less conservative control design for linear systems with polytopic uncertainties via state derivative feedback. Mathematical Problems in Engineering, 2012, 315049.

  • Silva, E. R. P., Assunção, E., Teixeira, M. C. M., & Cardim, R. (2013). Robust controller implementation via state derivative feedback in an active suspension system subject to fault. In Proceedings of the conference on control and fault-tolerant systems, Nice, France, 9–11 October 2013 (pp. 752–757).

  • Tseng, Y. W., & Hsieh, J. G. (2013). Optimal control for a family of systems in novel state derivative space form with experiment in a double inverted pendulum system. Abstract and Applied Analysis, 2013, 715026.

  • Zeilinger, M. N., Morari, M., & Jones, C. N. (2014). Soft constrained model predictive control with robust stability guarantees. IEEE Transactions on Automatic Control, 59(5), 1190–1202.

  • Zhang, J., Ouyang, H., & Yang, J. (2014). Partial eigenstructure assignment for undamped vibration systems using acceleration and displacement feedback. Journal of Sound and Vibration, 333(1), 1–12.

  • Zhang, J., Ye, J., & Ouyang, H. (2016). Static output feedback for partial eigenstructure assignment of undamped vibration systems. Mechanical Systems and Signal Processing, 68–69, 555–561.

Acknowledgements

The main theoretical results presented in this paper were developed as part of the first author’s Ph.D. research (Rossi 2018), which was funded by CNPq under Doctoral Scholarship 140585/2014-1. The remaining authors acknowledge the support of FAPESP under Grant 2011/17610-0 and CNPq under Grants 303714/2014-0, 310798/2014-0, 301227/2017-9 (Research Fellowships).

Corresponding author

Correspondence to Fernanda Quelho Rossi.

Appendices

Appendix A: State-Derivative Feedback Design

Consider a vibration suppression system represented by a continuous-time state-space equation of the form (1) with

$$\begin{aligned} x(t) = \! \left[ \begin{array}{cccc} x_1(t)&x_2(t)&{\dot{x}}_{1}(t)&{\dot{x}}_{2}(t) \end{array} \right] ^\mathrm{T} \; \end{aligned}$$
(41)

and

$$\begin{aligned} \varPhi _{c}= & {} \left[ \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ \dfrac{-k_1-k_2}{m_1} &{} \dfrac{k_2}{m_1} &{} \dfrac{-b_1-b_2}{m_1} &{} \dfrac{b_2}{m_1} \\ \dfrac{k_2}{m_2} &{} \dfrac{-k_2}{m_2} &{} \dfrac{b_2}{m_2} &{} \dfrac{-b_2}{m_2} \end{array} \right] , \; \; \;\nonumber \\ \varGamma _{c}= & {} \left[ \begin{array}{c} 0 \\ 0 \\ -\dfrac{1}{m_1} \\ \dfrac{1}{m_2} \end{array} \right] \; \end{aligned}$$
(42)

where \(m_1 = 100\) kg and \(m_2 = 10\) kg are two masses coupled by a spring-damper device with stiffness \(k_2 = 36\) kN/m and damping \(b_2 = 50\) Ns/m, with an additional spring-damper device between \(m_1\) and the ground with stiffness \(k_1 = 360\) kN/m and damping \(b_1 = 70\) Ns/m, as described in Abdelaziz and Valášek (2004) and adopted in Rossi et al. (2018). The states \(x_1\) and \(x_2\) represent the vertical displacements of \(m_1\) and \(m_2\), and \({\dot{x}}_1\) and \({\dot{x}}_2\) represent the corresponding velocities. The control input u is the force provided by an actuator placed between \(m_1\) and \(m_2\).
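For illustration, the numerical matrices in (42) can be assembled directly from the parameter values quoted above. The following is a minimal sketch (not part of the original paper), using numpy, with the stiffnesses converted to N/m:

```python
# Sketch of the plant matrices (42) with the parameter values quoted above.
import numpy as np

m1, m2 = 100.0, 10.0          # masses [kg]
k1, k2 = 360e3, 36e3          # stiffnesses [N/m]
b1, b2 = 70.0, 50.0           # damping coefficients [N s/m]

Phi_c = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [(-k1 - k2) / m1,  k2 / m1, (-b1 - b2) / m1,  b2 / m1],
    [ k2 / m2,        -k2 / m2,  b2 / m2,        -b2 / m2],
])
Gamma_c = np.array([[0.0], [0.0], [-1.0 / m1], [1.0 / m2]])
```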

In Rossi et al. (2018), a feedback control example was presented employing the state derivative \({\dot{x}}\), which comprises the vertical velocities \({\dot{x}}_1\), \({\dot{x}}_2\) and accelerations \(\ddot{x}_1\), \(\ddot{x}_2\). More specifically, an optimal control was designed as

$$\begin{aligned} u(kT)^{+} = F \xi (kT) \end{aligned}$$
(43)

with \(\xi \) given by

$$\begin{aligned} \xi (kT) \triangleq \left[ \begin{array}{c} {\dot{x}}(kT) \\ u((k-1)T)^{+} \end{array} \right] \end{aligned}$$
(44)

and a gain matrix F obtained by minimizing a quadratic cost function J of the form

$$\begin{aligned} J = \sum ^{\infty }_{k = 0} \left[ ||\xi (kT)||_S^2 + ||u(kT)^+||^2_R \right] \end{aligned}$$
(45)

with positive-definite weight matrices S, R.

Herein, this problem can be solved by using the representation (10)–(11) proposed in Theorem 1, with \(w(kT) = {\dot{x}}(kT)\). In this case, the identity (6) holds with matrix \(\mathcal {H}\) given by

$$\begin{aligned} \mathcal {H} = \left[ \begin{array}{cc} 0_{4 \times 4} &{} I_{4 \times 4}\\ \end{array} \right] \end{aligned}$$
(46)

The cost function (45) can be rewritten as

$$\begin{aligned} J = \sum ^{\infty }_{k = 0} \left[ ||\eta (kT)||_S^2 + ||u(kT)^+||^2_R \right] \end{aligned}$$
(47)

and, thus, the optimal control is given by

$$\begin{aligned} u(kT)^{+} = F \eta (kT) \end{aligned}$$
(48)

with the gain F calculated as (Lewis and Syrmos 1995)

$$\begin{aligned} F = - (B^\mathrm{T} P B + R)^{-1} (B^\mathrm{T} P A) \end{aligned}$$
(49)

where P is the positive-definite solution of the following Riccati equation

$$\begin{aligned} A^\mathrm{T} P A - P - (A^\mathrm{T} P B)(B^\mathrm{T} P B + R)^{-1} (B^\mathrm{T} P A) + S = 0 \end{aligned}$$
(50)

As in Rossi et al. (2018), the cost function weights were set to \(S = \mathrm{diag}(1, 1, 1, 1, 0.01)\) and \(R = 0.01\) and two sampling periods (\(T = 0.01\) s and \(T = 0.04\) s) were employed in the discretization formula (4).

By using the MATLAB® Control System Toolbox™ to solve the Riccati equation (50) and calculate the gain F as in (49), the following gain matrices were obtained for \(T = 0.01\) s and \(T = 0.04\) s, respectively:

$$\begin{aligned} F_{(T\,=\,0.01\,\mathrm{s})}= & {} \left[ \begin{array}{ccccc} 101.8&-221.6&-0.074&-2.70&0.27 \end{array} \right] \; \end{aligned}$$
(51)
$$\begin{aligned} F_{(T\,=\,0.04\,\mathrm{s})}= & {} \left[ \begin{array}{ccccc} 71.6&-108.7&-0.29&-3.33&0.33 \end{array} \right] \; \end{aligned}$$
(52)

which match the gain matrices F shown in Eqs. (33) and (34) of Rossi et al. (2018).
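For reference, equations (49) and (50) can also be solved numerically along the lines of the MATLAB computation reported above. The sketch below is not the authors' code; it uses scipy, and the matrices A and B stand for the design-model matrices of Theorem 1 (not reproduced in this appendix) for the chosen sampling period:

```python
# Sketch: Riccati equation (50) and gain (49) for the design model of Theorem 1.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, S, R):
    """Return F such that u(kT)^+ = F eta(kT), as in (48)."""
    P = solve_discrete_are(A, B, S, R)                      # positive-definite solution of (50)
    F = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)      # gain (49)
    return F

S = np.diag([1.0, 1.0, 1.0, 1.0, 0.01])
R = np.array([[0.01]])
# F = lqr_gain(A, B, S, R)   # A, B: design model for T = 0.01 s or T = 0.04 s
```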

Appendix B: Model Predictive Control Formulation

Consider a discrete-time state-space model of the form:

$$\begin{aligned} x((k+1)T) = \varPhi x(kT) + \varGamma u(kT) \end{aligned}$$
(53)

where \(x(kT) \in {\mathbb {R}}^{n}\), \(u(kT) \in {\mathbb {R}}^{m}\) are the state and input vectors at time kT, and \(\varPhi \in {\mathbb {R}}^{n \times n}\), \(\varGamma \in {\mathbb {R}}^{n \times m}\) are known matrices.

At each sampling time kT, the MPC formulation adopted herein involves the minimization of a cost function J of the form

$$\begin{aligned} J= & {} {||{\hat{x}}((k+N)T|kT)||^2_{P_f}} \nonumber \\&\quad +\sum _{i = 1}^N ||{\hat{x}}((k+i)T|kT)||^2_S + ||{\hat{u}}((k+i-1)T|kT)^+||^2_R \end{aligned}$$
(54)

subject to

$$\begin{aligned}&{\hat{x}}((k+i)T|kT) = \varPhi {\hat{x}}((k+i-1)T|kT) \nonumber \\&\quad \qquad \qquad \qquad \qquad \quad +\,\varGamma {\hat{u}}((k+i-1)T|kT)^+, \; i = 1, 2, \ldots , N \end{aligned}$$
(55)
$$\begin{aligned}&{\hat{x}}(kT|kT) = x(kT) \end{aligned}$$
(56)
$$\begin{aligned}&u_{\min } \le {\hat{u}}((k+i-1)T|kT)^+ \le u_{\max }, \; i = 1, 2, \ldots , N \end{aligned}$$
(57)
$$\begin{aligned}&z_{\min } \le P \, {\hat{x}}((k+i)T|kT) \le z_{\max }, \; i = 1, 2, \ldots , N \end{aligned}$$
(58)
$$\begin{aligned}&{\hat{x}}((k+N)T|kT) \, {\in {\mathbb {X}}_f} \end{aligned}$$
(59)

where N is the prediction horizon, \(S, R, P_f\) are positive-definite weight matrices and \({\mathbb {X}}_f\) is a terminal constraint set. Following the usual MPC notation, the hat symbol in \({\hat{x}}((k+i)T|kT)\) denotes the predicted value of \(x((k+i)T)\), calculated on the basis of the current state x(kT) and a sequence of future control actions \({\hat{u}}(kT|kT)^+,\) \({\hat{u}}((k+1)T|kT)^+, \ldots ,\) \({\hat{u}}((k+i-1)T|kT)^+\), which is to be optimized. The first element of the optimized control sequence is applied to the plant, i.e. \(u(kT)^+ = {\hat{u}}^*(kT|kT)^+\), and the optimization is repeated at the next sampling instant, in a receding-horizon manner.
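As an illustration of the formulation (54)–(59), a minimal sketch of the optimization problem solved at each sampling time is given below. It is not the authors' implementation; cvxpy is used for convenience, and the terminal set \({\mathbb {X}}_f\) is assumed to be available as a polyhedron \(\{x : H_f x \le h_f\}\), with the arguments Hf, hf being hypothetical inputs:

```python
# Sketch of the QP solved at each sampling time kT: minimization of (54) s.t. (55)-(59).
import numpy as np
import cvxpy as cp

def mpc_step(x0, Phi, Gamma, S, R, Pf, N, u_min, u_max, P, z_min, z_max, Hf, hf):
    n, m = Phi.shape[0], Gamma.shape[1]
    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    cost = cp.quad_form(x[:, N], Pf)                                # terminal term of (54)
    constr = [x[:, 0] == x0]                                        # (56)
    for i in range(N):
        cost += cp.quad_form(x[:, i + 1], S) + cp.quad_form(u[:, i], R)
        constr += [x[:, i + 1] == Phi @ x[:, i] + Gamma @ u[:, i],  # prediction model (55)
                   u_min <= u[:, i], u[:, i] <= u_max,              # input bounds (57)
                   z_min <= P @ x[:, i + 1],
                   P @ x[:, i + 1] <= z_max]                        # state bounds (58)
    constr += [Hf @ x[:, N] <= hf]                                  # terminal constraint (59)
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value                                            # receding-horizon action u(kT)^+
```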

The constraints (57) and (58) impose bounds on the control input u and on a constrained variable \(z = P \, x\), with the vector P defined according to the state on which the constraint is to be imposed. The recursive feasibility of the optimization problem throughout the control task and the asymptotic stability of the closed-loop system can be ensured by choosing the terminal set \({\mathbb {X}}_f\) and the terminal weight matrix \(P_f\) in a suitable manner. A possible choice consists of taking \({\mathbb {X}}_f\) as the maximal admissible set with respect to the input and state constraints under a terminal control law of the form \(u(kT)^+ = - K x(kT)\). If K is chosen such that the eigenvalues of \((\varPhi - \varGamma K)\) lie inside the unit circle, \({\mathbb {X}}_f\) is defined by a finite number of linear inequalities (Gilbert and Tan 1991). For a given K, the terminal weight matrix \(P_f\) should be taken as the positive-definite solution of the following Lyapunov equation:

$$\begin{aligned} {\bar{\varPhi }}^\mathrm{T} P_f {\bar{\varPhi }} - P_f + {\bar{\varPhi }}^\mathrm{T} S {\bar{\varPhi }} + K^\mathrm{T} R K = 0 \end{aligned}$$
(60)

where \({\bar{\varPhi }} = \varPhi - \varGamma K\). In the present work, the stabilizing gain K was obtained as the solution of an infinite-horizon linear-quadratic regulator problem with cost weights S and R for the state and control.
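A possible numerical procedure for obtaining K and \(P_f\), consistent with the choices described above, is sketched below. It is an illustration rather than the authors' code, using scipy to solve the Riccati and Lyapunov equations:

```python
# Sketch: terminal gain K (infinite-horizon LQR) and terminal weight Pf from (60).
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def terminal_ingredients(Phi, Gamma, S, R):
    # LQR gain for the terminal law u = -K x, with weights S and R
    P = solve_discrete_are(Phi, Gamma, S, R)
    K = np.linalg.solve(Gamma.T @ P @ Gamma + R, Gamma.T @ P @ Phi)
    Phi_bar = Phi - Gamma @ K
    # Lyapunov equation (60): Phi_bar' Pf Phi_bar - Pf + Phi_bar' S Phi_bar + K' R K = 0
    Q_bar = Phi_bar.T @ S @ Phi_bar + K.T @ R @ K
    Pf = solve_discrete_lyapunov(Phi_bar.T, Q_bar)
    return K, Pf
```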

Owing to the use of a terminal control law, this formulation is known as “dual-mode predictive control”. For details concerning its recursive feasibility and stability guarantees, the reader is referred to Rossiter (2003). In what follows, the main ideas involved in establishing these guarantees are presented. For brevity of notation, the \(+\) superscript on the control values and the sampling period T will be omitted.

Let \({\hat{u}}^*(k+i-1|k)\), \({\hat{x}}^*(k+i|k)\), \(i = 1, 2, \ldots , N\), denote the control and state sequences obtained as solution of the optimization problem at time k. The associated cost will be denoted by

$$\begin{aligned}&J^*(k) = ||{\hat{x}}^*(k+N|k)||^2_{P_f} \nonumber \\&\qquad \quad \quad +\sum _{i = 1}^N ||{\hat{x}}^*(k+i|k)||^2_S + ||{\hat{u}}^*(k+i-1|k)||^2_R \end{aligned}$$
(61)

By applying \(u(k) = {\hat{u}}^*(k|k)\) to the plant, the state evolves to

$$\begin{aligned} x(k+1) = {\hat{x}}^*(k+1|k) \end{aligned}$$
(62)

Now, consider the following candidate solution \({\hat{u}}^c(k+i|k+1)\), \({\hat{x}}^c(k+i+1|k+1)\), \(i = 1, 2, \ldots , N\), for the optimization problem at time \(k+1\):

$$\begin{aligned}&{\hat{u}}^c(k+i|k+1) = {\hat{u}}^*(k+i|k), i = 1, 2, \ldots , N \!\! - \! 1 \end{aligned}$$
(63)
$$\begin{aligned}&{\hat{x}}^c(k+i+1|k+1) = {\hat{x}}^*(k+i+1|k), i = 1, 2, \ldots , N \!\! - \! 1 \end{aligned}$$
(64)
$$\begin{aligned}&{\hat{u}}^c(k+N|k+1) = -K {\hat{x}}^c(k+N|k+1) \end{aligned}$$
(65)
$$\begin{aligned}&{\hat{x}}^c(k+N+1|k+1) = \varPhi {\hat{x}}^c(k+N|k+1) \nonumber \\&\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;\; +\,\varGamma {\hat{u}}^c(k+N|k+1) \end{aligned}$$
(66)

This candidate solution satisfies the control and state constraints for \(i = 1, 2, \ldots , N-1\) because \({\hat{u}}^*(k+i|k)\) and \({\hat{x}}^*(k+i+1|k)\) were obtained under such constraints. Moreover, \({\hat{x}}^c(k+N|k+1) = {\hat{x}}^*(k+N|k) \in {\mathbb {X}}_f\), because \({\hat{x}}^*(k+N|k)\) was obtained under the terminal constraint (59). It is worth recalling that \({\mathbb {X}}_f\) is the maximal admissible set with respect to the input and state constraints under the terminal control law \(u(k) = - K x(k)\). Therefore, the control \({\hat{u}}^c(k+N|k+1) = -K {\hat{x}}^c(k+N|k+1)\) also satisfies the input constraint. Furthermore, since the maximal admissible set is invariant (Gilbert and Tan 1991), the state \({\hat{x}}^c(k+N+1|k+1) = (\varPhi - \varGamma K) {\hat{x}}^c(k+N|k+1)\) will lie in \({\mathbb {X}}_f\). Therefore, it can be concluded that the candidate solution is feasible, because it satisfies the control, state and terminal constraints.

Let \(J^c(k+1)\) denote the cost associated with the candidate solution, i.e.

$$\begin{aligned} J^c(k+1)&= ||{\hat{x}}^c(k+N+1|k+1)||^2_{P_f} \nonumber \\&\quad +\sum _{i = 1}^N ||{\hat{x}}^c(k+i+1|k+1)||^2_S + ||{\hat{u}}^c(k+i|k+1)||^2_R \end{aligned}$$
(67)

By using (63), (64), the cost \(J^c(k+1)\) can be expressed as

$$\begin{aligned} J^c(k+1)&= ||{\hat{x}}^c(k+N+1|k+1)||^2_{P_f} \nonumber \\&\quad +\sum _{i = 1}^{N-1} ||{\hat{x}}^*(k+i+1|k)||^2_S + ||{\hat{u}}^*(k+i|k)||^2_R \nonumber \\&\quad + ||{\hat{x}}^c(k+N+1|k+1)||^2_S + ||{\hat{u}}^c(k+N|k+1)||^2_R \end{aligned}$$
(68)

Moreover, by using (65), (66) and noting that \({\hat{x}}^c(k+N|k+1) = {\hat{x}}^*(k+N|k)\), it follows that

$$\begin{aligned} J^c(k+1)&= ||{\bar{\varPhi }}{\hat{x}}^*(k+N|k)||^2_{P_f} \nonumber \\&\quad +\sum _{i = 1}^{N-1} ||{\hat{x}}^*(k+i+1|k)||^2_S + ||{\hat{u}}^*(k+i|k)||^2_R \nonumber \\&\quad + ||{\bar{\varPhi }}{\hat{x}}^*(k+N|k)||^2_S + ||K{\hat{x}}^*(k+N|k)||^2_R \end{aligned}$$
(69)

where \({\bar{\varPhi }} = \varPhi - \varGamma K\).

After comparing (61) and (69), one can write

$$\begin{aligned} J^c(k+1)&= J^*(k) - ||{\hat{x}}^*(k+1|k)||^2_S - ||{\hat{u}}^*(k|k)||^2_R \nonumber \\&\quad - ||{\hat{x}}^*(k+N|k)||^2_{P_f} + ||{\bar{\varPhi }}{\hat{x}}^*(k+N|k)||^2_{P_f} \nonumber \\&\quad + ||{\bar{\varPhi }}{\hat{x}}^*(k+N|k)||^2_S + ||K{\hat{x}}^*(k+N|k)||^2_R \end{aligned}$$
(70)

In view of (60), the identity (70) simplifies to

$$\begin{aligned} J^c(k+1) = J^*(k) - ||{\hat{x}}^*(k+1|k)||^2_S - ||{\hat{u}}^*(k|k)||^2_R \end{aligned}$$
(71)

At this point, it is worth noting that the minimal cost \(J^*(k+1)\) will be smaller than or equal to the cost \(J^c(k+1)\) obtained with the candidate solution, i.e.

$$\begin{aligned} J^*(k+1) \le J^*(k) - ||{\hat{x}}^*(k+1|k)||^2_S - ||{\hat{u}}^*(k|k)||^2_R \end{aligned}$$
(72)

The inequality (72) shows that \(J^*(k)\) is a non-increasing sequence of cost values, lower-bounded by zero. Therefore, as noted in Kwon and Han (2005), \(J^*(k)\) is a convergent sequence and thus

$$\begin{aligned} \lim _{k \rightarrow \infty } \Big [J^*(k) - J^*(k+1)\Big ] = 0 \end{aligned}$$
(73)

From (72) and (73), it can be concluded that

$$\begin{aligned} \lim _{k \rightarrow \infty } \Big [||{\hat{x}}^*(k+1|k)||^2_S + ||{\hat{u}}^*(k|k)||^2_R\Big ] = 0 \end{aligned}$$
(74)

Since S and R are positive-definite matrices, it follows that

$$\begin{aligned}&\lim _{k \rightarrow \infty } {\hat{x}}^*(k+1|k) = 0 \end{aligned}$$
(75)
$$\begin{aligned}&\lim _{k \rightarrow \infty } {\hat{u}}^*(k|k) = 0 \end{aligned}$$
(76)

Finally, from (62) and (75), it can be concluded that x(k) converges to the origin as \(k \rightarrow \infty \).

Appendix C: Design of Complementary State and State-Derivative Feedback Controller by Nominal Equivalence

Consider a system described by a continuous-time model of the form (1), with the control actions applied according to (2). The state derivative of system (1) at time \(t = kT\) is then given by (5).

Assume that a gain matrix \(F \in {\mathbb {R}}^{m \times n}\) has been designed to obtain a suitable state feedback (SF) control law of the form

$$\begin{aligned} u(kT)^{+} = F x(kT) \end{aligned}$$
(77)

for system (5). Moreover, assume that the state vector x(kT) can be partitioned as

$$\begin{aligned} x(kT) = \left[ \begin{array}{c} x_{1}(kT) \\ {x}_{2}(kT) \end{array} \right] \end{aligned}$$
(78)

where \(x_{1}(kT)\) and \({\dot{x}}_{2}(kT)\) are measured, with an associated partition of the \(\varPhi _\mathrm{c}\), \(\varGamma _\mathrm{c}\) matrices as:

$$\begin{aligned} \varPhi _{c} = \left[ \begin{array}{cc} \varPhi _{c,11} &{}\quad \varPhi _{c,12} \\ \varPhi _{c,21} &{}\quad \varPhi _{c,22} \end{array} \right] , \; \; \varGamma _{c} = \left[ \begin{array}{c} \varGamma _{c,1} \\ \varGamma _{c,2} \end{array} \right] \end{aligned}$$
(79)

Also, assume that matrix \(\varPhi _{c,22}\) is non-singular. Then, a control law implemented as (Rossi et al. 2016)

$$\begin{aligned} u(kT)^+ = F \, M \, \left[ \begin{array}{c} x_{1}(kT) \\ {\dot{x}}_{2}(kT) \\ u((k-1)T)^{+} \end{array} \right] \end{aligned}$$
(80)

with

$$\begin{aligned} M = \left[ \begin{array}{ccc} I &{}\quad 0 &{}\quad 0 \\ -\varPhi _{c,22}^{-1} \varPhi _{c,21} &{}\quad \varPhi _{c,22}^{-1} &{}\quad -\varPhi _{c,22}^{-1} \varGamma _{c,2} \end{array} \right] \end{aligned}$$
(81)

will generate control values that match those provided by the state feedback law (77).
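A minimal sketch of this implementation is given below. It assumes that the partitions in (79) are available as numerical arrays and that \(\varPhi _{c,22}\) is non-singular, as stated above; the function name and argument layout are illustrative only:

```python
# Sketch of the nominal-equivalence implementation (80)-(81): F is a previously
# designed state feedback gain, and Phi_c21, Phi_c22, Gamma_c2 follow partition (79).
import numpy as np

def cssdf_control(F, Phi_c21, Phi_c22, Gamma_c2, x1, x2_dot, u_prev):
    n1 = Phi_c21.shape[1]            # dimension of the measured state block x1
    n2 = Phi_c22.shape[0]            # dimension of the measured derivative block x2_dot
    m = Gamma_c2.shape[1]
    inv22 = np.linalg.inv(Phi_c22)   # Phi_c22 assumed non-singular
    M = np.block([
        [np.eye(n1), np.zeros((n1, n2)), np.zeros((n1, m))],
        [-inv22 @ Phi_c21, inv22, -inv22 @ Gamma_c2],
    ])                               # matrix M of (81)
    xi = np.concatenate([x1, x2_dot, u_prev])
    return F @ (M @ xi)              # control law (80)
```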

Cite this article

Rossi, F.Q., Galvão, R.K.H., Teixeira, M.C.M. et al. Direct Design of Controllers Using Complementary State and State-Derivative Feedback. J Control Autom Electr Syst 30, 181–193 (2019). https://doi.org/10.1007/s40313-018-00436-9
