Abstract
We saw in the previous chapter that a control can be used to drive the state of a system to a given value or to stabilize the system. In dealing with stability, we also used a method that consists of solving an optimal control problem. Optimal control is an essential branch of control theory, and we shall present the general theory later on. In the case of linear systems, the results can be obtained by ad hoc techniques that are useful to know.
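As a concrete illustration of what "optimal control of a linear system" means in the simplest setting, the sketch below (an assumption of this edit, not from the chapter) solves a scalar discrete-time linear-quadratic problem by the standard backward Riccati recursion: for the dynamics x_{t+1} = a x_t + b u_t with stage cost q x^2 + r u^2 and terminal cost q_f x^2, the optimal control is a linear feedback u_t = -K_t x_t.

```python
def lqr_scalar(a, b, q, r, qf, horizon):
    """Backward Riccati recursion for the scalar system
    x_{t+1} = a*x_t + b*u_t, stage cost q*x^2 + r*u^2,
    terminal cost qf*x^2. Returns the feedback gains K_t
    (so that u_t = -K_t * x_t) and the value P_0 of the
    cost-to-go coefficient at time 0."""
    P = qf
    gains = []
    for _ in range(horizon):
        # Optimal gain at this stage, then the Riccati update for P.
        K = a * b * P / (r + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        gains.append(K)
    gains.reverse()  # gains[t] is the gain applied at time t
    return gains, P

# Closed-loop simulation: even though a = 1.1 is unstable,
# the optimal feedback drives the state to zero.
a, b = 1.1, 1.0
gains, P0 = lqr_scalar(a, b, q=1.0, r=1.0, qf=1.0, horizon=20)
x = 5.0
for K in gains:
    x = a * x + b * (-K * x)
```

Here the optimal cost from initial state x_0 is P_0 * x_0^2; the gains quickly converge to a stationary value as the horizon recedes, which is the infinite-horizon feedback law.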
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Bensoussan, A. (2018). Optimal Control of Linear Dynamical Systems. In: Estimation and Control of Dynamical Systems. Interdisciplinary Applied Mathematics, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-319-75456-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-75455-0
Online ISBN: 978-3-319-75456-7
eBook Packages: Mathematics and Statistics (R0)