Abstract
Different from the inverse problem put forward by R. E. Kalman, another kind of inverse problem of linear optimal control is proposed and discussed in this paper, namely: given an asymptotically stable system \(\dot x = \tilde A x\) and a performance index \(J = \int_0^\infty {(x^\tau Qx + u^\tau u)}\,dt\), when can \(\tilde A\) be decomposed as \(\tilde A = A + BK^\tau\) so that the control law \(u = K^\tau x\) is optimal for the system \(\dot x = Ax + Bu\) and the index \(J\)? This paper gives the solution to the problem and establishes a correspondence between asymptotically stable systems and optimal systems.
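The decomposition in the abstract can be illustrated numerically in the forward direction: for a given \((A, B, Q)\) with \(R = I\), the LQR-optimal gain yields a stable closed-loop matrix \(\tilde A = A + BK\). A minimal sketch (not from the paper; the matrices `A`, `B`, `Q` are illustrative choices, and SciPy's Riccati solver stands in for the paper's own construction):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative open-loop system: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting in J; control weighting R = I

# Solve the algebraic Riccati equation A'P + PA - PBB'P + Q = 0.
P = solve_continuous_are(A, B, Q, np.eye(1))

# Optimal feedback gain for J = ∫ (x'Qx + u'u) dt is u = Kx with K = -B'P.
K = -(B.T @ P)

# The closed-loop matrix A_tilde = A + BK is asymptotically stable:
A_tilde = A + B @ K
assert np.all(np.linalg.eigvals(A_tilde).real < 0)
```

The inverse question posed in the paper runs the other way: starting from a stable \(\tilde A\) and the index \(J\), it asks when such a pair \((A, BK)\) exists at all.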
References
Kalman, R. E., When is a linear control system optimal?, ASME J. Basic Engineering, March (1964).
Anderson, B. D. O., The inverse problem of optimal control, Rept. No. SEL-66-038 (TR No. 6560-3), Stanford Electronics Laboratories, Stanford, Calif., May (1966).
Anderson, B. D. O., Nonlinear regulator theory and an inverse optimal problem, IEEE Trans. Automatic Control, AC-18 (1973).
Wonham, W. M., Linear Multivariable Control, Springer-Verlag (1974).
Hwang Ling, Zheng In-ping and Chang Di, The Lyapunov second method and the analytical design of the optimal controller, Acta Automatica Sinica, 4 (1964). (in Chinese)
Hwang Ling, Generating element and controllability,Proceedings of the Bilateral Meeting on Control Systems (P. R. C. and U.S.A.), Science Press, Beijing (1982).
Hwang Ling,Algebra in Control and System Theory, Scientific Press, Beijing (1983). (in Chinese)
Additional information
Communicated by Zhu Zao-xuan.
Cite this article
Xiao-lin, C., Ling, H. The relationship between the stability and the optimality of linear systems. Appl Math Mech 6, 149–156 (1985). https://doi.org/10.1007/BF01874953