Abstract
In this note, optimal tracking control for uncertain continuous-time nonlinear systems is investigated using a novel reinforcement learning (RL) scheme. The uncertainty here refers to unknown system drift dynamics. Based on the nonlinear system and the reference signal, we first formulate the tracking problem by constructing an augmented system. The optimal tracking control problem for the original nonlinear system is thus transformed into solving the Hamilton–Jacobi–Bellman (HJB) equation of the augmented system. A new single neural network (NN)-based online RL method is proposed to learn the solution of the tracking HJB equation, while the corresponding optimal control input that minimizes the tracking HJB equation is computed in a forward-in-time manner, without requiring value or policy iterations or knowledge of the system drift dynamics. To relax the dependence of the RL method on the traditional persistence of excitation (PE) condition, a concurrent learning technique is adopted in the design of the NN tuning laws. Uniform ultimate boundedness of the NN weight errors and the closed-loop augmented system states is rigorously proved. Three numerical simulation examples are given to demonstrate the effectiveness of the proposed scheme.
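The idea of a concurrent-learning critic update can be illustrated with a minimal sketch. This is not the paper's exact tuning law: it assumes a scalar example system, a hand-picked quadratic basis `phi`, and a plain normalized-gradient step on the Bellman (HJB) residual evaluated at both the current sample and a memory of recorded samples, which is what lets the weights adapt without a PE condition.

```python
import numpy as np

# Hedged sketch, not the paper's exact laws: single critic NN with
# concurrent learning on an augmented state z = [e, r], where e is the
# tracking error and r the reference.  Value approximation V(z) ~ W^T phi(z).

def phi(z):
    e, r = z
    return np.array([e * e, e * r, r * r])        # quadratic basis features

def dphi(z):
    e, r = z
    return np.array([[2 * e, 0.0],
                     [r,     e],
                     [0.0,   2 * r]])             # Jacobian of phi w.r.t. z

def bellman_error(W, z, u, zdot, Q=1.0, R=1.0):
    """HJB residual delta = dV/dz * zdot + instantaneous cost."""
    e, _ = z
    cost = Q * e * e + R * u * u
    return float(W @ (dphi(z) @ zdot) + cost)

def concurrent_update(W, current, memory, lr=0.05):
    """One normalized-gradient step on the squared Bellman errors of the
    current sample plus all stored (z, u, zdot) samples in memory."""
    grad = np.zeros_like(W)
    for z, u, zdot in [current] + memory:
        sigma = dphi(z) @ zdot
        delta = bellman_error(W, z, u, zdot)
        grad += delta * sigma / (1.0 + sigma @ sigma) ** 2
    return W - lr * grad
```

Because the stored samples keep contributing excitation even when the current trajectory is not informative, repeated calls to `concurrent_update` drive down the Bellman residual over the whole memory rather than only along the current state.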
Acknowledgements
This work was funded by the International Graduate Exchange Program of Beijing Institute of Technology.
Cite this article
Zhao, J. Neural Network-Based Optimal Tracking Control of Continuous-Time Uncertain Nonlinear System via Reinforcement Learning. Neural Process Lett 51, 2513–2530 (2020). https://doi.org/10.1007/s11063-020-10220-z