Abstract
Machine learning for mobile robots has attracted considerable research interest in recent years. However, many challenges remain in applying learning techniques to real mobile robots, such as generalization in continuous state and action spaces, learning efficiency, and convergence. In this paper, a reinforcement learning path-following control strategy based on approximate policy iteration (API) is developed for a real mobile robot. A key advantage of the approach is that near-optimal control policies can be obtained without much a priori knowledge of the robot's dynamic model. Two API-based control methods, API with linear approximation and API with kernel machines, are implemented for the path-following control task, and the effectiveness of the proposed strategy is demonstrated in experiments on a real mobile robot built on the Pioneer3-AT platform. Experimental results verify that the API-based learning controller achieves better convergence and path-following accuracy than conventional PD control. Finally, the learning control performance of the two API methods is evaluated and compared.
Supported by the National Natural Science Foundation of China (NSFC) under Grants 60774076, 90820302, the Fok Ying Tung Education Foundation under Grant No.114005, and the Natural Science Foundation of Hunan Province under Grant 07JJ3122.
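The paper's actual controller and feature design are not reproduced on this page. As a hedged illustration of the first API variant named in the abstract (API with linear function approximation), the following sketches least-squares policy iteration (LSPI), which alternates an LSTD-Q policy-evaluation step with greedy policy improvement. The feature map `phi`, the `(s, a, r, s')` sample format, and the discrete action set are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma, k, reg=1e-6):
    """LSTD-Q: solve A w = b for linear Q-function weights under `policy`."""
    A = reg * np.eye(k)  # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, k, gamma=0.95, iters=20, tol=1e-4):
    """Least-squares policy iteration over a fixed batch of (s, a, r, s') samples."""
    w = np.zeros(k)
    for _ in range(iters):
        # Policy improvement: act greedily w.r.t. the current Q estimate.
        policy = lambda s, w=w: max(actions, key=lambda a: float(phi(s, a) @ w))
        w_new = lstdq(samples, phi, policy, gamma, k)
        if np.linalg.norm(w_new - w) < tol:
            return w_new  # weights (and hence the greedy policy) have converged
        w = w_new
    return w
```

Because LSTD-Q solves for the weights in closed form from a fixed batch of samples, no step-size tuning is needed, which is one reason batch API methods tend to converge more reliably than incremental temporal-difference learning on physical robots.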
References
Campion, G.: Structural Properties and Classification of Dynamic Models of Wheeled Mobile Robots. IEEE Trans. on Robotics and Automation 12, 47–62 (1996)
Alexander, J.C., Brooks, J.H.: On the Kinematics of Wheeled Mobile Robots. Int. J. of Robotics Research 8, 15–27 (1989)
Chiacchio, P.: Exploiting Redundancy in Minimum-time Path Following Robot Control. In: American Control Conference (1982)
Sarkar, N., Gen, V.: Dynamic Path Following: A New Control Algorithm for Mobile Robots. In: 32nd Conference on Decision and Control, pp. 2670–2675. IEEE Press, New York (1993)
Coelho, P., Nunes, U.: Path Following Control of Mobile Robots in Presence of Uncertainties. IEEE Transactions on Robotics 21, 252–261 (2005)
Brooks, R.: A Hardware Retargetable Distributed Layered Architecture for Mobile Robot Control. In: IEEE International Conference on Robotics and Automation, pp. 106–110. IEEE Press, New York (1987)
Chen, C.L., Chen, C.H.: Reinforcement Learning for Mobile Robot from Reaction to Deliberation. Journal of Systems Engineering and Electronics 16, 611–617 (2005)
Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement Learning: A Survey. J. Artif. Intell. Res. 4, 237–285 (1996)
Smart, W.D., Kaelbling, L.P.: Effective Reinforcement Learning for Mobile Robots. In: IEEE International Conference on Robotics and Automation, pp. 3404–3410. IEEE Press, New York (2002)
Xu, X., Hu, D.W., Lu, X.C.: Kernel-Based Least Squares Policy Iteration for Reinforcement Learning. IEEE Transactions on Neural Networks 18, 973–992 (2007)
Canudas, C., Sordalen, O.J.: Exponential Stabilization of Mobile Robots with Nonholonomic Constraints. IEEE Transactions on Automatic Control 33, 672–677 (1992)
Boyan, J.: Technical Update: Least-squares Temporal Difference Learning. Mach. Learn. 49, 233–246 (2002)
Lagoudakis, M.G., Parr, R.: Least-squares Policy Iteration. J. Mach. Learn. Res. 4, 1107–1149 (2003)
Engel, Y., Mannor, S., Meir, R.: The Kernel Recursive Least-squares Algorithm. IEEE Trans. Signal Process. 52, 2275–2285 (2004)
Bertsekas, D.P., Tsitsiklis, J.N.: Neuro-Dynamic Programming. Athena Scientific, Belmont (1996)
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhang, P., Xu, X., Liu, C., Yuan, Q. (2009). Reinforcement Learning Control of a Real Mobile Robot Using Approximate Policy Iteration. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01513-7_30
DOI: https://doi.org/10.1007/978-3-642-01513-7_30
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01512-0
Online ISBN: 978-3-642-01513-7