Mobile Robot Navigation: Neural Q-Learning

  • Soh Chin Yun
  • S. Parasuraman
  • V. Ganapathy
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 178)


This paper presents a mobile robot navigation technique that combines Reinforcement Learning (RL) and an Artificial Neural Network (ANN) to learn navigation in an unknown environment. The process is divided into two stages. In the first stage, the agent maps the environment by collecting state-action information according to the Q-Learning procedure. In the second stage, a Neural Network is trained using the state-action information gathered in the first stage as training samples. In the final controller, Q-Learning serves as the primary navigation tool, while the trained Neural Network is employed when approximation is needed. A MATLAB simulation was developed to verify and validate the algorithm before real-time implementation on the Team AmigoBot™ robot. The results obtained from both simulation and real-world experiments are discussed.
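The two-stage approach described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation (which was done in MATLAB on a real robot): Stage 1 runs tabular Q-Learning on a toy 4x4 grid to collect state-action values, Stage 2 trains a small one-hidden-layer network on those values, and the final controller prefers the Q-table but falls back to the network for approximation. The environment, network size, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: tabular Q-Learning on a toy 4x4 grid (goal at (3, 3)) ---
# Stands in for the paper's environment-mapping phase, where the agent
# collects state-action information via the standard Q-Learning update.
N = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (3, 3)
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2            # assumed hyperparameters

def step(s, a):
    r = min(max(s[0] + ACTIONS[a][0], 0), N - 1)
    c = min(max(s[1] + ACTIONS[a][1], 0), N - 1)
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else -0.04), nxt == GOAL

for _ in range(500):                          # training episodes
    s = (0, 0)
    for _ in range(50):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, rew, done = step(s, a)
        # Q-Learning update: Q(s,a) += alpha * (r + gamma * max Q(s') - Q(s,a))
        Q[s][a] += alpha * (rew + gamma * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        if done:
            break

# --- Stage 2: train a small neural network on the collected Q-values ---
# One hidden tanh layer, fit by plain gradient descent on MSE; it learns a
# smooth approximation of Q(s, .) from the Stage-1 samples.
X = np.array([[r / (N - 1), c / (N - 1)] for r in range(N) for c in range(N)])
Y = Q.reshape(-1, len(ACTIONS))
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, len(ACTIONS))); b2 = np.zeros(len(ACTIONS))
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - Y
    dH = (err @ W2.T) * (1 - H ** 2)          # backprop through tanh
    lr = 0.5
    W2 -= lr * H.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dH / len(X);  b1 -= lr * dH.mean(0)

# --- Final controller: Q-table first, NN when approximation is needed ---
def q_values(s):
    if np.any(Q[s] != 0):                     # state covered by Stage 1
        return Q[s]
    x = np.array([s[0] / (N - 1), s[1] / (N - 1)])
    return np.tanh(x @ W1 + b1) @ W2 + b2     # NN fallback

s, path = (0, 0), [(0, 0)]
for _ in range(20):                           # greedy rollout from the start
    s, _, done = step(s, int(np.argmax(q_values(s))))
    path.append(s)
    if done:
        break
print(path[-1])
```

With the converged table, the greedy rollout drives the agent from the start cell to the goal; on a real robot the states would instead come from range-sensor readings, as in the paper.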


Keywords: Reinforcement Learning (RL) · Q-Learning · Artificial Neural Network (ANN) · Neural Q-Learning · Team AmigoBot™ · MATLAB



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Soh Chin Yun (1)
  • S. Parasuraman (1)
  • V. Ganapathy (2)
  1. School of Engineering, Monash University, Bandar Sunway, Malaysia
  2. Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia