Reinforcement Learning-Based Two-Wheel Robot Control

  • Ching-Lung Chang
  • Kang-Hao Liou
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 110)

Abstract

In this paper, reinforcement learning (RL) combined with PID control is used to design a balance and self-control system for a two-wheeled robot and to verify the feasibility of RL technology in this field. Straight-line and turn commands can be sent to the robot over a WiFi interface, and the robot acts according to the received command.
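The paper does not give implementation details of this command interface, so the following is only a rough Python sketch of how commands received over WiFi might be mapped onto controller setpoints; the UDP port, message format, and all names (handle_command, target_speed, target_turn) are assumptions made for illustration, not taken from the paper.

```python
import socket

# Hypothetical setpoints consumed by the balance controller.
target_speed = 0.0   # forward speed for "straight" commands
target_turn = 0.0    # yaw rate for "turn" commands

def handle_command(message: str) -> None:
    """Map a text command received over WiFi to controller setpoints."""
    global target_speed, target_turn
    parts = message.strip().split()
    if parts and parts[0] == "straight":
        target_speed = float(parts[1]) if len(parts) > 1 else 0.1
        target_turn = 0.0
    elif parts and parts[0] == "turn":
        target_turn = float(parts[1]) if len(parts) > 1 else 0.2
    # Unknown commands are ignored; balance control keeps running.

def command_server(port: int = 5000) -> None:
    """Listen for commands on a UDP socket bound to the WiFi interface."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(64)
        handle_command(data.decode("utf-8", errors="ignore"))
```

In a setup like this, the command server would run alongside the balance loop, so balancing continues regardless of whether any command has been received.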

The system is divided into three parts: a sensing module, a learning control module, and a motor drive module. The learning control module implements a Q-learning algorithm on an ARM A8 embedded platform. The sensing module contains an accelerometer (ADXL345) and a gyroscope (L3G4200D) that sense the robot's current tilt angle and angular velocity. Based on the input data from the sensing module, the Q-learning algorithm derives an optimal control response for the motor drive module. The implementation results show that the two-wheeled robot returns to balance within 2 ms of entering an unbalanced state.
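A minimal sketch of the kind of Q-learning controller described above, assuming the tilt angle and angular velocity reported by the sensing module are discretized into a small table of states and the actions are coarse motor commands. The bin counts, learning rate, discount factor, and reward shape below are illustrative assumptions, not values from the paper.

```python
import random

import numpy as np

# Discretization of the sensed state (illustrative bin counts).
N_ANGLE_BINS = 9       # tilt angle bins, e.g. spanning -45 to +45 degrees
N_RATE_BINS = 9        # angular velocity bins, e.g. -180 to +180 deg/s
ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # normalized motor torque commands

Q = np.zeros((N_ANGLE_BINS, N_RATE_BINS, len(ACTIONS)))

ALPHA = 0.1    # learning rate
GAMMA = 0.95   # discount factor
EPSILON = 0.1  # exploration rate

def discretize(angle_deg: float, rate_dps: float) -> tuple[int, int]:
    """Map continuous sensor readings to Q-table indices."""
    a = int(np.clip((angle_deg + 45.0) / 90.0 * N_ANGLE_BINS, 0, N_ANGLE_BINS - 1))
    r = int(np.clip((rate_dps + 180.0) / 360.0 * N_RATE_BINS, 0, N_RATE_BINS - 1))
    return a, r

def choose_action(state: tuple[int, int]) -> int:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state) -> None:
    """Standard one-step Q-learning update."""
    best_next = np.max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def reward_fn(angle_deg: float) -> float:
    """Reward an upright posture, penalize large tilt (illustrative)."""
    return 1.0 if abs(angle_deg) < 2.0 else -abs(angle_deg) / 45.0
```

On the actual robot, the paper states that the learning control runs on the ARM A8 platform using the tilt angle and angular velocity from the sensing module; the table sizes and reward above are only meant to show the structure of such an algorithm.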

Keywords

PID control · Reinforcement learning · Q-Learning · Self-balancing · Two-wheeled robots

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Douliu, Taiwan, R.O.C.