Abstract
A two-wheeled self-balancing robot (SBR) is a classical control-systems example that operates on the principle of an inverted pendulum. In this paper, we study how learning and stability performance change when a Kalman filter is introduced to suppress IMU noise before the robot is controlled with reinforcement learning. The implementation is carried out in ROS and Gazebo, and Q-learning is implemented with the openai_ros package, which adapts OpenAI's reinforcement-learning toolkit to ROS. Our approach feeds the angular output of the IMU through a Kalman filter and passes the filtered estimate to the Q-learning agent as the input for balancing control. Finally, we compare the results obtained with and without the Kalman filter applied to the IMU output and evaluate performance in terms of the robot's learning behavior and its robustness.
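To make the pipeline concrete, the following is a minimal Python sketch (not the authors' code) of the two pieces the abstract describes: a 1-D Kalman filter that fuses the gyro rate and accelerometer-derived angle from the IMU, and a tabular Q-learning update driven by the filtered, discretized tilt. The filter gains, bin count, and three-action set are illustrative assumptions.

```python
import numpy as np

class TiltKalmanFilter:
    """Minimal 1-D Kalman filter fusing gyro rate and accelerometer angle.
    State: [tilt angle, gyro bias]. All noise parameters are illustrative."""
    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.angle = 0.0            # filtered tilt estimate (rad)
        self.bias = 0.0             # estimated gyro bias (rad/s)
        self.P = np.zeros((2, 2))   # error covariance
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_measure

    def update(self, accel_angle, gyro_rate, dt):
        # Predict: integrate the bias-corrected gyro rate
        self.angle += dt * (gyro_rate - self.bias)
        self.P[0][0] += dt * (dt * self.P[1][1] - self.P[0][1]
                              - self.P[1][0] + self.q_angle)
        self.P[0][1] -= dt * self.P[1][1]
        self.P[1][0] -= dt * self.P[1][1]
        self.P[1][1] += dt * self.q_bias
        # Correct with the accelerometer-derived angle measurement
        S = self.P[0][0] + self.r
        K0, K1 = self.P[0][0] / S, self.P[1][0] / S
        y = accel_angle - self.angle
        self.angle += K0 * y
        self.bias += K1 * y
        P00, P01 = self.P[0][0], self.P[0][1]
        self.P[0][0] -= K0 * P00
        self.P[0][1] -= K0 * P01
        self.P[1][0] -= K1 * P00
        self.P[1][1] -= K1 * P01
        return self.angle

def discretize(angle, n_bins=9, max_tilt=0.4):
    """Map the filtered tilt angle (rad) onto a discrete state index."""
    clipped = np.clip(angle, -max_tilt, max_tilt)
    return int((clipped + max_tilt) / (2 * max_tilt) * (n_bins - 1))

# Tabular Q-learning over the discretized, Kalman-filtered tilt
n_states, n_actions = 9, 3        # e.g. lean-left / hold / lean-right torques
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95          # learning rate, discount factor

def q_update(s, a, reward, s_next):
    """Standard one-step Q-learning update."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```

In a ROS/Gazebo setup like the one described, the IMU callback would call update() each control cycle, and the resulting discrete state index would drive the Q-table; openai_ros wraps this loop in a Gym-style environment.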
Code Availability
The code is available upon request.
Funding
None.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest in this research work.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Srichandan, A., Dhingra, J. & Hota, M.K. An Improved Q-learning Approach with Kalman Filter for Self-balancing Robot Using OpenAI. J Control Autom Electr Syst 32, 1521–1530 (2021). https://doi.org/10.1007/s40313-021-00786-x