Multi-feature Fusion for Deep Reinforcement Learning: Sequential Control of Mobile Robots

  • Haotian Wang
  • Wenjing Yang (corresponding author)
  • Wanrong Huang
  • Zhipeng Lin
  • Yuhua Tang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11307)


Compared with traditional motion planners, deep reinforcement learning has been increasingly applied to the sequential behaviour control of mobile robots in indoor environments. However, the robot's state in deep reinforcement learning is commonly obtained from a single sensor, which limits accuracy and stability. In this paper, we propose a novel approach called the multi-feature fusion framework. The framework uses multiple sensors to gather different scene images around the robot. Once this environment information is gathered, a well-trained autoencoder fuses the multiple visual features and extracts a compact state. With the more accurate and stable states extracted by the autoencoder, we train the mobile robot to patrol and navigate in a 3D simulation environment using an asynchronous deep reinforcement learning algorithm. Extensive simulation experiments demonstrate that the proposed multi-feature fusion framework improves not only the convergence rate of the training phase but also the testing performance of the mobile robot.
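The abstract does not specify the fusion architecture. As a rough illustration only, a minimal linear autoencoder (a hypothetical toy stand-in, numpy only; all names and dimensions are assumptions, not the paper's design) can fuse images from several cameras by concatenating the flattened views and compressing them into one shared state vector via a reconstruction objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_autoencoder(input_dim, state_dim):
    """Small random init for a one-layer linear autoencoder
    (toy stand-in; the paper's actual architecture is not given here)."""
    W_enc = rng.normal(0.0, 0.01, (state_dim, input_dim))
    W_dec = rng.normal(0.0, 0.01, (input_dim, state_dim))
    return W_enc, W_dec

def fuse(images, W_enc):
    """Concatenate flattened per-camera images, then encode to one state."""
    x = np.concatenate([img.ravel() for img in images])
    return W_enc @ x

def train_step(images, W_enc, W_dec, lr=1e-3):
    """One gradient step on the reconstruction loss ||x - W_dec W_enc x||^2
    (gradients written up to a constant factor of 2)."""
    x = np.concatenate([img.ravel() for img in images])
    z = W_enc @ x
    x_hat = W_dec @ z
    err = x_hat - x                            # dL/dx_hat
    W_dec -= lr * np.outer(err, z)             # gradient w.r.t. decoder weights
    W_enc -= lr * np.outer(W_dec.T @ err, x)   # gradient w.r.t. encoder weights
    return float(np.mean(err ** 2))

# Three hypothetical 8x8 grayscale views around the robot.
views = [rng.random((8, 8)) for _ in range(3)]
W_enc, W_dec = make_linear_autoencoder(input_dim=3 * 64, state_dim=16)
losses = [train_step(views, W_enc, W_dec) for _ in range(200)]
state = fuse(views, W_enc)   # compact fused state for the RL policy
```

In a full pipeline, the fused `state` vector would replace raw single-sensor input as the observation fed to the reinforcement learning agent; the paper's actual encoder is a trained deep autoencoder rather than this linear sketch.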


Keywords: Mobile robot · Deep reinforcement learning · Feature fusion



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Haotian Wang (1)
  • Wenjing Yang (1, corresponding author)
  • Wanrong Huang (1)
  • Zhipeng Lin (1)
  • Yuhua Tang (1)
  1. National University of Defense Technology, Changsha, China
