
A Fast Visual Feature Matching Algorithm in Multi-robot Visual SLAM

  • Nian Liu
  • Mingzhu Wei (corresponding author)
  • Xiaomei Xie
  • Mechali Omar
  • Xin Chen
  • Weihuai Wu
  • Peng Yan
  • Limei Xu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11740)

Abstract

To reduce the feature matching time in vision-based multi-robot Simultaneous Localization and Mapping (SLAM), a feature matching algorithm based on the map environment is proposed in this paper. The key idea of the algorithm is to build feature libraries by classifying the features collected by each sub-robot into two categories as it moves. Features are then matched only within their own category, which reduces the time wasted on invalid matches. Finally, an experiment is conducted to verify the performance of the proposed algorithm: compared with the traditional Bag-of-Words (BoW) method, the feature matching time is reduced by 20% with no loss of accuracy.
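The abstract only outlines the category-based matching idea, so the following is a minimal sketch of how such a scheme could look in Python with OpenCV; it is not the authors' implementation. The classify() function, its size-based splitting criterion, and the ORB plus brute-force Hamming matching pipeline are assumptions for illustration, since the abstract does not specify the two categories or the matcher used.

```python
# Sketch of category-based feature matching: descriptors are sorted into two
# libraries and only same-category libraries are compared, so cross-category
# (invalid) comparisons are skipped.
import numpy as np
import cv2


def classify(keypoint):
    # Hypothetical stand-in for the paper's two-way feature classification;
    # splitting by keypoint size is only a placeholder criterion.
    return 0 if keypoint.size < 20.0 else 1


def build_library(image):
    """Detect ORB features and sort their descriptors into two category libraries."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return {}
    library = {0: [], 1: []}
    for kp, desc in zip(keypoints, descriptors):
        library[classify(kp)].append(desc)
    return {c: np.asarray(d, dtype=np.uint8) for c, d in library.items() if d}


def match_libraries(lib_a, lib_b, ratio=0.75):
    """Match descriptors only within the same category."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = []
    for category in lib_a.keys() & lib_b.keys():
        for pair in matcher.knnMatch(lib_a[category], lib_b[category], k=2):
            # Lowe's ratio test to discard ambiguous matches.
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                matches.append(pair[0])
    return matches
```

In this sketch, two sub-robots would each call build_library on their keyframes and exchange only the per-category descriptor arrays; the same-category restriction is what shrinks the search space relative to matching every descriptor against every other.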

Keywords

Multi-robot · Feature matching · Visual SLAM

Notes

Acknowledgments

The authors would like to acknowledge the support of the Advanced Research Project of Manned Space under Grant No. 0603(17700630), the National Natural Science Foundation of China under Grant No. 61803075, and the Fundamental Research Funds for the Central Universities under Grant No. ZYGX2018KYQD211.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Nian Liu 1
  • Mingzhu Wei 1 (corresponding author)
  • Xiaomei Xie 1
  • Mechali Omar 1
  • Xin Chen 1
  • Weihuai Wu 1
  • Peng Yan 1
  • Limei Xu 1
  1. University of Electronic Science and Technology of China (UESTC), Chengdu, China
