Loop Closure Detection based on Image Covariance Matrix Matching for Visual SLAM

  • Regular Papers
  • Control Theory and Applications
International Journal of Control, Automation and Systems

Abstract

Loop closure detection is an indispensable part of visual simultaneous localization and mapping (SLAM). Correctly detecting loop closures helps a mobile robot reduce cumulative pose drift. At present, the dominant approach to loop closure detection in visual SLAM is the bag-of-words (BoW) model, but it discards the spatial distribution of the local image features, and its vocabulary grows as data accumulate, which slows down operation. To address these problems, this paper detects loop closures by combining global and local image features, using an image histogram together with a key-region covariance matrix matching method. Three place recognition techniques are studied: histogram only, image covariance matrix matching (ICMM), and cluster loop. Experiments on real datasets show that the proposed loop closure detection method outperforms traditional methods in precision and recall, and also improves the performance of the overall SLAM algorithm.
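To make the pipeline concrete, the following is a minimal illustrative sketch, not the authors' implementation, of the two cues named in the abstract: a cheap global grey-level histogram comparison used to filter loop candidates, followed by a key-region covariance matrix comparison in the spirit of region covariance descriptors. The function names, the (x, y, I, |Ix|, |Iy|) feature choice, the distance metric, and the thresholds are all assumptions made for illustration.

```python
# Illustrative sketch only: histogram pre-filter + region covariance matching
# for loop closure candidate verification. Feature set, metric, and thresholds
# are assumptions, not the paper's exact design.
import cv2
import numpy as np

def grey_histogram(img, bins=64):
    """Normalized grey-level histogram used as a cheap global signature (img: uint8 grayscale)."""
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def region_covariance(img, region):
    """Covariance of per-pixel features (x, y, I, |Ix|, |Iy|) over a key region (x0, y0, w, h)."""
    x0, y0, w, h = region
    patch = img[y0:y0 + h, x0:x0 + w].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)  # 5x5 symmetric positive semi-definite descriptor

def covariance_distance(c1, c2, eps=1e-6):
    """Generalized-eigenvalue distance between two covariance descriptors (SPD matrices)."""
    c1 = c1 + eps * np.eye(c1.shape[0])
    c2 = c2 + eps * np.eye(c2.shape[0])
    eigvals = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    eigvals = np.clip(eigvals, eps, None)
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

def is_loop_candidate(img_a, img_b, region, hist_thresh=0.8, cov_thresh=1.5):
    """Global histogram check first; confirm with key-region covariance matching."""
    ha, hb = grey_histogram(img_a), grey_histogram(img_b)
    if cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL) < hist_thresh:
        return False
    d = covariance_distance(region_covariance(img_a, region),
                            region_covariance(img_b, region))
    return d < cov_thresh
```

In a full system, the inexpensive histogram check would be run against all previously visited keyframes and the covariance comparison applied only to the surviving candidates, so the combined global-plus-local test stays cheap per frame.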

Author information

Corresponding author

Correspondence to Huaicheng Yan.

Additional information

Recommended by Associate Editor Gon-Woo Kim under the direction of Editor Euntai Kim.

This work was supported by the National Natural Science Foundation of China (62073143, 61922063), the Program of Shanghai Academic Research Leader (19XD1421000), the Shanghai International Science and Technology Cooperation Project (18510711100), the Shanghai and Hong Kong-Macao-Taiwan Science and Technology Cooperation Project (19510760200), the Shanghai Shuguang Project (18SG18), and the Innovation Program of the Shanghai Municipal Education Commission (2021-01-07-00-02-E00107).

Tao Ying is currently studying for a master’s degree in control science and engineering at East China University of Science and Technology, Shanghai, China. His main research interest is mobile robot visual SLAM.

Huaicheng Yan received his B.Sc. degree in automatic control from Wuhan University of Technology, China, in 2001, and a Ph.D. degree in control theory and control engineering from Huazhong University of Science and Technology, China, in 2007. From 2007 to 2009, he was a Postdoctoral Fellow with the Chinese University of Hong Kong. He is currently a Professor with the School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems, the International Journal of Robotics and Automation, and the IEEE Open Journal of Circuits and Systems. His research interests include networked control systems, multi-agent systems, and robotics.

Zhichen Li received his B.S. degree in automation and a Ph.D. degree in pattern recognition and intelligent systems from North China Electric Power University, Beijing, China, in 2011 and 2017, respectively. From 2017 to 2019, he was a Post-Doctoral Fellow with the School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China, where he is currently an Associate Professor. His research interests include networked control systems, fuzzy modeling, and control.

Kaibo Shi received his Ph.D. degree from the School of Automation Engineering, University of Electronic Science and Technology of China. He is a Professor in the School of Information Sciences and Engineering, Chengdu University. His current research interests include stability theory, robust control, and sampled-data control systems.

Xiangsai Feng received his Ph.D. degree in power engineering and engineering thermal physics from East China University of Science and Technology, Shanghai, China, in 2015. He is a senior engineer at the Shanghai Municipality energy conservation and emission reduction center. His research interests include the application and promotion of new energy power generation systems based on solar energy, wind energy, and hydrogen.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ying, T., Yan, H., Li, Z. et al. Loop Closure Detection based on Image Covariance Matrix Matching for Visual SLAM. Int. J. Control Autom. Syst. 19, 3708–3719 (2021). https://doi.org/10.1007/s12555-020-0730-0
