Visual attention and clustering-based automatic selection of landmarks using single camera

Journal of Central South University

Abstract

An improved landmark-selection method using a single camera was presented, offering better selection capability than the previous method. To improve performance, two techniques were applied to landmark selection in an unfamiliar indoor environment. First, a modified visual attention method was proposed to automatically select candidate regions that are likely to serve as useful landmarks; candidate regions were selected where the color and intensity of a region differ from those of its surroundings in the image. Second, the more useful landmarks were selected by grouping the candidate regions through clustering. As generally implemented, automatic landmark selection in vision-based simultaneous localization and mapping (SLAM) yields many useless landmarks: image features that are distinct from their immediate surroundings but are detected repeatedly throughout the environment. These useless landmarks pose a serious problem for the SLAM system because they complicate data association. To address this, a method was proposed in which the robot first collects landmarks by automatic detection while traversing the entire area in which it performs SLAM, and then retains only those landmarks that clustering identifies as highly rare, which enhances system performance. Experimental results show that this automatic landmark selection yields high-rarity landmarks, reduces the average SLAM error by 52% compared with conventional methods, and increases the accuracy of data association.
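
The two-stage idea described above, contrast-based candidate selection followed by clustering that keeps only rare regions, can be illustrated with a minimal sketch. This is not the authors' implementation: the window-based color/intensity contrast score, the mean-color descriptors, the use of mean-shift clustering from scikit-learn, and all names and parameters (candidate_regions, rare_landmarks, win, top_k, max_cluster_size) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method):
# (1) score windows by how much their color and intensity differ from the
#     rest of the image, (2) cluster candidate descriptors and keep only
#     members of small clusters, i.e. regions that look unlike the rest.
import numpy as np
from sklearn.cluster import MeanShift

def candidate_regions(image, win=32, top_k=20):
    """Score non-overlapping windows by color/intensity contrast against
    the whole image and return the top-k window boxes (x, y, w, h)."""
    h, w, _ = image.shape
    global_mean = image.reshape(-1, 3).mean(axis=0)
    scores = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = image[y:y + win, x:x + win].reshape(-1, 3)
            color_contrast = np.linalg.norm(patch.mean(axis=0) - global_mean)
            intensity_contrast = abs(patch.mean() - global_mean.mean())
            scores.append((color_contrast + intensity_contrast, (x, y, win, win)))
    scores.sort(key=lambda s: s[0], reverse=True)
    return [box for _, box in scores[:top_k]]

def rare_landmarks(descriptors, boxes, max_cluster_size=2):
    """Cluster candidate descriptors (here, mean color vectors) and keep
    only members of small clusters, i.e. high-rarity candidates."""
    labels = MeanShift().fit_predict(descriptors)
    counts = np.bincount(labels)
    return [box for box, lab in zip(boxes, labels) if counts[lab] <= max_cluster_size]

# Usage on a synthetic image: the scene is mostly uniform, and the single
# distinctive patch should survive the rarity filter.
img = np.full((128, 128, 3), 0.2)
img[32:64, 64:96] = np.array([0.9, 0.1, 0.1])   # one salient, rare patch
boxes = candidate_regions(img)
descs = np.array([img[y:y + s, x:x + s].reshape(-1, 3).mean(axis=0)
                  for x, y, s, _ in boxes])
print(rare_landmarks(descs, boxes))
```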

Author information

Corresponding author

Correspondence to Cho Jungwon.

About this article

Cite this article

Chuho, Y., Yongmin, S. & Jungwon, C. Visual attention and clustering-based automatic selection of landmarks using single camera. J. Cent. South Univ. 21, 3525–3533 (2014). https://doi.org/10.1007/s11771-014-2332-6

Key words

Navigation