A Biologically Inspired Architecture for Visual Self-location

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 449)

Abstract

Self-location, the ability to recognize one’s surroundings and reliably keep track of one’s current position relative to a known environment, is a fundamental cognitive skill for biological and artificial entities alike. At a minimum, it requires matching current sensory (mainly visual) inputs to memories of previously visited places, and correlating perceptual changes with physical movement. Both tasks are complicated by environmental variations such as changes in lighting and the presence of moving obstacles. This article presents the Difference Image Correspondence Hierarchy (DICH), a biologically inspired architecture that enables self-location in mobile robots. Experiments demonstrate that DICH works effectively despite varying environmental conditions.
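
The keywords hint at the core matching mechanism: difference images computed from consecutive camera frames, compared against stored place memories by cosine similarity. The sketch below is a minimal, hypothetical illustration of that matching idea in Python, not the paper's DICH implementation; all function names and the NumPy-based image representation are assumptions.

    import numpy as np

    # Difference image: absolute per-pixel change between consecutive
    # grayscale frames (assumed representation, for illustration only).
    def difference_image(prev_frame, curr_frame):
        return np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))

    # Cosine similarity between two images flattened to vectors.
    def cosine_similarity(a, b):
        a, b = a.ravel(), b.ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom > 0.0 else 0.0

    # Match a query difference image against a memory of stored difference
    # images, returning the index and score of the most similar stored place.
    def best_match(query, memory):
        scores = [cosine_similarity(query, m) for m in memory]
        i = int(np.argmax(scores))
        return i, scores[i]

In such a scheme, a high best-match score would indicate that the robot is revisiting a known place, while the spatial shift between the matched images could plausibly drive drift estimation along the lines of the shift vectors mentioned in the keywords.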

Keywords

Mobile Robot · Difference Image · Cosine Similarity · Shift Vector · Walk Away

Acknowledgments

This research work was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (grant 201799/2012-0).

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

1. Intelligent Robot Laboratory, University of Tsukuba, Tsukuba, Japan
