Cognitive Processing, Volume 9, Issue 4, pp 283–297

Sensorimotor representation and knowledge-based reasoning for spatial exploration and localisation

Research Report

Abstract

We investigate a hybrid system for autonomous exploration and navigation, and implement it in a virtual mobile agent, which operates in virtual spatial environments. The system has several distinguishing properties. The representation is not map-like, but based on sensorimotor features, i.e. on combinations of sensory features and motor actions. The system has a hybrid architecture, which integrates a bottom-up processing of sensorimotor features with a top-down, knowledge-based reasoning strategy. This strategy selects the optimal motor action in each step according to the principle of maximum information gain. Two sensorimotor levels with different behavioural granularity are implemented: a macro-level, which controls the movements of the agent in space, and a micro-level, which controls its eye movements. At each level, the same type of hybrid architecture and the same principle of information gain are used for sensorimotor control. The localisation performance of the system is tested with large sets of virtual rooms containing different mixtures of unique and non-unique objects. The results demonstrate that the system efficiently performs those exploratory motor actions that yield a maximum amount of information about the current environment. Localisation is typically achieved within a few steps. Furthermore, the underlying computations have limited complexity, and the system is robust with respect to minor variations in the spatial environments.
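As a rough illustration of the information-gain principle described in the abstract, the following Python sketch selects, from a set of candidate motor actions, the one that maximises the expected reduction in entropy of a discrete belief over candidate rooms. This is not the authors' implementation; the function names and the observation model are hypothetical, stand-in assumptions for how such a greedy selection step could be realised.

```python
# Minimal sketch of greedy action selection by maximum expected information gain.
# Assumes a discrete belief over candidate rooms and a simple observation model;
# all names here are illustrative, not taken from the paper.
import math


def entropy(belief):
    """Shannon entropy (bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)


def posterior(belief, likelihood):
    """Bayes update of the belief given per-room observation likelihoods."""
    unnorm = {room: p * likelihood.get(room, 0.0) for room, p in belief.items()}
    z = sum(unnorm.values())
    return {room: p / z for room, p in unnorm.items()} if z > 0 else belief


def expected_information_gain(belief, action, observation_model):
    """Expected entropy reduction after executing `action`.

    `observation_model(action)` is assumed to return a mapping
    observation -> {room: P(observation | room, action)}.
    """
    h_before = entropy(belief)
    gain = 0.0
    for obs, likelihood in observation_model(action).items():
        p_obs = sum(belief[room] * likelihood.get(room, 0.0) for room in belief)
        if p_obs > 0:
            gain += p_obs * (h_before - entropy(posterior(belief, likelihood)))
    return gain


def select_action(belief, actions, observation_model):
    """Greedy step: pick the motor action with maximum expected information gain."""
    return max(actions, key=lambda a: expected_information_gain(belief, a, observation_model))
```

In the system described above, the belief would correspond to hypotheses about the current room, and the observation model to the sensorimotor features expected after a given movement of the agent (macro-level) or of its gaze (micro-level); this sketch only captures the greedy selection principle, not the knowledge-based reasoning architecture itself.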

Keywords

Autonomous agents, Attention, Hybrid systems


Acknowledgments

We thank the anonymous referees and the editor for their helpful comments and their constructive criticism. Torben Gerkensmeyer helped in carrying out the simulations for the performance evaluation and Freek Stulp provided valuable information on feedforward models. This study has been supported by DFG (SFB TR 8 “Spatial Cognition” A5-[ActionSpace]).


Copyright information

© Marta Olivetti Belardinelli and Springer-Verlag 2008

Authors and Affiliations

  1. Kognitive Neuroinformatik, Universität Bremen, Bremen, Germany
