
Cognitive Processing, Volume 13, Supplement 1, pp 155–159

Learning emergent behaviours for a hierarchical Bayesian framework for active robotic perception

  • João Filipe Ferreira
  • Christiana Tsiourti
  • Jorge Dias
Short Report

Abstract

In this research work, we contribute a behaviour-learning process for a hierarchical Bayesian framework for multimodal active perception, devised to be emergent, scalable and adaptive. The framework is composed of models built upon a common spatial configuration for encoding perception and action, which makes it a natural fit for integrating readings from multiple sensors using a Bayesian approach devised in previous work. The proposed learning process is shown to reproduce goal-dependent, human-like active perception behaviours by learning model parameters (referred to as “attentional sets”) for different free-viewing and active-search tasks. Learning was performed by presenting several 3D audiovisual virtual scenarios through a head-mounted display while logging the spatial distribution of the subject’s fixations (in 2D, on the left and right images, and in 3D space); these data were subsequently used as the training set for the framework. As a result, the hierarchical Bayesian framework adequately implements high-level behaviour that emerges from the low-level interaction of simpler building blocks, by using the attentional sets learned for each task, and it can change these attentional sets “on the fly”, allowing the implementation of goal-dependent behaviours (i.e., top-down influences).
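
To make the idea of goal-dependent attentional sets more concrete, the following minimal Python sketch illustrates one plausible reading of the mechanism described above: per-modality saliency maps defined on a common spatial grid are fused under a log-linear model whose weights play the role of an “attentional set”, those weights are fitted from logged fixations, and switching tasks “on the fly” amounts to swapping the weight set used to pick the next fixation. This is an illustrative assumption, not the authors’ implementation; the names (GRID, combined_saliency, fit_attentional_set, next_fixation) and the specific fusion and fitting rules are hypothetical.

```python
# Hypothetical sketch only; not the framework described in the paper.
import numpy as np

GRID = (32, 32)  # common spatial configuration shared by perception and action


def combined_saliency(modal_maps, weights):
    """Log-linear fusion of per-modality saliency maps into one distribution."""
    log_s = np.zeros(GRID)
    for m, s in modal_maps.items():
        log_s += weights.get(m, 0.0) * np.log(s + 1e-9)
    s = np.exp(log_s - log_s.max())
    return s / s.sum()


def next_fixation(modal_maps, weights, rng):
    """Sample the next fixation target from the fused saliency distribution."""
    p = combined_saliency(modal_maps, weights).ravel()
    return np.unravel_index(rng.choice(p.size, p=p), GRID)


def fit_attentional_set(fixations, modal_maps, lr=0.1, steps=200):
    """Gradient ascent on the log-likelihood of logged fixations to recover
    task-specific modality weights (one hypothetical "attentional set")."""
    weights = {m: 1.0 for m in modal_maps}
    for _ in range(steps):
        p = combined_saliency(modal_maps, weights)
        for m, s in modal_maps.items():
            log_s = np.log(s + 1e-9)
            # d/dw_m of sum_f log p(f) = sum_f [log s_m(f) - E_p[log s_m]]
            grad = (sum(log_s[f] for f in fixations)
                    - len(fixations) * (p * log_s).sum())
            weights[m] += lr * grad / len(fixations)
    return weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = {"vision": rng.random(GRID), "audio": rng.random(GRID)}
    # Pretend these fixations were logged during a free-viewing session.
    logged = [tuple(rng.integers(0, 32, size=2)) for _ in range(100)]
    free_viewing_set = fit_attentional_set(logged, maps)
    # Switching behaviours "on the fly" = swapping the parameter set in use.
    print(next_fixation(maps, free_viewing_set, rng))
```

In this toy setting, fitting separate weight sets from fixations logged during free viewing and during active search would yield two different attentional sets, and goal-dependent behaviour would follow simply from which set is handed to the fixation-selection step.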

Keywords

Multisensory active perception · Hierarchical Bayes models · Bioinspired robotics · Human–robot interaction · Emergence · Scalability · Adaptive behaviour

Notes

Acknowledgments

The authors would particularly like to thank, at the Institute of Biomedical Research in Light and Image of the University of Coimbra (IBILI/UC), Prof. Miguel Castelo-Branco, João Castelhano, Carlos Amaral and Marco Simões for their help with the psychophysical experiments.

Conflict of interest

This supplement was not sponsored by outside commercial interests. It was funded entirely by ECONA, Via dei Marsi, 78, 00185 Roma, Italy.


Copyright information

© Marta Olivetti Belardinelli and Springer-Verlag 2012

Authors and Affiliations

  • João Filipe Ferreira (1)
  • Christiana Tsiourti (1)
  • Jorge Dias (1, 2)

  1. ISR, University of Coimbra, Coimbra, Portugal
  2. Khalifa University of Science, Technology and Research (KUSTAR), Abu Dhabi, UAE
