Visual Attention Mechanisms Revisited

  • Cristina Mendoza
  • Pilar Bachiller
  • Antonio Bandera
  • Pablo Bustos
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 855)


Robots are becoming increasingly complex and must support a growing workload that directly affects their performance. To cope with this, they must make better use of their available resources while behaving reliably. The goal of this project is to endow Shelly, the social robot created by RoboLab, with a predictive visual attention system that allows it to maintain an updated internal representation of its environment, providing it with a basic sense of awareness. This capability allows the robot to foresee simple events, react to unexpected situations and integrate changes in the environment into its internal memory. To achieve this level of functionality, we combine overt and covert head movements with an updatable internal model of the environment through a predictive and dynamic attention loop. The system has been developed using the RoboComp framework [21] and the new components have been integrated into the CORTEX cognitive architecture. The implementation is available for public use.
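The attention loop described above can be illustrated with a minimal sketch. This is not the authors' RoboComp/CORTEX implementation; the class and method names are hypothetical. The idea it shows is the predictive cycle: the robot keeps an internal model of object positions, predicts what it should observe, directs attention to the object whose observation deviates most from that prediction (an unseen object counts as maximally surprising), and integrates the new observation into its memory.

```python
import math


class AttentionLoop:
    """Hypothetical sketch of a predictive visual attention loop."""

    def __init__(self):
        # Internal model: object id -> last remembered (x, y) position.
        self.memory = {}

    def predict(self, obj_id):
        # Here the prediction is simply the remembered position; a real
        # system would apply a motion model to extrapolate it.
        return self.memory.get(obj_id)

    def step(self, observations):
        """One attention cycle over observations: dict obj_id -> (x, y).

        Returns the id of the attended object, or None if nothing
        deviated from the internal model.
        """
        target, max_err = None, 0.0
        for obj_id, pos in observations.items():
            pred = self.predict(obj_id)
            # Objects absent from memory are maximally surprising.
            err = math.dist(pred, pos) if pred else float("inf")
            if err > max_err:
                target, max_err = obj_id, err
        if target is not None:
            # Attend to the most surprising object and update memory,
            # keeping the internal representation consistent with the world.
            self.memory[target] = observations[target]
        return target
```

For example, a newly appeared object is attended first; once all objects match their predictions, attention shifts to whichever one has moved.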


Keywords: Visual attention · Predictive system · Spatial position


References

  1. Bachiller, P., Bustos, P., Manso, L.J.: Attentional selection for action in mobile robots. In: Advances in Robotics, Automation and Control, pp. 111–136 (2008)
  2. Bledt, G., Wensing, P., Sangbae, K.: Policy-regularized model predictive control to stabilize diverse quadrupedal gaits for the MIT cheetah. In: IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, pp. 4102–4109 (2017)
  3. Breazeal, C., Scassellati, B.: A context-dependent attention system for a social robot. In: IJCAI International Joint Conference on Artificial Intelligence, San Francisco, CA, USA, vol. 2, pp. 1146–1151 (1999)
  4. Bridewell, W., Bello, P.F.: Incremental object perception in an attention-driven cognitive architecture. In: Proceedings of the 37th Annual Meeting of the Cognitive Science Society, Atlanta, Georgia, pp. 279–284 (2015)
  5. Bruce, N., Tsotsos, J.: Attention based on information maximization. J. Vis. 7, 950–952 (2007)
  6. Calderita, L.V.: Deep state representation: a unified internal representation for the robotics cognitive architecture CORTEX. Master's thesis, University of Extremadura, Cáceres, Spain (2016)
  7. Carpenter, R.H.S.: Movements of the Eyes, 2nd edn. Pion Limited, London (1988)
  8. Clark, A.: Surfing Uncertainty. Oxford University Press, England (2016)
  9. Danks, D.: Unifying the Mind. MIT Press, Massachusetts (2014)
  10. Deutsch, S.E., Macmillan, J., Camer, M.L., Chopra, S.: Operability model architecture: demonstration final report. Technical Report AL/HR-TR-1996-0161 (1997)
  11. Fischer, B., Breitmeyer, B.: Mechanisms of visual attention revealed by saccadic eye movement. Pergamon Journals Ltd (1987)
  12. Fox, D., Burgard, W., Thrun, S.: The dynamic window approach to collision avoidance. Robot. Autom. Mag. 4 (1997)
  13. Gore, B.F., Hooey, B.L., Wickens, C.D., Scott-Nash, S.: A computational implementation of a human attention guiding mechanism in MIDAS v5. In: International Conference on Digital Human Modelling, California, USA (2009)
  14. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)
  15. Kawamura, K., Dodd, W., Ratanaswasd, P., Gutiérrez, R.A.: Development of a robot with a sense of self. In: IEEE International Symposium on Computational Intelligence in Robotics and Automation, Espoo, Finland (2005)
  16. Kieras, D.E., Wakefield, G.H., Thompson, E.R., Iyer, N., Simpson, B.D.: Modeling two-channel speech processing with the EPIC cognitive architecture. Top. Cognit. Sci. 8, 291–304 (2016)
  17. Kotseruba, I.: Visual attention in dynamic environments and its application to playing online games. Master's thesis, York University, Toronto, Canada (2016)
  18. Kotseruba, I., Tsotsos, J.K.: A review of 40 years in cognitive architecture research: core cognitive abilities and practical applications. arXiv preprint (2018)
  19. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. (IJCV) 60, 91–110 (2004)
  20. Mancas, M., Ferrera, V.P., Riche, N., Taylor, J.G.: From Human Attention to Computational Attention: A Multidisciplinary Approach, 1st edn. Springer, Crete (2015)
  21. Manso, L., Bachiller, P., Bustos, P., Núñez, P., Cintas, R., Calderita, L.: RoboComp: a tool-based robotics framework. In: SIMPAR. LNCS, vol. 6472, pp. 251–262. Springer (2010)
  22. Manso, L.J., Bustos, P., Bachiller, P.: Multi-cue visual obstacle detection for mobile robots. J. Phys. Agents 4, 3–10 (2010)
  23. Manso, L.J., Bustos, P., Bachiller, P., Franco, J.: Indoor scene perception for object detection and manipulation. In: 5th International Conference Symposium on Spatial Cognition in Robotics, Rome, Italy (2012)
  24. Manso, L.J., Gutiérrez, M., Bustos, P., Bachiller, P.: Integrating planning perception and action for informed object search. Cognit. Process. 19, 285–296 (2018)
  25. Mathews, Z., Bermúdez i Badia, S., Verschure, P.: PASAR: an integrated model of prediction, anticipation, sensation, attention and response for artificial sensorimotor systems. Inf. Sci. 186, 1–19 (2012)
  26. Nyamsuren, E., Taatgen, N.A.: Pre-attentive and attentive vision module. Cognit. Syst. Res. 211–216 (2013)
  27. Pahlavan, K.: Active Robot Vision and Primary Ocular Processes, 1st edn. Royal Institute of Technology Stockholm, Computational Vision and Active Perception Laboratory (CVAP), Sweden (1993)
  28. Palomino, A., Marfil, R., Bandera, J.P., Bandera, A.J.: A novel biologically inspired attention mechanism for a social robot. EURASIP J. Adv. Sig. Process. 1–10 (2011)
  29. Purves, D., Augustine, G., Fitzpatrick, D., Hall, W., Lamantia, A., Mcnamara, J., Williams, S.: Neuroscience, 3rd edn. Sinauer Associates (2004)
  30. Redmon, J.: YOLO: real-time object detection (2018)
  31. Ruesch, J., Lopes, M., Bernardino, A., Hornstein, J., Santos-Victor, J., Pfeifer, R.: Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub. In: Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, pp. 962–967 (2008)
  32. Steinman, S., Steinman, B.: Models of the Visual System. Topics in Biomedical Engineering International Book Series. Springer, Boston (2002)
  33. Um, D., Gutiérrez, M.A., Bustos, P., Kang, S.: Simultaneous planning and mapping (SPAM) for a manipulator by best next move in unknown environments. Tokyo, Japan, pp. 5273–5278. IEEE (2013)
  34. Vega, A., Manso, L.J., Macharet, D.G., Bustos, P., Núñez, P.: A new strategy based on an adaptive spatial density function for social robot navigation in human-populated environments. In: REACTS Workshop at the International Conference on Computer Analysis and Patterns, CAIP, Ystad Saltsjöbad (2017)
  35. Wolfe, J.M.: Guided search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Cristina Mendoza¹
  • Pilar Bachiller¹
  • Antonio Bandera²
  • Pablo Bustos¹
  1. RoboLab, Universidad de Extremadura, Cáceres, Spain
  2. Universidad de Málaga, Málaga, Spain
