Perceptions or Actions? Grounding How Agents Interact Within a Software Architecture for Cognitive Robotics


One of the aims of cognitive robotics is to endow robots with the ability to plan solutions for complex goals and then to enact those plans. Additionally, robots should react properly when they encounter unexpected changes in their environment that are not part of their planned course of action. This requires a close coupling between deliberative and reactive control flows. From the perspective of robotics, this coupling generally entails a tightly integrated perceptuomotor system that is then loosely connected to some specific form of deliberative system, such as a planner. From the high-level perspective of automated planning, the emphasis is on a highly functional system that, taken to its extreme, calls perceptual and motor modules as services only when required. This paper proposes a mechanism for bringing these perceptual and acting perspectives closer within a distributed robotics architecture: a unique representation in which the responses of all software modules are generalized using the same set of tokens, integrating symbolic and metric information. The idea is built on top of the blackboard model and scene graphs. The modules in our proposal communicate through a short-term memory, writing the perceptual information they need to share with other agents and reading the information they need to determine the next goals to address. The approach has been successfully tested in CLARC, a robot that performs Comprehensive Geriatric Assessments of elderly patients. The robot was favourably appraised in a survey conducted to assess its behaviour: for instance, on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree), patients reported an average of 4.86 when asked whether they felt confident during the interaction with the robot.
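The shared short-term memory described above can be pictured as a small blackboard-style graph in which perceptual agents write tokens carrying both symbolic and metric information, and deliberative agents read the same tokens to select goals. The following Python sketch is purely illustrative and is not the authors' implementation; every name in it (`Blackboard`, `write`, `link`, `read`, `patient_1`) is a hypothetical stand-in under that assumption.

```python
class Blackboard:
    """Shared short-term memory: a graph of symbolic nodes with metric attributes."""
    def __init__(self):
        self.nodes = {}    # symbol -> dict mixing symbolic flags and metric data
        self.edges = []    # (source, label, target) symbolic relations

    def write(self, symbol, **attributes):
        # A perceptual agent publishes (or updates) a grounded token.
        self.nodes.setdefault(symbol, {}).update(attributes)

    def link(self, source, label, target):
        # Record a symbolic relation between two tokens.
        self.edges.append((source, label, target))

    def read(self, symbol):
        # A deliberative agent reads the same token to choose its next goal.
        return self.nodes.get(symbol, {})


bb = Blackboard()

# A perception module grounds a detection with metric pose and a symbolic flag...
bb.write("patient_1", kind="person", pose=(1.2, 0.4, 0.0), sitting=True)
bb.link("patient_1", "in_front_of", "robot")

# ...and a planner-side module consumes the very same token symbolically.
next_goal = "start_interview" if bb.read("patient_1").get("sitting") else "wait"
```

Because every module speaks the same token vocabulary, no perception-to-planning translation layer is needed; the blackboard is the single point of integration, as in the scene-graph world models the paper builds on.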


Figs. 1–8 appear in the full text.




Acknowledgements

The authors warmly thank the members of the "Amis du Living Lab" community and the patients and clinicians of Hospital Virgen del Rocío (Seville) for their participation in this research.


Funding

This work has been partially funded by the EU ECHORD++ project (FP7-ICT-601116) and by project RTI2018-099522-B (MICINN and FEDER funds).

Author information



Corresponding author

Correspondence to A. Bandera.

Ethics declarations

Conflict of Interest

Rebeca Marfil, Adrián Romero-Garcés, Juan P. Bandera, Luis J. Manso, Luis V. Calderita, Pablo Bustos, Antonio Bandera, Fernando Fernández and Dimitri Voilmy declare that they have no conflict of interest. Javier García is partially supported by funds from the Comunidad de Madrid (Spain) under research project 2016-T2/TIC-1712.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Marfil, R., Romero-Garces, A., Bandera, J.P. et al. Perceptions or Actions? Grounding How Agents Interact Within a Software Architecture for Cognitive Robotics. Cogn Comput 12, 479–497 (2020).



Keywords

  • Cognitive robotics
  • Automatic planning
  • Software architectures