Cognitive Computation, Volume 2, Issue 3, pp 230–241

Flexible Latching: A Biologically-Inspired Mechanism for Improving the Management of Homeostatic Goals

  • Philipp Rohlfshagen
  • Joanna J. Bryson


Controlling cognitive systems such as domestic robots or intelligent assistive environments requires striking an appropriate balance between responsiveness and persistence. Basic goal arbitration is an essential element of low-level action selection for cognitive systems, necessarily preceding even deliberate control of the direction of attention. In natural intelligence, chemically regulated motivation systems focus an agent’s behavioural attention on one problem at a time. Such simple durative decision states can improve the efficiency of artificial action selection by avoiding dithering, but taken to extremes such systems can be inefficient and produce cognitively implausible results. This article describes and demonstrates an easy-to-implement, general-purpose latching method that balances persistence against flexibility in the presence of interruptions. This appraisal-based system facilitates automatic reassessment of the current focus of attention by existing action-selection mechanisms. The proposed mechanism, flexible latching, drastically improves efficiency in handling multiple competing goals at the cost of a surprisingly small amount of additional code (or cognitive) complexity. We discuss implications of these results for understanding natural cognitive systems.
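The core idea of the abstract — a latched goal that holds the agent's attention yet can be displaced when a rival goal becomes sufficiently urgent — can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' implementation: the class and parameter names (`Goal`, `FlexibleLatch`, `margin`) are invented here, and the hysteresis margin stands in for whatever appraisal criterion the paper actually uses.

```python
# Minimal sketch of flexible latching, under the assumption that each
# homeostatic goal exposes a scalar urgency (e.g. distance of a drive
# from its set point). Strict latching would hold the current goal until
# it is fully satisfied; flexible latching re-latches when a rival goal
# clearly dominates, avoiding dithering while still permitting interruption.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Goal:
    name: str
    urgency: float  # higher means more pressing


class FlexibleLatch:
    def __init__(self, margin: float = 0.2):
        # Hysteresis margin: how much more urgent a rival must be
        # than the currently latched goal before the latch breaks.
        self.margin = margin
        self.current: Optional[Goal] = None

    def select(self, goals: List[Goal]) -> Goal:
        best = max(goals, key=lambda g: g.urgency)
        if self.current is None or best.urgency > self.current.urgency + self.margin:
            self.current = best  # re-latch onto the dominant goal
        return self.current
```

With `margin=0.2`, a goal latched at urgency 0.6 ignores a rival at 0.7 (persistence) but yields to one at 0.9 (flexibility); setting `margin` very high recovers strict latching, and setting it to zero recovers a purely reactive winner-take-all selector.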


Keywords: Action selection · Drives · Modularity · Cognitive architectures



Hagen Lehmann first identified that strict latching seemed inappropriate for modelling primate behaviour; his model provided the initial behaviour code that was extended for this research. The research was conducted by PR based on a suggested fix by JJB. It was first presented at the AAAI 2008 Fall Symposium on Biologically Inspired Cognitive Architectures; we thank the organisers of that meeting, the other participants, and the reviewers for both AAAI and Cognitive Computation. Research funded by the British Engineering and Physical Sciences Research Council (EPSRC) grants GR/S79299/01 and EP/E058884/1.



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. School of Computer Science, University of Birmingham, Edgbaston, Birmingham, UK
  2. Department of Computer Science, University of Bath, Bath, UK