Autonomous Agents and Multi-Agent Systems

Volume 19, Issue 3, pp 272–296

A grounding framework

  • Mary-Anne Williams
  • John McCarthy
  • Peter Gärdenfors
  • Christopher Stanton
  • Alankar Karol


In order for an agent to achieve its objectives, make sound decisions, and communicate and collaborate with others effectively, it must have high-quality representations. Representations can encapsulate objects, situations, experiences, decisions, and behavior, to name a few. Our interest is in designing high-quality representations; it therefore makes sense to ask of any representation: what does it represent, why is it represented, how is it represented, and, importantly, how well is it represented? This paper identifies the need to develop a better understanding of the grounding process as the key to answering these important questions. The lack of a comprehensive understanding of grounding is a major obstacle in the quest to develop genuinely intelligent systems that can make their own representations as they seek to achieve their objectives. We develop an innovative framework that provides a powerful tool for describing, dissecting, and inspecting grounding capabilities, with the flexibility needed to conduct meaningful and insightful analysis and evaluation. The framework is based on a set of clearly articulated principles and has three main applications. First, it can be used at both theoretical and practical levels to analyze the grounding capabilities of a single system and to evaluate its performance. Second, it can be used to conduct comparative analysis and evaluation of grounding capabilities across a set of systems. Third, it offers a practical guide to assist the design and construction of high-performance systems with effective grounding capabilities.


Keywords: Knowledge representation · Cognitive robotics · Grounding · Perception · Artificial intelligence





Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Mary-Anne Williams (1)
  • John McCarthy (2)
  • Peter Gärdenfors (3)
  • Christopher Stanton (1)
  • Alankar Karol (1)

  1. Innovation and Enterprise Research Laboratory, University of Technology, Sydney, Australia
  2. Department of Computer Science, Stanford University, Stanford, USA
  3. Lund University Cognitive Science, Lund, Sweden
