
Abstract

For an agent to achieve its objectives, make sound decisions, and communicate and collaborate effectively with others, it must have high-quality representations. Representations can encapsulate objects, situations, experiences, decisions, and behavior, to name just a few. Since our interest is in designing high-quality representations, it makes sense to ask of any representation: what does it represent, why is it represented, how is it represented, and, importantly, how well is it represented? This paper identifies the need to develop a better understanding of the grounding process as key to answering these important questions. The lack of a comprehensive understanding of grounding is a major obstacle in the quest to develop genuinely intelligent systems that can construct their own representations as they pursue their objectives. We develop an innovative framework that provides a powerful tool for describing, dissecting, and inspecting grounding capabilities, with the flexibility needed to conduct meaningful and insightful analysis and evaluation. The framework is based on a set of clearly articulated principles and has three main applications. First, it can be used at both theoretical and practical levels to analyze the grounding capabilities of a single system and to evaluate its performance. Second, it can be used to conduct comparative analysis and evaluation of grounding capabilities across a set of systems. Third, it offers a practical guide for the design and construction of high-performance systems with effective grounding capabilities.
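The four questions the abstract asks of a representation (what, why, how, and how well it represents) can be illustrated with a toy sketch. This is not the paper's formal framework; the class name `RepresentationProfile`, its fields, and the quality criteria are all hypothetical, chosen only to show how the four dimensions might be recorded and scored for a single representation.

```python
from dataclasses import dataclass, field

@dataclass
class RepresentationProfile:
    """Toy record of the four questions asked of a representation.

    Illustrative only: the field names and scoring scheme are
    assumptions, not the framework defined in the paper.
    """
    what: str                 # what is represented (object, situation, decision, ...)
    why: str                  # why it is represented (the objective it serves)
    how: str                  # how it is represented (encoding/mechanism)
    quality_scores: dict = field(default_factory=dict)  # how well, per criterion in [0, 1]

    def overall_quality(self) -> float:
        """Average the per-criterion scores; 0.0 if none recorded."""
        if not self.quality_scores:
            return 0.0
        return sum(self.quality_scores.values()) / len(self.quality_scores)

# Hypothetical example in a robot-soccer setting:
ball = RepresentationProfile(
    what="soccer ball position",
    why="intercept the ball",
    how="egocentric polar coordinates derived from vision",
    quality_scores={"accuracy": 0.9, "timeliness": 0.7},
)
print(round(ball.overall_quality(), 2))  # 0.8
```

Keeping "how well" as a set of named criteria rather than a single number mirrors the abstract's emphasis that evaluation, whether of one system or across several, should be inspectable dimension by dimension.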



Author information

Corresponding author

Correspondence to Mary-Anne Williams.


About this article

Cite this article

Williams, MA., McCarthy, J., Gärdenfors, P. et al. A grounding framework. Auton Agent Multi-Agent Syst 19, 272–296 (2009). https://doi.org/10.1007/s10458-009-9082-0

