
Symbol Grounding Through Cumulative Learning

  • Samarth Swarup
  • Kiran Lakkaraju
  • Sylvian R. Ray
  • Les Gasser
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4211)

Abstract

We suggest that the primary motivation for an agent to construct a symbol-meaning mapping is to solve a task. The meaning space of an agent should therefore be derived from the tasks it faces over the course of its lifetime. We outline a process in which agents learn to solve multiple tasks and extract a store of “cumulative knowledge” that helps them solve each new task more quickly and accurately. This cumulative knowledge then forms the ontology, or meaning space, of the agent. We suggest that by grounding symbols in this extracted cumulative knowledge, agents can gain a further performance benefit, because they can guide each other’s learning processes. In this version of the symbol grounding problem, meanings cannot be communicated directly: they are internal to the agents and differ from agent to agent. Moreover, the meanings need not correspond directly to objects in the environment. The communication process also allows the symbol-meaning mapping to remain dynamic. We posit that these properties make this version of the symbol grounding problem realistic and natural. Finally, we discuss how symbols could be grounded in cumulative knowledge in a setting where a teacher selects tasks for a student to perform.
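The loop the abstract describes (learn tasks, extract shared structure, reuse it on later tasks) can be made concrete with a toy sketch. The Python below is purely illustrative and is not the authors' implementation: it assumes tasks can be modeled as input-to-label dictionaries and treats “cumulative knowledge” as the set of rules consistent across every task seen so far, so a second task that shares structure with the first needs far fewer corrections.

```python
class CumulativeAgent:
    """Toy illustration of cumulative learning across tasks.

    Hypothetical sketch, not the paper's method: a task is a dict
    mapping inputs to labels, and 'knowledge' holds the rules that
    have held in every task encountered so far.
    """

    def __init__(self):
        self.knowledge = {}   # rules consistent across all tasks so far
        self.tasks_seen = 0

    def learn_task(self, task):
        # Seed the hypothesis with cumulative knowledge, so shared
        # structure does not have to be relearned from scratch.
        hypothesis = dict(self.knowledge)
        corrections = 0
        for x, y in task.items():
            if hypothesis.get(x) != y:
                hypothesis[x] = y          # error-driven correction
                corrections += 1
        # Retain only the rules consistent with every task, this one included.
        if self.tasks_seen == 0:
            self.knowledge = dict(hypothesis)
        else:
            self.knowledge = {x: y for x, y in self.knowledge.items()
                              if task.get(x) == y}
        self.tasks_seen += 1
        return corrections


core = {i: i % 3 for i in range(20)}              # structure shared by tasks
task_a = {**core, **{i: 9 for i in range(20, 25)}}
task_b = {**core, **{i: 7 for i in range(25, 30)}}

agent = CumulativeAgent()
print("corrections on task A:", agent.learn_task(task_a))  # 25: all learned
print("corrections on task B:", agent.learn_task(task_b))  # 5: only the novel part
```

In the paper's terms, the entries of `knowledge` would be candidates for the agent's meaning space: each agent extracts its own store from its own task history, so two agents' stores need not coincide, which is why symbols, rather than meanings, must be exchanged.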

Keywords

Recurrent Neural Network · Language Game · Turing Test · Frequent Subgraph · Multitask Learning



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Samarth Swarup 1
  • Kiran Lakkaraju 1
  • Sylvian R. Ray 1
  • Les Gasser 1, 2

  1. Dept. of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, Urbana, USA
