Intelligence Dynamics: a concept and preliminary experiments for open-ended learning agents

  • Open access
  • Published: 12 February 2009
  • Volume 19, pages 248–271 (2009)

Masahiro Fujita
Abstract

We propose Intelligence Dynamics, a novel approach that aims to realize autonomous developmental intelligence. In contrast with the symbolic approach of conventional Artificial Intelligence, we emphasize two technical features: dynamics and embodiment. The essential conceptual idea of this approach is that an embodied agent interacts with the real world and learns and develops its intelligence as attractors of that dynamic interaction. We develop two computational models: one self-organizes multiple attractors, and the other provides a motivational system for open-ended learning agents. The former is realized by recurrent neural networks with a small humanoid body in the real world, and the latter by hierarchical support vector machines with inverted-pendulum agents in a virtual world. Although these are preliminary experiments, they take important first steps toward demonstrating the feasibility and value of open-ended learning agents under the concept of Intelligence Dynamics.
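The motivational system mentioned in the abstract draws on intrinsic motivation and flow theory: an agent stays engaged with tasks on which its own predictions are still improving. As a rough illustration only (this toy is not the paper's hierarchical-support-vector-machine implementation; the `ProgressDrivenLearner` class and the linear task below are invented for the sketch), learning progress can be turned into an intrinsic reward like this:

```python
class ProgressDrivenLearner:
    """Toy learner whose intrinsic reward is its own learning progress:
    the drop in squared prediction error from one step to the next."""

    def __init__(self, lr=0.1):
        self.weight = 0.0      # one-parameter predictor: x_next ≈ weight * x
        self.lr = lr
        self.prev_error = None

    def step(self, x, x_next):
        """Observe a transition, update the predictor, and return the
        intrinsic reward (previous error minus current error)."""
        error = (x_next - self.weight * x) ** 2
        # plain gradient step on the squared prediction error
        self.weight += self.lr * (x_next - self.weight * x) * x
        reward = 0.0 if self.prev_error is None else self.prev_error - error
        self.prev_error = error
        return reward


# A learnable task: x_next = 0.8 * x. Intrinsic reward is high while the
# predictor is still improving and fades toward zero once the task is
# mastered, so a progress-driven agent would eventually move on.
learner = ProgressDrivenLearner()
inputs = [1.0, -0.5, 0.8, 0.3, -0.9]
rewards = []
for t in range(100):
    x = inputs[t % len(inputs)]
    rewards.append(learner.step(x, 0.8 * x))
```

In the paper's framing, such progress-based reward connects to Csikszentmihalyi's flow theory: engagement is highest when challenge matches skill, which is exactly the regime where prediction error is still falling.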



Acknowledgements

The author would like to thank the former members of Sony Intelligence Dynamics Laboratories, Inc. (SIDL) for their research efforts on the concept and experiments described here. In particular, the author thanks Dr. Toshi T. Doi, the former president of SIDL, for directing the Intelligence Dynamics research, and Hideki Shimomura, Kohtaro Sabe, and Masato Ito for their discussions on this article. The author also thanks Akira Iga, the former president of Sony's Information Technologies Laboratories, for his support in continuing the research on Intelligence Dynamics.


Author information

Authors and Affiliations

  1. System Technologies Laboratories, Sony Corporation, Shinagawa-ku, Tokyo, Japan

    Masahiro Fujita


Corresponding author

Correspondence to Masahiro Fujita.

Rights and permissions

Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


About this article

Cite this article

Fujita, M. Intelligence Dynamics: a concept and preliminary experiments for open-ended learning agents. Auton Agent Multi-Agent Syst 19, 248–271 (2009). https://doi.org/10.1007/s10458-009-9076-y

  • Issue Date: December 2009


Keywords

  • Open-ended
  • Dynamics
  • Embodiment
  • Prediction
  • Intelligence Dynamics
  • Recurrent neural networks
  • Intrinsic motivation
  • Flow theory