Autonomous Agents and Multi-Agent Systems, Volume 19, Issue 3, pp 248–271

Intelligence Dynamics: a concept and preliminary experiments for open-ended learning agents

Open Access


We propose a novel approach to realizing autonomous developmental intelligence, which we call Intelligence Dynamics. In contrast to the symbolic approach of conventional Artificial Intelligence, we emphasize two technical features: dynamics and embodiment. The essential conceptual idea is that an embodied agent interacts with the real world, and its intelligence learns and develops as attractors of that dynamic interaction. We develop two computational models: one self-organizes multiple attractors, and the other provides a motivational system for open-ended learning agents. The former is realized by recurrent neural networks with a small humanoid body in the real world; the latter is realized by hierarchical support vector machines with inverted-pendulum agents in a virtual world. Although these are preliminary experiments, they take important first steps towards demonstrating the feasibility and value of open-ended learning agents built on the concept of Intelligence Dynamics.
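The abstract only summarizes the motivational system, but the cited intrinsic-motivation literature (e.g. Kaplan and Oudeyer, reference 18) and the flow-theory keyword suggest a reward based on learning progress: activities are rewarding while prediction error is still decreasing, and lose their appeal once they are mastered or unlearnable. The sketch below is an illustrative formalization under that assumption, not the authors' actual model; the function name and window size are invented for the example.

```python
import numpy as np

def learning_progress_reward(errors, window=5):
    """Intrinsic reward as the recent decrease in prediction error.

    `errors` is a chronological list of a predictor's errors on some
    activity. The reward is positive while the agent is still improving
    and falls to zero once the error plateaus (mastered or unlearnable),
    loosely matching flow theory's "not too easy, not too hard" band.
    """
    if len(errors) < 2 * window:
        return 0.0  # too little history to estimate progress
    older = np.mean(errors[-2 * window:-window])   # error a while ago
    recent = np.mean(errors[-window:])             # error now
    return max(0.0, older - recent)                # positive only if improving

# An agent whose prediction error is steadily dropping is rewarded...
improving = [1.0 / (t + 1) for t in range(20)]
assert learning_progress_reward(improving) > 0
# ...while one stuck at an irreducible error level is not.
stuck = [0.5] * 20
assert learning_progress_reward(stuck) == 0.0
```

An agent that selects the activity with the highest such reward naturally cycles through skills of intermediate difficulty, which is one common reading of how flow theory can drive open-ended learning.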


Keywords: Open-ended · Dynamics · Embodiment · Prediction · Intelligence Dynamics · Recurrent neural networks · Intrinsic motivation · Flow theory


References

  1. Allen, G. I., & Tsukahara, N. (1974). Cerebrocerebellar communication systems. Physiological Reviews, 54, 957–1006.
  2. Asada, M., MacDorman, K. F., Ishiguro, H., & Kuniyoshi, Y. (2001). Cognitive developmental robotics as a new paradigm for designing humanoid robots. Robotics and Autonomous Systems, 37, 185–193. doi:10.1016/S0921-8890(01)00157-9
  3. Barto, A. G., Singh, S., & Chentanez, N. (2004). Intrinsically motivated learning of hierarchical collection of skills. In Proceedings of the 3rd international conference on developmental learning (ICDL), San Diego, CA (pp. 112–119).
  4. Bentivegna, D. C., Atkeson, C. G., & Cheng, G. (2004). Learning tasks from observation and practice. Robotics and Autonomous Systems, 47(2–3), 163–169.
  5. Bentivegna, D. C., Ude, A., Atkeson, C. G., & Cheng, G. (2002). Humanoid robot learning and game playing using PC-based vision. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, Lausanne, Switzerland (pp. 2449–2454).
  6. Bernstein, N. (1967). The coordination and regulation of movements. Oxford: Pergamon Press.
  7. Borg, I., & Groenen, P. (1997). Modern multidimensional scaling: Theory and applications. New York: Springer.
  8. Charniak, E., & McDermott, D. (1985). Introduction to artificial intelligence. Reading, MA: Addison-Wesley.
  9. Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper and Row.
  10. Donald, M. (1991). Origins of the modern mind. Cambridge, MA: Harvard University Press.
  11. Doya, K., Samejima, K., Katagiri, K., & Kawato, M. (2002). Multiple model-based reinforcement learning. Neural Computation, 14(6), 1347–1369. doi:10.1162/089976602753712972
  12. Fujita, M., Kuroki, Y., Ishida, T., & Doi, T. T. (2003). Autonomous behavior control architecture of entertainment humanoid robot SDR-4X. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, Las Vegas, NV (pp. 960–967).
  13. Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335–346. doi:10.1016/0167-2789(90)90087-6
  14. Haruno, M., Wolpert, D. M., & Kawato, M. (2001). MOSAIC model for sensorimotor learning and control. Neural Computation, 13, 2201–2220. doi:10.1162/089976601750541778
  15. Inamura, T., Nakamura, F., & Toshima, I. (2004). Embodied symbol emergence based on mimesis theory. The International Journal of Robotics Research, 23(4), 363–377. doi:10.1177/0278364904042199
  16. Ito, M., Noda, K., Hoshino, Y., & Tani, J. (2006). Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model. Neural Networks, 19(3), 323–337.
  17. Jordan, M. I., & Rumelhart, D. E. (1992). Forward models: Supervised learning with a distal teacher. Cognitive Science, 16, 307–354.
  18. Kaplan, F., & Oudeyer, P.-Y. (2003). Motivational principles for visual know-how development. In Proceedings of the 3rd international workshop on epigenetic robotics, Edinburgh, Scotland (pp. 73–80).
  19. Kohonen, T. (1997). Self-organizing maps. New York: Springer-Verlag.
  20. Kuniyoshi, Y., Ohmura, Y., Terada, K., Nagakubo, A., Eitoku, S., & Yamamoto, T. (2004). Embodied basis of invariant features in execution and perception of whole body dynamic actions: Knacks and focuses of roll-and-rise motion. Robotics and Autonomous Systems, 48(4), 189–201. doi:10.1016/j.robot.2004.07.004
  21. Ma, J., Theiler, J., & Perkins, S. (2003). Accurate on-line support vector regression. Neural Computation, 15(11), 2683–2703. doi:10.1162/089976603322385117
  22. Minamino, K. (2008). Intelligence model organized by rich experience (in Japanese). In Intelligence dynamics (Vol. 3). Japan: Springer.
  23. Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126. doi:10.1145/360018.360022
  24. Noda, K., Ito, M., Hoshino, Y., & Tani, J. (2006). Dynamic generation and switching of object handling behaviors by a humanoid robot using a recurrent neural network model. In Proceedings of simulation of adaptive behavior (SAB'06), Rome, Italy. Lecture Notes in Artificial Intelligence (Vol. 4095, pp. 185–196).
  25. Pfeifer, R., & Scheier, C. (1999). Understanding intelligence. Cambridge, MA: MIT Press.
  26. Reed, E. S. (1997). From soul to mind: The emergence of psychology, from Erasmus Darwin to William James. New Haven, CT: Yale University Press.
  27. Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141. doi:10.1016/0926-6410(95)00038-0
  28. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing. Cambridge, MA: MIT Press.
  29. Russell, S., & Norvig, P. (2002). Artificial intelligence: A modern approach. Englewood Cliffs, NJ: Prentice Hall.
  30. Sabe, K. (2005). A proposal of intelligence model: MINDY (in Japanese). In Intelligence dynamics (Vol. 2). Japan: Springer.
  31. Sabe, K., Hidai, K., Kawamoto, K., & Suzuki, H. (2006). A proposal for intelligence model, MINDY for open ended learning system. In Proceedings of the international workshop on intelligence dynamics at IEEE/RSJ Humanoids, Genova, Italy.
  32. Schölkopf, B., & Smola, A. J. (2001). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press.
  33. Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
  34. Tani, J. (2003). Learning to generate articulated behavior through the bottom-up and the top-down interaction process. Neural Networks, 16(1), 11–23. doi:10.1016/S0893-6080(02)00214-9
  35. Vijayakumar, S., & Schaal, S. (2000). LWPR: An O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the seventeenth international conference on machine learning (ICML 2000), Stanford, CA (pp. 1079–1086).

Copyright information

© The Author(s) 2009

Authors and Affiliations

  1. System Technologies Laboratories, Sony Corporation, Shinagawa-ku, Tokyo, Japan