The Small Loop Problem: A Challenge for Artificial Emergent Cognition

  • Olivier L. Georgeon
  • James B. Marshall
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 196)

Abstract

We propose the Small Loop Problem as a challenge for biologically inspired cognitive architectures. This challenge consists of designing an agent that would autonomously organize its behavior through interaction with an initially unknown environment that offers basic sequential and spatial regularities. The Small Loop Problem demonstrates four principles that we consider crucial to the implementation of emergent cognition: environment-agnosticism, self-motivation, sequential regularity learning, and spatial regularity learning. While this problem is still unsolved, we report partial solutions that suggest that its resolution is realistic.
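To make the four principles concrete, the following is a minimal sketch of the kind of environment the challenge calls for: a small loop of free cells in which an agent receives only the outcome of each primitive interaction (environment-agnosticism) and is driven by fixed valences attached to interactions (self-motivation). The grid layout, action set, and valence values below are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch of a Small-Loop-style environment (assumed layout and
# action set, not the authors' exact specification).

GRID = [
    "xxxxxx",
    "x....x",
    "x.xx.x",
    "x.xx.x",
    "x....x",
    "xxxxxx",
]  # 'x' = wall, '.' = free cell forming a loop (hypothetical layout)

# Assumed valences attached to primitive interactions: the agent "likes"
# stepping forward, "dislikes" bumping into walls, and pays a small cost
# for turning or feeling ahead. These numbers are placeholders.
VALENCES = {
    ("forward", "step"): 5,
    ("forward", "bump"): -10,
    ("turn_left", "done"): -3,
    ("turn_right", "done"): -3,
    ("feel", "empty"): -1,
    ("feel", "wall"): -1,
}

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left


class SmallLoopEnv:
    def __init__(self):
        # Assumed starting cell and heading.
        self.row, self.col, self.heading = 1, 1, 1

    def _front(self):
        dr, dc = DIRS[self.heading]
        return self.row + dr, self.col + dc

    def step(self, action):
        """Apply a primitive action and return the (action, result) pair,
        which is the only information the agent ever gets about the world."""
        if action == "forward":
            r, c = self._front()
            if GRID[r][c] == ".":
                self.row, self.col = r, c
                return ("forward", "step")
            return ("forward", "bump")
        if action == "turn_left":
            self.heading = (self.heading - 1) % 4
            return ("turn_left", "done")
        if action == "turn_right":
            self.heading = (self.heading + 1) % 4
            return ("turn_right", "done")
        if action == "feel":
            r, c = self._front()
            return ("feel", "empty" if GRID[r][c] == "." else "wall")
        raise ValueError(f"unknown action: {action}")
```

A self-motivated agent facing this environment would have to discover, with no prior model of the grid, sequences of interactions that keep the accumulated valence high (the sequential regularities, e.g. feel-empty followed by forward-step) and how those sequences relate to the loop's geometry (the spatial regularities).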

Keywords

Self-motivation · Decision process · Early-stage cognition



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Olivier L. Georgeon (1, 2)
  • James B. Marshall (3)
  1. Université de Lyon, CNRS, Lyon, France
  2. Université Lyon 1, LIRIS, UMR 5205, Villeurbanne Cedex, France
  3. Sarah Lawrence College, Bronxville, USA
