Growing Recursive Self-Improvers

  • Bas R. Steunebrink
  • Kristinn R. Thórisson
  • Jürgen Schmidhuber
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9782)

Abstract

Research into the capability of recursive self-improvement typically only considers pairs of ⟨agent, self-modification candidate⟩, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (expai), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, expai systems force us to consider their education in order to control a system’s growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of expai looks like and build towards a “test theory” that allows us to gauge an agent’s level of understanding of educational material.
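
To make the contrast concrete, the following minimal Python sketch (hypothetical, not taken from the paper; all names are illustrative) separates the question usually studied, whether a given ⟨agent, self-modification candidate⟩ pair is beneficial and safe, from the question the abstract argues is more important, namely how candidate self-modifications are generated from experience and education in the first place.

    # Hypothetical sketch, not from the paper: illustrates the two questions
    # the abstract distinguishes. All names below are illustrative.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Agent:
        program: Callable[[str], str]  # placeholder for the agent's current code/policy

    @dataclass
    class Modification:
        description: str
        apply: Callable[[Agent], Agent]  # produces a modified copy of the agent

    def is_beneficial_and_safe(agent: Agent, candidate: Modification) -> bool:
        """The question typically studied: given the pair <agent, candidate>,
        can benefit and safety be determined or proven (e.g. by proof search)?"""
        raise NotImplementedError

    def generate_candidates(agent: Agent, experience: Iterable[str]) -> Iterable[Modification]:
        """The question the abstract highlights: where do candidate
        self-modifications come from? In the expai view, growth is driven by
        accumulated experience and education rather than by proof-theoretical
        evaluation alone."""
        raise NotImplementedError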

Keywords

Test Theory · Formal Verification · Proof Search · Intrinsic Goal · Natural Language Understanding

Acknowledgments

The authors would like to thank Eric Nivel and Klaus Greff for seminal discussions and helpful critique. This work has been supported by a grant from the Future of Life Institute.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Bas R. Steunebrink (1)
  • Kristinn R. Thórisson (2, 3)
  • Jürgen Schmidhuber (1)
  1. The Swiss AI Lab IDSIA, USI and SUPSI, Manno, Switzerland
  2. Center for Analysis and Design of Intelligent Agents, Reykjavik University, Reykjavik, Iceland
  3. Icelandic Institute for Intelligent Machines, Reykjavik, Iceland
