
Self-reinforced Meta Learning for Belief Generation

  • Alexandros Gkiokas
  • Alexandra I. Cristea
  • Matthew Thorpe
Conference paper

Abstract

Contrary to common perception, learning does not stop once knowledge has been transferred to an agent. Intelligent behaviour observed in humans and animals strongly suggests that, after learning, we self-organise our experiences and knowledge so that they can be reused more efficiently; this process is unsupervised and employs reasoning based on the acquired knowledge. Our proposed algorithm emulates meta-learning in silico: it creates beliefs from previously acquired knowledge representations, which in turn become subject to learning and are further self-reinforced. The proposition of meta-learning, in the form of an algorithm that can learn how to create beliefs of its own accord, raises an interesting question: can artificial intelligence arrive at beliefs, rules or ideas similar to the ones we humans come to? The described work briefly analyses existing theories and research, and formalises a practical implementation of a meta-learning algorithm.
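
The abstract describes the algorithm only at a high level. As a rough illustration of the loop it sketches, the following Python snippet forms candidate beliefs by clustering previously acquired knowledge vectors and then self-reinforces each belief's value estimate from a reward signal. This is a minimal sketch under our own assumptions, not the authors' implementation; the names Belief, generate_beliefs and reinforce are hypothetical.

```python
# Minimal sketch of a self-reinforced belief-generation loop.
# All names here (Belief, generate_beliefs, reinforce) are illustrative
# assumptions, not the method described in the paper.

import numpy as np
from dataclasses import dataclass


@dataclass
class Belief:
    centroid: np.ndarray   # prototype of the knowledge vectors it summarises
    value: float = 0.0     # learned estimate of the belief's usefulness


def generate_beliefs(knowledge: np.ndarray, k: int, iters: int = 20) -> list[Belief]:
    """Cluster acquired knowledge vectors (k-means style) into candidate beliefs."""
    rng = np.random.default_rng(0)
    centroids = knowledge[rng.choice(len(knowledge), k, replace=False)]
    for _ in range(iters):
        # assign each knowledge vector to its nearest centroid
        labels = np.argmin(
            np.linalg.norm(knowledge[:, None] - centroids[None, :], axis=2), axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            members = knowledge[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return [Belief(c) for c in centroids]


def reinforce(belief: Belief, reward: float, alpha: float = 0.1) -> None:
    """Self-reinforcement: nudge the belief's value towards the observed reward."""
    belief.value += alpha * (reward - belief.value)


if __name__ == "__main__":
    # stand-in knowledge vectors; in the paper these would come from
    # knowledge representations acquired during prior learning
    knowledge = np.random.default_rng(1).normal(size=(100, 8))
    beliefs = generate_beliefs(knowledge, k=5)
    for b in beliefs:
        reinforce(b, reward=1.0)  # reward would come from reasoning over reused beliefs
    print([round(b.value, 3) for b in beliefs])
```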


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Alexandros Gkiokas (1)
  • Alexandra I. Cristea (1)
  • Matthew Thorpe (2)
  1. Computer Science Department, University of Warwick, Coventry, UK
  2. Mathematics Institute, University of Warwick, Coventry, UK
