Making a Reinforcement Learning Agent Believe

  • Klaus Häming
  • Gabriele Peters
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7552)

Abstract

We recently explored the benefits of a reinforcement learning agent that is supplemented by a symbolic learning level, represented in the form of Spohn's ranking functions. In this context, we discuss the creation of symbolic rules from a Q-function. We explore several alternatives and show that the rule-generation method greatly influences the agent's performance. We provide empirical evidence about which approach to favor. Additionally, the rules created in the considered application are shown to be plausible and understandable.
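The paper's core step, turning a numeric Q-function into symbolic rules, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' algorithm: it assumes a tabular Q-function and emits one "IF state THEN greedy action" rule per state; the states, actions, and Q-values below are invented for the example.

```python
def extract_rules(q_table):
    """Hypothetical rule extraction (not the paper's method):
    q_table maps (state, action) pairs to Q-values; for each state
    we keep the greedy (highest-valued) action as a symbolic rule."""
    best = {}
    for (state, action), value in q_table.items():
        if state not in best or value > best[state][1]:
            best[state] = (action, value)
    # One rule per state: IF <state> THEN <greedy action>.
    return {state: action for state, (action, _) in best.items()}

# Invented toy Q-table for a grid-world-style agent.
q = {
    ("wall_ahead", "forward"): -1.0,
    ("wall_ahead", "turn"): 0.5,
    ("clear", "forward"): 1.0,
    ("clear", "turn"): 0.0,
}
rules = extract_rules(q)
# rules == {"wall_ahead": "turn", "clear": "forward"}
```

The paper's point is that this translation step admits several alternatives (e.g. how states are generalized into rule conditions), and that the choice measurably affects the agent's performance.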

Keywords

ranking functions · belief revision · reinforcement learning · hybrid learning architecture

References

  1. Alchourrón, C.E., Gärdenfors, P., Makinson, D.: On the logic of theory change: Partial meet contraction and revision functions. J. Symbolic Logic 50(2), 510–530 (1985)
  2. Gombert, J.E.: Implicit and explicit learning to read: Implication as for subtypes of dyslexia. Current Psychology Letters 1(10) (2003)
  3. Häming, K., Peters, G.: A hybrid learning system for object recognition. In: 8th International Conference on Informatics in Control, Automation, and Robotics (ICINCO 2011), Noordwijkerhout, The Netherlands, July 28-31 (2011)
  4. Häming, K., Peters, G.: Improved revision of ranking functions for the generalization of belief in the context of unobserved variables. In: International Conference on Neural Computation Theory and Applications (NCTA 2011), October 24-26 (2011)
  5. Häming, K., Peters, G.: Ranking Functions in Large State Spaces. In: Iliadis, L., Maglogiannis, I., Papadopoulos, H. (eds.) EANN/AIAI 2011. IFIP AICT, vol. 364, pp. 219–228. Springer, Heidelberg (2011)
  6. Robinson, J.A., Voronkov, A. (eds.): Handbook of Automated Reasoning (in 2 volumes). Elsevier and MIT Press (2001)
  7. Ryman-Tubb, N.F., Krause, P.: Neural Network Rule Extraction to Detect Credit Card Fraud. In: Iliadis, L., Jayne, C. (eds.) EANN/AIAI 2011. IFIP AICT, vol. 363, pp. 101–110. Springer, Heidelberg (2011)
  8. Spohn, W.: Ordinal conditional functions: A dynamic theory of epistemic states. In: Causation in Decision, Belief Change and Statistics, pp. 105–134 (August 1988)
  9. Spohn, W.: Ranking functions, AGM style. Internet Festschrift for Peter Gärdenfors (1999)
  10. Spohn, W.: A survey of ranking theory. In: Degrees of Belief. Springer (2009)
  11. Sun, R., Terry, C., Slusarz, P.: The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review 112, 159–192 (2005)
  12. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  13. Tadepalli, P., Givan, R., Driessens, K.: Relational reinforcement learning: An overview. In: Proceedings of the ICML 2004 Workshop on Relational Reinforcement Learning (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Klaus Häming (1)
  • Gabriele Peters (1)
  1. University of Hagen - Human-Computer Interaction, Hagen, Germany