General Discounting Versus Average Reward

  • Marcus Hutter
Conference paper

DOI: 10.1007/11894841_21

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)
Cite this paper as:
Hutter M. (2006) General Discounting Versus Average Reward. In: Balcázar J.L., Long P.M., Stephan F. (eds) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol 4264. Springer, Berlin, Heidelberg

Abstract

Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to ∞ (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that asymptotically U for m→∞ and V for k→∞ are equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then the existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then existence of the limit of V implies that the limit of U exists.
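For orientation, the two values compared in the abstract can be sketched in standard notation (a sketch only; the exact symbols and normalisation used in the paper may differ). For a bounded reward sequence r_1, r_2, ... and a summable discount sequence γ_k, γ_{k+1}, ...:

    U_{1m} := \frac{1}{m} \sum_{i=1}^{m} r_i   % average value over cycles 1..m

    V_{k\gamma} := \frac{1}{\Gamma_k} \sum_{i=k}^{\infty} \gamma_i r_i , \qquad \Gamma_k := \sum_{i=k}^{\infty} \gamma_i   % discounted value from cycle k on

The "effective horizon" at cycle k can be read as the number of further cycles h after which the remaining discount mass has halved, i.e. the smallest h with \Gamma_{k+h} \le \tfrac{1}{2}\Gamma_k. For instance, geometric discounting \gamma_i = \gamma^i gives a constant effective horizon, whereas quadratic discounting \gamma_i = 1/i^2 gives \Gamma_k \approx 1/k and hence an effective horizon growing linearly with k, the boundary case for the two implications stated above.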


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Marcus Hutter
    1. IDSIA / RSISE / ANU / NICTA
