Total Reward Variance in Discrete and Continuous Time Markov Chains

  • Karel Sladký
  • Nico M. van Dijk
Part of the Operations Research Proceedings book series (ORP, volume 2004)

Abstract

This note studies the variance of total cumulative rewards for Markov reward chains in both discrete and continuous time. It is shown that parallel results can be obtained for both cases.

First, explicit formulae are presented for the variance over a finite time horizon. Next, the infinite time horizon is considered. Most notably, it is shown that the variance grows linearly in the length of the horizon. Explicit expressions, analogous to those for the standard average reward, are provided to compute this growth rate.
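The linear growth of the variance can be observed empirically by Monte Carlo simulation. The sketch below uses a hypothetical two-state discrete-time Markov reward chain (the transition matrix `P` and reward vector `r` are illustrative, not taken from the paper); doubling the horizon should roughly double the sample variance of the total reward.

```python
import random

# Hypothetical 2-state Markov reward chain: P is the transition matrix,
# r gives the reward earned per step in each state (illustrative values).
P = [[0.9, 0.1],
     [0.2, 0.8]]
r = [1.0, 3.0]

def simulate_total_reward(n_steps, rng, start=0):
    """Simulate one trajectory of n_steps and return the cumulative reward."""
    state, total = start, 0.0
    for _ in range(n_steps):
        total += r[state]
        state = 0 if rng.random() < P[state][0] else 1
    return total

def variance_estimate(n_steps, n_runs=20000, seed=42):
    """Sample variance of the total reward over n_runs independent runs."""
    rng = random.Random(seed)
    samples = [simulate_total_reward(n_steps, rng) for _ in range(n_runs)]
    mean = sum(samples) / n_runs
    return sum((x - mean) ** 2 for x in samples) / (n_runs - 1)

if __name__ == "__main__":
    # The estimated variance grows roughly linearly in the horizon length.
    for n in (100, 200, 400):
        print(n, variance_estimate(n))
```

The ratio of successive variance estimates approaching 2 as the horizon doubles is the finite-sample signature of the linear growth rate established in the paper.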

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Karel Sladký
    1. Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Praha 8, Czech Republic
  • Nico M. van Dijk
    2. Department of Economic Sciences and Econometrics, University of Amsterdam, Amsterdam, The Netherlands