Operations Research Proceedings 2004 pp 319-326
Total Reward Variance in Discrete and Continuous Time Markov Chains
This note studies the variance of the total cumulative reward for Markov reward chains in both discrete and continuous time, and shows that parallel results hold in the two settings.
First, explicit formulae are presented for the variance over a finite horizon. Next, the infinite-horizon case is considered; most notably, the variance is shown to grow at a linear rate. Explicit expressions, related to those of the standard average-reward case, are provided for computing this growth rate.
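The linear growth of the total-reward variance can be illustrated numerically. The sketch below is not taken from the paper: it uses a hypothetical two-state discrete-time Markov reward chain (all transition probabilities and rewards are illustrative) and computes the exact first and second moments of the cumulative reward by a backward recursion over the horizon, from which the variance follows.

```python
import numpy as np

# Hypothetical two-state discrete-time Markov reward chain (illustrative values).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition probability matrix
r = np.array([1.0, 3.0])     # per-step reward in each state

def total_reward_moments(P, r, n):
    """Exact first and second moments of the n-step cumulative reward S_n,
    conditioned on the initial state, via backward recursion:
      E[S_n | i]   = r_i + sum_j P_ij E[S_{n-1} | j]
      E[S_n^2 | i] = r_i^2 + 2 r_i (P m1)_i + (P m2)_i
    """
    m1 = np.zeros(len(r))    # E[S_0] = 0
    m2 = np.zeros(len(r))    # E[S_0^2] = 0
    for _ in range(n):
        Pm1 = P @ m1
        m2 = r**2 + 2.0 * r * Pm1 + P @ m2
        m1 = r + Pm1
    return m1, m2

# Variance of the cumulative reward from state 0 at several horizons;
# for large n the increments are (nearly) constant, i.e. growth is linear.
for n in (100, 200, 300):
    m1, m2 = total_reward_moments(P, r, n)
    print(n, m2[0] - m1[0] ** 2)
```

For this fast-mixing chain the variance increment per 100 steps stabilizes quickly, so the printed values are (up to a geometrically small transient) an arithmetic progression, consistent with the linear growth rate established in the paper.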