Empirical Comparison of Uniformization Methods for Continuous-Time Markov Chains
Computation of transient state occupancy probabilities of continuous-time Markov chains is important for evaluating many performance, dependability, and performability models. A number of numerical methods have been developed to perform this computation, including ordinary differential equation solution methods and uniformization. The performance of these methods degrades as the highest departure rate in the chain increases with respect to a fixed time point. A new variant of uniformization, called adaptive uniformization (AU), has been proposed that can potentially avoid such degradation when several state transitions must occur before a state with a high departure rate is reached. However, in general, AU has a higher time complexity than standard uniformization, and it is not clear, without an implementation, when AU will be advantageous. This paper presents the results of three different AU implementations, differing in the method by which the "jump probabilities" are calculated. To evaluate the methods relative to standard uniformization, a C++ class was developed to compute a bound on the round-off error incurred by each implementation, as well as to count the number of arithmetic instructions that must be performed, categorized both by operation type and by the phase of the algorithm to which they belong. An extended machine-repairman reliability model is solved to illustrate use of the class and to compare the adaptive uniformization implementations with standard uniformization. Results show that for certain models and mission times, adaptive uniformization can realize significant efficiency gains relative to standard uniformization, while maintaining the stability of standard uniformization.
Keywords: Layered Uniformization; Uniformization Method; Discrete Time Markov Chain; Birth Process; Standard Uniformization