Encyclopedia of Operations Research and Management Science

2001 Edition
Editors: Saul I. Gass, Carl M. Harris

Markov processes

Douglas R. Miller
Reference work entry
DOI: https://doi.org/10.1007/1-4020-0611-X_582

INTRODUCTION

A Markov process is a stochastic process {X(t), t ∈ T} with state space S and time domain T that satisfies the Markov property. The Markov property is also known as lack of memory. For a general stochastic process, probabilities of behavior at future times may depend on the behavior of the process at past times. The Markov property means that probabilities of future events are completely determined by the present state of the process: if the current state is known, the past behavior of the process provides no additional information for determining the probabilities of future events. Mathematically, the process {X(t), t ∈ T} is Markov if, for any n > 0, any t_1 < t_2 < ... < t_n < t_{n+1} in the time domain T, any states x_1, x_2, ..., x_n, and any set A in the state space S,

P{X(t_{n+1}) ∈ A | X(t_1) = x_1, ..., X(t_n) = x_n} = P{X(t_{n+1}) ∈ A | X(t_n) = x_n}.
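For concreteness, the following minimal sketch simulates a discrete-time Markov chain on the finite state space S = {0, 1, 2} and checks the lack-of-memory property empirically. The transition matrix P is hypothetical, chosen only for illustration: whenever the chain occupies state 1, the next state is drawn from row P[1], regardless of how the chain arrived at state 1.

    import random

    # Hypothetical transition matrix on S = {0, 1, 2}; row i gives the
    # distribution of the next state when the current state is i.
    P = [
        [0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.4, 0.4, 0.2],
    ]

    def step(state):
        """Sample the next state; the distribution depends only on `state`."""
        u, acc = random.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if u < acc:
                return j
        return len(P) - 1  # guard against floating-point round-off

    def simulate(x0, n):
        """Return a sample path X(0), X(1), ..., X(n)."""
        path = [x0]
        for _ in range(n):
            path.append(step(path[-1]))
        return path

    # Empirical check of lack of memory: among times t with X(t) = 1, the
    # frequency of X(t+1) = 2 should be near P[1][2] = 0.3, independent of
    # the path by which the chain reached state 1.
    random.seed(0)
    path = simulate(0, 200_000)
    hits = [t for t in range(len(path) - 1) if path[t] == 1]
    freq = sum(path[t + 1] == 2 for t in hits) / len(hits)
    print(f"estimated P(X(t+1) = 2 | X(t) = 1) = {freq:.3f}  (true value 0.3)")

The same idea carries over to continuous time, where holding times in each state are exponentially distributed; the discrete-time case suffices to illustrate the defining property.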


Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

Douglas R. Miller, George Mason University, Fairfax, USA