Approximating Markov Processes by Averaging

  • Conference paper
Automata, Languages and Programming (ICALP 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5556)

Abstract

We take a dual view of Markov processes – advocated by Kozen – as transformers of bounded measurable functions. We redevelop the theory of labelled Markov processes from this viewpoint; in particular, we explore approximation theory. We obtain three main results:

(i) It is possible to define bisimulation on general measure spaces and show that it is an equivalence relation. The logical characterization of bisimulation can be done straightforwardly and generally.

(ii) A new and flexible approach to approximation, based on averaging, can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximations.

(iii) It is possible to show that there is a minimal bisimulation-equivalent process, obtained as the limit of the finite approximants.
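
As a rough illustration of the averaging idea in the simplest discrete setting, the sketch below shows a finite analogue in Python. The function names (markov_operator, conditional_expectation, averaged_kernel), the reference measure mu, and the finite-state framing are our own simplification for exposition, not the paper's construction, which works with conditional expectations on general measure spaces. Viewing a finite kernel P dually as the operator (Pf)(x) = sum_y P(x, y) f(y) on functions, approximation by averaging amounts to composing this operator with the conditional expectation onto functions that are constant on the blocks of a finite partition:

    import numpy as np

    # Dual (Kozen-style) view on a finite state space: the transition matrix P
    # acts on real-valued functions f by (Pf)(x) = sum_y P[x, y] * f(y).
    def markov_operator(P, f):
        return P @ f

    # Averaging: project f onto functions constant on each block of a finite
    # partition, i.e. the conditional expectation of f given the partition,
    # taken with respect to the reference measure mu.
    def conditional_expectation(f, blocks, mu):
        g = np.empty_like(f, dtype=float)
        for block in blocks:
            idx = np.asarray(block)
            weights = mu[idx] / mu[idx].sum()   # normalised mass within the block
            g[idx] = np.dot(weights, f[idx])    # one averaged value per block
        return g

    # The approximated process: apply the kernel, then average.  Restricted to
    # block-constant functions this composite is a smaller kernel on the blocks.
    def averaged_kernel(P, blocks, mu):
        k = len(blocks)
        Q = np.zeros((k, k))
        for i, bi in enumerate(blocks):
            wi = mu[bi] / mu[bi].sum()
            for j, bj in enumerate(blocks):
                # mass moved from the (mu-averaged) block i into block j
                Q[i, j] = wi @ P[np.ix_(bi, bj)].sum(axis=1)
        return Q

    # Tiny usage example: a 4-state chain approximated by a 2-block partition.
    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.2, 0.3, 0.5, 0.0],
                  [0.0, 0.0, 0.6, 0.4],
                  [0.1, 0.0, 0.0, 0.9]])
    mu = np.array([0.25, 0.25, 0.25, 0.25])     # reference measure used for averaging
    blocks = [[0, 1], [2, 3]]
    f = np.array([1.0, 0.0, 2.0, 3.0])

    print(conditional_expectation(markov_operator(P, f), blocks, mu))
    print(averaged_kernel(P, blocks, mu))       # rows still sum to 1: a 2-state kernel

In the example the 4-state chain collapses to a 2-state kernel whose rows still sum to one, which echoes, in miniature, the paper's result that the finite averaged approximants have a limit yielding a minimal bisimulation-equivalent process.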

References

  1. Larsen, K.G., Skou, A.: Bisimulation through probabilistic testing. Information and Computation 94, 1–28 (1991)

  2. Desharnais, J., Edalat, A., Panangaden, P.: Bisimulation for labeled Markov processes. Information and Computation 179(2), 163–193 (2002)

  3. de Vink, E., Rutten, J.J.M.M.: Bisimulation for probabilistic transition systems: A coalgebraic approach. Theoretical Computer Science 221(1/2), 271–293 (1999)

  4. Desharnais, J., Gupta, V., Jagadeesan, R., Panangaden, P.: Approximating labeled Markov processes. Information and Computation 184(1), 160–200 (2003)

  5. Danos, V., Desharnais, J., Panangaden, P.: Conditional expectation and the approximation of labelled Markov processes. In: Amadio, R.M., Lugiez, D. (eds.) CONCUR 2003. LNCS, vol. 2761, pp. 477–491. Springer, Heidelberg (2003)

  6. Ferns, N., Panangaden, P., Precup, D.: Metrics for Markov decision processes with infinite state spaces. In: Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, pp. 201–208 (2005)

  7. Bouchard-Côté, A., Ferns, N., Panangaden, P., Precup, D.: An approximation algorithm for labelled Markov processes: towards realistic approximation. In: Proceedings of the 2nd International Conference on the Quantitative Evaluation of Systems (QEST), pp. 54–61 (2005)

  8. Cattani, S., Segala, R., Kwiatkowska, M., Norman, G.: Stochastic transition systems for continuous state spaces and non-determinism. In: Sassone, V. (ed.) FOSSACS 2005. LNCS, vol. 3441, pp. 125–139. Springer, Heidelberg (2005)

  9. Danos, V., Desharnais, J., Laviolette, F., Panangaden, P.: Bisimulation and cocongruence for probabilistic systems. Information and Computation 204(4), 503–523 (2006); Seventh Workshop on Coalgebraic Methods in Computer Science 2004

  10. Goubault-Larrecq, J.: Continuous capacities on continuous state spaces. In: Arge, L., Cachin, C., Jurdziński, T., Tarlecki, A. (eds.) ICALP 2007. LNCS, vol. 4596, pp. 764–776. Springer, Heidelberg (2007)

  11. Kozen, D.: A probabilistic PDL. Journal of Computer and System Sciences 30(2), 162–178 (1985)

  12. Billingsley, P.: Probability and Measure. Wiley Interscience, Hoboken (1995)

  13. Selinger, P.: Towards a quantum programming language. Mathematical Structures in Computer Science 14(4), 527–586 (2004)

  14. Hopf, E.: The general temporally discrete Markoff process. J. Rational Mech. Anal. 3, 13–45 (1954)

  15. Bartels, F., Sokolova, A., de Vink, E.: A hierarchy of probabilistic system types. Theoretical Computer Science 327, 3–22 (2004)

  16. Choksi, J.: Inverse limits on measure spaces. Proc. London Math. Soc. 8(3), 321–342 (1958)

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chaput, P., Danos, V., Panangaden, P., Plotkin, G. (2009). Approximating Markov Processes by Averaging. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds) Automata, Languages and Programming. ICALP 2009. Lecture Notes in Computer Science, vol 5556. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02930-1_11

  • DOI: https://doi.org/10.1007/978-3-642-02930-1_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02929-5

  • Online ISBN: 978-3-642-02930-1
