Scalar Outcomes Suffice for Finitary Probabilistic Testing

  • Yuxin Deng
  • Rob van Glabbeek
  • Carroll Morgan
  • Chenyi Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4421)

Abstract

The question of equivalence has long vexed research in concurrency, giving rise to many different denotational- and bisimulation-based approaches. A breakthrough came with the insight that tests expressed within the concurrent framework itself, based on a special “success action”, yield equivalences that make only inarguable distinctions.

When probability was added, however, it seemed necessary to extend the testing framework beyond a direct probabilistic generalisation in order for it to remain useful. An attractive possibility was the extension to multiple success actions, yielding vectors of real-valued outcomes.

Here we prove that such vectors are unnecessary when processes are finitary, that is, finitely branching and finite-state: single scalar outcomes are just as powerful. Thus for finitary processes we can retain the original, simpler testing approach and its direct connections to other naturally scalar-valued phenomena.
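
To make the scalar-outcome idea concrete, here is a minimal sketch in Python under deliberately simplified assumptions (the state/transition encoding, the acyclicity restriction and the function outcome_set are illustrative choices of ours, not the paper's formal construction): a test with a single success action has already been composed with a finitary process, a scheduler resolves the remaining nondeterminism, and each resolution yields one scalar, the probability that success is ever performed; the collection of these scalars is what the testing comparisons examine.

    # Illustrative sketch only: encoding, names and the acyclicity
    # assumption are not taken from the paper.
    from fractions import Fraction
    from itertools import product

    SUCCESS = "w"   # the test's distinguished success action (omega)

    # The process with the test applied is a finite, finitely branching,
    # *acyclic* probabilistic automaton:
    #   state -> list of (action, distribution), distribution: successor -> probability.
    # Several entries per state model nondeterminism; acyclicity keeps the
    # recursion below terminating.

    def outcome_set(system, state):
        """All success probabilities obtainable from `state`, one scalar per
        way of resolving the nondeterministic choices that lie below it."""
        moves = system.get(state, [])
        if not moves:                          # deadlock: success never happens
            return {Fraction(0)}
        results = set()
        for action, dist in moves:             # a scheduler picks one move ...
            if action == SUCCESS:
                results.add(Fraction(1))       # ... and may report success now
                continue
            succs = list(dist.items())
            sub = [outcome_set(system, s) for s, _ in succs]
            # ... or takes a probabilistic step; successors are resolved
            # independently and their outcome sets combined by expectation.
            for choice in product(*sub):
                results.add(sum(p * o for (_, p), o in zip(succs, choice)))
        return results

    half = Fraction(1, 2)
    # Toy example: a fair coin flip, after which one branch must succeed and
    # the other chooses nondeterministically between success and deadlock.
    system = {
        "s0": [("tau", {"s1": half, "s2": half})],
        "s1": [(SUCCESS, {"done": Fraction(1)})],
        "s2": [(SUCCESS, {"done": Fraction(1)}), ("tau", {"stuck": Fraction(1)})],
    }
    print(sorted(float(o) for o in outcome_set(system, "s0")))   # [0.5, 1.0]

On this toy system the outcome set is {1/2, 1}: its maximum supports an optimistic (may-style) comparison and its minimum a pessimistic (must-style) one, each a single scalar rather than a vector.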

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Yuxin Deng (1)
  • Rob van Glabbeek (1, 2)
  • Carroll Morgan (1)
  • Chenyi Zhang (1, 2)

  1. School of Comp. Sci. and Eng., University of New South Wales, Sydney, Australia
  2. National ICT Australia, Locked Bag 6016, Sydney, NSW 1466, Australia