Customized Testing for Probabilistic Systems

  • Luis F. Llana-Díaz
  • Manuel Núñez
  • Ismael Rodríguez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3964)


In order to test the correctness of an implementation under test (IUT) with respect to a specification, testing its whole behavior is desirable but unfeasible. In some situations it is more appropriate to test the behavior of the IUT under the assumption that it is stimulated according to a given usage model. While applying this approach to functional behavior simply amounts to testing a subset of the IUT, applying it to the probabilistic behavior of systems opens some new possibilities. If the usage model specifies the probabilistic behavior of stimuli and the specification defines the probabilistic behavior of the reactions to those stimuli, then their composition completely specifies the probability of any behavior. Hence, after a finite set of behaviors of the IUT has been checked, we can compute an upper bound on the probability that a user following the usage model finds an error in the IUT. This bound is obtained by considering the worst-case scenario, namely that every unchecked behavior is wrong.
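The worst-case bound described above can be sketched as follows: the usage model assigns probabilities to stimuli, the specification assigns probabilities to reactions given a stimulus, and the bound is one minus the probability mass of the behaviors that have been checked. This is an illustrative sketch only; the stimuli, reactions, and probabilities below are invented, not taken from the paper.

```python
# Illustrative usage model and specification (all names and values are
# hypothetical). The usage model gives P(stimulus); the specification
# gives P(reaction | stimulus). Their composition gives the probability
# of each complete behavior (stimulus, reaction).
usage_model = {"a": 0.7, "b": 0.3}
specification = {
    "a": {"x": 0.6, "y": 0.4},
    "b": {"x": 1.0},
}

def behavior_probability(stimulus, reaction):
    """Probability of the composed behavior (stimulus, reaction)."""
    return usage_model[stimulus] * specification[stimulus].get(reaction, 0.0)

def error_upper_bound(checked):
    """Worst-case bound on the probability that a user following the
    usage model finds an error: every unchecked behavior is assumed to
    be wrong, so the error probability is at most 1 minus the total
    probability mass of the checked (and found correct) behaviors."""
    checked_mass = sum(behavior_probability(s, r) for s, r in checked)
    return 1.0 - checked_mass

# After checking behaviors (a, x) and (b, x), the checked mass is
# 0.7*0.6 + 0.3*1.0 = 0.72, so the error bound is approximately 0.28.
bound = error_upper_bound({("a", "x"), ("b", "x")})
```

As more behaviors are checked, the bound decreases monotonically toward zero, which matches the intuition that exhaustively testing all behaviors with positive probability under the usage model would certify the IUT for that usage model.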


Keywords: Output State · User Model · Input State · Probabilistic Term · Probabilistic Tree



Copyright information

© IFIP International Federation for Information Processing 2006

Authors and Affiliations

  • Luis F. Llana-Díaz (1)
  • Manuel Núñez (1)
  • Ismael Rodríguez (1)

  1. Dept. Sistemas Informáticos y Programación, Universidad Complutense de Madrid, Madrid, Spain
