Testing Concurrent Objects with Application-Specific Schedulers

  • Rudolf Schlatte
  • Bernhard Aichernig
  • Frank de Boer
  • Andreas Griesmayer
  • Einar Broch Johnsen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5160)

Abstract

In this paper, we propose a novel approach to testing executable models of concurrent objects under application-specific scheduling regimes. Method activations in concurrent objects are modeled as a composition of symbolic automata; this composition expresses all possible interleavings of actions. Scheduler specifications, also modeled as automata, are used to constrain the system execution. Test purposes are expressed as assertions on selected states of the system, and weakest precondition calculation is used to derive the test cases from these test purposes. Our new testing technique is based on the assumption that we have full control over the (application-specific) scheduler, which is the case in our executable models under test. Hence, the enforced scheduling policy becomes an integral part of a test case. This tackles the problem of testing non-deterministic behavior due to scheduling.
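The core idea of the abstract can be illustrated with a minimal Python sketch, assuming a deliberately simplified setting: it is not the paper's symbolic-automata formalism, and all names (`interleavings`, `PriorityScheduler`, the tagged actions) are hypothetical. Two method activations are modeled as action sequences, their unconstrained composition as the set of all interleavings, and an application-specific scheduler as a small automaton that prunes that set down to the schedules admitted as test cases.

```python
def interleavings(a, b):
    """All interleavings of two action sequences -- the unconstrained
    composition of two method activations."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    return ([[a[0]] + t for t in interleavings(a[1:], b)] +
            [[b[0]] + t for t in interleavings(a, b[1:])])


class PriorityScheduler:
    """A scheduler modeled as a two-state automaton over actions: it accepts
    a trace only if every action of the high-priority task precedes every
    action of the low-priority one (run-to-completion priority)."""

    def __init__(self, high, low):
        self.high, self.low = high, low

    def accepts(self, trace):
        state = "high"            # still inside the high-priority task
        for action in trace:
            owner = action[0]     # actions are tagged ("task", "step")
            if state == "high" and owner == self.low:
                state = "low"     # scheduler switched to the low-priority task
            elif state == "low" and owner == self.high:
                return False      # high-priority action after the switch: reject
        return True


# Two method activations, each a sequence of tagged actions.
m1 = [("A", "lock"), ("A", "update"), ("A", "unlock")]
m2 = [("B", "read"), ("B", "write")]

all_traces = interleavings(m1, m2)          # every possible schedule
sched = PriorityScheduler(high="A", low="B")
test_traces = [t for t in all_traces if sched.accepts(t)]

print(len(all_traces))   # 10 interleavings of 3 + 2 actions
print(len(test_traces))  # only 1 schedule survives: all of A, then all of B
```

Because the scheduler is under full control in the executable model, the single surviving trace is reproducible on demand, which is what makes the enforced scheduling policy usable as part of a deterministic test case.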


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Rudolf Schlatte 1, 2
  • Bernhard Aichernig 1, 2
  • Frank de Boer 3
  • Andreas Griesmayer 1
  • Einar Broch Johnsen 4

  1. International Institute for Software Technology, United Nations University (UNU-IIST), Macao S.A.R., China
  2. Institute for Software Technology, Graz University of Technology, Austria
  3. CWI, Amsterdam, Netherlands
  4. Department of Informatics, University of Oslo, Norway
