Testing Systems of Concurrent Black-Boxes—An Automata-Theoretic and Decompositional Approach

  • Gaoyan Xie
  • Zhe Dang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3997)


The global testing problem studied in this paper is to seek a definite answer to whether a system of concurrent black-boxes has an observable behavior in a given finite (but possibly huge) set Bad. We introduce a novel approach to solve the problem that does not require integration testing. Instead, in our approach, the global testing problem is reduced to testing the individual black-boxes in the system one by one in some given order. Using an automata-theoretic approach, test sequences for each individual black-box are generated from the system’s description as well as from the test results of the black-boxes preceding it in the given order. In contrast to conventional compositional/modular verification/testing approaches, our approach is essentially decompositional. Moreover, our technique is sound, complete, and can be carried out automatically. Our experimental results show that the total number of tests needed to solve the global testing problem is remarkably small even for an extremely large Bad.
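The decompositional idea described above can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a hedged simplification in which each global bad behavior is checked by projecting it onto each black-box's alphabet in turn, and behaviors inconsistent with an earlier black-box's test results are pruned before the next black-box is tested. All names (`project`, `decompositional_test`, the oracle functions) are illustrative assumptions, and real systems would need the automata-theoretic machinery from the paper to handle interleavings soundly.

```python
def project(trace, alphabet):
    """Project a global trace onto one component's alphabet."""
    return tuple(a for a in trace if a in alphabet)

def decompositional_test(bad, components):
    """bad: finite set of global observable behaviors (tuples of actions).
    components: list of (alphabet, oracle) pairs in the given testing order;
    oracle(seq) -> bool stands in for physically testing the black-box."""
    candidates = set(bad)
    for alphabet, oracle in components:
        # Generate test sequences only from behaviors still consistent
        # with the results of previously tested black-boxes.
        tests = {project(t, alphabet) for t in candidates}
        passed = {seq for seq in tests if oracle(seq)}
        candidates = {t for t in candidates if project(t, alphabet) in passed}
    return candidates  # bad behaviors no black-box ruled out

# Toy run: two black-boxes simulated by finite trace sets.
b1 = {('a',), ('a', 'a')}               # sequences black-box 1 accepts
b2 = {('b',)}                           # sequences black-box 2 accepts
bad = {('a', 'b'), ('a', 'a', 'b', 'b'), ('b',)}
comps = [({'a'}, lambda s: s in b1), ({'b'}, lambda s: s in b2)]
print(sorted(decompositional_test(bad, comps)))
```

Note how pruning shrinks the test load: black-box 2 is never asked about the projection of `('a', 'a', 'b', 'b')`'s companion behaviors that black-box 1 already eliminated, which mirrors why the number of tests stays small even for a large Bad.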


Keywords: Model Checking · Test Sequence · Labeled Transition System · Observable Behavior · Integration Testing
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Gaoyan Xie
    • 1
  • Zhe Dang
    • 1
  1. School of Electrical Engineering and Computer Science, Washington State University, Pullman, USA
