
Minimizing Test Execution Time During Test Generation

  • Tilo Mücke
  • Michaela Huhn
Part of the IFIP International Federation for Information Processing book series (IFIPAICT, volume 227)

Abstract

In the area of model-based testing, major improvements have been made in the generation of conformance tests using a model checker. Unfortunately, the execution of the generated test suites tends to be rather time-consuming. In [1] we presented a method to generate the test suites with the shortest execution time that still provide the required coverage, but due to its memory consumption this method can only be applied to small models. Here we show how to generate test suites for a number of different test quality criteria, such as coverage criteria, UIOs, and mutant testing. Moreover, we present heuristics that significantly reduce test execution time while remaining as efficient as naive test suite generation. Our optimization combines min-set-cover algorithms with search strategies that lengthen generated test cases so they pick up promising additional coverage. We compare several heuristics and present a case study in which the test execution time was reduced to less than 10%.
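The optimization described above rests on a set-cover view of test-suite selection: each test case has an execution time and covers a set of coverage items, and the goal is to cover all required items at low total cost. As an illustration only (the paper's heuristics additionally use search strategies on timed statechart models and differ in detail), the following sketch shows a plain greedy weighted min-set-cover selection; the names TestCase and select_test_suite are hypothetical and not taken from the paper.

    # Sketch: greedy weighted min-set-cover selection of test cases.
    # Assumption: each candidate test case carries an estimated execution
    # time and the set of coverage items (e.g. transitions) it exercises.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TestCase:
        name: str
        exec_time: float       # estimated execution time of this test case
        covered: frozenset     # coverage items reached by this test case

    def select_test_suite(candidates, required_items):
        """Greedily pick test cases by execution time per newly covered item."""
        uncovered = set(required_items)
        suite = []
        while uncovered:
            useful = [t for t in candidates if t.covered & uncovered]
            if not useful:     # remaining items are unreachable by any candidate
                break
            best = min(useful,
                       key=lambda t: t.exec_time / len(t.covered & uncovered))
            suite.append(best)
            uncovered -= best.covered
        return suite, uncovered

The greedy ratio (execution time divided by the number of newly covered items) is the standard heuristic for weighted set cover; the exact problem is NP-complete [17], which is why heuristics are used in practice.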

Keywords

Model Checker Sequence Diagram Partial Coverage Coverage Criterion Test Case Generation 

References

  1. Mücke, T., Huhn, M.: Generation of optimized testsuites for UML statecharts with time. In Groz, R., Hierons, R.M., eds.: TestCom. Volume 2978 of LNCS, Springer (2004) 128–143
  2. Engels, A., Feijs, L., Mauw, S.: Test generation for intelligent networks using model checking. In Brinksma, E., ed.: Tools and Algorithms for the Construction and Analysis of Systems (1997)
  3. Rayadurgan, S., Heimdahl, M.: Coverage based test-case generation using model checkers. In: Intl. Conf. and Workshop on the Engineering of Computer Based Systems (2001) 83–93
  4. Hong, H., Lee, I., Sokolsky, O., Cha, S.: Automatic test generation from statecharts using model checking. In Brinksma, E., Tretmans, J., eds.: Workshop on Formal Approaches to Testing of Software (FATES) (2001) 15–30
  5. Pretschner, A.: Classical search strategies for test case generation with constraint logic programming. In Brinksma, E., Tretmans, J., eds.: Workshop on Formal Approaches to Testing of Software (FATES) (2001) 47–60
  6. King, K.N., Offutt, A.J.: A Fortran language system for mutation-based software testing. Software: Practice & Experience 21(7) (1991) 685–718
  7. Heimdahl, M.P., Devaraj, G., Weber, R.J.: Specification test coverage adequacy criteria = specification test generation inadequacy criteria? In: Proceedings of the 8th IEEE International Symposium on High Assurance Systems Engineering (HASE), Tampa, Florida (2004)
  8. Heimdahl, M.P., Devaraj, G.: Test-suite reduction for model based tests: Effects on test quality and implications for testing. In Wiels, V., Stirewalt, K., eds.: Proc. of the 19th IEEE Intern. Conference on Automated Software Engineering (ASE), Linz, Austria (2004)
  9. OMG: Unified modeling language specification (2003). Version 1.5
  10. Larsen, K.G., Pettersson, P., Yi, W.: UPPAAL in a nutshell. International Journal on Software Tools for Technology Transfer 1(1–2) (1997) 134–152
  11. Diethers, K., Goltz, U., Huhn, M.: Model checking UML statecharts with time. In Jézéquel, J.M., Hußmann, H., Cook, S., eds.: UML 2002, Workshop on Critical Systems Development with UML (2002)
  12. Diethers, K., Huhn, M.: Vooduu: Verification of object-oriented designs using UPPAAL. In Jensen, K., Podelski, A., eds.: TACAS. Volume 2988 of LNCS, Springer (2004) 139–143
  13. Robinson-Mallett, C., Liggesmeyer, P., Mücke, T., Goltz, U.: Generating optimal distinguishing sequences with a model checker. In: A-MOST '05: Proceedings of the 1st International Workshop on Advances in Model-based Testing, New York, NY, USA, ACM Press (2005) 1–7
  14. Sugeta, T., Maldonado, J.C., Wong, W.E.: Mutation testing applied to validate SDL specifications. In Groz, R., Hierons, R.M., eds.: TestCom. Volume 2978 of LNCS, Springer (2004) 193–208
  15. Offutt, J., Pan, J., Voas, J.: Procedures for reducing the size of coverage-based test sets. In: Proceedings of the Twelfth International Conference on Testing Computer Software (1995) 111–123
  16. Paulin, P., Knight, J.: Force-directed scheduling for the behavioural synthesis of ASICs. IEEE Trans. on Computer-Aided Design 8(6) (1989) 661–679
  17. Garey, M., Johnson, D.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman and Company (1979)
  18. Steiner, J., Diethers, K., Mücke, T., Goltz, U., Huhn, M.: Rigorous tool-supported software development of a robot control system. In: Robot Systems for Handling and Assembly, 2nd Colloquium of the Collaborative Research Center 562 (2005) 137–152

Copyright information

© International Federation for Information Processing 2006

Authors and Affiliations

  • Tilo Mücke (1)
  • Michaela Huhn (1)
  1. Technical University of Braunschweig, Braunschweig, Germany
