Facilitating Reuse in Multi-goal Test-Suite Generation for Software Product Lines

  • Johannes Bürdek
  • Malte Lochau
  • Stefan Bauregger
  • Andreas Holzer
  • Alexander von Rhein
  • Sven Apel
  • Dirk Beyer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9033)


Abstract

Software testing is still the most established and scalable quality-assurance technique in practice. However, generating effective test suites remains computationally expensive, as it requires a repeated reachability analysis for each of the test goals imposed by a coverage criterion. The situation is even worse when testing an entire software product line, i.e., a family of similar program variants, where sufficient coverage of all derivable program variants is required. Instead of considering every product variant one by one, family-based approaches are variability-aware: they systematically exploit similarities among the different variants during analysis. Based on this principle, we present a novel approach to automated product-line test-suite generation that extensively reuses reachability information among test cases derived for different test goals and/or program variants. We present a tool implementation on top of CPA/tiger, which is based on CPAchecker, and provide evaluation results from various experiments, revealing a considerable increase in efficiency compared to existing techniques.
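The core reuse idea sketched in the abstract — a test case generated for one goal may incidentally cover further goals, so those goals need no separate, expensive reachability analysis — can be illustrated with a minimal pseudocode-style sketch. This is not the paper's actual algorithm or the CPA/tiger API; all names (`generate_suite`, `reaches`, `analyze`) are illustrative assumptions.

```python
def generate_suite(goals, reaches, analyze):
    """Minimal sketch of multi-goal test generation with cross-goal reuse.

    goals:   iterable of test goals (e.g., derived from a coverage criterion)
    reaches: reaches(test, goal) -> bool, checks whether an existing test
             case already covers a goal (cheap check)
    analyze: analyze(goal) -> test case or None, runs one reachability
             analysis (the expensive step)
    """
    suite, uncovered = [], set(goals)
    while uncovered:
        goal = uncovered.pop()
        test = analyze(goal)      # expensive: one reachability query
        if test is None:
            continue              # goal unreachable / infeasible
        suite.append(test)
        # Reuse: discard every other goal the new test already covers,
        # so no further reachability analysis is spent on them.
        uncovered -= {g for g in uncovered if reaches(test, g)}
    return suite
```

The fewer reachability queries `analyze` has to answer, the larger the saving; the paper's contribution is to extend this kind of reuse across both test goals and product variants.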


Keywords: Software Product Lines · Automated Test Generation · Symbolic Model Checking · CPAchecker · CPA/tiger



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Johannes Bürdek (1)
  • Malte Lochau (1)
  • Stefan Bauregger (1)
  • Andreas Holzer (2)
  • Alexander von Rhein (3)
  • Sven Apel (3)
  • Dirk Beyer (3)
  1. TU Darmstadt, Darmstadt, Germany
  2. TU Wien, Wien, Austria
  3. University of Passau, Passau, Germany
