Software Quality Journal, Volume 20, Issue 3–4, pp. 605–643

Pairwise testing for software product lines: comparison of two approaches

  • Gilles Perrouin
  • Sebastian Oster
  • Sagar Sen
  • Jacques Klein
  • Benoit Baudry
  • Yves le Traon

Abstract

Software Product Lines (SPLs) are difficult to validate because variability induces a combinatorial explosion in the number of derivable products. Exhaustive testing in such a large product space is hardly feasible. One practical option is therefore to test an SPL by generating test configurations that cover all possible t-way feature interactions (t-wise). This dramatically reduces the number of test products while ensuring reasonable coverage of the SPL. In this paper, we report on our experience applying t-wise generation techniques to SPLs with two independent toolsets developed by the authors. One focuses on generality and splits the generation problem according to strategies; the other emphasizes efficient generation. To evaluate the respective merits of the two approaches, we measure the number of generated test configurations and the similarity between them. From these measures, we derive useful insights for pairwise and t-wise testing of product lines.
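
To make the core idea concrete, here is a minimal sketch, not taken from the paper and independent of either toolset, of what pairwise (t = 2) coverage means for feature configurations: every pair of feature values must appear together in at least one derived product. The feature names and configurations below are invented for illustration, and the sketch deliberately ignores the cross-tree constraints of real feature models, which is precisely what makes constraint-based generation (e.g., with Alloy, mentioned in the keywords) necessary in practice.

```python
from itertools import combinations, product

# Illustrative Boolean feature model: each feature is selected (True) or not.
# Feature names are hypothetical, not from the paper.
FEATURES = ["gui", "network", "encryption", "logging"]

def pairwise_requirements(features):
    """All (feature, value, feature, value) interactions a pairwise suite must cover."""
    return {
        (f1, v1, f2, v2)
        for f1, f2 in combinations(features, 2)
        for v1, v2 in product([True, False], repeat=2)
    }

def covered(configs, features):
    """Interactions actually exercised by a given set of product configurations."""
    hit = set()
    for cfg in configs:
        for f1, f2 in combinations(features, 2):
            hit.add((f1, cfg[f1], f2, cfg[f2]))
    return hit

# Three hypothetical products out of the 2^4 = 16 derivable ones.
configs = [
    {"gui": True,  "network": True,  "encryption": False, "logging": True},
    {"gui": False, "network": True,  "encryption": True,  "logging": False},
    {"gui": True,  "network": False, "encryption": True,  "logging": True},
]

required = pairwise_requirements(FEATURES)
missing = required - covered(configs, FEATURES)
print(f"pairwise coverage: {1 - len(missing) / len(required):.0%} "
      f"({len(missing)} of {len(required)} interactions uncovered)")
```

A pairwise generator of the kind compared in the paper would search for a small set of configurations that drives the `missing` set to empty, subject to the constraints of the feature model.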

Keywords

Model-based engineering and testing · Test generation · t-wise and pairwise · Software product lines · Alloy

Acknowledgments

The authors would like to thank Professor Andy Schürr for his valuable comments on the paper. This research was partly supported by the NAPLES project, funded by the Walloon Region (Belgium).

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Gilles Perrouin (1)
  • Sebastian Oster (2)
  • Sagar Sen (3)
  • Jacques Klein (4)
  • Benoit Baudry (5)
  • Yves le Traon (4)

  1. University of Namur, Namur, Belgium
  2. Real-Time Systems Group, Technische Universität Darmstadt, Darmstadt, Germany
  3. INRIA Sophia Antipolis, 2004 route des Lucioles, Sophia Antipolis Cedex, France
  4. University of Luxembourg, SnT and LASSY, Luxembourg-Kirchberg, Luxembourg
  5. Triskell Team, IRISA/INRIA Rennes Bretagne Atlantique, Rennes, France
