Software & Systems Modeling, Volume 16, Issue 1, pp 153–171

Statistical prioritization for software product line testing: an experience report

  • Xavier Devroey
  • Gilles Perrouin
  • Maxime Cordy
  • Hamza Samih
  • Axel Legay
  • Pierre-Yves Schobbens
  • Patrick Heymans
Theme Section Paper


Software product lines (SPLs) are families of software systems sharing common assets and exhibiting variabilities specific to each product member of the family. Commonalities and variabilities are often represented as features organized in a feature model. Due to the combinatorial explosion of the number of products induced by possible feature combinations, exhaustive testing of SPLs is intractable. Therefore, sampling and prioritization techniques have been proposed to generate sorted lists of products based on coverage criteria or weights assigned to features. Being solely based on the feature model, these techniques do not take the behavioural usage of the products into account as a source of prioritization. In this paper, we assess the feasibility of integrating usage models into the testing process to derive statistical testing approaches for SPLs. Usage models are given as Markov chains, enabling the prioritization of probable or rare behaviours. We use featured transition systems, which compactly model variability and behaviour for SPLs, to determine which products realize the prioritized behaviours. Statistical prioritization can achieve a significant reduction in the state space, and modelling efforts can be rewarded by better automation. In particular, we use MaTeLo, a statistical test case generation suite developed at ALL4TEC. We assess feasibility criteria on two systems: Claroline, a configurable course management system, and Sferion™, an embedded system providing helicopter landing assistance.
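The usage-model idea in the abstract can be sketched concretely: a Markov chain over user actions assigns each complete behaviour (trace) a probability, which is the product of the transition probabilities along it; ranking traces by that probability yields a statistical prioritization. The sketch below is a toy illustration only — the states and probabilities are invented for a generic course-management workflow and are not taken from the paper's Claroline or Sferion artifacts.

```python
# Hypothetical usage model given as a Markov chain: states are user
# actions, edges carry transition probabilities. Illustrative only.
USAGE_MODEL = {
    "login":    [("browse", 0.7), ("admin", 0.3)],
    "browse":   [("download", 0.8), ("logout", 0.2)],
    "admin":    [("logout", 1.0)],
    "download": [("logout", 1.0)],
    "logout":   [],  # terminal state
}

def trace_probability(trace):
    """Probability of a trace = product of its transition probabilities."""
    p = 1.0
    for src, dst in zip(trace, trace[1:]):
        p *= dict(USAGE_MODEL[src]).get(dst, 0.0)
    return p

def all_traces(state="login", prefix=None):
    """Enumerate complete traces from the initial state (model is acyclic here)."""
    prefix = (prefix or []) + [state]
    if not USAGE_MODEL[state]:
        yield prefix
        return
    for nxt, _ in USAGE_MODEL[state]:
        yield from all_traces(nxt, prefix)

# Statistical prioritization: most probable behaviours first; reversing
# the order would instead prioritize rare behaviours.
ranked = sorted(all_traces(), key=trace_probability, reverse=True)
for t in ranked:
    print(" -> ".join(t), round(trace_probability(t), 3))
```

In the full approach, each ranked trace would then be checked against a featured transition system to determine which products of the SPL can execute it; enumerating traces, as done here, only works for small acyclic models, and sampling is needed otherwise.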


Keywords

Software product line testing · Prioritization · Statistical testing

CR Subject Classification (ACM)

D.2.5 D.2.7 



Acknowledgments

We would like to thank Jean-Roch Meurisse and Didier Belhomme from the University of Namur for providing the Webcampus Apache access log.



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Xavier Devroey (1)
  • Gilles Perrouin (1)
  • Maxime Cordy (1)
  • Hamza Samih (2, 3, 4)
  • Axel Legay (4)
  • Pierre-Yves Schobbens (1)
  • Patrick Heymans (1)

  1. PReCISE, University of Namur, Namur, Belgium
  2. Alcatel-Lucent, IP T&R / Wireless Transmission, Nozay, France
  3. All4tec GL, Laval Cedex, France
  4. Inria Rennes, Bretagne Atlantique, Rennes Cedex, France
