Software Quality Journal, Volume 24, Issue 2, pp 365–405

Testing variability-intensive systems using automated analysis: an application to Android

  • José A. Galindo
  • Hamilton Turner
  • David Benavides
  • Jules White
Article

Abstract

Software product lines are used to develop a set of software products that, while being different, share a common set of features. Feature models provide a compact representation of all the products (i.e., possible configurations) of the product line. The number of products that a feature model encodes may grow exponentially with the number of features, which increases the cost of testing the products within a product line. Some proposals address this problem by reducing the testing space using different techniques. However, a remaining challenge is how the cost and value of test cases can be modeled and optimized to obtain lower-cost testing processes. In this paper, we present TESting vAriAbiLity Intensive Systems (TESALIA), an approach that uses automated analysis of feature models to optimize the testing of variability-intensive systems. We model test value and cost as feature attributes, and then we use a constraint satisfaction solver to prune, prioritize, and package product line tests, complementing prior work in the software product line testing literature. A prototype implementation of TESALIA is validated on an Android example, showing the benefits of maximizing mobile market share (the value function) while meeting a budgetary constraint.
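
For intuition about the optimization the abstract describes, the following is a minimal, self-contained Python sketch, not the authors' implementation: it enumerates the products of a toy feature model, attaches an illustrative test cost and value (market share covered) to each feature, and then "packages" the subset of products that maximizes total value under a budget. The feature names, attribute numbers, additive value function, and brute-force search are all assumptions made for illustration; TESALIA instead hands this problem to a constraint satisfaction solver over real feature models.

    from itertools import combinations

    # Toy Android-like feature model. Each feature carries two attributes:
    # a testing cost and a value (market share covered). All names and
    # numbers are illustrative, not taken from the paper.
    FEATURES = {
        # feature: (test cost, market share covered)
        "screen_small": (1.0, 0.20),
        "screen_large": (1.5, 0.35),
        "camera":       (2.0, 0.25),
        "nfc":          (2.5, 0.10),
    }

    def valid(config):
        """Cross-tree constraint of the toy model: exactly one screen size."""
        return ("screen_small" in config) != ("screen_large" in config)

    def cost(config):
        return sum(FEATURES[f][0] for f in config)

    def value(config):
        return sum(FEATURES[f][1] for f in config)

    # Enumerate all valid products; a real tool would derive these with a
    # CSP/SAT solver rather than by brute force.
    names = list(FEATURES)
    products = [
        frozenset(c)
        for r in range(1, len(names) + 1)
        for c in combinations(names, r)
        if valid(c)
    ]

    # "Packaging": choose the subset of products that maximizes total value
    # while total cost stays within the budget (a 0/1 knapsack over products).
    BUDGET = 6.0
    best, best_value = (), 0.0
    for r in range(len(products) + 1):
        for subset in combinations(products, r):
            c = sum(cost(p) for p in subset)
            v = sum(value(p) for p in subset)
            if c <= BUDGET and v > best_value:
                best, best_value = subset, v

    for p in best:
        print(sorted(p), "cost", cost(p), "value", value(p))
    print("total value:", round(best_value, 2))

On this toy model the search selects the products whose combined cost fits the budget of 6.0 while covering the most market share; replacing the brute-force loops with a constraint solver is what lets the approach scale to realistic feature models.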

Keywords

Testing · Software product lines · Automated analysis · Feature models · Android


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • José A. Galindo (1)
  • Hamilton Turner (2)
  • David Benavides (1)
  • Jules White (2, 3)

  1. Dept. Lenguajes y Sistemas Informáticos, University of Seville, Seville, Spain
  2. Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA
  3. Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
