STIPI: Using Search to Prioritize Test Cases Based on Multi-objectives Derived from Industrial Practice

  • Dipesh Pradhan
  • Shuai Wang
  • Shaukat Ali
  • Tao Yue
  • Marius Liaaen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9976)


The importance of cost-effectively prioritizing test cases is undeniable in automated testing practice in industry. This paper focuses on prioritizing test cases developed to test product lines of Video Conferencing Systems (VCSs) at Cisco Systems, Norway. Each test case requires setting up configurations of a set of VCSs, invoking a set of test APIs with specific inputs, and checking statuses of the VCSs under test. Based on these characteristics and the available information related to test case execution (e.g., number of faults detected), we identified that the test case prioritization problem in our particular context should focus on achieving high coverage of configurations, test APIs, and statuses, together with high fault detection capability, as quickly as possible. To solve this problem, we propose a search-based test case prioritization approach (named STIPI) by defining a fitness function with four objectives and integrating it with a widely applied multi-objective optimization algorithm (the Non-dominated Sorting Genetic Algorithm II). We compared STIPI with random search (RS), a Greedy algorithm, and three approaches adapted from the literature, using three real sets of test cases from Cisco with four time budgets (25 %, 50 %, 75 % and 100 %). Results show that STIPI significantly outperformed the selected approaches and achieved on average 39.9 %, 18.6 %, 32.7 % and 43.9 % better performance than RS for the coverage of configurations, test APIs, statuses, and fault detection capability, respectively.
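The core idea above — scoring a test-case ordering on four objectives, each rewarding coverage gained as early as possible — can be illustrated with a short sketch. This is not the authors' implementation: the `TestCase` fields and the linear position weighting are assumptions standing in for Cisco's real test data and STIPI's actual fitness definitions; a multi-objective optimizer such as NSGA-II would then search over orderings, maximizing all four scores simultaneously.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Hypothetical fields standing in for the real per-test-case data:
    # VCS configurations set up, test APIs invoked, statuses checked,
    # and faults previously detected by this test case.
    configurations: set
    apis: set
    statuses: set
    faults: set

def prioritization_objectives(order):
    """Score an ordering of test cases on four STIPI-style objectives.

    Each objective rewards covering new items (configurations, APIs,
    statuses, faults) early: a test case at position i contributes its
    newly covered items with weight (n - i) / n, so items covered by
    earlier test cases count more.  All scores are normalized to [0, 1].
    """
    n = len(order)
    keys = ("configurations", "apis", "statuses", "faults")
    seen = {k: set() for k in keys}
    scores = {k: 0.0 for k in keys}
    # Total number of distinct items per dimension (guard against 0).
    totals = {k: len(set().union(*(getattr(tc, k) for tc in order))) or 1
              for k in keys}
    for i, tc in enumerate(order):
        weight = (n - i) / n  # earlier positions weigh more
        for k in keys:
            new = getattr(tc, k) - seen[k]
            scores[k] += weight * len(new) / totals[k]
            seen[k] |= new
    return scores
```

For example, an ordering that runs a fault-revealing test case first receives a higher fault-detection score than the reverse ordering, which is exactly the property the search exploits.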


Keywords: Test case prioritization · Search · Configurations · Test APIs



This research is supported by the Research Council of Norway (RCN) funded Certus SFI. Shuai Wang is also supported by the RFF Hovedstaden funded MBE-CR project. Shaukat Ali and Tao Yue are also supported by the RCN funded Zen-Configurator project, the EU Horizon 2020 funded U-Test project, the RFF Hovedstaden funded MBE-CR project, and the RCN funded MBT4CPS project.



Copyright information

© IFIP International Federation for Information Processing 2016

Authors and Affiliations

  • Dipesh Pradhan (1)
  • Shuai Wang (1)
  • Shaukat Ali (1)
  • Tao Yue (1, 2)
  • Marius Liaaen (3)
  1. Certus V&V Center, Simula Research Laboratory, Oslo, Norway
  2. University of Oslo, Oslo, Norway
  3. Cisco Systems, Oslo, Norway
