Software Quality Journal, Volume 25, Issue 4, pp 1203–1237

Test case selection in industry: an analysis of issues related to static approaches

  • Vincent Blondeau
  • Anne Etien
  • Nicolas Anquetil
  • Sylvain Cresson
  • Pascal Croisy
  • Stéphane Ducasse

Abstract

Automatic testing constitutes an important part of everyday development practice. Worldline, a major IT company, is creating more and more tests to ensure the correct behavior of its applications and to gain in efficiency and quality. But running all these tests may take hours. This is especially true for large systems involving, for example, the deployment of a web server or communication with a database. For this reason, tests are not launched as often as they should be and are mostly run at night. The company wishes to improve its development and testing process by giving developers rapid feedback after a change. An interesting solution is to reduce the number of tests to run by identifying only those that exercise the changed piece of code. Two main approaches are proposed in the literature: static and dynamic. The static approach creates a model of the source code and explores it to find links between changed methods and tests. The dynamic approach records invocations of methods during the execution of test scenarios. Before deploying a test case selection solution, Worldline partnered with us to investigate the situation in its projects and to evaluate both approaches on three industrial, closed-source cases to understand the strengths and weaknesses of each solution. We propose a classification of the problems that may arise when trying to identify the tests that cover a method. We give concrete examples of these problems and list some possible solutions. We also evaluate other issues, such as the impact on the results of the modification frequency of methods, or of considering groups of methods instead of single ones. We found that solutions must be combined to obtain better results and that the problems impact each project differently. Considering commits instead of individual methods tends to worsen the results, perhaps because of their large size.
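To make the distinction between the two approaches concrete, here is a minimal sketch of the static approach as the abstract describes it; it is not the authors' implementation. A call graph extracted from the source code is walked backwards from a changed method to collect the test methods that transitively reach it. All class and method names are hypothetical, and test detection is reduced to a naive naming convention for the sake of the example.

```java
import java.util.*;

// Minimal sketch of a static test selection approach: given a call graph
// extracted from source code, walk it backwards from a changed method to
// find the tests that (transitively) call it. Names are hypothetical.
public class StaticTestSelector {

    // Reverse call graph edges: callee -> set of callers.
    private final Map<String, Set<String>> callers = new HashMap<>();

    public void addCall(String caller, String callee) {
        callers.computeIfAbsent(callee, k -> new HashSet<>()).add(caller);
    }

    // Breadth-first traversal from the changed method towards its callers;
    // every reachable method identified as a test is selected.
    public Set<String> selectTests(String changedMethod) {
        Set<String> selected = new LinkedHashSet<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(changedMethod);
        visited.add(changedMethod);
        while (!queue.isEmpty()) {
            String method = queue.poll();
            if (isTest(method)) {
                selected.add(method);
            }
            for (String caller : callers.getOrDefault(method, Set.of())) {
                if (visited.add(caller)) {
                    queue.add(caller);
                }
            }
        }
        return selected;
    }

    // Naive convention-based check; a real tool would inspect annotations
    // or the test framework's registry instead.
    private boolean isTest(String method) {
        return method.contains("Test.");
    }

    public static void main(String[] args) {
        StaticTestSelector selector = new StaticTestSelector();
        selector.addCall("OrderServiceTest.testCheckout", "OrderService.checkout");
        selector.addCall("OrderService.checkout", "PriceCalculator.total");
        selector.addCall("ReportingTest.testSummary", "Reporting.summary");
        // A change to PriceCalculator.total selects only the first test.
        System.out.println(selector.selectTests("PriceCalculator.total"));
    }
}
```

Under this framing, the dynamic approach differs mainly in where the edges come from: instead of a statically extracted call graph, the methods actually executed by each test are recorded at run time (for example through instrumentation), and a test is selected when a changed method appears in its recorded trace.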

Keywords

Test selection · Dynamic · Static · Industrial case

Acknowledgments

This work was supported by Worldline and by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council, and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015–2020.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Vincent Blondeau 1, 2
  • Anne Etien 2
  • Nicolas Anquetil 2
  • Sylvain Cresson 1
  • Pascal Croisy 1
  • Stéphane Ducasse 2
  1. Worldline, Seclin, France
  2. Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 - CRIStAL, Lille, France
