Influences on regression testing strategies in agile software development environments

Abstract

Regression testing is a well-established practice in software development, but in recent years it has seen a change of status and emphasis with the increasing popularity of agile methods, which stress the central role of regression testing in maintaining software quality. The objectives of this article are to investigate regression testing strategies in agile development teams and identify the factors that can influence the adoption and implementation of this practice. We have used a mixed methods approach to our research, beginning with an analysis of the literature to identify research themes related to the adoption of regression testing techniques under agile methodologies, from which we developed an analytical framework for the study. This was followed by three exploratory case studies that we used to exercise the main elements of the framework, develop some key themes of interest, and devise a questionnaire for the final stage of the study, an on-line survey to explore the main issues identified in the case studies across different contexts. Within our specific sample, our results suggest that organizational maturity is a key factor in effective regression testing practices and that the adoption of such practices is helped by a coherent testing philosophy and change management processes. We also found that the return on investment in automated regression testing was positive for our respondents and that adopting these practices in the context of agile methods had been a relatively painless process for the organizations in our survey. We conclude that investing in regression testing tools and processes is likely to be beneficial for organizations. However, further work is needed in assessing how organizational culture impacts on the quality process and the financial outcomes for commercial software development organizations.
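
The abstract refers throughout to automated regression testing as practised by agile teams. Purely as an illustration (it is not drawn from the study), the minimal Python sketch below shows the kind of test such teams re-run on every change; the discount function, its business rule, and the test names are hypothetical.

# Minimal, hypothetical example of an automated regression test suite.
# In an agile pipeline a module like this is re-run on every commit so that
# behaviour that already works is checked again after each change.
import unittest


def discount(order_total: float, loyalty_years: int) -> float:
    """Apply a 5% loyalty discount after three years (hypothetical rule)."""
    rate = 0.05 if loyalty_years >= 3 else 0.0
    return round(order_total * (1 - rate), 2)


class DiscountRegressionTests(unittest.TestCase):
    """Each test pins down behaviour that earlier releases already relied on."""

    def test_no_discount_for_new_customers(self):
        self.assertEqual(discount(100.0, 0), 100.0)

    def test_no_discount_just_below_threshold(self):
        self.assertEqual(discount(100.0, 2), 100.0)

    def test_loyalty_discount_applied_at_three_years(self):
        self.assertEqual(discount(100.0, 3), 95.0)


if __name__ == "__main__":
    unittest.main()

In practice such suites are usually executed by a test runner in continuous integration; unittest ships with the Python standard library, and teams commonly use equivalent frameworks in their own languages.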

Author information

Corresponding author

Correspondence to David Parsons.

Appendix: Survey questions

Question 1: What type(s) of software product(s) are being regression tested in your organization? (select all that apply)

  A. System software
  B. Middleware/infrastructure
  C. Application software (for in-house use)
  D. Application software (commercial off-the-shelf)
  E. Application software (for third-party clients)
  F. Other (please specify)

Question 2: What is the maturity of the development team, organization, or other relevant context within which regression testing takes place? These options are based on the 5 CMM levels of maturity from Initial (lowest) to Optimizing (highest). Select the highest level that is appropriate for your organization/team.

  A. Initial (ad hoc, chaotic)
  B. Managed (processes are planned and controlled)
  C. Defined (practices are standardized and embedded across the organization)
  D. Quantitatively managed (performance data are gathered and analyzed)
  E. Optimizing (culture of continuous improvement)

Question 3: What level of regulatory compliance is required for your development context?

  A. Minimal or None (only general legal compliance is required)
  B. Limited (some external regulations have to be complied with but the overhead is small)
  C. Significant (there is significant regulatory compliance required that impacts on the development process)
  D. Major (regulatory compliance is a major concern and an essential aspect of the software)

The following questions were all free-text responses:

Question 4: What hardware configuration is used for regression testing in your organization?

Question 5: What software architecture is used for regression testing in your organization?

Question 6: How did your organization handle change and risk management when it introduced automated regression testing?

Question 7: How does your organization handle change and risk management in the on-going evolution of automated regression testing?

Question 8: What quality controls are in place for the test process within your organization?

Question 9: How would you characterize the organizational testing philosophy?

Question 10: What types of investment have been made by your organization in staff, training and infrastructure to support regression testing?

Question 11: Please provide any general thoughts that you have relating to how regression testing can be successfully introduced and maintained, in the context of different types of organizations, teams, and products.
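
Questions 6 to 8 probe how change and risk management and quality controls interact with automated regression testing. Purely as an illustration of one common pattern (it is not taken from any respondent's answers), the hypothetical sketch below shows a regression test added alongside a defect fix so that the suite protects the fix from then on; the function, the defect number, and the test names are invented for the example.

# Hypothetical illustration of the "regression test per defect fix" pattern,
# one common quality control around automated regression testing.
import unittest


def parse_quantity(raw: str) -> int:
    """Parse a user-supplied quantity, treating blank input as zero.

    An earlier (hypothetical) release raised ValueError on whitespace-only
    input; the regression test below keeps that fix in place.
    """
    raw = raw.strip()
    return int(raw) if raw else 0


class QuantityRegressionTests(unittest.TestCase):

    def test_defect_1234_blank_input_returns_zero(self):
        # Named after the (hypothetical) defect report it guards against.
        self.assertEqual(parse_quantity("   "), 0)

    def test_normal_input_still_parses(self):
        self.assertEqual(parse_quantity(" 7 "), 7)


if __name__ == "__main__":
    unittest.main()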

Cite this article

Parsons, D., Susnjak, T. & Lange, M. Influences on regression testing strategies in agile software development environments. Software Qual J 22, 717–739 (2014). https://doi.org/10.1007/s11219-013-9225-z
