Predicting failures in agile software development through data analytics

Abstract

Artificial intelligence-driven software development paradigms have attracted much attention in academia, industry, and government. Within the last five years, a wave of data analytics has affected businesses across domains, influenced engineering management practices in many industries, and shaped academic research. Several major software vendors have adopted some form of "intelligent" development in one or more phases of their software development processes. Agile, for example, is a well-known lifecycle used to build intelligent and analytical systems. The agile process consists of multiple sprints; in each sprint, a specific software feature is developed, tested, refined, and documented. However, because agile development depends on the context of the project, testing is performed differently in every sprint. This paper introduces a method for predicting software failures in subsequent agile sprints using analytical and statistical techniques, namely Mean Time Between Failures (MTBF) and regression modeling. The method, called analytics-driven testing (ADT), predicts errors and their locations with a given statistical confidence level. It does so by continuously measuring MTBF for software components and applying a forecasting regression model to estimate where, and what types of, software system failures are likely to occur. ADT is presented and evaluated.
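The two ingredients the abstract names can be sketched briefly. The following is a minimal illustration, not the authors' actual ADT implementation: the component, sprint numbers, failure counts, and the simple linear model are all hypothetical. It computes MTBF as total operating time divided by failure count, fits an ordinary least-squares line to failures per sprint, and extrapolates to the next sprint.

```python
# Illustrative sketch of the MTBF-plus-regression idea from the abstract.
# All data and names below are assumptions for demonstration only.

def mtbf(operating_hours, failure_count):
    """Mean Time Between Failures = total operating time / number of failures."""
    if failure_count == 0:
        return float("inf")
    return operating_hours / failure_count

def fit_line(xs, ys):
    """Closed-form ordinary least-squares fit: y = a + b * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx              # slope: change in failures per sprint
    a = mean_y - b * mean_x    # intercept
    return a, b

# Hypothetical failures observed per sprint for one software component.
sprints = [1, 2, 3, 4, 5]
failures = [9, 7, 6, 4, 3]

a, b = fit_line(sprints, failures)
next_sprint = 6
predicted = a + b * next_sprint

print(f"MTBF in sprint 5 (120 operating hours): {mtbf(120, failures[-1]):.1f} h")
print(f"Predicted failures in sprint {next_sprint}: {predicted:.1f}")
```

A declining failure trend (negative slope) yields a rising MTBF and a lower predicted failure count for the upcoming sprint; components whose predictions remain high would be the ones ADT flags for focused testing.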

(Figures 1–8 appear in the full article.)

Notes

  1. The Agile Manifesto document: http://www.agilemanifesto.org/.

  2. See footnote 1.

  3. See footnote 1.

  4. Microsoft developer network: http://msdn.microsoft.com/en-US/enus/library/.

  5. See footnote 4.

  6. See footnote 4.

  7. www.pushtotest.com/products.html.

  8. www.saucelabs.com.

  9. www.blazemeter.com.


Author information

Corresponding author

Correspondence to Feras A. Batarseh.

About this article

Cite this article

Batarseh, F.A., Gonzalez, A.J. Predicting failures in agile software development through data analytics. Software Qual J 26, 49–66 (2018). https://doi.org/10.1007/s11219-015-9285-3
