Requirements Engineering, Volume 17, Issue 2, pp 117–132

Early failure prediction in feature request management systems: an extended study

  • Camilo Fitzgerald
  • Emmanuel Letier
  • Anthony Finkelstein
RE’11 Best Papers

Abstract

Online feature request management systems are popular tools for gathering stakeholders’ change requests during system evolution. Deciding which feature requests require attention and how much upfront analysis to perform on them is an important problem in this context: too little upfront analysis may result in inadequate functionality being developed, costly changes, and wasted development effort; too much upfront analysis is a waste of time and resources. Early predictions about which feature requests are most likely to fail because of insufficient or inadequate upfront analysis could facilitate such decisions. Our objective is to study whether such predictions can be made automatically from the characteristics of the online discussions on feature requests. This paper presents a study of feature request failures in seven large projects, an automated tool-implemented framework for constructing failure prediction models, and a comparison of the performance of different prediction techniques for these projects. The comparison relies on a cost-benefit model for assessing the value of additional upfront analysis. In this model, the value of additional upfront analysis depends on its probability of success in preventing failures and on the relative cost of the failures it prevents compared to its own cost. We show that, for reasonable estimations of these two parameters, automated prediction models provide more value than a set of baselines for many failure types and projects. This suggests that automated failure prediction during requirements elicitation is a promising approach for guiding requirements engineering efforts in online settings.
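The abstract’s cost-benefit condition can be sketched numerically: additional upfront analysis is worthwhile when its probability of preventing a failure, multiplied by the relative cost of that failure, exceeds the analysis cost itself. The function name and the numbers below are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of the cost-benefit condition described in the
# abstract: analyse further only when the expected prevented-failure
# cost outweighs the cost of the analysis itself.

def analysis_value(p_success: float, failure_cost: float, analysis_cost: float) -> float:
    """Expected net value of performing additional upfront analysis.

    p_success     -- probability the analysis prevents the failure
    failure_cost  -- cost of the failure, relative to the analysis cost
    analysis_cost -- cost of the extra upfront analysis
    """
    return p_success * failure_cost - analysis_cost

# Illustrative scenario: a failure costs 5x the analysis effort and the
# analysis prevents it 40% of the time, so the expected value is positive.
value = analysis_value(p_success=0.4, failure_cost=5.0, analysis_cost=1.0)
print(value)  # positive => additional analysis pays off in this scenario
```

Under this reading, a prediction model adds value when it flags for extra analysis exactly those requests for which this expected value is positive.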

Keywords

Early failure prediction · Cost-benefit of requirements engineering · Feature request management systems · Global software development · Open source


Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  • Camilo Fitzgerald¹
  • Emmanuel Letier¹
  • Anthony Finkelstein¹

  1. Department of Computer Science, University College London, London, UK
