
Empirical Software Engineering, Volume 24, Issue 2, pp 1017–1055

An ensemble-based model for predicting agile software development effort

  • Onkar Malgonde
  • Kaushal Chari

Abstract

To support agile software development projects, an array of tools and systems is available to plan, design, track, and manage the development process. In this paper, we explore a critical aspect of agile development, namely effort prediction, which cuts across these tools and agile project teams. Accurate effort prediction can improve the planning of a sprint by enabling optimal assignments of both stories and developers. We develop a model for story-effort prediction using variables that are readily available when a story is created, and apply seven predictive algorithms to predict a story's effort. Interestingly, none of the predictive algorithms consistently outperforms the others across our test data of 423 stories. We therefore develop an ensemble-based method, built on our model, for predicting story effort. Computational experiments show that our ensemble-based approach outperforms other ensemble-based benchmarking approaches. We then demonstrate the practical application of our predictive model and ensemble-based approach by optimizing sprint planning for two projects from our dataset using an optimization model.
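The ensemble idea described above (combine several base learners because no single algorithm dominates across all stories) can be illustrated with a minimal sketch. This is not the authors' model: the synthetic story features, the three base learners, and the unweighted average are illustrative assumptions only.

```python
# Illustrative ensemble for story-effort prediction (not the paper's model).
# Assumes numeric features available at story-creation time; data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((60, 3))  # hypothetical story features (e.g., size, complexity)
y = X @ np.array([5.0, 3.0, 2.0]) + rng.normal(0.0, 0.1, 60)  # story effort

# Fit several heterogeneous base learners on the same training data.
models = [
    LinearRegression(),
    KNeighborsRegressor(n_neighbors=5),
    DecisionTreeRegressor(max_depth=4, random_state=0),
]
for m in models:
    m.fit(X, y)

def ensemble_predict(X_new):
    """Combine base-learner predictions with a simple unweighted average."""
    return np.mean([m.predict(X_new) for m in models], axis=0)
```

A weighted combination (e.g., weights learned from validation-set accuracy, as in stacking) is the natural refinement when base learners differ in reliability; the unweighted average is used here only to keep the sketch short.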

Keywords

Agile · Effort prediction · Ensemble · Machine learning · Scrum · Sprint planning

Acknowledgements

This paper has benefited from the feedback received at the Workshop on Information Technology and Systems (WITS) 2014 (Auckland) and WITS 2016 (Dublin), where preliminary versions were presented. Many individuals helped us with this research project. First, we would like to thank those at our data site who helped us gain access to the dataset. We would also like to thank Dr. Terry Sincich for helping us with the design and choice of statistical tests, the developers and project managers who shared their insights on the implications of this research for practice, and Dr. Patricia Nickinson for proofreading and editing our draft. Finally, we express our sincere gratitude to the three anonymous reviewers and the editors for their constructive feedback on our earlier submission.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Operations Management & Information Systems, College of Business, Northern Illinois University, DeKalb, USA
  2. Information Systems and Decision Sciences, Muma College of Business, University of South Florida, Tampa, USA
