A Comparative Analysis on Effort Estimation for Agile and Non-agile Software Projects Using DBN-ALO

  • Anupama Kaushik
  • Devendra Kr. Tayal
  • Kalpana Yadav


At present, the software industry follows both agile and non-agile software development approaches, and effort estimation is intrinsic to both. This work investigates the application of a deep belief network (DBN) combined with the antlion optimization (ALO) technique for effort prediction in both agile and non-agile software development environments. The study also provides a prediction interval of effort to handle uncertainty in estimation, which helps project managers estimate effort as a range instead of a single crisp value. The proposed DBN-ALO approach is applied to four PROMISE repository datasets for traditional (non-agile) software development and to three agile datasets, and it yields the best results on all the evaluation criteria used. The proposed approach is also statistically validated using nonparametric tests, which confirm that DBN-ALO performs best for both agile and non-agile development approaches.


Software development effort · Deep belief network · Antlion optimization · Agile software development · Non-agile software development



Copyright information

© King Fahd University of Petroleum & Minerals 2019

Authors and Affiliations

  1. Department of IT, Maharaja Surajmal Institute of Technology, New Delhi, India
  2. IGDTUW (Indira Gandhi Delhi Technical University for Women), Delhi, India
  3. Department of Computer Science, IGDTUW (Indira Gandhi Delhi Technical University for Women), Delhi, India
  4. Department of IT, IGDTUW (Indira Gandhi Delhi Technical University for Women), Delhi, India
