Empirical Software Engineering, Volume 20, Issue 3, pp 640–693

Empirical evidence on the link between object-oriented measures and external quality attributes: a systematic literature review

  • Ronald Jabangwe
  • Jürgen Börstler
  • Darja Šmite
  • Claes Wohlin

Abstract

There is a plethora of studies investigating object-oriented measures and their link with external quality attributes, but the usefulness of the measures may differ across empirical studies. This study aims to aggregate and identify useful object-oriented measures, specifically measures obtainable from the source code of object-oriented systems, that have gone through such empirical evaluation. By conducting a systematic literature review, 99 primary studies were identified and traced to four external quality attributes: reliability, maintainability, effectiveness and functionality. A vote-counting approach was used to investigate the link between object-oriented measures and the attributes, and to assess the consistency of the relations reported across empirical studies. Most of the studies investigate links between object-oriented measures and proxies for reliability, followed by proxies for maintainability; effectiveness and functionality were the least investigated attributes. Measures from the C&K measurement suite were the most popular across studies. The vote-counting results suggest that complexity, cohesion, size and coupling measures have a stronger link with reliability and maintainability than inheritance measures. However, inheritance measures should not be overlooked during quality assessment initiatives; their link with reliability and maintainability could be context dependent. Too few studies were traced to the effectiveness and functionality attributes for a meaningful vote-counting analysis, which points to a need for greater diversity in the quality attributes investigated in empirical studies. Such diversity would help identify measures that are useful for quality assessment initiatives beyond reliability and maintainability.
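
The aggregation method can be pictured with a small sketch. The following is a minimal illustration of vote counting, not the authors' actual procedure or tooling: each (measure, attribute, finding) triple is a hypothetical extraction from a primary study, and the tallies show how consistently a measure is linked to an attribute across studies.

```python
from collections import Counter

# Hypothetical extraction results: one (measure, attribute, finding) triple
# per reported analysis in a primary study. "+" = significant positive link,
# "-" = significant negative link, "0" = no significant link reported.
findings = [
    ("CBO",  "reliability",     "+"),
    ("CBO",  "reliability",     "+"),
    ("DIT",  "reliability",     "0"),
    ("DIT",  "maintainability", "+"),
    ("LCOM", "maintainability", "+"),
    ("LCOM", "maintainability", "0"),
]

def vote_count(triples):
    """Tally the votes cast for each (measure, attribute) pair."""
    tallies = {}
    for measure, attribute, vote in triples:
        tallies.setdefault((measure, attribute), Counter())[vote] += 1
    return tallies

for (measure, attribute), votes in vote_count(findings).items():
    total = sum(votes.values())
    significant = votes["+"] + votes["-"]
    # A link is "consistent" when most studies agree it is significant.
    print(f"{measure} vs {attribute}: {significant}/{total} "
          f"studies report a significant link {dict(votes)}")
```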

Keywords

Systematic literature review · Object-oriented system · Source code analysis · Source code measures · Software metrics · Software quality · Static analysis
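
The keywords "source code measures" and "static analysis" refer to metrics computed directly from code without executing it. As a rough, hypothetical illustration (Python's ast module standing in for the C++/Java analyzers used in the primary studies), the sketch below approximates two C&K measures: WMC with every method weighted 1, and the declared base classes that feed inheritance measures such as DIT and NOC.

```python
import ast  # ast.unparse requires Python >= 3.9

# Hypothetical input; the primary studies analysed C++/Java systems instead.
source = """
class Base:
    def read(self): pass
    def write(self): pass

class Child(Base):
    def refresh(self): pass
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        # WMC with every method weighted 1 degenerates to the method count,
        # a common simplification in the metrics literature.
        wmc = sum(isinstance(n, ast.FunctionDef) for n in node.body)
        bases = [ast.unparse(b) for b in node.bases]  # feeds DIT/NOC
        print(f"{node.name}: WMC={wmc}, bases={bases or ['object']}")
```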

References

  1. Abreu F B, Carapuça R (1994) Object-oriented software engineering: measuring and controlling the development process. In: Proceedings of the 4th international conference on software quality, vol 186
  2. Abreu F, Melo W (1996) Evaluating the impact of object-oriented design on software quality. In: Proceedings of the 3rd international software metrics symposium, pp 90–99
  3. Abreu F B E, Goulão M, Esteves R (1995) Toward the design quality evaluation of object-oriented software systems. In: Proceedings of the 5th international conference on software quality, pp 44–57
  4. Abubakar A, AlGhamdi J, Ahmed M (2006) Can cohesion predict fault density? In: Proceedings of the 30th IEEE international conference on computer systems and applications, pp 890–893
  5. Aggarwal K, Singh Y, Kaur A, Malhotra R (2007) Investigating effect of design metrics on fault proneness in object-oriented systems. J Object Technol 6(10): 127–141
  6. Aggarwal K, Singh Y, Kaur A, Malhotra R (2009) Empirical analysis for investigating the effect of object-oriented metrics on fault proneness: a replicated case study. Softw Process Improv Pract 14(1): 39–62
  7. Ajmal Chaumun M, Kabaili H, Keller R, Lustman F (1999) A change impact model for changeability assessment in object-oriented software systems. In: Proceedings of the 3rd European conference on software maintenance and reengineering, pp 130–138
  8. Al Dallal J (2011a) Improving the applicability of object-oriented class cohesion metrics. Inf Softw Technol 53(9): 914–928
  9. Al Dallal J (2011b) Transitive-based object-oriented lack-of-cohesion metric. Procedia Comput Sci 3: 1581–1587
  10. Al Dallal J (2012a) Fault prediction and the discriminative powers of connectivity-based object-oriented class cohesion metrics. Inf Softw Technol 54(4): 396–416
  11. Al Dallal J (2012b) The impact of accounting for special methods in the measurement of object-oriented class cohesion on refactoring and fault prediction activities. J Syst Softw 85(5): 1042–1057
  12. Al Dallal J, Briand L (2010) An object-oriented high-level design-based class cohesion metric. Inf Softw Technol 52(12): 1346–1361
  13. Al Dallal J, Briand L C (2012) A precise method-method interaction-based cohesion metric for object-oriented classes. ACM Trans Softw Eng Methodol 21(2): 8:1–8:34
  14. Alshayeb M, Li W (2003) An empirical validation of object-oriented metrics in two different iterative software processes. IEEE Trans Softw Eng 29: 1043–1049
  15. Aman H, Mochiduki N, Yamada H (2006) A model for detecting cost-prone classes based on Mahalanobis-Taguchi method. IEICE Trans Inf Syst E89-D: 1347–1358
  16. Arisholm E (2006) Empirical assessment of the impact of structural properties on the changeability of object-oriented software. Inf Softw Technol 48(11): 1046–1055
  17. Arisholm E, Sjøberg D (2000) Towards a framework for empirical assessment of changeability decay. J Syst Softw 53(1): 3–14
  18. Babich D, Clarke P J, Power J F, Kibria B M G (2011) Using a class abstraction technique to predict faults in OO classes: a case study through six releases of the Eclipse JDT. In: Proceedings of the 2011 ACM symposium on applied computing. ACM, New York, pp 1419–1424
  19. Badri M, Toure F (2012) Evaluating the effect of control flow on the unit testing effort of classes: an empirical analysis. Adv Softw Eng 2012: 5:5–5:17
  20. Badri L, Badri M, Toure F (2011) An empirical analysis of lack of cohesion metrics for predicting testability of classes. Int J Softw Eng Appl 5(2): 69–86
  21. Bakar N S A A (2011) Empirical analysis of object-oriented coupling and cohesion measures in determining the quality of open source systems. In: Proceedings of the IASTED international conference on software engineering and applications, SEA 2011
  22. Bandi R K, Vaishnavi V K, Turk D E (2003) Predicting maintenance performance using object-oriented design complexity metrics. IEEE Trans Softw Eng 29(1): 77–87
  23. Bansiya J, Davis C G (2002) A hierarchical model for object-oriented design quality assessment. IEEE Trans Softw Eng 28(1): 4–17
  24. Basili V R, Briand L C, Melo W L (1996) A validation of object-oriented design metrics as quality indicators. IEEE Trans Softw Eng 22(10): 751–761
  25. Benlarbi S, Melo W (1999) Polymorphism measures for early risk prediction. In: Proceedings of the 1999 international conference on software engineering, pp 334–344
  26. Benlarbi S, El Emam K, Goel N, Rai S (2000) Thresholds for object-oriented measures. In: Proceedings of the 11th international symposium on software reliability engineering, pp 24–38
  27. Bocco M, Moody D, Piattini M (2005) Assessing the capability of internal metrics as early indicators of maintenance effort through experimentation. J Softw Maint Evol Res Pract 17(3): 225–246
  28. Brereton P, Kitchenham B A, Budgen D, Turner M, Khalil M (2007) Lessons from applying the systematic literature review process within the software engineering domain. J Syst Softw 80(4): 571–583
  29. Briand L, Wüst J (2002) Empirical studies of quality models in object-oriented systems. Adv Comput 56: 97–166
  30. Briand L, Devanbu P, Melo W (1997) An investigation into coupling measures for C++. In: Proceedings of the 19th international conference on software engineering, pp 412–421
  31. Briand L C, Wüst J, Ikonomovski S V, Lounis H (1999) Investigating quality factors in object-oriented designs: an industrial case study. In: Proceedings of the 21st international conference on software engineering, pp 345–354
  32. Briand L C, Wüst J, Daly J W, Porter D V (2000) Exploring the relationship between design measures and software quality in object-oriented systems. J Syst Softw 51(3): 245–273
  33. Briand L, Melo W, Wüst J (2002) Assessing the applicability of fault-proneness models across object-oriented software projects. IEEE Trans Softw Eng 28(7): 706–720
  34. Bruntink M, van Deursen A (2006) An empirical study into class testability. J Syst Softw 79(9): 1219–1232
  35. Cartwright M, Shepperd M (2000) Empirical investigation of an object-oriented software system. IEEE Trans Softw Eng 26(8): 786–796
  36. Catal C, Diri B (2009) A systematic review of software fault prediction studies. Expert Syst Appl 36(4): 7346–7354
  37. Catal C, Diri B, Ozumut B (2007) An artificial immune system approach for fault prediction in object-oriented software. In: Proceedings of the 2nd international conference on dependability of computer systems, pp 238–245
  38. Chidamber S, Kemerer C (1991) Towards a metrics suite for object oriented design. SIGPLAN Not 26(11): 197–211
  39. Chidamber S R, Kemerer C F (1994) A metrics suite for object oriented design. IEEE Trans Softw Eng 20(6): 476–493
  40. Chidamber S R, Darcy D P, Kemerer C F (1998) Managerial use of metrics for object-oriented software: an exploratory analysis. IEEE Trans Softw Eng 24(8): 629–639
  41. Cruz A E C, Ochimizu K (2010) A UML approximation of three Chidamber-Kemerer metrics and their ability to predict faulty code across software projects. IEICE Trans Inf Syst 93(11): 3038–3050
  42. Dagpinar M, Jahnke J H (2003) Predicting maintainability with object-oriented metrics – an empirical comparison. In: Proceedings of the 10th working conference on reverse engineering, pp 155–164
  43. Dandashi F, Rine D (2002) A method for assessing the reusability of object-oriented code using a validated set of automated measurements. In: Proceedings of the 2002 ACM symposium on applied computing, pp 997–1003
  44. Darcy D, Kemerer C, Slaughter S, Tomayko J (2005) The structural complexity of software: an experimental test. IEEE Trans Softw Eng 31(11): 982–994
  45. Díaz J, Pérez J, Alarcón P P, Garbajosa J (2011) Agile product line engineering – a systematic literature review. Softw Pract Exper 41(8): 921–941
  46. Dick S, Sadia A (2006) Fuzzy clustering of open-source software quality data: a case study of Mozilla. In: Proceedings of the international joint conference on neural networks, pp 4089–4096
  47. Dybå T, Dingsøyr T, Hanssen G (2007) Applying systematic reviews to diverse study types: an experience report. In: Proceedings of the 1st international symposium on empirical software engineering and measurement, pp 225–234
  48. El Emam K, Benlarbi S, Goel N, Melo W, Lounis H, Rai S (2002) The optimal class size for object-oriented software. IEEE Trans Softw Eng 28(5): 494–509
  49. Elish M O (2010) Exploring the relationships between design metrics and package understandability: a case study. In: Proceedings of the 18th IEEE international conference on program comprehension, pp 144–147
  50. Elish M O, Rine D (2006) Design structural stability metrics and post-release defect density: an empirical study. In: Proceedings of the 30th annual international computer software and applications conference, pp 1–8
  51. Elish M O, Al-Yafei A H, Al-Mulhem M (2011) Empirical comparison of three metrics suites for fault prediction in packages of object-oriented systems: a case study of Eclipse. Adv Eng Softw 42(10): 852–859
  52. Eski S, Buzluca F (2011) An empirical study on object-oriented metrics and software evolution in order to reduce testing costs by predicting change-prone classes. In: Proceedings of the 4th IEEE international conference on software testing, verification and validation workshops, pp 566–571
  53. Etzkorn L, Davis C, Li W (1997) A statistical comparison of various definitions of the LCOM metric. Technical Report TR-UAH-CS-1997-02
  54. Fenton N E, Pfleeger S L (1998) Software metrics: a rigorous and practical approach, 2nd edn. PWS Publishing, Boston
  55. Fioravanti F, Nesi P (2001) A study on fault-proneness detection of object-oriented systems. In: Proceedings of the European conference on software maintenance and reengineering, pp 121–130
  56. Genero M, Piattini M, Jiménez L (2001) Empirical validation of class diagram complexity metrics. In: Proceedings of the 11th international conference of the Chilean computer science society, pp 95–104
  57. Genero M, Piattini M, Calero C (2005) A survey of metrics for UML class diagrams. J Object Technol 4: 59–92
  58. Giger E, Pinzger M, Gall H (2012) Can we predict types of code changes? An empirical analysis. In: Proceedings of the 9th IEEE working conference on mining software repositories, pp 217–226
  59. Goel B, Singh Y (2008) Empirical investigation of metrics for fault prediction on object-oriented software. Stud Comput Intell 131: 255–265
  60. Guo Y, Wuersch M, Giger E, Gall H (2011) An empirical validation of the benefits of adhering to the Law of Demeter. In: Proceedings of the 18th working conference on reverse engineering, pp 239–243
  61. Gupta V, Chhabra J K (2009) Package coupling measurement in object-oriented software. J Comput Sci Technol 24(2): 273–283
  62. Gupta V, Chhabra J K (2012) Package level cohesion measurement in object-oriented software. J Braz Comput Soc 18(3): 251–266
  63. Gyimóthy T, Ferenc R, Siket I (2005) Empirical validation of object-oriented metrics on open source software for fault prediction. IEEE Trans Softw Eng 31(10): 897–910
  64. Halstead M H (1977) Elements of software science (Operating and programming systems series). Elsevier, New York
  65. Harrison R, Counsell S (1998) The role of inheritance in the maintainability of object-oriented systems. In: Proceedings of the European software control and metrics conference, pp 449–457
  66. Harrison R, Counsell S, Nithi R (1997) An overview of object-oriented design metrics. In: Proceedings of the 8th IEEE international workshop on software technology and engineering practice, pp 230–235
  67. Henningsson K, Wohlin C (2005) Monitoring fault classification agreement in an industrial context. In: Proceedings of the 9th conference on empirical assessment in software engineering
  68. Holschuh T, Päuser M, Herzig K, Zimmermann T, Premraj R, Zeller A (2009) Predicting defects in SAP Java code: an experience report. In: Proceedings of the 31st international conference on software engineering – companion volume, pp 172–181
  69. Huang P, Zhu J (2009) A multi-instance model for software quality estimation in OO systems. In: Proceedings of the 5th international conference on natural computation, pp 436–440
  70. ISO/IEC-25010 (2010) Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – system and software quality models. International Organization for Standardization
  71. ISO/IEC-9126 (2001) Software engineering – product quality – Part 1: quality model. International Organization for Standardization
  72. ISO/IEC/IEEE-24765 (2010) Systems and software engineering – vocabulary. International Organization for Standardization
  73. Janes A, Scotto M, Pedrycz W, Russo B, Stefanovic M, Succi G (2006) Identification of defect-prone classes in telecommunication software systems using design metrics. Inf Sci 176(24): 3711–3734
  74. Jia H, Shu F, Yang Y, Wang Q (2009) Predicting fault-prone modules: a comparative study. In: Software engineering approaches for offshore and outsourced development, vol 35. Springer, Berlin, pp 45–59
  75. Jin C, Jin S-W, Ye J-M, Zhang Q-G (2009) Quality prediction model of object-oriented software system using computational intelligence. In: Proceedings of the 2nd international conference on power electronics and intelligent transportation system, vol 2, pp 120–123
  76. Kamiya T, Kusumoto S, Inoue K (1999) Prediction of fault-proneness at early phase in object-oriented development. In: Proceedings of the IEEE 2nd international symposium on object-oriented real-time distributed computing, pp 253–258
  77. Kanellopoulos Y, Antonellis P, Antoniou D, Makris C, Theodoridis E, Tjortjis C, Tsirakis N (2010) Code quality evaluation methodology using the ISO/IEC 9126 standard. Int J Softw Eng Appl 1(3): 17–36
  78. Kanmani S, Rhymend Uthariaraj V, Nakkeeran R, Inbavani P (2004) Object oriented software fault prediction using adaptive neuro fuzzy inference system. WSEAS Trans Inf Sci Appl 1(5): 1142–1145
  79. Kanmani S, Uthariaraj V R, Sankaranarayanan V, Thambidurai P (2007) Object-oriented software fault prediction using neural networks. Inf Softw Technol 49(5): 483–492
  80. Karus S, Dumas M (2012) Code churn estimation using organisational and code metrics: an experimental comparison. Inf Softw Technol 54(2): 203–211
  81. Kitchenham B (2010) What's up with software metrics? A preliminary mapping study. J Syst Softw 83(1): 37–51
  82. Kitchenham B A, Charters S (2007) Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE-2007-01, Keele University
  83. Landis J, Koch G (1977) The measurement of observer agreement for categorical data. Biometrics 33(1): 159–174
  84. Lavazza L, Morasca S, Taibi D, Tosi D (2012) An empirical investigation of perceived reliability of open source Java programs. In: Proceedings of the 27th annual ACM symposium on applied computing. ACM, New York, pp 1109–1114
  85. Li W, Shatnawi R (2007) An empirical study of the bad smells and class error probability in the post-release object-oriented system evolution. J Syst Softw 80(7): 1120–1128
  86. Lincke R, Lundberg J, Löwe W (2008) Comparing software metrics tools. In: Proceedings of the international symposium on software testing and analysis, pp 131–142
  87. Liu Y, Poshyvanyk D, Ferenc R, Gyimóthy T, Chrisochoides N (2009) Modeling class cohesion as mixtures of latent topics. In: Proceedings of the 2009 IEEE international conference on software maintenance, pp 233–242
  88. Lorenz M, Kidd J (1994) Object-oriented software metrics: a practical guide. Prentice-Hall, New Jersey
  89. Malhotra R, Jain A (2011) Software fault prediction for object oriented systems: a literature review. SIGSOFT Softw Eng Notes 36(5): 1–6
  90. Malhotra R, Jain A (2012) Fault prediction using statistical and machine learning methods for improving software quality. J Inf Process Syst 8(2): 241–262
  91. Marinescu R, Marinescu C (2011) Are the clients of flawed classes (also) defect prone? In: Proceedings of the 11th IEEE international working conference on source code analysis and manipulation, pp 65–74
  92. McCabe T J (1976) A complexity measure. IEEE Trans Softw Eng 2(4): 308–320
  93. Nair T G, Selvarani R (2012) Defect proneness estimation and feedback approach for software design quality improvement. Inf Softw Technol 54(3): 274–285
  94. Nguyen V, Boehm B, Danphitsanuphan P (2011) A controlled experiment in assessing and estimating software maintenance tasks. Inf Softw Technol 53(6): 682–691
  95. Olague H M, Etzkorn L H, Cox G W (2006) An entropy-based approach to assessing object-oriented software maintainability and degradation – a method and case study. In: Proceedings of the international conference on software engineering research and practice, pp 442–452
  96. Olague H, Etzkorn L, Gholston S, Quattlebaum S (2007) Empirical validation of three software metrics suites to predict fault-proneness of object-oriented classes developed using highly iterative or agile software development processes. IEEE Trans Softw Eng 33(6): 402–419
  97. Olague H M, Etzkorn L H, Messimer S L, Delugach H S (2008) An empirical validation of object-oriented class complexity metrics and their ability to predict error-prone classes in highly iterative, or agile, software: a case study. J Softw Maint Evol Res Pract 20(3): 171–197
  98. Olbrich S, Cruzes D S, Basili V, Zazworka N (2009) The evolution and impact of code smells: a case study of two open source systems. In: Proceedings of the 3rd international symposium on empirical software engineering and measurement, pp 390–400
  99. Pai G, Bechta Dugan J (2007) Empirical analysis of software fault content and fault proneness using Bayesian methods. IEEE Trans Softw Eng 33(10): 675–686
  100. Pickard L M, Kitchenham B A, Jones P W (1998) Combining empirical results in software engineering. Inf Softw Technol 40(14): 811–821
  101. Poshyvanyk D, Marcus A, Ferenc R, Gyimóthy T (2009) Using information retrieval based coupling measures for impact analysis. Empir Softw Eng 14(1): 5–32
  102. Pritchett IV W W (2001) An object-oriented metrics suite for Ada 95. In: Proceedings of the 2001 annual ACM SIGAda international conference on Ada, pp 117–126
  103. Quah J T, Thwin M M (2002) Prediction of software readiness using neural network. In: Proceedings of the 1st international conference on information technology and applications, pp 307–312
  104. Radjenović D, Heričko M, Torkar R, Živković A (2013) Software fault prediction metrics: a systematic literature review. Inf Softw Technol 55: 1397–1418
  105. Ramasubbu N, Kemerer C F, Hong J (2012) Structural complexity and programmer team strategy: an experimental test. IEEE Trans Softw Eng 38(5): 1054–1068
  106. Rathore S, Gupta A (2012a) Investigating object-oriented design metrics to predict fault-proneness of software modules. In: Proceedings of the 6th CSI international conference on software engineering, pp 1–10
  107. Rathore S, Gupta A (2012b) Validating the effectiveness of object-oriented metrics over multiple releases for predicting fault proneness. In: Proceedings of the 19th Asia-Pacific software engineering conference, vol 1, pp 350–355
  108. Revelle M, Gethers M, Poshyvanyk D (2011) Using structural and textual information to capture feature coupling in object-oriented software. Empir Softw Eng 16(6): 773–811
  109. Reyes L, Carver D (1998) Predicting object reuse using metrics. In: Proceedings of the 10th international conference on software engineering and knowledge engineering, pp 156–159
  110. Riaz M, Mendes E, Tempero E (2009) A systematic review of software maintainability prediction and metrics. In: Proceedings of the 3rd international symposium on empirical software engineering and measurement, pp 367–377
  111. Robson C (2011) Real world research, 2nd edn. John Wiley & Sons, West Sussex
  112. Rosenberg L H, Hyatt L E (1997) Software quality metrics for object-oriented environments. Crosstalk Journal
  113. Saxena P, Saini M (2011) Empirical studies to predict fault proneness: a review. Int J Comput Appl 22(8): 41–45
  114. Shatnawi R (2010) A quantitative investigation of the acceptable risk levels of object-oriented metrics in open-source systems. IEEE Trans Softw Eng 36(2): 216–225
  115. Shatnawi R, Li W (2008) The effectiveness of software metrics in identifying error-prone classes in post-release software evolution process. J Syst Softw 81(11): 1868–1882
  116. Shatnawi R, Li W, Swain J, Newman T (2010) Finding software metrics threshold values using ROC curves. J Softw Maint Evol Res Pract 22(1): 1–16
  117. Singh Y, Saha A (2012) Prediction of testability using the design metrics for object-oriented software. Int J Comput Appl Technol 44(1): 12–22
  118. Singh P, Verma S (2012) Empirical investigation of fault prediction capability of object oriented metrics of open source software. In: Proceedings of the international joint conference on computer science and software engineering, pp 323–327
  119. Singh Y, Kaur A, Malhotra R (2007) Application of logistic regression and artificial neural network for predicting software quality models. In: Software engineering research and practice, pp 664–670
  120. Singh Y, Kaur A, Malhotra R (2009a) Comparative analysis of regression and machine learning methods for predicting fault proneness models. Int J Comput Appl Technol 35(2): 183–193
  121. Singh Y, Kaur A, Malhotra R (2009b) Software fault proneness prediction using support vector machines. In: Proceedings of the world congress on engineering, vol 1, pp 1–3
  122. Singh Y, Kaur A, Malhotra R (2010) Empirical validation of object-oriented metrics for predicting fault proneness models. Softw Qual J 18(1): 3–35
  123. Singh Y, Kaur A, Malhotra R (2011) Comparative analysis of J48 with statistical and machine learning methods in predicting fault-prone classes using object-oriented systems. J Stat Manag Syst 14(3): 595–616
  124. Subramanyam R, Krishnan M (2003) Empirical analysis of CK metrics for object-oriented design complexity: implications for software defects. IEEE Trans Softw Eng 29(4): 297–310
  125. Succi G, Pedrycz W, Stefanovic M, Miller J (2003) Practical assessment of the models for identification of defect-prone classes in object-oriented commercial systems using design metrics. J Syst Softw 65(1): 1–12
  126. Szabo R M, Khoshgoftaar T M (2004) Classifying software modules into three risk groups. Int J Reliab Qual Saf Eng 11(1): 59–80
  127. Újházi B, Ferenc R, Poshyvanyk D, Gyimóthy T (2010) New conceptual coupling and cohesion metrics for object-oriented systems. In: Proceedings of the IEEE working conference on source code analysis and manipulation, pp 33–42
  128. Xenos M, Stavrinoudis D, Zikouli K, Christodoulakis D (2000) Object-oriented metrics – a survey. In: Proceedings of the European software measurement conference, pp 1–10
  129. Xu J, Ho D, Capretz L F (2008) An empirical validation of object-oriented design metrics for fault prediction. J Comput Sci 4(7)
  130. Yu P, Systä T, Müller H (2002) Predicting fault-proneness using OO metrics: an industrial case study. In: Proceedings of the 6th European conference on software maintenance and reengineering, pp 99–107
  131. Zhou Y, Leung H (2006) Empirical analysis of object-oriented design metrics for predicting high and low severity faults. IEEE Trans Softw Eng 32(10): 771–789
  132. Zhou Y, Xu B, Leung H (2010) On the ability of complexity metrics to predict fault-prone classes in object-oriented systems. J Syst Softw 83(4): 660–674
  133. Zhou Y, Leung H, Song Q, Zhao J, Lu H, Chen L, Xu B (2012) An in-depth investigation into the relationships between structural metrics and unit testability in object-oriented systems. Sci China Inf Sci 55(12): 2800–2815
  134. Zimmermann T, Nagappan N, Herzig K, Premraj R, Williams L (2011) An empirical study on the relation between dependency neighborhoods and failures. In: Proceedings of the IEEE 4th international conference on software testing, verification and validation, pp 347–356

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Ronald Jabangwe
  • Jürgen Börstler
  • Darja Šmite
  • Claes Wohlin

  All authors are affiliated with the Blekinge Institute of Technology, Karlskrona, Sweden.