Empirical Software Engineering, Volume 17, Issue 3, pp 200–242

The ability of object-oriented metrics to predict change-proneness: a meta-analysis

Abstract

Many studies have investigated the relationships between object-oriented (OO) metrics and change-proneness and concluded that OO metrics are able to predict the extent of change of a class across the versions of a system. However, there is a need to re-examine this subject for two reasons. First, most studies analyze only a small number of OO metrics; it is therefore not clear whether this conclusion applies to most, if not all, OO metrics. Second, most studies use relatively few systems to investigate the relationships between OO metrics and change-proneness; it is therefore not clear whether this conclusion can be generalized to other systems. In this paper, based on 102 Java systems, we employ statistical meta-analysis techniques to investigate the ability of 62 OO metrics to predict change-proneness. In our context, a class that is changed in the next version of a system is called change-prone; otherwise, it is called not change-prone. The investigated OO metrics cover four dimensions: 7 size metrics, 18 cohesion metrics, 20 coupling metrics, and 17 inheritance metrics. We use AUC (the area under the receiver operating characteristic curve, ROC) to evaluate the predictive effectiveness of OO metrics. For each OO metric, we first compute the AUC and the corresponding variance for each individual system. Then, we employ a random-effect model to compute the average AUC over all systems. Finally, we perform a sensitivity analysis to investigate whether the AUC result from the random-effect model is robust to the data selection bias in this study. Our results from the random-effect models reveal that: (1) size metrics exhibit moderate or almost moderate ability in discriminating between change-prone and not change-prone classes; (2) coupling and cohesion metrics generally have a lower predictive ability than size metrics; and (3) inheritance metrics have a poor ability to discriminate between change-prone and not change-prone classes. Our results from the sensitivity analyses show that these conclusions are not substantially influenced by the data selection bias.
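To make the pipeline described above concrete, here is a minimal Python sketch of its three steps: a Mann-Whitney estimate of AUC per system, an AUC variance estimate following Hanley and McNeil (1982), and random-effects pooling using the standard DerSimonian-Laird estimator. This is our illustration, not the authors' code; the sample data are hypothetical, and the paper's exact estimators may differ in detail.

```python
# Illustrative sketch of the meta-analysis pipeline sketched in the abstract
# (not the authors' code; all data below are hypothetical).

def auc(metric_values, labels):
    """Mann-Whitney estimate of AUC: the probability that a randomly chosen
    change-prone class (label 1) has a higher metric value than a randomly
    chosen not change-prone class (label 0); ties count as 0.5."""
    pos = [v for v, y in zip(metric_values, labels) if y == 1]
    neg = [v for v, y in zip(metric_values, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def hanley_mcneil_variance(a, n_pos, n_neg):
    """Variance of an AUC estimate, following Hanley and McNeil (1982)."""
    q1 = a / (2.0 - a)
    q2 = 2.0 * a * a / (1.0 + a)
    return (a * (1.0 - a) + (n_pos - 1) * (q1 - a * a)
            + (n_neg - 1) * (q2 - a * a)) / (n_pos * n_neg)

def random_effects_mean(aucs, variances):
    """DerSimonian-Laird random-effects pooling: returns the average AUC
    over all systems and the variance of that average."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ai for wi, ai in zip(w, aucs)) / sum(w)
    # Cochran's Q and the between-system variance tau^2.
    q = sum(wi * (ai - fixed) ** 2 for wi, ai in zip(w, aucs))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(aucs) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    mean = sum(wi * ai for wi, ai in zip(w_star, aucs)) / sum(w_star)
    return mean, 1.0 / sum(w_star)

# Hypothetical per-system data: (metric values, change-prone labels).
systems = [
    ([10, 250, 40, 5, 120], [0, 1, 0, 1, 0]),
    ([30, 15, 300, 80, 60, 9], [0, 0, 1, 0, 1, 0]),
]
aucs, variances = [], []
for values, labels in systems:
    a = auc(values, labels)
    aucs.append(a)
    variances.append(hanley_mcneil_variance(a, sum(labels),
                                            len(labels) - sum(labels)))
avg_auc, avg_var = random_effects_mean(aucs, variances)
```

In this setup the pooled `avg_auc` for one OO metric is the quantity reported per metric in the paper: an AUC near 0.5 means the metric cannot separate change-prone from not change-prone classes, while values well above 0.5 indicate discriminating ability.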

Keywords

Object-oriented metrics · Change-proneness · Meta-analysis · Random-effect model · Sensitivity analysis

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. School of Computer Science and Engineering, Southeast University, Nanjing, China
  2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
  3. Department of Computing, Hong Kong Polytechnic University, Hung Hom, Hong Kong