Empirical Software Engineering, Volume 11, Issue 3, pp 395–431

Subjective evaluation of software evolvability using code smells: An empirical study

Abstract

This paper presents the results of an empirical study on the subjective evaluation of code smells that identify poorly evolvable structures in software. We propose the use of the term software evolvability to describe the ease of further developing a piece of software, and we outline the research area from four different viewpoints. Furthermore, we describe the differences between human evaluations and automatic program analysis based on software evolvability metrics. The empirical component is based on a case study in a Finnish software product company, in which we studied two topics. First, we looked at the effect of the evaluator when subjectively evaluating the existence of smells in code modules. We found that using smells for code evaluation can be difficult due to conflicting perceptions of different evaluators; however, the demographics of the evaluators partly explain the variation. Second, we applied selected source code metrics to identify four smells and compared these results to the subjective evaluations. The metrics based on automatic program analysis and the human-based smell evaluations did not fully correlate. Based on our results, we suggest that organizations make decisions regarding software evolvability improvement based on a combination of subjective evaluations and code metrics. Due to the limitations of the study, we also recognize the need for more refined studies and experiments in the area of software evolvability.
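To illustrate the kind of comparison described above, the following is a minimal sketch, not the authors' actual analysis pipeline: it contrasts a hypothetical per-module code metric (average method length, a common proxy for the "Long Method" smell) with averaged subjective smell ratings and checks their rank correlation. All module names, metric values, and ratings are invented for illustration; the rank correlation uses SciPy's `spearmanr`.

```python
# A minimal sketch (not the authors' actual analysis): compare an automatically
# computed code metric against averaged subjective smell ratings per module.
# All module names, metric values, and ratings below are hypothetical.
from statistics import mean
from scipy.stats import spearmanr

# Hypothetical per-module data: average lines of code per method (a proxy
# often used for the "Long Method" smell) and subjective ratings from three
# evaluators on a 1-7 scale (7 = smell strongly present).
modules = {
    "parser":    {"avg_method_loc": 62, "ratings": [6, 5, 7]},
    "ui_forms":  {"avg_method_loc": 18, "ratings": [2, 4, 3]},
    "reporting": {"avg_method_loc": 45, "ratings": [3, 2, 2]},
    "db_layer":  {"avg_method_loc": 30, "ratings": [5, 5, 4]},
}

metric_values = [m["avg_method_loc"] for m in modules.values()]
subjective = [mean(m["ratings"]) for m in modules.values()]

# Rank correlation between the metric and the human evaluations; a weak
# correlation would mirror the paper's finding that the two views diverge.
rho, p_value = spearmanr(metric_values, subjective)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```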

Keywords

Code smells · Subjective evaluation · Perceived evaluation · Maintainability · Evolvability · Code metrics · Software metrics · Human factors

Copyright information

© Springer Science + Business Media, LLC 2006

Authors and Affiliations

  1. Helsinki University of Technology, Helsinki, Finland