Source Code Metrics and Maintainability: A Case Study

  • Péter Hegedűs
  • Tibor Bakota
  • László Illés
  • Gergely Ladányi
  • Rudolf Ferenc
  • Tibor Gyimóthy
Part of the Communications in Computer and Information Science book series (CCIS, volume 257)


Measuring high-level quality attributes of operation-critical IT systems is essential for keeping maintainability costs under control. International standards and recommendations, such as ISO/IEC 9126, give guidelines on the quality characteristics to be assessed; however, they do not unambiguously define the relationship of these characteristics to low-level quality attributes. The vast majority of existing quality models use source code metrics to measure low-level quality attributes. Although many studies analyze the relation of source code metrics to other objective measures, only a few examine how well these metrics reflect the subjective judgments of IT professionals. Our research involved 35 IT professionals and the manual evaluation of 570 class methods from an industrial and an open-source Java system. Several statistical models were built to evaluate the relation between low-level source code metrics and the high-level subjective opinions of IT experts. A decision tree based classifier achieved a precision of over 76% when estimating the Changeability attribute of ISO/IEC 9126.
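The general approach described above — training a decision tree on low-level source code metrics to predict an expert's subjective rating of a quality attribute — can be sketched as follows. This is not the authors' actual pipeline (they report using WEKA on real metric and survey data); the metric names, data values, and parameters below are invented for illustration.

```python
# Hedged sketch: predict a subjective Changeability rating from source code
# metrics with a decision tree. All data and metric choices are hypothetical.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Each row: [logical lines of code, McCabe complexity, nesting depth]
# for one class method -- invented example values.
X = [
    [12, 2, 1], [85, 14, 4], [40, 6, 2], [150, 22, 5],
    [20, 3, 1], [95, 17, 4], [33, 5, 2], [120, 19, 5],
    [15, 2, 1], [70, 11, 3], [28, 4, 2], [110, 18, 4],
]
# Expert rating of Changeability: 1 = good, 0 = poor (invented labels).
y = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# A shallow tree keeps the learned rules interpretable,
# e.g. "complexity > 10 and nesting > 3 => poor Changeability".
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)

# Precision on the held-out set, analogous to the paper's evaluation measure.
precision = precision_score(y_test, clf.predict(X_test))
print(f"precision: {precision:.2f}")
```

On real data the features would be metrics extracted by a tool such as Columbus, and the labels would come from the manual evaluations collected from the IT professionals.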


Keywords: Metrics evaluation · Empirical quality model · ISO/IEC 9126 · Software maintainability


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Péter Hegedűs¹
  • Tibor Bakota¹
  • László Illés¹
  • Gergely Ladányi²
  • Rudolf Ferenc¹
  • Tibor Gyimóthy¹

  1. Department of Software Engineering, University of Szeged, Szeged, Hungary
  2. DEAK Cooperation Research Private Unlimited Company, Szeged, Hungary