Measuring Maintainability of OO-Software - Validating the IT-CISQ Quality Model

  • Johannes Braeuer
  • Reinhold Ploesch
  • Matthias Saft
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 511)


The Consortium for IT Software Quality (IT-CISQ) standard claims to provide valid measures and an assessment method suitable for properly measuring software quality. We implemented the maintainability measures specified by the IT-CISQ standard and investigated whether both the IT-CISQ assessment method and the IT-CISQ measures are suitable for determining the maintainability of object-oriented systems. We identified a reference study that classifies the maintainability of eight open-source Java projects; because this study follows a comprehensive measurement and assessment process, it can serve as a baseline for validating the IT-CISQ approach. Because it does not take project size metrics into account, the IT-CISQ assessment method is not capable of properly determining the quality of projects. Even when size metrics are considered, the results do not improve substantially, which indicates that the measures proposed by IT-CISQ do not properly capture maintainability. Finally, we applied our own benchmarking approach, which relates the measurements to 26 projects that constitute the benchmark base. Despite a lack of statistical significance, the benchmarking results correlate better with the ranking published by the reference study. As our benchmarking approach is well validated, we conclude that the measures proposed by IT-CISQ have to be considerably enhanced, i.e., additional measures must be added, in order to determine the maintainability of object-oriented software projects.


Keywords: Quality Attribute · Quality Model · Software Quality · Software Project · Reference Study



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Johannes Braeuer (1)
  • Reinhold Ploesch (1, email author)
  • Matthias Saft (2)
  1. Department of Business Informatics – Software Engineering, Johannes Kepler University Linz, Linz, Austria
  2. Corporate Technology, Siemens AG, Munich, Germany
