
Empirical Software Engineering, Volume 12, Issue 5, pp 517–549

Building measure-based prediction models for UML class diagram maintainability

  • Marcela Genero
  • Esperanza Manso
  • Aaron Visaggio
  • Gerardo Canfora
  • Mario Piattini

Abstract

The usefulness of measures for the analysis and design of object-oriented (OO) software is increasingly being recognized in the field of software engineering research. In particular, there is growing recognition of the need for early indicators of external quality attributes. We investigate through experimentation whether a collection of UML class diagram measures could be good predictors of two main subcharacteristics of the maintainability of class diagrams: understandability and modifiability. Results obtained from a controlled experiment and its replication support the idea that useful prediction models for class diagram understandability and modifiability can be built on the basis of early measures, in particular measures that capture structural complexity through associations and generalizations. Moreover, these measures seem to be correlated with the subjects' perception of the complexity of the diagrams, which suggests, to some extent, that the objective measures capture the same aspects as the subjective ones. Despite these encouraging findings, however, further empirical studies are needed, especially studies using data taken from real projects performed in industrial settings. Such studies will yield a comprehensive body of knowledge and experience about building prediction models for understandability and modifiability.
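To make the measure-based approach concrete, the following is a minimal Python sketch, not the authors' instrumentation: it computes two structural-complexity measures of the kind the abstract refers to, NA (number of associations) and NGen (number of generalizations), over a toy class-diagram encoding, then fits a one-predictor linear model relating their sum to understanding time. The diagram encoding, the closed-form least-squares fit, and all numeric values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's tooling): compute two
# structural-complexity measures over a toy class-diagram encoding and
# fit a one-predictor linear model, as in measure-based prediction.

def na(diagram):
    """NA: number of associations in the diagram."""
    return len(diagram["associations"])

def ngen(diagram):
    """NGen: number of generalization relationships in the diagram."""
    return len(diagram["generalizations"])

def fit_ols(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical (diagram, observed understanding time in seconds) pairs;
# these numbers are invented for illustration, not the experiment's data.
observations = [
    ({"associations": ["c1-c2"], "generalizations": []}, 120),
    ({"associations": ["c1-c2", "c2-c3"], "generalizations": ["c3->c1"]}, 200),
    ({"associations": ["c1-c2", "c2-c3", "c3-c4"],
      "generalizations": ["c3->c1", "c4->c1"]}, 310),
]

# Use the combined count NA + NGen as the single complexity predictor.
xs = [na(d) + ngen(d) for d, _ in observations]
ys = [t for _, t in observations]
a, b = fit_ols(xs, ys)
print(f"model: time = {a:.1f} + {b:.1f} * (NA + NGen)")
print(f"predicted time for NA + NGen = 4: {a + b * 4:.1f} s")
```

In the paper's setting, the response variable would be the time subjects took to complete understandability or modifiability tasks in the controlled experiment; the sketch only shows how such a model is calibrated on early measures and then applied to a new diagram.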

Keywords

Maintainability · Understandability · Modifiability · UML · Class diagrams · Structural complexity · Size · Measures · Empirical validation · Controlled experiments · Prediction model

Notes

Acknowledgements

This research is part of the MECENAS project (PBI06-0024) financed by “Consejería de Ciencia y Tecnología de la Junta de Comunidades de Castilla-La Mancha” and the following projects supported by the “Ministerio de Educación y Ciencia (Spain) and FEDER”: TIN2006-15175-C05-05, TIN2004-03145 and TIN2004-06689.

We thank Macario Polo, Félix García and Crescencio Bravo from the University of Castilla-La Mancha for having allowed us to perform the experiment with their students.

The authors are grateful to the anonymous reviewers for their insight and feedback on several key issues covered in this research. Thanks to Chris Wright for proofreading the paper.


Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Marcela Genero (1)
  • Esperanza Manso (2)
  • Aaron Visaggio (3)
  • Gerardo Canfora (3)
  • Mario Piattini (1)

  1. ALARCOS Research Group, Department of Technologies and Information Systems, University of Castilla-La Mancha, Ciudad Real, Spain
  2. GIRO Research Group, Department of Computer Science, University of Valladolid, Valladolid, Spain
  3. RCOST—Research Centre on Software Technology, University of Sannio, Benevento, Italy
