
Software Quality Journal, Volume 19, Issue 3, pp. 579–612

Assessing the maintainability of software product line feature models using structural metrics

  • Ebrahim Bagheri
  • Dragan Gasevic

Abstract

A software product line is a unified representation of a set of conceptually similar software systems that share many common features and satisfy the requirements of a particular domain. Within this context, feature models are tree-like structures widely used to model and represent the inherent commonality and variability of a software product line. Because many different software systems can be spawned from a single software product line, a low-quality design can ripple through to all of the derived systems. Early indicators of external quality attributes are therefore needed to avoid the consequences of defective, low-quality design in the late stages of production. In this paper, we propose a set of structural metrics for software product line feature models and theoretically validate them against measurement-theoretic principles. Further, we investigate through controlled experimentation whether these structural metrics can serve as good predictors (early indicators) of the three main subcharacteristics of maintainability: analyzability, changeability, and understandability. More specifically, a four-step analysis is conducted: (1) investigating whether feature model structural metrics are correlated with feature model maintainability, using classical statistical correlation techniques; (2) understanding how well each of the structural metrics can serve as a discriminatory reference for maintainability; (3) identifying the sufficient set of structural metrics for evaluating each of the subcharacteristics of maintainability; and (4) evaluating how well different prediction models built from the proposed structural metrics can indicate the maintainability of a feature model. Results obtained from the controlled experiment support the idea that useful prediction models can be built for evaluating feature model maintainability from early structural metrics. Several of the structural metrics show significant correlation with the subjects' subjective perception of the maintainability of the feature models.
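The abstract summarizes the approach; the actual metric definitions and statistical procedures appear in the full paper. As a minimal sketch of the general idea, assuming two illustrative metrics (number of features, tree depth) and entirely fabricated data, one might extract structural metrics from a feature-model tree and test their rank correlation with subjective maintainability ratings as follows:

```python
# Minimal sketch of metric extraction and correlation analysis.
# The metrics (number of features, tree depth) and all data below are
# illustrative assumptions, not the paper's actual metric suite or results.
from dataclasses import dataclass, field
from typing import List

from scipy.stats import spearmanr  # rank correlation, suitable for ordinal ratings


@dataclass
class Feature:
    """A node in a feature-model tree (cross-tree constraints omitted)."""
    name: str
    children: List["Feature"] = field(default_factory=list)


def number_of_features(root: Feature) -> int:
    """Count all features in the tree, including the root."""
    return 1 + sum(number_of_features(c) for c in root.children)


def depth_of_tree(root: Feature) -> int:
    """Length of the longest root-to-leaf path, counted in nodes."""
    return 1 + max((depth_of_tree(c) for c in root.children), default=0)


# A toy feature model: a root with two subfeatures, one refined further.
model = Feature("GraphLibrary", [
    Feature("EdgeType", [Feature("Directed"), Feature("Undirected")]),
    Feature("Weighted"),
])
print(number_of_features(model), depth_of_tree(model))  # -> 5 3

# Fabricated per-model metric values and subjective maintainability ratings
# (e.g., Likert-scale answers from experiment subjects) for six feature models.
nof_per_model = [12, 18, 25, 33, 40, 50]
ratings = [5, 4, 4, 3, 2, 1]

rho, p_value = spearmanr(nof_per_model, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```

For step (4) of the analysis, a prediction model could then be fit on such per-model metric vectors with a standard learner (e.g., a decision tree or logistic regression); the paper itself evaluates several such prediction models against the subjects' maintainability assessments.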

Keywords

Software product line · Feature model · Quality attributes · Maintainability · Structural complexity · Controlled experimentation · Software prediction model

Notes

Acknowledgments

The authors would like to acknowledge the many valuable suggestions made by the anonymous reviewers of the paper.


Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Athabasca University, Athabasca, Canada
  2. National Research Council, Ottawa, Canada
