Detection Strategies for Modularity Anomalies: An Evaluation with Software Product Lines

  • Eduardo Fernandes
  • Priscila Souza
  • Kecia Ferreira
  • Mariza Bigonha
  • Eduardo Figueiredo
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 558)

Abstract

A Software Product Line (SPL) is a configurable set of systems that share common and varying features. An SPL requires satisfactory code modularity to be used effectively; modularity anomalies therefore hinder software reuse. By detecting and resolving such anomalies, we may improve software quality and ease reuse. Different detection strategies support the identification of modularity anomalies, but their effectiveness has not been investigated in the SPL context. In this paper, after reviewing existing strategies, we compared four strategies from the literature for two modularity anomalies that affect SPLs: God Class and God Method. In addition, we proposed two novel detection strategies and compared them with the existing ones using three SPLs. As a result, the existing strategies showed high recall but low precision, while our proposed strategies presented comparable or higher recall and precision for some SPLs.
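
The detection strategies under comparison combine source code metrics with thresholds and logical operators. As an illustration only, the sketch below encodes the classic God Class rule of Lanza and Marinescu (WMC ≥ 47, ATFD > 5, TCC < 1/3, the thresholds commonly cited for that rule) and computes precision and recall against a manually built oracle, mirroring how such strategies are typically assessed. The ClassMetrics record, the GodClassStrategy class, and the evaluate helper are hypothetical names introduced here, not the paper's implementation; an SPL-specific strategy would plug in other metrics or thresholds.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/** Metric values collected for one class of the analysed system (illustrative). */
record ClassMetrics(String name, int wmc, int atfd, double tcc) { }

final class GodClassStrategy {

    /**
     * Metric-based God Class rule in the spirit of Lanza and Marinescu:
     * high complexity (WMC), access to foreign data (ATFD), and low cohesion (TCC).
     * The thresholds are assumptions taken from their published rule.
     */
    static boolean isGodClass(ClassMetrics m) {
        return m.wmc() >= 47 && m.atfd() > 5 && m.tcc() < 1.0 / 3.0;
    }

    /** Precision and recall of the strategy against a manually built oracle of God Classes. */
    static double[] evaluate(List<ClassMetrics> classes, Set<String> oracle) {
        Set<String> flagged = classes.stream()
                .filter(GodClassStrategy::isGodClass)
                .map(ClassMetrics::name)
                .collect(Collectors.toSet());
        long truePositives = flagged.stream().filter(oracle::contains).count();
        double precision = flagged.isEmpty() ? 0.0 : (double) truePositives / flagged.size();
        double recall = oracle.isEmpty() ? 0.0 : (double) truePositives / oracle.size();
        return new double[] { precision, recall };
    }
}
```

A permissive threshold configuration flags more classes, which tends to raise recall at the cost of precision; that trade-off is what the comparison of strategies in the paper quantifies.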

Keywords

Detection Strategies · Modularity Anomalies · Software Product Lines

Acknowledgements

This work was partially supported by CAPES, CNPq (grant 424340/2016-0), and FAPEMIG (grant PPM-00382-14).

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Eduardo Fernandes (1)
  • Priscila Souza (1)
  • Kecia Ferreira (2)
  • Mariza Bigonha (1)
  • Eduardo Figueiredo (1)
  1. Federal University of Minas Gerais, Belo Horizonte, Brazil
  2. Federal Center for Technological Education of Minas Gerais, Belo Horizonte, Brazil
