
Software Quality Journal, Volume 25, Issue 2, pp 529–552

Multi-objective code-smells detection using good and bad design examples

  • Usman Mansoor
  • Marouane Kessentini
  • Bruce R. Maxim
  • Kalyanmoy Deb

Abstract

Code-smells are generally identified using a set of detection rules. These rules are manually defined to identify the key symptoms that characterize a code-smell, using combinations of mainly quantitative (metrics), structural, and/or lexical information. In this work, we treat code-smell detection as a multi-objective problem in which examples of both code-smells and well-designed code are used to generate detection rules. To this end, we use multi-objective genetic programming (MOGP) to find the combination of metrics that maximizes the detection of code-smell examples while minimizing the detection of well-designed code examples. We evaluated our proposal on seven large open-source systems and found that the five code-smell types considered were detected with an average precision of 87 % and an average recall of 92 %. Statistical analysis of our experiments over 51 runs shows that MOGP performed significantly better than state-of-the-art code-smell detectors.
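The two competing objectives described in the abstract can be sketched in a few lines. The following Python fragment is an illustrative sketch, not the authors' implementation: the metric names (LOC, NOM) and the fixed thresholds stand in for one candidate rule tree of the kind MOGP would evolve, and the two returned values are the quantities the search maximizes and minimizes, respectively.

```python
# Illustrative sketch (assumed, not the paper's code) of evaluating one
# candidate detection rule against both example sets.

def rule(cls):
    # One candidate rule: a conjunction of metric/threshold comparisons,
    # analogous to a single tree evolved by genetic programming.
    # LOC/NOM names and the 500/20 thresholds are hypothetical.
    return cls["LOC"] > 500 and cls["NOM"] > 20

def objectives(rule, smell_examples, good_examples):
    # Objective 1 (to maximize): fraction of known code-smell examples detected.
    detected = sum(rule(c) for c in smell_examples) / len(smell_examples)
    # Objective 2 (to minimize): fraction of well-designed examples flagged.
    false_alarms = sum(rule(c) for c in good_examples) / len(good_examples)
    return detected, false_alarms

# Toy metric vectors for two smelly and two well-designed classes.
smells = [{"LOC": 900, "NOM": 35}, {"LOC": 650, "NOM": 28}]
good = [{"LOC": 120, "NOM": 8}, {"LOC": 300, "NOM": 12}]
print(objectives(rule, smells, good))  # -> (1.0, 0.0) on this toy data
```

A multi-objective search such as NSGA-II would keep the rules that are Pareto-optimal with respect to these two values rather than collapsing them into a single score.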

Keywords

Search-based software engineering · Software maintenance · Software metrics


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Usman Mansoor (1)
  • Marouane Kessentini (1)
  • Bruce R. Maxim (1)
  • Kalyanmoy Deb (2)
  1. University of Michigan, Dearborn, USA
  2. Michigan State University, East Lansing, USA
