Software Quality Journal, Volume 25, Issue 3, pp. 641–670

Investigating the relation between lexical smells and change- and fault-proneness: an empirical study

  • Latifa Guerrouj
  • Zeinab Kermansaravi
  • Venera Arnaoudova
  • Benjamin C. M. Fung
  • Foutse Khomh
  • Giuliano Antoniol
  • Yann-Gaël Guéhéneuc

Abstract

Past and recent studies have shown that design smells, which are poor solutions to recurrent design problems, make object-oriented systems difficult to maintain and negatively impact class change- and fault-proneness. More recently, lexical smells have been introduced to capture recurring poor practices in the naming, documentation, and choice of identifiers during the implementation of an entity. Although recent studies show that developers perceive lexical smells as impairing program understanding, no study has evaluated the relationship between lexical smells and software quality, or their interaction with design smells. In this paper, we detect 29 smells, consisting of 13 design smells and 16 lexical smells, in 30 releases of three projects: ANT, ArgoUML, and Hibernate. We analyze the extent to which classes containing lexical smells have higher (or lower) odds of changing or of being subject to fault fixing than classes containing design smells. Our results bring empirical evidence that lexical smells can, in some cases, make classes with design smells more fault-prone. In addition, we empirically show that classes containing only design smells are more change- and fault-prone than classes containing only lexical smells.
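The abstract compares the odds that smelly and non-smelly classes undergo changes or fault fixes. As a minimal sketch (not the authors' actual analysis pipeline, and with illustrative counts rather than data from the paper), such odds can be summarized by an odds ratio computed from a 2×2 contingency table of smelly vs. clean classes against fault-fixed vs. not fault-fixed:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 contingency table:

                   fault-fixed   not fault-fixed
        smelly          a               b
        clean           c               d

    An odds ratio > 1 means smelly classes have higher odds
    of being fault-fixed than clean classes.
    """
    if b == 0 or c == 0:
        raise ValueError("odds ratio undefined: zero cell in denominator")
    return (a / b) / (c / d)

# Illustrative (made-up) counts: 40 of 100 smelly classes were
# fault-fixed, versus 30 of 200 clean classes.
print(round(odds_ratio(40, 60, 30, 170), 2))  # 3.78
```

In the study's setting, a confidence interval or Fisher's exact test would typically accompany the ratio to assess significance; this sketch shows only the point estimate.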

Keywords

Lexical smells · Design smells · Change-proneness · Fault-proneness · Empirical study


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Latifa Guerrouj (1)
  • Zeinab Kermansaravi (2)
  • Venera Arnaoudova (3)
  • Benjamin C. M. Fung (4)
  • Foutse Khomh (2)
  • Giuliano Antoniol (2)
  • Yann-Gaël Guéhéneuc (2)

  1. École de Technologie Supérieure, Montreal, Canada
  2. École Polytechnique de Montréal, Montreal, Canada
  3. Washington State University, Pullman, USA
  4. McGill University, Montreal, Canada
