A Replicated Study on Relationship Between Code Quality and Method Comments

  • Yuto Miyake
  • Sousuke Amasaki
  • Hirohisa Aman
  • Tomoyuki Yokogawa
Chapter

Abstract

Context: Recent studies have empirically examined the relationship between source code comments and code quality. Some studies showed that well-commented source code could be a sign of problematic methods. Other studies showed that source code files containing comments that confess a technical debt (called self-admitted technical debt, SATD) tended to be fixed more often. The former studies considered only the amount of comments, so their findings might be driven by a specific type of comment, namely, the SATD comments examined in the latter studies. Objective: To clarify the relationship between comments other than SATD comments and code quality. Method: We replicated a part of the latter studies using such comments of methods on four OSS projects. Results: At both the file level and the method level, the presence of comments was related to more code fixes even when the comments were not SATD comments. However, SATD comments were more effective at spotting fix-prone files and methods than non-SATD comments. Conclusions: Source code comments other than SATD comments can still be a sign of problematic code. This study demonstrates the need for further analysis of the contents of comments and their relation to code quality.
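The study hinges on separating SATD comments from ordinary comments. The following is a minimal sketch of a pattern-based SATD classifier in the spirit of Potdar and Shihab's approach; the keyword list here is an illustrative subset chosen for this example, not the authors' actual pattern set, and the function name `classify_comment` is our own.

```python
import re

# Illustrative subset of SATD indicator patterns (assumption: real studies
# use a much larger, manually curated pattern list).
SATD_PATTERNS = [
    r"\btodo\b", r"\bfixme\b", r"\bhack\b", r"\bworkaround\b",
    r"\btemporary\b", r"\bkludge\b", r"\bugly\b",
]
SATD_RE = re.compile("|".join(SATD_PATTERNS), re.IGNORECASE)

def classify_comment(comment: str) -> str:
    """Label a source code comment as 'SATD' or 'non-SATD'."""
    return "SATD" if SATD_RE.search(comment) else "non-SATD"

if __name__ == "__main__":
    print(classify_comment("TODO: handle null input here"))      # SATD
    print(classify_comment("Returns the user's full name."))     # non-SATD
```

With such a classifier, each commented method can be bucketed into SATD-commented, non-SATD-commented, or uncommented groups before comparing fix frequencies across groups.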

Keywords

Source code comment · Software quality · Self-admitted technical debt


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Yuto Miyake (1)
  • Sousuke Amasaki (1)
  • Hirohisa Aman (2)
  • Tomoyuki Yokogawa (1)
  1. Faculty of Computer Science and Systems Engineering, Okayama Prefectural University, Soja, Japan
  2. Center for Information Technology, Ehime University, Matsuyama, Japan
