Automatic identifier inconsistency detection using code dictionary

Abstract

Inconsistent identifiers make it difficult for developers to understand source code. In particular, large software systems written by several developers are vulnerable to identifier inconsistency, and it is not easy to detect inconsistent identifiers that are already used in source code. Although several techniques have been proposed to address this issue, many of them produce false alarms because they do not accept domain words and idiom identifiers that are widely used in programming practice. This paper proposes an approach to detecting inconsistent identifiers based on a custom code dictionary. The approach first automatically builds a Code Dictionary from the existing API documents of popular Java projects by using a Natural Language Processing (NLP) parser. This dictionary records domain words with their dominant part-of-speech (POS) and idiom identifiers; this set of domain words and idioms improves detection accuracy by reducing false alarms. The approach then takes a target program and detects its inconsistent identifiers by leveraging the Code Dictionary. We provide CodeAmigo, a GUI-based tool that supports our approach. We evaluated the approach on seven Java-based open- and proprietary-source projects. The results show that the approach detects inconsistent identifiers with 85.4 % precision and 83.59 % recall. In addition, we interviewed developers who used our approach; the interviews confirmed that inconsistent identifiers frequently and inevitably occur in most software projects, and the interviewees stated that our approach can help detect inconsistent identifiers that would have been missed through manual detection.
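The core idea of a dictionary-based check can be illustrated with a minimal sketch. This is our illustration, not the authors' implementation: a hypothetical code dictionary maps each domain word to its dominant POS, a simple splitter breaks a camelCase identifier into words, and a Java method name whose first word is recorded as dominantly a noun (rather than a verb, as Java naming convention expects) is flagged as a potential POS inconsistency.

```python
import re

# Hypothetical code dictionary: each domain word mapped to the
# dominant part of speech that would be mined from API documents.
CODE_DICTIONARY = {
    "get": "verb",
    "set": "verb",
    "parse": "verb",
    "status": "noun",
    "value": "noun",
    "name": "noun",
}

def split_identifier(identifier):
    """Split a camelCase identifier into lowercase words."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)
    return [p.lower() for p in parts]

def check_method_name(identifier):
    """Flag a method name whose first word is not dominantly a verb.

    Java convention expects method names to start with a verb;
    a dominantly-noun first word is reported as a POS inconsistency.
    Returns a message for a suspicious name, or None.
    """
    words = split_identifier(identifier)
    if not words:
        return None
    pos = CODE_DICTIONARY.get(words[0])
    if pos is not None and pos != "verb":
        return f"'{identifier}': first word '{words[0]}' is dominantly a {pos}, not a verb"
    return None  # consistent, or first word unknown to the dictionary

print(check_method_name("statusUpdate"))  # flagged: 'status' is dominantly a noun
print(check_method_name("getValue"))      # None: consistent
```

In the actual approach the dictionary entries are mined automatically from API documents with an NLP parser rather than hand-written as here.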


Notes

  1. http://goo.gl/p6Gzmd and http://goo.gl/7cCV8n

  2. http://www.dlib.vt.edu/projects/MarianJava/edu/vt/marian/server/status.java

  3. https://github.com/tangmatt/word-scramble/blob/master/system/Status.java

  4. Note that an identifier can include multiple inconsistencies. The total number of unique identifiers containing at least one inconsistency is 1,952.

  5. https://sites.google.com/site/detectinginconsistency/

  6. Apache Directory Project: https://issues.apache.org/jira/browse/DIRSERVER-1140

  7. Apache Commons Math: https://issues.apache.org/jira/browse/MATH-707

  8. Synonyms Definition: http://en.wikipedia.org/wiki/Synonym

  9. Oxford Dictionary: http://www.oxforddictionaries.com/

  10. Collins Cobuild Dictionary: http://www.collinsdictionary.com/dictionary/english

  11. Dictionary.com: http://dictionary.reference.com/

  12. To define this map, any English dictionary can be used. In this paper, we used WordNet (2014) as described in Section 3.2.2.

  13. https://bugs.eclipse.org/bugs/show_bug.cgi?id=369942

  14. https://github.com/Chassis/memcache/issues/2

  15. https://bugs.eclipse.org/bugs/show_bug.cgi?id=108384

  16. https://github.com/scrom/Experiments/issues/32

  17. https://github.com/scrom/Experiments/commit/04dfbf7818626f9818379eb20e4c87e755407687

  18. https://github.com/morrisonlevi/Ardent/issues/17

  19. Although there has been some research on POS tagging of source code elements (Abebe and Tonella 2010; Binkley et al. 2011; Gupta et al. 2013), those taggers are either not publicly available or themselves rely on natural language parsers such as Minipar (2014) or the Stanford Log-linear Part-Of-Speech Tagger (Toutanova et al. 2003). In this paper, we adopted the Stanford Parser (2014) because it is highly accurate at parsing natural language sentences and broadly used in NLP; in addition, it is publicly available, well documented, and stable.

  20. This threshold was determined in the preliminary study.

  21. https://sites.google.com/site/detectinginconsistency/

  22. The Stanford Parser: A statistical parser (2014) reports 86 % parsing precision for sentences consisting of 40 English words.

  23. https://issues.apache.org/jira/browse/HBASE-584

  24. Oxford Dictionary: http://www.oxforddictionaries.com/

  25. Collins Cobuild Dictionary: http://www.collinsdictionary.com/dictionary/english

  26. SCOWL: http://wordlist.aspell.net/

  27. Lexicon BadSmell Wiki: http://selab.fbk.edu/LexiconBadSmellWiki

References

  1. Deißenböck F, Pizka M (2005) Concise and Consistent Naming. In: Proceedings of the International Workshop on Program Comprehension (IWPC), St. Louis, pp 261–282

  2. Lawrie D, Feild H, Binkley D (2006) Syntactic Identifier Conciseness and Consistency. In: Proceedings of the IEEE International Workshop on Source Code Analysis and Manipulation (SCAM), Philadelphia, pp 139–148

  3. Martin RC (2008) Clean Code: A Handbook of Agile Software Craftsmanship, 1st edn. Prentice Hall

  4. Higo Y, Kusumoto S (2012) How Often Do Unintended Inconsistencies Happen? Deriving Modification Patterns and Detecting Overlooked Code Fragments. In: Proceedings of the 28th International Conference on Software Maintenance, Trento, pp 222–231

  5. Abebe SL, Haiduc S, Tonella P, Marcus A (2008) Lexicon Bad Smells in Software. In: Proceedings of the Working Conference on Reverse Engineering, Antwerp, Belgium, pp 95–99

  6. Hughes E (2004) Checking Spelling in Source Code. IEEE Software, ACM SIGPLAN Not 39(12):32–38

  7. Delorey DP, Knutson CD, Davies M (2009) Mining Programming Language Vocabularies from Source Code. In: Proceedings of the 21st Conference of the Psychology of Programming Interest Group (PPIG), London

  8. Lawrie D, Binkley D, Morrell C (2010) Normalizing Source Code Vocabulary. In: Proceedings of the 17th Working Conference on Reverse Engineering, Boston, pp 3–12

  9. Abebe SL, Tonella P (2010) Natural Language Parsing of Program Element Names for Concept Extraction. In: Proceedings of the International Conference on Program Comprehension (ICPC), Minho, pp 156–159

  10. Falleri J, Lafourcade M, Nebut C, Prince V, Dao M (2010) Automatic Extraction of a WordNet-like Identifier Network from Software. In: Proceedings of the International Conference on Program Comprehension (ICPC), Minho, pp 4–13

  11. Abebe S, Tonella P (2013) Automated identifier completion and replacement. In: Proceedings of the european conference on software maintenance and reengineering (CSMR), Genova, pp 263–272

  12. Host EW, Ostvold BM (2009) Debugging Method Names. In: Proceedings of the 23rd European Conference on Object-Oriented Programming. Lect Notes Comput Sci 5653(1):294–317

  13. Lee S, Kim S, Kim J, Park S (2012) Detecting Inconsistent Names of Source Code Using NLP. Computer Applications for Database, Education, and Ubiquitous Computing Communications in Computer and Information Science 352(1):111–115

  14. Code Conventions for the Java Programming Language: Why Have Code Conventions. Sun Microsystems (1999). http://www.oracle.com/technetwork/java/index-135089.html

  15. Lawrie D, Feild H, Binkley D (2007) Quantifying identifier quality: an analysis of trends. Empir Softw Eng 12(4):359–388

  16. Madani N, Guerrouj L, Penta MD, Gueheneuc Y, Antoniol G (2010) Recognizing Words from Source Code Identifiers Using Speech Recognition Techniques. In: Proceedings of the 14th European Conference on Software Maintenance and Reengineering (CSMR), Madrid, pp 68–77

  17. Goodliffe P (2006) Code Craft: The Practice of Writing Excellent Code. No Starch Press

  18. WordNet: A lexical database for English Home page (2014). http://wordnet.princeton.edu/

  19. Haber RN, Schindler RM (1981) Errors in proofreading: Evidence of Syntactic Control of Letter Processing. J Exp Psychol Hum Percept Perform 7(1):573–579

  20. Monk AF, Hulme C (1983) Errors in proofreading: Evidence for the Use of Word Shape in Word Recognition. Mem Cogn 11(1):16–23

  21. Caprile B, Tonella P (1999) Nomen Est Omen: Analyzing the Language of Function Identifiers. In: Proceedings of the Working Conference on Reverse Engineering, Atlanta, pp 112–122

  22. The Stanford Parser: A statistical parser Home page (2014). http://nlp.stanford.edu/software/lex-parser.shtml

  23. Apache OpenNLP Homepage (2014). http://opennlp.apache.org/

  24. Binkley D, Hearn M, Lawrie D (2011) Improving Identifier Informativeness Using Part of Speech Information. In: Proceedings of the 8th Working Conference on Mining Software Repositories, New York, pp 203–206

  25. Gupta S, Malik S, Pollock L, Vijay-Shanker K (2013) Part-of-Speech Tagging of Program Identifiers for Improved Text-Based Software Engineering Tools. In: Proceedings of the 21st International Conference on Program Comprehension (ICPC), San Francisco, pp 3–12

  26. MINIPAR Homepage (2014). http://webdocs.cs.ualberta.ca/lindek/minipar.htm

  27. Toutanova K, Klein D, Manning C, Singer Y (2003) Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In: Proceedings of HLT-NAACL, pp 252–259

  28. The Penn Treebank Project (2013). http://www.cis.upenn.edu/treebank/

  29. Budanitsky A, Hirst G (2006) Evaluating WordNet-based Measures of Lexical Semantic Relatedness. Comput Linguist 32(1):13–47

  30. Levenshtein VI (1966) Binary codes capable of correcting deletions, insertions and reversals. Sov Phys Doklady 10(8):707–710

  31. Frakes WB, Baeza-Yates R (1992) Information Retrieval: Data Structures and Algorithms. Prentice-Hall, Englewood Cliffs, N.J.

  32. Apache Lucene Homepage (2013). http://lucene.apache.org/core/

  33. Apache Ant Homepage (2013). http://ant.apache.org/

  34. Apache JMeter Homepage (2013). http://jmeter.apache.org/

  35. JUnit Homepage (2013). http://www.junit.org/

  36. JHotDraw 7 Homepage (2013). http://www.randelshofer.ch/oop/jhotdraw/

  37. Sweet Home 3D Homepage (2013). http://sourceforge.net/projects/sweethome3d

  38. Klein D, Manning CD (2003) Accurate Unlexicalized Parsing. In: Proceedings of the meeting of the association for computational linguistics, Sapporo, pp 423–430

  39. Code Amigo Validation WebPage (2014). http://54.250.194.210/

  40. Powers DM (2011) Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation. J Mach Learn Technol 1(1):37–63

  41. Eclipse-CS Check Style Homepage (2013). http://eclipse-cs.sourceforge.net/

  42. Find Bugs in Java Programs Homepage (2013). http://findbugs.sourceforge.net/

  43. Bloch J (2001) Effective Java Programming Language Guide. Sun Microsystems

  44. Bloch J (2008) Effective Java, 2nd edn. Addison-Wesley

  45. Arnaoudova V, Penta MD, Antoniol G, Gueheneuc Y (2013) A New Family of Software Anti-Patterns: Linguistic Anti-Patterns. In: Proceedings of the european conference on software maintenance and reengineering (CSMR), Genova, pp 187–196

Acknowledgments

This paper was supported by research funds of Chonbuk National University in 2014. This research was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014M3C4A7030505).

Author information

Corresponding author

Correspondence to Dongsun Kim.

Additional information

Communicated by: Giulio Antoniol

Appendix A: List of Domain Word POSes and Idioms

Table 13 Domain words with the dominant POS information extracted from the API documents of the projects, with the parameters T_WO = 100 and T_PR = 0.8 (marked words were evaluated as invalid in the preliminary study; the precision is computed as 176/191 = 0.921)
Table 14 Idiom identifiers extracted from the API documents of the projects listed in Table 1, where T(FO_fmw) = 2, T(FO_cls) = 2, T(FO_att) = 2, and T(FO_met) = 10
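Read literally, the Table 13 parameters suggest two admission thresholds for the dictionary: a minimum number of word occurrences and a minimum ratio for the dominant POS. The sketch below is our interpretation of such a selection step, not the authors' code; the names OBSERVATIONS and select_domain_words are hypothetical.

```python
from collections import Counter

# Hypothetical tagged occurrences mined from API documents:
# word -> Counter of POS tags observed for that word.
OBSERVATIONS = {
    "status": Counter({"noun": 950, "verb": 50}),
    "load": Counter({"verb": 120, "noun": 80}),
}

# Thresholds named after Table 13: T_WO is the minimum occurrence
# count, T_PR the minimum ratio of the dominant POS.
T_WO = 100
T_PR = 0.8

def select_domain_words(observations, t_wo=T_WO, t_pr=T_PR):
    """Keep words that occur often enough and have a clearly dominant POS."""
    dictionary = {}
    for word, tags in observations.items():
        total = sum(tags.values())
        pos, count = tags.most_common(1)[0]  # most frequent POS for this word
        if total >= t_wo and count / total >= t_pr:
            dictionary[word] = pos
    return dictionary

# "status" passes (1000 occurrences, 95 % noun); "load" fails the
# ratio test (only 60 % verb), so it is kept out of the dictionary.
print(select_domain_words(OBSERVATIONS))  # {'status': 'noun'}
```

Raising T_PR trades recall for precision: a word whose POS varies with context never enters the dictionary, so it cannot trigger a false alarm.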

About this article

Cite this article

Kim, S., Kim, D. Automatic identifier inconsistency detection using code dictionary. Empir Software Eng 21, 565–604 (2016). https://doi.org/10.1007/s10664-015-9369-5

Keywords

  • Inconsistent identifiers
  • Code dictionary
  • Source code
  • Refactoring
  • Code readability
  • Part-of-speech analysis