Is the Most Frequent Sense of a Word Better Connected in a Semantic Network?

  • Hiram Calvo
  • Alexander Gelbukh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9227)


Abstract

In this paper we test the hypothesis that the most frequent sense (MFS) of a word tends to be the best-connected sense in WordNet 2.0: the MFS usually has a longer gloss, more usage examples, and more relationships with other words (synonyms, hyponyms, etc.). We compare finding the MFS through the number of relationships each synset has in the semantic network against measuring only surface features of each sense's gloss, such as its number of characters and words. We find that counting only inbound relationships differs from counting both inbound and outbound relationships, and that second-order relationships are of little help, even when restricted to relationships of the same kind. We analyze the contribution of each kind of relationship in a synset, and finally we examine the cases where our algorithm finds the correct sense in SemCor even though that sense differs from the MFS listed in WordNet.
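The core idea above, picking the sense with the most network relationships as a proxy for the MFS, can be sketched on a toy graph. The synset names and edges below are invented for illustration (the paper works over all WordNet 2.0 relation types), but the inbound/outbound/both distinction matches the comparison described in the abstract:

```python
# Hypothetical mini-network for two senses of the noun "bank".
# Edges are (source, relation, target) triples; names are illustrative only.
EDGES = [
    ("bank.n.01", "hypernym", "financial_institution.n.01"),
    ("bank.n.01", "meronym",  "vault.n.01"),
    ("commercial_bank.n.01", "hypernym", "bank.n.01"),   # inbound to bank.n.01
    ("savings_bank.n.01",    "hypernym", "bank.n.01"),   # inbound to bank.n.01
    ("bank.n.09", "hypernym", "slope.n.01"),
    ("riverbank.n.01", "hypernym", "bank.n.09"),         # inbound to bank.n.09
]

def degree(synset, edges, direction="both"):
    # Count relationships touching the synset: outbound edges (it is the
    # source), inbound edges (it is the target), or both.
    out_deg = sum(1 for s, _, t in edges if s == synset)
    in_deg = sum(1 for s, _, t in edges if t == synset)
    if direction == "out":
        return out_deg
    if direction == "in":
        return in_deg
    return in_deg + out_deg

def predict_mfs(senses, edges, direction="both"):
    # Predict the MFS as the sense with the most relationships.
    return max(senses, key=lambda s: degree(s, edges, direction))

print(predict_mfs(["bank.n.01", "bank.n.09"], EDGES))  # bank.n.01 (4 edges vs 2)
```

With `direction="in"` or `direction="out"` the ranking can change on real data, which is the inbound-versus-both contrast the abstract reports.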


References

  1. Calvo, H., Gelbukh, A.: Finding the most frequent sense of a word by the length of its definition. In: Gelbukh, A., Espinoza, F.C., Galicia-Haro, S.N. (eds.) MICAI 2014, Part I. LNCS, vol. 8856, pp. 1–8. Springer, Heidelberg (2014)
  2. Hawker, T., Honnibal, M.: Improved default sense selection for word sense disambiguation. In: Proceedings of the 2006 Australasian Language Technology Workshop (ALTW 2006), pp. 11–17 (2006)
  3. Lesk, M.: Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In: Proceedings of the 5th Annual International Conference on Systems Documentation, pp. 24–26. ACM (1986)
  4. Lin, D.: An information-theoretic definition of similarity. In: International Conference on Machine Learning, pp. 296–304 (1998)
  5. Marcus, M.P., Marcinkiewicz, M.A., Santorini, B.: Building a large annotated corpus of English: the Penn Treebank. Comput. Linguist. 19(2), 313–330 (1993)
  6. Màrquez, L., Taulé, M., Martí, M.A., García, M., Artigas, N., Real, F.J., Ferrés, D.: Senseval-3: the Spanish lexical sample task. In: Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain. Association for Computational Linguistics (2004)
  7. McCarthy, D., Koeling, R., Weeds, J., Carroll, J.: Unsupervised acquisition of predominant word senses. Comput. Linguist. 33(4), 553–590 (2007)
  8. Mihalcea, R., Chklovski, T., Kilgarriff, A.: The Senseval-3 English lexical sample task. In: Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pp. 25–28 (2004)
  9. Miller, G., Leacock, C., Tengi, R., Bunker, R.T.: A semantic concordance. In: Proceedings of the ARPA Workshop on Human Language Technology, pp. 303–308 (1993)
  10. Miller, G.A., Chodorow, M., Landes, S., Leacock, C., Thomas, R.G.: Using a semantic concordance for sense identification. In: Proceedings of the ARPA Human Language Technology Workshop, pp. 240–243 (1994)
  11. Snyder, B., Palmer, M.: The English all-words task. In: ACL 2004 Senseval-3 Workshop, Barcelona, Spain (2004)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
