The Naïve Bayes Model in the Context of Word Sense Disambiguation

Chapter in the SpringerBriefs in Statistics book series (BRIEFSSTATIST)

Abstract

This chapter discusses the Naïve Bayes model strictly in the context of word sense disambiguation (WSD). The theoretical model is presented and its implementation is discussed, with special attention paid to parameter estimation and feature selection, the two central issues in implementing the model. The EM algorithm is recommended as suitable for parameter estimation in the case of unsupervised WSD. Feature selection is surveyed in the following chapters.
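
For concreteness, the following is a minimal sketch of the Naïve Bayes decision rule as it is commonly written for WSD; the notation is generic and not necessarily the chapter's own. Given an ambiguous word with sense inventory $S$ and a context represented by features $f_1, \dots, f_n$, the classifier selects

$$\hat{s} \;=\; \operatorname*{arg\,max}_{s \in S} \; P(s) \prod_{i=1}^{n} P(f_i \mid s),$$

where the "naïve" assumption is that the context features are conditionally independent given the sense. In the supervised setting, $P(s)$ and $P(f_i \mid s)$ are estimated from sense-tagged data, typically with smoothing; in the unsupervised setting considered here, the senses are treated as latent classes and the same parameters are re-estimated iteratively with the EM algorithm, alternating between computing posterior sense probabilities for each context (E-step) and updating $P(s)$ and $P(f_i \mid s)$ from those posteriors (M-step).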

Keywords

Bayesian classification · Expectation-Maximization algorithm · Naïve Bayes classifier

Copyright information

© The Author(s) 2013

Authors and Affiliations

Faculty of Mathematics and Computer Science, Department of Computer Science, University of Bucharest, Bucharest, Romania
