Model-Based Estimation of Word Saliency in Text

  • Xin Wang
  • Ata Kabán
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4265)


We investigate a generative latent variable model for estimating the saliency of words in text modelling and classification. The derived estimation algorithm infers word saliency with respect to the mixture modelling objective. Experimental results demonstrate that common stop-words, as well as other corpus-specific common words, are automatically down-weighted, which enhances the model's ability to capture the essential structure in the data while ignoring irrelevant details. As a classifier, our approach improves on the class prediction accuracy of the Naive Bayes classifier in all our experiments. Compared with a recent state-of-the-art text classification method, the Dirichlet Compound Multinomial model, we obtained improved results on two of the three benchmark text collections tested and comparable results on the third.
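The estimation details are in the full paper, but the feature-saliency mixture formulation of Law, Figueiredo and Jain [3], which this line of work builds on, suggests a plausible form of the model: each word token is generated either from a component-specific multinomial, with probability equal to that word's saliency, or from a common background multinomial shared by all components. A sketch under that assumption, with $\rho_w \in [0,1]$ denoting the saliency of word $w$:

\[
P(d) = \sum_{k=1}^{K} \pi_k \prod_{w=1}^{W} \bigl[\rho_w\,P(w \mid \theta_k) + (1-\rho_w)\,P(w \mid \lambda)\bigr]^{n_{dw}}
\]

Here $\pi_k$ are the mixture weights, $\theta_k$ the component-specific word distributions, $\lambda$ a shared background distribution, and $n_{dw}$ the count of word $w$ in document $d$. Stop-words and corpus-wide common words are explained equally well by the background $\lambda$ in every component, so maximising the likelihood drives their saliencies $\rho_w$ towards zero, consistent with the down-weighting behaviour reported above.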




References

  1. Francis, W.N., Kučera, H.: Frequency Analysis of English Usage. Houghton Mifflin, Boston (1982)
  2. Madsen, R.E., Kauchak, D., Elkan, C.: Modeling word burstiness using the Dirichlet distribution. In: ICML 2005: Proceedings of the 22nd International Conference on Machine Learning, pp. 545–552. ACM Press, New York (2005)
  3. Law, M.H.C., Figueiredo, M.A.T., Jain, A.K.: Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1154–1166 (2004)
  4. McCallum, A., Nigam, K.: A comparison of event models for Naive Bayes text classification. In: AAAI 1998 Workshop on Learning for Text Categorization (1998)
  5. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In: Proceedings of the European Conference on Machine Learning. Springer, Heidelberg (1998)
  6. McCallum, A., Rosenfeld, R., Mitchell, T.M., Ng, A.Y.: Improving text classification by shrinkage in a hierarchy of classes. In: ICML 1998: Proceedings of the Fifteenth International Conference on Machine Learning, pp. 359–367. Morgan Kaufmann, San Francisco (1998)
  7. Sebastiani, F.: Machine learning in automated text categorization. ACM Comput. Surv. 34(1), 1–47 (2002)
  8. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, Heidelberg (1995)
  9. Wang, X., Kabán, A.: Finding uninformative features in binary data. In: Gallagher, M., Hogan, J.P., Maire, F. (eds.) IDEAL 2005. LNCS, vol. 3578, pp. 40–47. Springer, Heidelberg (2005)
  10. Yang, Y., Pedersen, J.O.: A comparative study on feature selection in text categorization. In: Fisher, D.H. (ed.) ICML 1997: Proceedings of the 14th International Conference on Machine Learning, Nashville, US, pp. 412–420. Morgan Kaufmann, San Francisco (1997)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Xin Wang (1)
  • Ata Kabán (1)

  1. School of Computer Science, The University of Birmingham, Birmingham, UK
