A Neural Network for Text Representation

  • Mikaela Keller
  • Samy Bengio
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3697)

Abstract

Text categorization and retrieval tasks often rely on a good representation of textual data. Departing from the classical vector space model, several probabilistic models, such as PLSA, have been proposed recently. In this paper, we propose a neural-network-based, non-probabilistic alternative, which jointly captures a rich representation of words and documents. Experiments performed on two information retrieval tasks, using the TDT2 database and the TREC-8 and TREC-9 query sets, show that the proposed neural network model outperforms both PLSA and the classical TF-IDF representation.
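
To make the abstract's idea concrete, below is a minimal sketch (PyTorch) of a model in which words and documents jointly receive learned vector representations, trained so that words observed in a document score higher than random words. The embedding dimension, dot-product scoring, and margin ranking loss here are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a joint word/document representation model: each vocabulary
# word and each training document gets a learned embedding, and a score
# measures how well a word fits a document. All hyperparameters and the
# ranking objective are assumptions for illustration only.
import torch
import torch.nn as nn

class JointTextModel(nn.Module):
    def __init__(self, n_words, n_docs, dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)  # one vector per word
        self.doc_emb = nn.Embedding(n_docs, dim)    # one vector per document

    def forward(self, word_idx, doc_idx):
        # Score a (word, document) pair by the dot product of their vectors.
        w = self.word_emb(word_idx)
        d = self.doc_emb(doc_idx)
        return (w * d).sum(dim=-1)

# Toy training loop: push scores of observed (word, doc) pairs above the
# scores of randomly sampled negative words by a margin of 1.
model = JointTextModel(n_words=5000, n_docs=200)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
words = torch.randint(0, 5000, (64,))   # observed words (toy data)
docs = torch.randint(0, 200, (64,))     # the documents they occur in
neg = torch.randint(0, 5000, (64,))     # random negative words
for _ in range(10):
    loss = torch.relu(1.0 - model(words, docs) + model(neg, docs)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, document vectors can be compared to query-word vectors for retrieval, much as TF-IDF document vectors are compared to query vectors in the vector space model.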

Keywords

Latent Dirichlet Allocation · Hidden Unit · Latent Semantic Analysis · Vector Space Model · Retrieval Task

References

  1. Sebastiani, F.: Machine learning in automated text categorization. ACM Computing Surveys 34, 1–47 (2002)
  2. Hofmann, T.: Unsupervised learning by Probabilistic Latent Semantic Analysis. Machine Learning 42, 177–196 (2001)
  3. LeCun, Y., Huang, F.J.: Loss functions for discriminative training of energy-based models. In: Proc. of AIStats (2005)
  4. Salton, G., Wong, A., Yang, C.: A Vector Space Model for Automatic Indexing. Communications of the ACM 18 (1975)
  5. Salton, G., Buckley, C.: Term-weighting approaches in automatic text retrieval. Information Processing and Management 24, 513–523 (1988)
  6. Deerwester, S.C., Dumais, S.T., Landauer, T.K., Furnas, G.W., Harshman, R.A.: Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41, 391–407 (1990)
  7. Gaussier, E., Goutte, C., Popat, K., Chen, F.: A hierarchical model for clustering and categorising documents. In: Crestani, F., Girolami, M., van Rijsbergen, C.J.K. (eds.) ECIR 2002. LNCS, vol. 2291, pp. 229–247. Springer, Heidelberg (2002)
  8. Blei, D., Ng, A., Jordan, M.: Latent Dirichlet Allocation. JMLR 3, 993–1022 (2003)
  9. Buntine, W.: Variational extensions to EM and multinomial PCA. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, pp. 23–34. Springer, Heidelberg (2002)
  10. Keller, M., Bengio, S.: Theme topic mixture model: A graphical model for document representation. In: PASCAL Workshop on Learning Methods for Text Understanding and Mining (2004)
  11. Cristianini, N., Shawe-Taylor, J., Lodhi, H.: Latent semantic kernels. J. Intell. Inf. Syst. 18, 127–152 (2002)
  12. Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A Neural Probabilistic Language Model. JMLR 3, 1137–1155 (2003)
  13. Collobert, R., Bengio, S.: Links between perceptrons, MLPs and SVMs. In: Proceedings of ICML (2004)
  14. Lewis, D.D.: The TREC-4 filtering track. In: TREC (1995)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Mikaela Keller (1)
  • Samy Bengio (1)

  1. IDIAP Research Institute, Martigny, Switzerland
