
Word Folding: Taking the Snapshot of Words Instead of the Whole

  • Jin-Dong Kim
  • Jun’ichi Tsujii
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3248)

Abstract

The snapshot of a word is its most informative fragment. By taking the snapshot instead of the whole word, the value space of lexical features can be reduced significantly. From a machine learning perspective, a smaller space of feature values implies some loss of information, but also less data sparseness and fewer unseen values. Snapshots are taken with the word folding technique, whose goal is to reduce the value space of lexical features while minimizing the loss of information.
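The abstract does not spell out how the most informative fragment is chosen, but the idea can be illustrated with a small sketch. The Python code below is a minimal illustration rather than the authors' actual procedure: it assumes suffix fragments as snapshot candidates and uses information gain over a toy set of (word, label) pairs to pick a folding; the function names and example data are hypothetical.

```python
from collections import Counter, defaultdict
import math

def entropy(counts):
    """Shannon entropy (in bits) of a label-count distribution."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def info_gain(pairs, fold_fn):
    """Information gained about the label when each word is folded by fold_fn."""
    prior = entropy(Counter(label for _, label in pairs))
    groups = defaultdict(Counter)
    for word, label in pairs:
        groups[fold_fn(word)][label] += 1
    n = len(pairs)
    conditional = sum(sum(g.values()) / n * entropy(g) for g in groups.values())
    return prior - conditional

def best_suffix_folding(pairs, lengths=(1, 2, 3, 4)):
    """Pick the suffix length whose folding retains the most label information."""
    def make_fold(k):
        return lambda w: w[-k:].lower()
    gain, k = max((info_gain(pairs, make_fold(k)), k) for k in lengths)
    return make_fold(k)

# Toy (word, label) data; real experiments would use a tagged corpus.
pairs = [("running", "VBG"), ("eating", "VBG"), ("quickly", "RB"),
         ("slowly", "RB"), ("protein", "NN"), ("domain", "NN")]
fold = best_suffix_folding(pairs)
print({w: fold(w) for w, _ in pairs})  # each word replaced by its folded fragment
```

Folding every word down to a shared fragment collapses rare and previously unseen words onto fragments already observed in training, which is the reduction in data sparseness the abstract describes.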

Keywords

Natural Language Processing · Information Gain · Relative Entropy · Target Feature · Word Sense Disambiguation



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jin-Dong Kim (1)
  • Jun’ichi Tsujii (1)
  1. University of Tokyo, Tokyo, Japan
