Building an Optimal WSD Ensemble Using Per-Word Selection of Best System

  • Harri M. T. Saarikoski
  • Steve Legrand
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4225)

Abstract

In the Senseval workshops for evaluating WSD systems [1,4,9], no single system or system type (classifier algorithm, type of system ensemble, extracted feature set, lexical knowledge source, etc.) has proven superior at resolving all ambiguous words into their senses. This paper presents a novel method for selecting the best system for a target word based on readily available word features (number of senses, average amount of training per sense, dominant sense ratio). Applied to state-of-the-art systems from the Senseval-3 and Senseval-2 English lexical sample tasks, the method achieves a net gain of approximately 2.5% and 5.0%, respectively, in average precision per word over the best base system. The method can be applied to any base system or target word in any language.

Keywords

Support Vector Machine, Target Word, Test Word, Ambiguous Word, Word Sense Disambiguation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Edmonds, P., Kilgarriff, A.: Introduction to the Special Issue on evaluating word sense disambiguation programs. Journal of Natural Language Engineering 8(4) (2002)
  2. Grozea, C.: Finding optimal parameter settings for high performance word sense disambiguation. In: SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain (2004)
  3. Hoste, V., Hendrickx, I., Daelemans, W., van den Bosch, A.: Parameter optimization for machine-learning of word sense disambiguation. Journal of Natural Language Engineering 8(4), 311–327 (2002)
  4. Kilgarriff, A.: SENSEVAL: An Exercise in Evaluating Word Sense Disambiguation Programs. In: Proceedings of LREC, Granada, pp. 581–588 (1998)
  5. Lee, Y.-K., Ng, H.-T., Chia, T.-K.: Supervised Word Sense Disambiguation with Support Vector Machines and Multiple Knowledge Sources. In: Proceedings of SENSEVAL-3 workshop (2004)
  6. Legrand, S., Pulido, J.G.R.: A Hybrid Approach to Word Sense Disambiguation: Neural Clustering with Class Labeling. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) ECML 2004. LNCS (LNAI), vol. 3201, Springer, Heidelberg (2004)
  7. Luo, F., Khan, L., Bastani, F., Yen, I.-L., Zhou, J.: A dynamically growing self-organizing tree (DGSOT) for hierarchical clustering gene expression profiles. Bioinformatics 20(16), 2605–2617 (2004)
  8. Manning, C., Tolga Ilhan, H., Kamvar, S., Klein, D., Toutanova, K.: Combining Heterogeneous Classifiers for Word-Sense Disambiguation. In: Proceedings of SENSEVAL-2, Second International Workshop on Evaluating WSD Systems, pp. 87–90 (2001)
  9. Mihalcea, R.: Word sense disambiguation with pattern learning and automatic feature selection. Journal of Natural Language Engineering 8(4), 343–359 (2002)
  10. Mihalcea, R., Kilgarriff, A., Chklovski, T.: The SENSEVAL-3 English lexical sample task. In: Proceedings of SENSEVAL-3 Workshop at ACL (2004)
  11. Montoyo, A., Suárez, A.: The University of Alicante word sense disambiguation system. In: Proceedings of SENSEVAL-2 Workshop, pp. 131–134 (2001)
  12. Mooney, R.: Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (1996)
  13. Saarikoski, H.: mySENSEVAL: Explaining WSD System Performance Using Target Word Features. In: Montoyo, A., Muñoz, R., Métais, E. (eds.) NLDB 2005. LNCS, vol. 3513, pp. 369–371. Springer, Heidelberg (2005)
  14. Seo, H.-C., Rim, H.-C., Kim, S.-H.: KUNLP system in Senseval-3. In: Proceedings of SENSEVAL-2 Workshop, pp. 222–225 (2001)
  15. Strapparava, C., Gliozzo, A., Giuliano, C.: Pattern abstraction and term similarity for Word Sense Disambiguation: IRST at Senseval-3. In: Proceedings of SENSEVAL-3 workshop (2004)
  16. Witten, I., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann, San Francisco (2005)
  17. Yarowsky, D., Cucerzan, S., Florian, R., Schafer, C., Wicentowski, R.: The Johns Hopkins SENSEVAL2 System Descriptions. In: Proceedings of SENSEVAL-2 workshop (2002)
  18. Yarowsky, D., Florian, R.: Evaluating sense disambiguation across diverse parameter spaces. Journal of Natural Language Engineering 8(4), 293–311 (2002)
  19. Zavrel, J., Degroeve, S., Kool, A., Daelemans, W., Jokinen, K.: Diverse Classifiers for NLP Disambiguation Tasks. Comparisons, Optimization, Combination, and Evolution. TWLT 18. Learning to Behave. CEvoLE 2, 201–221 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Harri M. T. Saarikoski (1)
  • Steve Legrand (2)
  1. KIT Language Technology Doctorate School, Helsinki University, Finland
  2. Department of Computer Science, University of Jyväskylä, Finland
