Case-Sensitivity of Classifiers for WSD: Complex Systems Disambiguate Tough Words Better

  • Harri M. T. Saarikoski
  • Steve Legrand
  • Alexander Gelbukh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4394)

Abstract

We present a novel method for improving disambiguation accuracy by building an optimal ensemble (OE) of systems, in which we predict the best available system for each target word from a priori case factors (e.g. the amount of training per sense). We report promising results from a series of best-system prediction tests (best prediction accuracy 0.92) and show that complex systems disambiguate tough words better while simple systems disambiguate easy words better. The method provides two main benefits: (1) higher disambiguation accuracy for virtually any set of base systems (the current best OE yields close to a 2% accuracy gain over the Senseval-3 state of the art) and (2) an economical way of building more effective ensembles of all types (e.g. optimal, weighted-voting, and cross-validation-based). The method is also highly scalable: it relies only on factors readily available for any ambiguous word in any language to estimate word difficulty, and it characterizes classifier complexity using known classifier properties alone.
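To make the per-word selection concrete, the sketch below shows one way such an optimal ensemble could be wired up: a small meta-classifier learns, from a priori case factors, which base system to trust for a given word. The specific factors, base systems, and the decision-tree meta-learner are illustrative assumptions for this sketch, not the exact configuration used in the paper.

```python
# Minimal sketch of per-word best-system selection for an optimal ensemble (OE).
# All feature names, data values, and base systems below are hypothetical.

from sklearn.tree import DecisionTreeClassifier

# Each row describes one ambiguous word by a priori case factors:
# [average training examples per sense, number of senses]
word_factors = [
    [120.0, 3],   # easy word: plenty of training, few senses
    [15.0, 11],   # tough word: little training, many senses
    [60.0, 5],
    [8.0, 14],
]

# Which base system performed best on each word in held-out data
# (0 = "simple" system, e.g. Naive Bayes; 1 = "complex" system, e.g. SVM).
best_system = [0, 1, 0, 1]

# Meta-classifier: learns to predict the best base system from the case factors.
selector = DecisionTreeClassifier(max_depth=2, random_state=0)
selector.fit(word_factors, best_system)

def disambiguate(word, instance, systems, factors):
    """Route an instance of `word` to the base system predicted to be best for it."""
    chosen = selector.predict([factors[word]])[0]
    return systems[chosen](instance)

# Usage with placeholder base systems:
systems = {
    0: lambda inst: "sense_from_simple_system",
    1: lambda inst: "sense_from_complex_system",
}
factors = {"bank": [15.0, 11]}
print(disambiguate("bank", "river bank context ...", systems, factors))
```

Under these assumptions, a word with little training data per sense and many senses is routed to the more complex base system, mirroring the paper's observation that complex systems handle tough words better.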



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Harri M. T. Saarikoski (1)
  • Steve Legrand (2)
  • Alexander Gelbukh (3)
  1. KIT Language Technology Doctorate School, Helsinki University, Finland
  2. Department of Computer Science, University of Jyväskylä, Finland
  3. Instituto Politecnico Nacional, Mexico City, Mexico
