Robust Semi-supervised and Ensemble-Based Methods in Word Sense Disambiguation
Mihalcea [1] discusses self-training and co-training in the context of word sense disambiguation and shows that parameter optimization on individual words is important for obtaining good results. Using smoothed co-training of a naive Bayes classifier, she obtains a 9.8% error reduction on Senseval-2 data with a fixed parameter setting. In this paper we test a semi-supervised learning algorithm with no parameters, namely tri-training. We also test the random subspace method for building committees out of stable learners. Both techniques lead to significant error reductions with different learning algorithms, but the improvements do not accumulate. Our best error reduction is 7.4%, and our best absolute average over Senseval-2 data, though not directly comparable, is 12% higher than the results reported in Mihalcea [1].
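Tri-training (Zhou and Li's algorithm) is attractive here precisely because it needs no confidence threshold or per-word parameters: three classifiers are trained on bootstrap samples of the labeled data, and in each round every pair pseudo-labels unlabeled examples for the third classifier wherever the pair agrees. The following is a minimal Python/scikit-learn sketch of that core loop, not the paper's implementation; it omits the error-rate bookkeeping the full algorithm uses to decide when to stop adding examples, and the function names and defaults (e.g. `rounds`, the base learner) are illustrative.

```python
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import MultinomialNB
from sklearn.utils import resample


def tri_train(base, X_lab, y_lab, X_unlab, rounds=5):
    # Three classifiers, each trained on a bootstrap sample of the labeled data.
    clfs = [clone(base).fit(*resample(X_lab, y_lab)) for _ in range(3)]
    for _ in range(rounds):
        for i in range(3):
            j, k = (m for m in range(3) if m != i)
            pj = clfs[j].predict(X_unlab)
            pk = clfs[k].predict(X_unlab)
            agree = pj == pk  # examples the other two label identically
            if not agree.any():
                continue
            # Retrain classifier i on the labeled set plus the agreed-on
            # pseudo-labeled examples (full algorithm adds error-rate checks).
            X_new = np.vstack([X_lab, X_unlab[agree]])
            y_new = np.concatenate([y_lab, pj[agree]])
            clfs[i] = clone(base).fit(X_new, y_new)
    return clfs


def vote(clfs, X):
    # Majority vote over the three classifiers; assumes integer-encoded labels.
    preds = np.stack([c.predict(X) for c in clfs]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

# Example: clfs = tri_train(MultinomialNB(), X_lab, y_lab, X_unlab)
```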
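The random subspace method builds a committee by training each member on a random subset of the feature dimensions and combining the members' votes, which injects diversity even into stable learners such as linear SVMs. A minimal sketch under the same assumptions (dense feature matrix, integer-encoded labels); the committee size and subspace fraction below are illustrative defaults, not settings from the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.svm import LinearSVC


def fit_subspace_committee(base, X, y, n_members=10, frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    k = max(1, int(frac * X.shape[1]))
    members = []
    for _ in range(n_members):
        # Each member sees only a random subset of the feature columns.
        feats = rng.choice(X.shape[1], size=k, replace=False)
        members.append((feats, clone(base).fit(X[:, feats], y)))
    return members


def committee_predict(members, X):
    # Majority vote over members; assumes integer-encoded labels.
    preds = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, preds.astype(int))

# Example: members = fit_subspace_committee(LinearSVC(), X_train, y_train)
```

The same idea is available off the shelf: scikit-learn's `BaggingClassifier` with `bootstrap=False` and `max_features < 1.0` trains each estimator on the full sample but a random feature subspace.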
Keywords: Support Vector Machine · Unlabeled Data · Error Reduction · Word Sense Disambiguation · Random Subspace
- 1. Mihalcea, R.: Co-training and self-training for word sense disambiguation. In: CoNLL, Boston, MA (2004)
- 4. Abney, S.: Semi-supervised learning for computational linguistics. Chapman and Hall, Boca Raton (2008)
- 6. Nguyen, T., Nguyen, L., Shimazu, A.: Using semi-supervised learning for question classification. Journal of Natural Language Processing 15, 3–21 (2008)
- 9. Frank, E., Witten, I.: Generating accurate rule sets without global optimization. In: 15th International Conference on Machine Learning (1998)
- 10. Sindhwani, V., Keerthi, S.: Large scale semi-supervised linear SVMs. In: ACM SIGIR, Seattle, WA (2006)