Parsing as Classification

  • Lidia Khmylko
  • Wolfgang Menzel
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Dependency parsing can be cast as a classification problem over strings of observations. Compared to shallow processing tasks such as tagging, parsing is a dynamic classification problem: no statically predefined set of classes exists, and each class to be distinguished is a pair consisting of a label from a given set (the syntactic function) and one of the available attachment points in the sentence, so that even the number of “classes” varies with the length of the input sentence. A number of fundamentally different approaches have been pursued to solve this classification task. They differ in how they take context into account, in whether they apply machine learning, and in the means they use to enforce the tree property of the resulting sentence structure. These differences eventually lead to different behavior on the same data, which makes the task an ideal testbed for applying different information fusion schemes for combined decision making.
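As a minimal sketch (hypothetical, not taken from the paper), the following Python fragment illustrates the two points made above: each word's "class" is a (head position, label) pair, so the candidate set grows with sentence length, and a naive per-word majority vote is one possible fusion scheme over several parsers' outputs. The label set, function names, and the example data are invented for illustration; a real combiner would also re-impose the tree property, e.g. via a maximum spanning tree over weighted edges.

```python
# Hypothetical sketch: dependency parsing as per-word classification,
# where a "class" is a (head position, dependency label) pair.
from collections import Counter

LABELS = ["SUBJ", "OBJ", "DET", "ROOT"]  # assumed toy label set


def candidate_classes(sentence_length, position):
    """All (head, label) classes available to the word at `position`.

    Heads are 0 (artificial root) or any other word index 1..n, so the
    number of classes grows with the sentence length.
    """
    heads = [h for h in range(0, sentence_length + 1) if h != position]
    return [(h, lab) for h in heads for lab in LABELS]


def vote_fusion(parser_outputs):
    """Naive information fusion: per-word majority vote over parsers.

    `parser_outputs` is a list of analyses, each a list of (head, label)
    decisions, one per word.  Ties are broken arbitrarily, and the fused
    result is not guaranteed to be a tree.
    """
    n_words = len(parser_outputs[0])
    fused = []
    for i in range(n_words):
        votes = Counter(output[i] for output in parser_outputs)
        fused.append(votes.most_common(1)[0][0])
    return fused


if __name__ == "__main__":
    # Word 2 of a 4-word sentence may attach to positions {0, 1, 3, 4}:
    print(len(candidate_classes(4, 2)))   # 16 candidate classes
    # Three toy parsers analysing a 2-word sentence:
    a = [(2, "SUBJ"), (0, "ROOT")]
    b = [(2, "SUBJ"), (0, "ROOT")]
    c = [(0, "ROOT"), (1, "OBJ")]
    print(vote_fusion([a, b, c]))          # [(2, 'SUBJ'), (0, 'ROOT')]
```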

Keywords

Word Form · Constraint Violation · Computational Linguistics · Label Accuracy · Input Sentence

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. Natural Language Systems Group, University of Hamburg, Hamburg, Germany