Iterative CKY Parsing for Probabilistic Context-Free Grammars

  • Yoshimasa Tsuruoka
  • Jun’ichi Tsujii
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3248)


This paper presents an iterative CKY parsing algorithm for probabilistic context-free grammars (PCFGs). The algorithm prunes unnecessary edges produced during parsing, which results in more efficient parsing. Because pruning is based on an edge's inside Viterbi probability and an upper bound on its outside Viterbi probability, the algorithm is guaranteed to output the exact Viterbi parse, unlike beam-search or best-first strategies. Experimental results on the Penn Treebank II corpus show that iterative CKY reduced the number of edges by more than 60% compared with the conventional CKY algorithm, with very small run-time overhead. The algorithm is general enough to incorporate a more sophisticated estimation function, which should lead to even more efficient parsing.
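The pruning scheme described above can be sketched roughly as follows: run CKY with a score threshold, discard any edge whose inside Viterbi log-probability plus an admissible upper bound on its outside log-probability falls below the threshold, and, if no complete parse survives, loosen the threshold and reparse. The toy grammar, the trivial zero outside bound, and the threshold schedule below are illustrative assumptions for exposition, not the authors' actual grammar or estimation function.

```python
import math
from collections import defaultdict

# Toy PCFG in Chomsky normal form, with log-probabilities.
BINARY = {  # A -> B C : log prob
    ("S", ("NP", "VP")): math.log(1.0),
    ("NP", ("Det", "N")): math.log(0.6),
    ("VP", ("V", "NP")): math.log(1.0),
}
LEXICAL = {  # A -> word : log prob
    ("Det", "the"): math.log(1.0),
    ("N", "dog"): math.log(0.5),
    ("N", "cat"): math.log(0.5),
    ("V", "saw"): math.log(1.0),
}

def outside_upper_bound(symbol, i, j, n):
    # Trivial admissible bound: every outside Viterbi log-probability <= 0.
    # A tighter, grammar-derived bound prunes more while preserving exactness.
    return 0.0

def cky_with_threshold(words, theta):
    """One CKY pass that drops edges whose inside score plus the outside
    upper bound falls below theta. Returns the Viterbi log-probability of
    S over the whole sentence, or None if pruning removed every parse."""
    n = len(words)
    chart = defaultdict(dict)  # (i, j) -> {symbol: inside log-prob}
    for i, w in enumerate(words):
        for (a, word), lp in LEXICAL.items():
            if word == w and lp + outside_upper_bound(a, i, i + 1, n) >= theta:
                chart[(i, i + 1)][a] = lp
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, (b, c)), lp in BINARY.items():
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        score = lp + chart[(i, k)][b] + chart[(k, j)][c]
                        if score + outside_upper_bound(a, i, j, n) < theta:
                            continue  # prune: no parse through this edge can reach theta
                        if score > chart[(i, j)].get(a, -math.inf):
                            chart[(i, j)][a] = score
    return chart[(0, n)].get("S")

def iterative_cky(words, theta0=-2.0, step=-2.0, floor=-60.0):
    """Reparse with a progressively looser threshold until a parse survives.
    The first surviving parse is the exact Viterbi parse: any pruned edge
    could only belong to parses scoring below the current threshold."""
    theta = theta0
    while theta >= floor:
        result = cky_with_threshold(words, theta)
        if result is not None:
            return result
        theta += step
    return cky_with_threshold(words, -math.inf)
```

A failed pass wastes some work, but tight early thresholds prune so aggressively that the total number of edges built across all passes is typically far smaller than in a single unpruned CKY pass, which is the source of the reported speedup.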







Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Yoshimasa Tsuruoka (1, 2)
  • Jun’ichi Tsujii (1, 2)
  1. CREST, JST (Japan Science and Technology Agency), Saitama
  2. University of Tokyo, Tokyo
