
Evaluating the Performance of the Survey Parser with the NIST Scheme

  • Conference paper
Computational Linguistics and Intelligent Text Processing (CICLing 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3878)

Abstract

Different metrics have been proposed to estimate how good a parser-produced syntactic tree is when judged against the correct tree from a treebank. Measurement has emphasised the number of correct constituents, in terms of constituent labels and bracketing accuracy. This article proposes the NIST scheme as a better alternative for evaluating parser output in terms of correct matches, substitutions, deletions, and insertions. It describes an experiment measuring the performance of the Survey Parser, which was used to complete the syntactic annotation of the International Corpus of English. Finally, the article reports empirical scores for the performance of the parser and outlines future research.
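The NIST scheme scores a hypothesis against a reference by aligning the two sequences with minimal edit cost and then counting four outcomes: correct matches (C), substitutions (S), deletions (D), and insertions (I). The sketch below illustrates that style of scoring for sequences of constituent labels; it is an illustration of the scheme as implemented in tools such as NIST's sclite, not the Survey Parser's own code, and the example labels are invented.

```python
# Illustrative NIST/sclite-style scoring: align a hypothesis token
# sequence against a reference by minimal edit distance, then count
# correct matches, substitutions, deletions, and insertions.
# This is a sketch of the general scheme, not the paper's implementation.

def nist_score(ref, hyp):
    """Return (correct, substitutions, deletions, insertions)."""
    n, m = len(ref), len(hyp)
    # cost[i][j] = minimal edit cost aligning ref[:i] with hyp[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i                      # delete all of ref[:i]
    for j in range(1, m + 1):
        cost[0][j] = j                      # insert all of hyp[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            cost[i][j] = min(sub,           # match or substitution
                             cost[i - 1][j] + 1,   # deletion
                             cost[i][j - 1] + 1)   # insertion
    # Backtrace the optimal alignment, tallying the four outcome types.
    c = s = d = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])):
            if ref[i - 1] == hyp[j - 1]:
                c += 1
            else:
                s += 1
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            d += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return c, s, d, ins

# Hypothetical example: reference constituent labels vs. parser output.
ref = ["NP", "VP", "PP", "NP"]
hyp = ["NP", "VP", "NP", "NP", "AdvP"]
print(nist_score(ref, hyp))  # → (3, 1, 0, 1)
```

Under this scheme a single misplaced constituent registers as one substitution or one insertion rather than penalising the whole bracketing, which is what makes the four-way breakdown more informative than a bare accuracy figure.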





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fang, A.C. (2006). Evaluating the Performance of the Survey Parser with the NIST Scheme. In: Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. CICLing 2006. Lecture Notes in Computer Science, vol. 3878. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11671299_19


  • DOI: https://doi.org/10.1007/11671299_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-32205-4

  • Online ISBN: 978-3-540-32206-1

  • eBook Packages: Computer Science (R0)
