Test-driving TANKA: Evaluating a semi-automatic system of text analysis for knowledge acquisition

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1418)

Abstract

The evaluation of a large implemented natural language processing system involves more than its application to a common performance task. Such tasks have been used in the Message Understanding Conferences (MUCs), the Text Retrieval Conferences (TRECs), and in speech technology and machine translation workshops. It is useful to compare the performance of different systems on a predefined application, but a detailed evaluation must also take into account the specificity of the system.

We have carried out a systematic performance evaluation of our text analysis system TANKA. Since it is a semi-automatic, trainable system, we had to measure the user's participation (with a view to decreasing it gradually) and the rate at which the system learns from preceding analyses. This paper discusses the premises, the design and the execution of an evaluation of TANKA. The results confirm the basic assumptions of our supervised text analysis procedures, namely, that the system learns to make better analyses, that knowledge acquisition is possible even from erroneous or fragmentary parses, and that the process is not too onerous for the user.
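
The notions of user participation and learning rate mentioned in the abstract can be made concrete with simple counts over a sequence of supervised analyses. The sketch below is not taken from the paper; the record format, names and windowing scheme are illustrative assumptions. It shows one way such measures could be computed: an overall participation rate (how often the user must override the system) and its trend across successive batches of analyses, where a downward trend would indicate that the system is learning from preceding analyses.

    # Illustrative sketch only (not the paper's actual metrics): quantifying
    # user participation and the learning trend of a supervised analyzer.
    # The record format and window size are assumptions made for this example.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class AnalysisRecord:
        """Outcome of one supervised analysis step (e.g., one case assignment)."""
        system_proposals: int   # options the system put forward
        user_corrections: int   # times the user had to override the top proposal


    def participation_rate(records: List[AnalysisRecord]) -> float:
        """Fraction of system proposals that required user intervention."""
        proposals = sum(r.system_proposals for r in records)
        corrections = sum(r.user_corrections for r in records)
        return corrections / proposals if proposals else 0.0


    def learning_trend(records: List[AnalysisRecord], window: int = 50) -> List[float]:
        """Participation rate over consecutive windows of analyses.

        A decreasing sequence suggests the system needs the user
        less and less as more text has been analyzed."""
        return [participation_rate(records[i:i + window])
                for i in range(0, len(records), window)]


    if __name__ == "__main__":
        # Toy history: corrections become rarer as more text has been analyzed.
        history = ([AnalysisRecord(3, 2) for _ in range(50)]
                   + [AnalysisRecord(3, 1) for _ in range(50)]
                   + [AnalysisRecord(3, 0) for _ in range(50)])
        print(f"overall participation: {participation_rate(history):.2f}")
        print("trend per 50 analyses:", learning_trend(history))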


References

  1. Atkinson, Henry F. (1990). Mechanics of Small Engines. New York: Gregg Division, McGraw-Hill.

  2. Barker, Ken (1996). “The Assessment of Semantic Cases Using English Positional, Prepositional and Adverbial Case Markers.” TR-96-08, Department of Computer Science, University of Ottawa.

  3. Barker, Ken (1997). “Noun Modifier Relationship Analysis in the TANKA System.” TR-9702, Department of Computer Science, University of Ottawa.

  4. Barker, Ken (1998). “A Trainable Bracketer for Noun Modifiers.” Proceedings of the Twelfth Canadian Conference on Artificial Intelligence, Vancouver.

  5. Barker, Ken & Sylvain Delisle (1996). “Experimental Validation of a Semi-Automatic Text Analyzer.” TR-96-01, Department of Computer Science, University of Ottawa.

  6. Barker, Ken & Stan Szpakowicz (1995). “Interactive Semantic Analysis of Clause-Level Relationships.” Proceedings of the Second Conference of the Pacific Association for Computational Linguistics, Brisbane, 22–30.

  7. Barker, Ken, Terry Copeck, Sylvain Delisle & Stan Szpakowicz (1997). “Systematic Construction of a Versatile Case System.” Journal of Natural Language Engineering (in press).

  8. Cole, Ronald A., Joseph Mariani, Hans Uszkoreit, Annie Zaenen & Victor Zue (1996). Survey of the State of the Art in Human Language Technology. http://www.cse.ogi.edu/CSLU/HLTSurvey/

  9. Copeck, Terry, Ken Barker, Sylvain Delisle, Stan Szpakowicz & Jean-François Delannoy (1997). “What is Technical Text?” Language Sciences 19(4), 391–424.

  10. Delisle, Sylvain (1994). “Text Processing without A-Priori Domain Knowledge: Semi-Automatic Linguistic Analysis for Incremental Knowledge Acquisition.” Ph.D. thesis, TR-94-02, Department of Computer Science, University of Ottawa.

  11. Delisle, Sylvain & Stan Szpakowicz (1995). “Realistic Parsing: Practical Solutions of Difficult Problems.” Proceedings of the Second Conference of the Pacific Association for Computational Linguistics, Brisbane, 59–68.

  12. Delisle, Sylvain, Ken Barker, Terry Copeck & Stan Szpakowicz (1996). “Interactive Semantic Analysis of Technical Texts.” Computational Intelligence 12(2), 273–306.

  13. Grishman, Ralph & Beth Sundheim (1996). “Message Understanding Conference-6: A Brief History.” Proceedings of COLING-96, 466–471.

  14. Hirschman, Lynette & Henry S. Thompson (1996). “Overview of Evaluation in Speech and Natural Language Processing.” in [8].

  15. Larrick, Nancy (1961). Junior Science Book of Rain, Hail, Sleet & Snow. Champaign: Garrard Publishing Company.

  16. MUC-6 (1996). Proceedings of the Sixth Message Understanding Conference. Morgan Kaufmann.

  17. Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech & Jan Svartvik (1985). A Comprehensive Grammar of the English Language. London: Longman.

  18. Sparck Jones, Karen (1994). “Towards Better NLP System Evaluation.” Proceedings of the Human Language Technology Workshop 1994, San Francisco: Morgan Kaufmann, 102–107.

  19. Sparck Jones, Karen & Julia R. Galliers (1996). Evaluating Natural Language Processing Systems: An Analysis and Review. Lecture Notes in Artificial Intelligence 1083, New York: Springer-Verlag.

Editor information

Robert E. Mercer, Eric Neufeld

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barker, K., Delisle, S., Szpakowicz, S. (1998). Test-driving TANKA: Evaluating a semi-automatic system of text analysis for knowledge acquisition. In: Mercer, R.E., Neufeld, E. (eds) Advances in Artificial Intelligence. Canadian AI 1998. Lecture Notes in Computer Science, vol 1418. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64575-6_40

  • DOI: https://doi.org/10.1007/3-540-64575-6_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64575-7

  • Online ISBN: 978-3-540-69349-9
