Machine Translation, Volume 8, Issue 1–2, pp 39–47

Module-level testing for natural language understanding

  • Sharon Flank
  • Aaron Temin
  • Hatte Blejer
  • Andrew Kehler
  • Sherman Greenstein

Conclusion

We have examined a module-level evaluation methodology for natural language understanding, illustrating insights gained during implementation. It is possible to produce useful quantitative and qualitative results using such a methodology, provided certain pitfalls are avoided.

Keywords

Artificial Intelligence, Natural Language, Qualitative Result, Computational Linguistic, Evaluation Methodology

Copyright information

© Kluwer Academic Publishers 1993

Authors and Affiliations

  • Sharon Flank (1)
  • Aaron Temin (1)
  • Hatte Blejer (1)
  • Andrew Kehler (1)
  • Sherman Greenstein (1)

  1. Systems Research and Applications Corp. (SRA), Arlington, USA
