How Are You Doing? A Look at MT Evaluation

Part of the Lecture Notes in Computer Science book series (LNAI, volume 1934)

Abstract

Machine Translation evaluation has historically been more magic and opinion than science. The history of MT evaluation is long and checkered; the search for objective, measurable, resource-reduced methods of evaluation continues. A recent trend toward task-based evaluation inspires the question: can the methods used to evaluate language competence in language learners be applied reasonably to MT evaluation? This paper is the first in a series of steps to examine that question. We present the theoretical framework for our ideas, the notions we ultimately aim toward, and some very preliminary results of a small experiment along these lines.

Keywords

  • Natural Language Processing
  • Machine Translation
  • Language Acquisition
  • Syntactic Error
  • Second Language Acquisition

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vanni, M., Reeder, F. (2000). How Are You Doing? A Look at MT Evaluation. In: White, J.S. (ed.) Envisioning Machine Translation in the Information Future. AMTA 2000. Lecture Notes in Computer Science (LNAI), vol. 1934. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39965-8_11

  • DOI: https://doi.org/10.1007/3-540-39965-8_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41117-8

  • Online ISBN: 978-3-540-39965-0
