Meta-evaluation of Machine Translation Using Parallel Legal Texts

  • Billy Tak-Ming Wong
  • Chunyu Kit
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5459)

Abstract

In this paper we report our recent work on evaluating a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation is carried out following a recognized evaluation protocol, and assesses the reliability, strengths and weaknesses of these metrics in terms of their correlation with human judgments of translation quality. The results confirm the reliability of the well-known metrics BLEU and NIST for English-to-Chinese translation, and show that our metric ATEC outperforms all others for Chinese-to-English translation. We also demonstrate the remarkable impact that the choice of evaluation metric has on the ranking of online machine translation systems for legal translation.
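The meta-evaluation described above rests on correlating automatic metric scores with human judgments of translation quality. As a rough illustration of that step only (not the paper's own code or data), the following Python sketch computes Pearson's r between a metric's system-level scores and mean human adequacy ratings for a handful of MT systems; all score values are hypothetical.

    from math import sqrt

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical system-level scores for five MT systems (illustrative only).
    metric_scores = [0.21, 0.34, 0.28, 0.40, 0.25]   # automatic metric, e.g. BLEU
    human_scores  = [2.8, 3.6, 3.1, 4.0, 2.9]        # mean human adequacy ratings

    print(f"Pearson r = {pearson_r(metric_scores, human_scores):.3f}")

A high correlation on such data is what would indicate that the automatic metric ranks systems similarly to human judges; Spearman's rank correlation is an equally common choice for this comparison.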

Keywords

Machine Translation Evaluation · Legal Text · BLIS · BLEU · ATEC


References

  1. Doyon, J., Taylor, K., White, J.: The DARPA Machine Translation Evaluation Methodology: Past and Present. In: AMTA 1998, Philadelphia, PA (1998)
  2. Tomita, M., Shirai, M., Tsutsumi, J., Matsumura, M., Yoshikawa, Y.: Evaluation of MT Systems by TOEFL. In: TMI 1993: The Fifth International Conference on Theoretical and Methodological Issues in Machine Translation, Kyoto, Japan, pp. 252–265 (1993)
  3. Yu, S.: Automatic Evaluation of Quality for Machine Translation Systems. Machine Translation 8, 117–126 (1993)
  4. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: Bleu: a Method for Automatic Evaluation of Machine Translation. IBM Research Report, RC22176 (2001)
  5. Doddington, G.: Automatic Evaluation of Machine Translation Quality Using N-gram Co-occurrence Statistics. In: Second International Conference on Human Language Technology Research, San Diego, California, pp. 138–145 (2002)
  6. Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A Study of Translation Edit Rate with Targeted Human Annotation. In: AMTA 2006, Cambridge, Massachusetts, USA, pp. 223–231 (2006)
  7. Banerjee, S., Lavie, A.: METEOR: an Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In: ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, University of Michigan, Ann Arbor, pp. 65–72 (2005)
  8. Liu, Q., Hou, H., Lin, S., Qian, Y., Zhang, Y., Isahara, H.: Introduction to China’s HTRDP Machine Translation Evaluation. In: MT Summit X, Phuket, Thailand, pp. 18–22 (2005)
  9. Choukri, K., Hamon, O., Mostefa, D.: MT Evaluation & TC-STAR. In: MT Summit XI Workshop: Automatic Procedures in MT Evaluation, Copenhagen, Denmark (2007)
  10. NIST Open MT Evaluation, http://www.nist.gov/speech/tests/mt/
  11. Callison-Burch, C., Fordyce, C., Koehn, P., Monz, C., Schroeder, J.: Further Meta-evaluation of Machine Translation. In: ACL 2008: HLT - Third Workshop on Statistical Machine Translation, Ohio State University, Columbus, pp. 70–106 (2008)
  12. Culy, C., Riehemann, S.Z.: The Limits of N-gram Translation Evaluation Metrics. In: MT Summit IX, New Orleans, USA (2003)
  13. Callison-Burch, C., Osborne, M., Koehn, P.: Re-evaluating the Role of Bleu in Machine Translation Research. In: EACL 2006, Trento, Italy, pp. 249–256 (2006)
  14. Babych, B., Hartley, A., Elliott, D.: Estimating the Predictive Power of N-gram MT Evaluation Metrics across Language and Text Types. In: MT Summit X, Phuket, Thailand, pp. 412–418 (2005)
  15. Kit, C., Wong, T.M.: Comparative Evaluation of Online Machine Translation Systems with Legal Texts. Law Library Journal 100(2), 299–321 (2008)
  16. Kit, C., Liu, X., Sin, K.K., Webster, J.J.: Harvesting the Bitexts of the Laws of Hong Kong from the Web. In: 5th Workshop on Asian Language Resources, Jeju Island, pp. 71–78 (2005)
  17. Estrella, P., Hamon, O., Popescu-Belis, A.: How Much Data is Needed for Reliable MT Evaluation? Using Bootstrapping to Study Human and Automatic Metrics. In: MT Summit XI, Copenhagen, Denmark, pp. 167–174 (2007)
  18. NIST’s Guideline of Machine Translation Assessment, http://projects.ldc.upenn.edu/TIDES/Translation/TransAssess04.pdf
  19. Wong, T.M., Kit, C.: Word Choice and Word Position for Automatic MT Evaluation. In: AMTA 2008 Workshop: Metrics for Machine Translation Challenge, Waikiki, Hawaii (2008)
  20. Zhao, H., Huang, C.N., Li, M.: An Improved Chinese Word Segmentation System with Conditional Random Field. In: Fifth SIGHAN Workshop on Chinese Language Processing, Sydney, Australia, pp. 162–165 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Billy Tak-Ming Wong (1)
  • Chunyu Kit (1)
  1. Department of Chinese, Translation and Linguistics, City University of Hong Kong, Hong Kong
