A Dependency-Inspired Semantic Evaluation of Machine Translation Systems

Conference paper
Information Access Evaluation. Multilinguality, Multimodality, and Visualization (CLEF 2013)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8138)

Abstract

The goal of translation is to preserve the meaning of the original text. However, lexical machine translation (MT) evaluation metrics count the terms that the MT output shares with a human-translated reference rather than measuring similarity in meaning. In this paper, we develop an MT evaluation metric that assesses the output of MT systems semantically. Inspired by dependency grammar, we consider to what extent the headword and its dependents contribute to preserving the meaning of the original input text. Our experimental results show that this metric correlates significantly better with human judgment.
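The paper's own metric is not reproduced on this page, but the general idea can be illustrated with a small sketch: score a candidate translation against the reference by comparing dependency triples of the form (head lemma, relation, dependent lemma) and reporting an F-measure over their overlap. This is an assumption-laden simplification rather than the authors' implementation; the triple representation, the equal weighting of heads and dependents, and the hand-written example parses below are illustrative only, and a real system would obtain the triples from a dependency parser.

```python
from collections import Counter
from typing import Iterable, Tuple

# Hypothetical representation: (head lemma, relation, dependent lemma).
Triple = Tuple[str, str, str]

def dependency_overlap(cand: Iterable[Triple], ref: Iterable[Triple]) -> float:
    """Toy dependency-overlap score (NOT the paper's exact metric).

    Counts how many (head, relation, dependent) triples the candidate
    translation shares with the reference and returns an F1-style score.
    """
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    if not cand_counts or not ref_counts:
        return 0.0
    matched = sum((cand_counts & ref_counts).values())  # multiset intersection
    precision = matched / sum(cand_counts.values())
    recall = matched / sum(ref_counts.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example with hand-written parses of "the cat eats fish" vs. a
# candidate that mistranslates the object.
reference = [("eat", "nsubj", "cat"), ("eat", "obj", "fish")]
candidate = [("eat", "nsubj", "cat"), ("eat", "obj", "food")]
print(dependency_overlap(candidate, reference))  # 0.5
```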

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mirsarraf, M.R., Dehghani, N. (2013). A Dependency-Inspired Semantic Evaluation of Machine Translation Systems. In: Forner, P., Müller, H., Paredes, R., Rosso, P., Stein, B. (eds) Information Access Evaluation. Multilinguality, Multimodality, and Visualization. CLEF 2013. Lecture Notes in Computer Science, vol 8138. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40802-1_8

  • DOI: https://doi.org/10.1007/978-3-642-40802-1_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40801-4

  • Online ISBN: 978-3-642-40802-1

  • eBook Packages: Computer Science (R0)
