Assessing PRESEMT

  • George Tambouratzis
  • Marina Vassiliou
  • Sokratis Sofianopoulos
Chapter
Part of the SpringerBriefs in Statistics book series (BRIEFSSTATIST)

Abstract

This chapter evaluates the performance of PRESEMT, both in its own right and in comparison with other MT systems, where performance refers to the translation quality achieved. Although humans can be employed for this task (subjective evaluation), assessing an MT system in terms of fluency (i.e. grammaticality) and adequacy (i.e. fidelity to the original text) (van Slype 1979), this is a laborious and time-consuming process. Evaluation therefore normally relies on automatic metrics (objective evaluation) that calculate the similarity between what an MT system produces (the system output) and what it should have produced (the reference translation).
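To illustrate the idea behind such automatic metrics, the following is a minimal sketch of a BLEU-style score (Papineni et al. 2002): a clipped n-gram precision between the system output and a single reference translation, combined with a brevity penalty. This is an illustrative toy only, not the evaluation pipeline used in the chapter, and the function names are purely hypothetical.

```python
# Toy BLEU-style metric: clipped n-gram precision plus a brevity penalty.
# Real metrics (BLEU, NIST, METEOR, TER) are considerably more elaborate;
# this only conveys how an MT output is scored against a reference.
from collections import Counter
import math

def modified_ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of the candidate against one reference."""
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    ref_ngrams = Counter(tuple(reference[i:i + n])
                         for i in range(len(reference) - n + 1))
    overlap = sum(min(count, ref_ngrams[ng]) for ng, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

def toy_bleu(candidate, reference, max_n=2):
    """Geometric mean of the 1..max_n precisions, scaled by a brevity penalty."""
    precisions = [modified_ngram_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    brevity = (1.0 if len(candidate) > len(reference)
               else math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(log_avg)

system_output = "the cat sat on the mat".split()
reference = "the cat is sitting on the mat".split()
print(round(toy_bleu(system_output, reference), 3))
```

In practice, such scores are computed over a whole test set rather than a single sentence, and metrics differ in how they reward word order, synonyms and stems (METEOR) or count edit operations (TER).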

References

  1. Banerjee S, Lavie A (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In: Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005), Ann Arbor, Michigan, pp 65–72
  2. Biberauer T, Holmberg A, Roberts I, Sheehan M (2010) Parametric variation: null subjects in minimalist theory. Cambridge University Press, Cambridge
  3. Denkowski M, Lavie A (2011) Meteor 1.3: automatic metric for reliable optimization and evaluation of machine translation systems. In: Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation, Edinburgh, Scotland, pp 85–91
  4. Levenshtein VI (1966) Binary codes capable of correcting deletions, insertions, and reversals. Sov Phys Dokl 10:707–710
  5. NIST (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. Available at: http://www.itl.nist.gov/iad/mig/tests/mt/doc/ngram-study.pdf
  6. Papineni K, Roukos S, Ward T, Zhu WJ (2002) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, USA, pp 311–318
  7. Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A study of translation edit rate with targeted human annotation. In: Proceedings of the 7th AMTA Conference, Cambridge, MA, USA, pp 223–231
  8. Sofianopoulos S, Vassiliou M, Tambouratzis G (2012) Implementing a language-independent MT methodology. In: Proceedings of the 1st Workshop on Multilingual Modeling (held within ACL-2012), Jeju, Republic of Korea, pp 1–10
  9. Tambouratzis G, Vassiliou M, Sofianopoulos S (2016) Language-independent hybrid MT: comparative evaluation of translation quality. In: Costa-jussà MR, Rapp R, Lambert P, Eberle K, Banchs RE, Babych B (eds) Hybrid approaches to machine translation. Springer, pp 131–157. ISBN 978-3-319-21311-8
  10. van Slype G (1979) Critical study of methods for evaluating the quality of machine translation. Technical Report BR19142, Bureau Marcel van Dijk/European Commission (DG XIII), Brussels. Available at: http://issco-www.unige.ch/projects/isle/van-slype.pdf

Copyright information

© The Author(s) 2017

Authors and Affiliations

  • George Tambouratzis 1
  • Marina Vassiliou 1
  • Sokratis Sofianopoulos 1
  1. Institute for Language and Speech Processing, Athens, Greece