Machine Translation

Volume 24, Issue 1, pp 39–50

Machine translation evaluation versus quality estimation

Article

DOI: 10.1007/s10590-010-9077-2

Cite this article as:
Specia, L., Raj, D. & Turchi, M. Machine Translation (2010) 24: 39. doi:10.1007/s10590-010-9077-2

Abstract

Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto standard metrics, BLEU and NIST, are known to correlate well with human evaluation at the corpus level, but not at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translations, and a quality score is obtained from models trained on such data. We show that this approach yields better correlation with human evaluation than commonly used metrics, even with models trained on different MT systems, language pairs and text domains.
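To make the prediction setup concrete, the sketch below illustrates the general idea under stated assumptions: it is not the feature set or model from the paper, but a minimal quality-estimation pipeline using a handful of hypothetical reference-independent features, a scikit-learn support vector regressor, and Pearson correlation against human scores as the segment-level check. The toy sentence pairs and scores are invented for illustration only.

"""Minimal quality-estimation sketch (assumed features and toy data, not the
paper's actual feature set): predict a segment-level quality score for a
machine translation from reference-independent features, then check the
correlation of the predictions with human judgements."""

import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR


def features(source: str, translation: str) -> list:
    """A few reference-independent features of a source/translation pair.
    These are illustrative stand-ins for the richer feature sets used in
    quality-estimation work (lengths, length ratio, type/token ratio,
    punctuation count)."""
    src_tokens, tgt_tokens = source.split(), translation.split()
    return [
        len(src_tokens),                                 # source length
        len(tgt_tokens),                                 # translation length
        len(tgt_tokens) / max(len(src_tokens), 1),       # length ratio
        len(set(tgt_tokens)) / max(len(tgt_tokens), 1),  # type/token ratio
        sum(ch in ".,;:!?" for ch in translation),       # punctuation count
    ]


# Toy training data: (source, MT output, human quality score in [1, 5]).
# A real system would be trained on thousands of annotated segments.
train = [
    ("the cat sat on the mat", "le chat était assis sur le tapis", 4.5),
    ("he did not come yesterday", "il ne pas venu hier", 2.5),
    ("this is a test", "ceci est un test", 5.0),
    ("the results were surprising", "les résultats étaient surprenant", 3.5),
    ("please close the door", "fermez la porte s'il vous plaît", 4.5),
    ("I have no idea what happened", "je n'ai aucune idée quoi", 2.0),
]
test = [
    ("the meeting starts at noon", "la réunion commence à midi", 4.5),
    ("she could not find the keys", "elle pas trouver les clés", 2.0),
    ("thank you for your help", "merci pour votre aide", 5.0),
]

X_train = np.array([features(s, t) for s, t, _ in train])
y_train = np.array([score for _, _, score in train])
X_test = np.array([features(s, t) for s, t, _ in test])
y_test = np.array([score for _, _, score in test])

# Train a support vector regressor mapping features to a quality score.
model = SVR(kernel="rbf", C=1.0).fit(X_train, y_train)
predicted = model.predict(X_test)

# Segment-level evaluation: Pearson correlation between predicted scores and
# human judgements (only meaningful with far more data than this toy set).
r, _ = pearsonr(predicted, y_test)
print("predicted scores:", np.round(predicted, 2))
print("Pearson r with human scores:", round(r, 3))

Because no feature looks at a reference translation, the same trained model can in principle be applied to output from any MT system for the same language direction, which is the property the abstract exploits when testing across systems, language pairs and domains.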

Keywords

Machine translation evaluation · Quality estimation · Confidence estimation

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Research Group in Computational Linguistics, University of Wolverhampton, Wolverhampton, UK
  2. Indian Institute of Information Technology, Allahabad, India
  3. European Commission – JRC (IPSC), Ispra, Italy