Machine Translation, Volume 23, Issue 2, pp. 181–193

Measuring machine translation quality as semantic equivalence: A metric based on entailment features

  • Sebastian Padó (Stuttgart University)
  • Daniel Cer (Stanford University)
  • Michel Galley (Stanford University)
  • Dan Jurafsky (Stanford University)
  • Christopher D. Manning (Stanford University)



Current evaluation metrics for machine translation have increasing difficulty distinguishing good from merely fair translations. We believe the main problem is their inability to properly capture meaning: a good translation candidate means the same thing as the reference translation, regardless of formulation. We propose a metric that assesses the quality of MT output through its semantic equivalence to the reference translation, based on a rich set of match and mismatch features motivated by textual entailment. We first evaluate this metric on the task of predicting human judgments, comparing it against a combination of four state-of-the-art scores. Our metric predicts human judgments better than the combination metric, and combining the entailment and traditional features yields further improvements. We then demonstrate that the entailment metric can also serve as a learning criterion in minimum error rate training (MERT) to improve parameter estimation in MT system training. A manual evaluation of the resulting translations indicates that the new model achieves a significant improvement in translation quality.
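To make the idea of a feature-based equivalence metric concrete, here is a minimal sketch of scoring a candidate translation by a weighted combination of entailment match and mismatch features. The feature names and weights below are purely illustrative assumptions, not the authors' actual feature set or learned parameters:

```python
# Hypothetical sketch: a metric that scores a candidate/reference pair as a
# linear combination of entailment-motivated features. Match features should
# raise the score; mismatch features should lower it. All names and values
# here are invented for illustration.

def entailment_score(features, weights):
    """Return a quality score as a weighted sum of feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Toy features extracted for one candidate/reference pair (assumed values).
features = {
    "aligned_content_words": 0.8,    # match feature: shared meaning
    "unaligned_content_words": 0.1,  # mismatch feature: unexplained material
    "antonym_pairs": 0.0,            # mismatch feature: contradictory terms
}

# Toy weights; in practice these would be learned against human judgments.
weights = {
    "aligned_content_words": 1.0,
    "unaligned_content_words": -0.5,
    "antonym_pairs": -2.0,
}

print(entailment_score(features, weights))  # 0.75
```

In the paper's setting, the weights would be fit to human quality judgments, and the resulting score could also serve as the objective inside MERT.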


Keywords: MT evaluation · Automated metric · MERT · Semantics · Entailment · Linguistic analysis · Paraphrase