Volume 25, Issue 4, pp. 317–339
Open Access: this content is freely available online to anyone, anywhere, at any time.
Date: 01 Sep 2011

Syntactic discriminative language model rerankers for statistical machine translation

Abstract

This article describes a method that successfully exploits syntactic features for reranking n-best translation candidates using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in distinguishing statistical machine translation output from human translations. Our approach uses discriminative language modelling to rerank the n-best translations generated by a statistical machine translation system. Performance is evaluated for Arabic-to-English translation using NIST's MT-Eval benchmarks. While deep features extracted from parse trees do not consistently help, we show how features extracted from a shallow part-of-speech annotation layer outperform a competitive baseline and a state-of-the-art comparative reranking approach, leading to significant BLEU improvements on three different test sets.
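To make the reranking setup concrete, the following is a minimal sketch (not the authors' implementation) of perceptron-based discriminative reranking of an n-best list. Each candidate translation is represented as a sparse feature vector, e.g. counts of part-of-speech n-grams; the perceptron is updated toward the oracle candidate (the one scoring highest against the reference). All feature names, data structures, and the training loop shown here are illustrative assumptions.

```python
# Sketch of perceptron reranking of SMT n-best lists.
# Assumptions: candidates are sparse feature dicts (e.g. POS-bigram
# counts); the oracle index per list is given (e.g. highest BLEU).
from collections import defaultdict

def score(weights, feats):
    """Dot product of weight vector and sparse feature dict."""
    return sum(weights[f] * v for f, v in feats.items())

def perceptron_rerank_train(nbest_lists, oracle_idx, epochs=5, lr=1.0):
    """Train reranker weights.
    nbest_lists: list of n-best lists; each candidate is a feature dict.
    oracle_idx:  index of the oracle candidate in each n-best list."""
    w = defaultdict(float)
    for _ in range(epochs):
        for cands, oracle in zip(nbest_lists, oracle_idx):
            # current model's 1-best candidate
            pred = max(range(len(cands)), key=lambda i: score(w, cands[i]))
            if pred != oracle:
                # standard perceptron update toward the oracle
                for f, v in cands[oracle].items():
                    w[f] += lr * v
                for f, v in cands[pred].items():
                    w[f] -= lr * v
    return w

def rerank(weights, cands):
    """Return the index of the highest-scoring candidate."""
    return max(range(len(cands)), key=lambda i: score(weights, cands[i]))
```

A toy run with invented POS-bigram features: train on two n-best lists where the oracle candidates share a feature, then check that the learned weights select the oracles.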

This work is a revised and substantially expanded version of Carter and Monz (2009) and Carter and Monz (2010).