Machine Translation 25: 197

Towards predicting post-editing productivity

  • Sharon O’Brien

Abstract

Machine translation (MT) quality is generally measured via automatic metrics, which produce scores that have no meaning for the translators who are required to post-edit MT output or for the project managers who have to plan and budget for translation projects. This paper investigates correlations between two such automatic metrics, general text matcher (GTM) and translation edit rate (TER), and post-editing productivity. For the purposes of this paper, productivity is measured via processing speed and via cognitive measures of effort, using eye tracking as a tool. Processing speed, average fixation time and fixation count are found to correlate well with the scores for groups of segments. Segments with high GTM and TER scores require substantially less time and cognitive effort than medium- or low-scoring segments. Future research involving score thresholds and confidence estimation is suggested.
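To make the metric side concrete, the sketch below (illustrative only, not taken from the paper) scores each MT segment with a simplified word-level edit rate, bins segments into quality bands, and relates the scores to post-editing time. The segments, timings, band thresholds and the simple_edit_rate function are all invented for illustration; full TER (Snover et al. 2006) additionally counts phrase shifts, which this sketch omits. Note that for an edit-rate metric a lower score means less editing, so low scores are treated here as high quality.

    from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

    def simple_edit_rate(hypothesis: str, reference: str) -> float:
        """Word-level Levenshtein distance divided by reference length.

        A simplified stand-in for TER: full TER also counts phrase
        shifts, which this sketch omits.
        """
        hyp, ref = hypothesis.split(), reference.split()
        prev = list(range(len(ref) + 1))  # distances for the empty hypothesis
        for i, h in enumerate(hyp, 1):
            curr = [i]
            for j, r in enumerate(ref, 1):
                curr.append(min(prev[j] + 1,              # delete h
                                curr[j - 1] + 1,          # insert r
                                prev[j - 1] + (h != r)))  # substitute h -> r
            prev = curr
        return prev[-1] / max(len(ref), 1)

    # Hypothetical data: (raw MT output, post-edited version, seconds to post-edit).
    segments = [
        ("the printer is ready", "the printer is ready", 4.0),
        ("press button to start printing", "press the button to start printing", 9.5),
        ("of the cartridge insert before", "insert the cartridge before printing", 31.0),
    ]

    scores = [simple_edit_rate(mt, pe) for mt, pe, _ in segments]
    times = [seconds for _, _, seconds in segments]

    # Lower edit rate = less editing needed, so low scores count as high quality.
    # The band thresholds (0.1, 0.4) are purely illustrative.
    bands = {"high": [], "medium": [], "low": []}
    for score, seconds in zip(scores, times):
        quality = "high" if score <= 0.1 else "medium" if score <= 0.4 else "low"
        bands[quality].append(seconds)

    for quality, band_times in bands.items():
        if band_times:
            print(f"{quality}-quality MT: mean post-editing time {mean(band_times):.1f}s")

    print(f"Pearson r(edit rate, PE time) = {correlation(scores, times):.2f}")

A replication of the paper's analysis would substitute published GTM and TER implementations, and real segment-level processing-speed and eye-tracking data, for the toy values above.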

Keywords

Post-editing · Productivity · Cognitive effort · Automatic metrics for MT · Eye tracking

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. School of Applied Language and Intercultural Studies, Centre for Translation and Textual Studies, Centre for Next Generation Localisation, Dublin City University, Dublin, Ireland
