Can college students be post-editors? An investigation into employing language learners in machine translation plus post-editing settings

Machine Translation

Abstract

Despite the pressure to reduce costs with the advent of machine translation plus post-editing (PE), many professional translators are reluctant to accept PE jobs, which are perceived as requiring less skill and yielding poorer-quality products than human translation (HT). This trend in turn raises an issue in the industry, namely a lack of post-editors. To meet the growing demand for PE, new populations, such as college language learners, should be assessed as potential post-editor candidates. This paper investigates this possibility through an experiment focusing on college language learners’ PE qualifications and resultant performance. Data collected on perceived ease of task, editing quantity, and quality of the final product were correlated with the students’ course grades. The investigation found that over 74 % of students felt PE to be an easier task than HT, whereas 26 % did not. Those students who did not find PE easier were determined to be unqualified post-editors. Students who received poor grades in a traditional translation course were also confirmed to be unqualified, though A-students were not always qualified post-editors. The variable performance among A-students may be understood in terms of different approaches to PE, characterized as utilizing either analytic or integrated processing. An analysis using this framework tentatively concludes that A-students who apply an analytic approach, more typical of novice translators, may perform better as post-editors than those who take an integrated approach.

Notes

  1. We acknowledge here that it may not be entirely fair to compare perceived ease of the task between professionals and students, given that the former group is much more used to doing translations. Note, however, that while the individuals in this study had not performed a PE task before, the difference in skill sets may still skew the results a little.

  2. Test of English for International Communication. See http://www.toeic.or.jp/english.html. The maximum achievable score is 990. It would be interesting to track TOEIC score versus class grade, as there may be an interaction here. We leave this for future work.

  3. Note that while this is clearly an uncontrolled environment, we do take steps to investigate whether quantitative measures of PE performance reinforce or correlate with such qualitative measures.

  4. The reason for selecting GTM relates to Tatsumi’s (2009) research, which investigates the correlation between automatic metric scores (textual similarity) and human PE effort in terms of time. Among the tested metrics (BLEU (Papineni et al. 2002), TER (Snover et al. 2006), NIST (Doddington 2002), and GTM), GTM shows the highest correlation with PE speed (ibid.). However, the correlation is still weak, and its strength differs greatly depending on the structure of the sentence being translated. A minimal sketch of such a correlation analysis appears after these notes.

  5. The Mann-Whitney U test is applied; a minimal sketch of such a test appears after these notes.

  6. A model translation for the “Gil Amelio NeXT Computer” segment, produced by a professional translator, reads:

    [Gil Amelio wa NeXT computer e no torikumi ni chakushu shi...] (Back-translation: Gil Amelio started to work on NeXT computer). As is apparent, the professional translator has used a sense-based translation approach.
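
To make note 4 more concrete, the sketch below shows one way a correlation between sentence-level metric scores and post-editing time could be computed. The scores and times are invented placeholders rather than data from this study, and Spearman’s rank correlation is used as a representative choice; it is not claimed to be the exact procedure of Tatsumi (2009).

```python
# Illustrative sketch only: correlating sentence-level automatic metric scores
# (standing in for GTM similarity between raw MT output and its post-edited
# version) with post-editing time per sentence, in the spirit of Tatsumi (2009).
# All values below are invented placeholders, not data from this study.
from scipy.stats import spearmanr

# One entry per sentence: similarity score in [0, 1] and PE time in seconds.
gtm_scores = [0.92, 0.75, 0.60, 0.88, 0.45, 0.70, 0.81, 0.55]
pe_seconds = [14.0, 35.5, 52.0, 18.2, 75.4, 40.1, 22.9, 61.3]

# Spearman's rank correlation tolerates the noisy, non-linear relationship the
# note describes; a strongly negative rho would mean that sentences needing
# fewer edits (higher similarity) tend to be post-edited faster.
rho, p_value = spearmanr(gtm_scores, pe_seconds)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```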
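
Likewise, for note 5, a minimal sketch of a Mann-Whitney U test on two hypothetical groups of student ratings (all values invented) might look as follows. Both sketches assume SciPy is available; any comparable statistics package would serve equally well.

```python
# Illustrative sketch only: the Mann-Whitney U test mentioned in note 5, applied
# to an ordinal PE measure (e.g. per-student quality ratings) from two
# independent groups. The values are invented for illustration.
from scipy.stats import mannwhitneyu

group_easier = [4, 5, 3, 4, 5, 4, 3, 5]  # e.g. students who found PE easier than HT
group_not    = [2, 3, 3, 2, 4, 2, 3, 3]  # e.g. students who did not

# A non-parametric test suits small samples of ordinal ratings where normality
# cannot be assumed.
u_stat, p_value = mannwhitneyu(group_easier, group_not, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```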

References

  • Allen J (2003) Post-editing. In: Somers H (ed) Computers and translation: a translator’s guide. John Benjamins, Amsterdam, pp 297–317

  • Bowker L (2005) Productivity vs. quality: a pilot study on the impact of translation memory systems. Localisation Focus 4(1):13–20

  • Doddington G (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In: HLT 2002: Human Language Technology Conference: proceedings of the second international conference on human language technology research. San Diego, California, pp 138–145

  • Dragsted B (2004) Segmentation in translation and translation memory systems: An empirical investigation of cognitive segmentation and effects of integrating a TM system into the translation process. PhD Thesis, Copenhagen Business School, Copenhagen

  • Fiederer R, O’Brien S (2009) Quality and machine translation: a realistic objective? J Special Transl 11:52–74

  • García I (2010) Is machine translation ready yet? Target 22(1):7–21

  • Groves D, Schmidtke D (2009) Identification and analysis of post-editing patterns for MT. In: Proceedings of MT Summit XII. Ottawa, pp 429–436

  • Guerberof A (2008) Productivity and quality in the post-editing of outputs from translation memories and machine translation (Unpublished minor dissertation). Universitat Rovira i Virgili, Tarragona

  • Krings HP (2001) Repairing texts: Empirical investigations of machine translation post-editing processes, Trans. G.S. Koby. The Kent State University Press, Kent

  • Mossop B (2001) Revising and editing for translators. St Jerome, Manchester

  • O’Brien S (2002) Teaching post-editing: a proposal for course content. In: Proceedings of the 6th EAMT Workshop on “Teaching Machine Translation”. Manchester, pp 99–106

  • O’Brien S (2007) An empirical investigation of temporal and technical post-editing effort. Transl Interpret Stud II(I):83–136

  • Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a method for automatic evaluation of machine translation. In: ACL-2002: 40th Annual meeting of the Association for Computational Linguistics, Philadelphia, PA, pp 311–318

  • Plitt M, Masselot F (2010) A productivity test of statistical machine translation post-editing in a typical localization context. Prague Bull Math Linguist 93:7–16

  • Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A study of translation edit rate with targeted human annotation. In: AMTA 2006: Proceedings of the 7th conference of the Association for Machine Translation in the Americas, “Visions for the Future of Machine Translation”, Cambridge, MA, pp 223–231

  • Tatsumi M (2009) Correlation between automatic evaluation scores, post-editing speed and some other factors. In: Proceedings of MT Summit XII. Ottawa, pp 332–339

  • TAUS (2010) Machine translation postediting guidelines. http://www.translationautomation.com/postediting/machine-translation-post-editing-guidelines. Accessed 10 Jan 2014

  • Turian J, Shen L, Melamed D (2003) Evaluation of machine translation and its evaluation. In: Proceedings of the MT Summit IX, New Orleans, pp 386–393

  • Veale T, Way A (1997) Gaijin: A bootstrapping approach to example-based machine translation. In: International conference on recent advances in natural language processing, Tzigov Chark, pp 239–244

  • Wagner E (1985) Post-editing Systran: a challenge for commission translators. Terminol Trad 3:1–7

  • Way A (2013) Traditional and emerging use-cases for machine translation. In: Proceedings of translating and the computer 35, London

  • Yamada M (2012) Revising text: An empirical investigation of revision and the effects of integrating a TM and MT system into the translation process. PhD Thesis, Rikkyo University, Tokyo

  • Yamada M (2013) Dare ga post-editor ni naruno ka? [Who will be post-editors?]. Honyaku Kenkyuu e no Shootai [Introducing Translation Studies], 10. http://honyakukenkyu.sakura.ne.jp/shotai_vol10/No_10-004-Yamada.pdf. Accessed 10 Jan 2014

Author information

Correspondence to Masaru Yamada.

About this article

Cite this article

Yamada, M. Can college students be post-editors? An investigation into employing language learners in machine translation plus post-editing settings. Machine Translation 29, 49–67 (2015). https://doi.org/10.1007/s10590-014-9167-7
