The Impact of Crowdsourcing Post-editing with the Collaborative Translation Framework

  • Takako Aikawa
  • Kentaro Yamamoto
  • Hitoshi Isahara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7614)


This paper presents a preliminary report on the impact of crowdsourcing post-editing through the so-called "Collaborative Translation Framework" (CTF) developed by the Machine Translation team at Microsoft Research. We first provide a high-level overview of CTF and explain its basic functionalities. Next, we describe the motivation for and the design of our crowdsourcing post-editing project using CTF. Last, we present the results from the project and our observations. Crowdsourcing translation is an increasingly popular trend in the MT community, and we hope that our paper can shed new light on research into crowdsourcing translation.


Keywords: Crowdsourcing post-editing · Collaborative Translation Framework



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Takako Aikawa (Machine Translation Team, Microsoft Research, USA)
  • Kentaro Yamamoto (Toyohashi University of Technology, Japan)
  • Hitoshi Isahara (Toyohashi University of Technology, Japan)