
Quality Expectations of Machine Translation

Translation Quality Assessment

Part of the book series: Machine Translation: Technologies and Applications (MATRA, volume 1)

Abstract

Machine Translation (MT) is being deployed for a range of use-cases by millions of people on a daily basis. There should, therefore, be no doubt as to the utility of MT. However, not everyone is convinced that MT can be useful, especially as a productivity enhancer for human translators. In this chapter, I address this issue, describing how MT is currently deployed, how its output is evaluated and how this could be enhanced, especially as MT quality itself improves. Central to these issues is the acceptance that there is no longer a single ‘gold standard’ measure of quality, so that the situation in which MT is deployed needs to be borne in mind, especially with respect to the expected ‘shelf-life’ of the translation itself.


Notes

  1.

    This concept is also applied to crowdsourced translation by Jiménez-Crespo in this volume.

  2.

    https://www.gala-global.org/industry/industry-facts-and-data

  3.

    https://www.taus.net/think-tank/news/press-release/size-machine-translation-market-is-250-million-taus-publishes-new-market-report

  4.

    Technavio estimate that the MT market will grow at a CAGR of 23.53% during 2015–19 (http://www.slideshare.net/technavio/global-machine-translation-market-20152019).

  5.

    https://googleblog.blogspot.ie/2012/04/breaking-down-language-barriersix-years.html

  6.

    https://events.google.com/io2016/

  7.

    https://www.quora.com/Is-Facebooks-machine-translation-MT-based-on-principles-common-to-other-statistical-MT-systems-or-is-it-somehow-different

  8.

    https://www.bing.com/translator

  9.

    https://www.kantanmt.com/

  10.

    https://www.microsoft.com/en-us/translator/hub.aspx

  11.

    In its original exposition in Papineni et al. (2002), the BLEU (“Bilingual Evaluation Understudy”) score for a document was a figure between 0 and 1, with higher scores indicating better quality from the MT system being evaluated. Here, as is now common in the field, this score is multiplied by 100 so that ‘BLEU points’ can be used to indicate progress relative to some benchmark.
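As a rough illustration of how such a score is computed, the following is a toy, smoothed, sentence-level sketch of BLEU, not the exact document-level formulation of Papineni et al. (2002); the add-one smoothing is an assumption made here so that a missing n-gram order does not zero the whole score:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of modified (clipped)
    n-gram precisions times the brevity penalty, scaled by 100."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clipped counts: a candidate n-gram is credited at most as
        # many times as it occurs in the reference
        matched = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = sum(c_ngrams.values())
        log_prec += math.log((matched + 1) / (total + 1))  # add-one smoothing
    # brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return 100 * bp * math.exp(log_prec / max_n)
```

A hypothesis identical to the reference scores 100 ‘BLEU points’; shorter or divergent hypotheses score correspondingly less.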

  12.

    We omit a lengthy discussion here on ‘round trip’ translation as an evaluation method (but cf. footnote 25), as it was demonstrated by Somers (2005) to be an unreliable means of MT evaluation. In Way (2013), I note that in order to show that MT is error-prone, “sites like Translation Party (http://www.translationparty.com/) have been set up to demonstrate that continuous use of ‘back translation’ – that is, start with (say) an English sentence, translate it into (say) French, translate that output back into English, ad nauseam – ends up with a string that differs markedly from that which you started out with”. I quickly show that such websites have the opposite effect, and observe that “It’s easy to show MT to be useless; it’s just as easy to show it to be useful, but some people don’t want to”.

  13.

    Indeed, the results from the ALPAC evaluation demonstrated a considerable correlation between intelligibility and fidelity.

  14.

    http://www.itl.nist.gov/iad/mig/tests/mt/2006/doc/mt06eval_official_results.html

  15.

    Minimally, in an SMT system these would be the “translation model” inferred from the parallel data, which essentially suggests which target-language words and phrases might best be used to try to create a translation of the source string; and the “language model” inferred from large collections of monolingual data, and used to try to create the most likely target-language ordering of those suggested target words and phrases.
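A toy illustration of how these two components interact in the classic noisy-channel scoring, where the decoder seeks the target string maximising the product of the two model probabilities; every number below is invented purely for the example:

```python
import math

def score(tm_prob, lm_probs):
    # log P(source | target) from the translation model, plus
    # log P(target) from the language model (a chain of word probabilities)
    return math.log(tm_prob) + sum(math.log(p) for p in lm_probs)

# Two candidates built from the same phrase choices, so the translation
# model is indifferent (same tm_prob); the language model prefers the
# fluent target-language ordering.
fluent    = score(0.02, [0.4, 0.05, 0.03])    # e.g. "the cat sleeps"
scrambled = score(0.02, [0.01, 0.002, 0.03])  # e.g. "cat the sleeps"
assert fluent > scrambled
```

The language model thus does exactly the job the footnote describes: it arbitrates between orderings of the target words and phrases the translation model suggests.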

  16.

    See Sect. 3.2.2 for discussion of document-level versus sentence-level MT evaluation.

  17.

    METEOR rewards MT output composed of fewer chunks. Output containing bigram (or longer) matches compared to the reference translation is penalised less than that comprising unigram matches only.
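The chunk-counting idea can be sketched as follows; this is a simplification using exact surface matches only (real METEOR also matches stems and synonyms, and computes its alignment more carefully):

```python
def chunk_count(candidate, reference):
    """Count maximal runs of candidate words that appear contiguously
    and in order in the reference -- METEOR's 'chunks'. Assumes each
    candidate word matches at most its first occurrence in the reference."""
    cand, ref = candidate.split(), reference.split()
    positions = [ref.index(w) if w in ref else None for w in cand]
    chunks, prev = 0, None
    for pos in positions:
        if pos is None:          # unmatched word breaks any run
            prev = None
            continue
        if prev is None or pos != prev + 1:
            chunks += 1          # a new chunk starts here
        prev = pos
    return chunks
```

Against the reference "the cat sat on the mat", the hypothesis "the cat sat" forms a single chunk, whereas the scrambled "sat cat the" has the same unigram matches but three chunks, and so attracts a larger fragmentation penalty.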

  18.

    Nonetheless, more recent papers (Agarwal and Lavie 2008; Farrús et al. 2012) have also demonstrated that BLEU correlates extremely well with human judgement of translation quality.

  19.

    This was introduced to prevent systems from outputting very short target-language strings (such as “the”) but nonetheless obtaining a high score. Accordingly, the shorter the translation compared to the reference translation, the more punitive the brevity penalty.
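The penalty itself is simple to state in code; this is a sketch of the standard formulation, where c is the candidate length and r the reference length:

```python
import math

def brevity_penalty(cand_len, ref_len):
    # 1.0 when the candidate is at least as long as the reference;
    # otherwise exp(1 - r/c), which grows harsher the shorter the
    # candidate is relative to the reference
    if cand_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / cand_len)
```

A one-word output such as “the” against a ten-word reference is crushed, since exp(1 − 10) is roughly 0.0001, so even perfect unigram precision cannot rescue the score.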

  20.

    See Popović (this volume) for a discussion of the evolution of diagnostic MT error typologies.

  21.

    ‘Phrases’ in phrase-based SMT refer only to n-gram sequences, i.e. contiguous sequences of surface words, not to the linguistic “constituent” sense of the word.
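A minimal sketch of what such surface ‘phrases’ look like, simply enumerating contiguous word sequences up to a length limit (a real phrase-based system additionally constrains extraction by word alignments):

```python
def phrases(sentence, max_len=3):
    """All contiguous word sequences up to max_len words --
    the 'phrases' of phrase-based SMT, regardless of constituency."""
    words = sentence.split()
    return [tuple(words[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(words) - n + 1)]
```

Note that a sequence like ("sat", "on") is extracted even though it is not a syntactic constituent, which is exactly the point of the footnote.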

  22.

    The Workshop (now Conference) on Machine Translation runs annual competitive MT system evaluations for a range of tasks. See http://www.statmt.org/wmt17/ for the latest in the series.

  23.

    Over the past 10 years or so, SMT system developers have been incorporating more and more linguistic features. It is interesting to ponder whether BLEU (and similar metrics) disadvantages such linguistically enhanced systems compared to ‘pure’ SMT engines, in much the same way as RBMT output was penalised compared to pure n-gram-based systems.

  24.

    Note, however, that the NMT system of Luong and Manning (2015) was more than 5 BLEU points better than a range of SMT systems for English to German. This sort of difference in BLEU score is more like what we might expect given the huge improvements in quality noted by Bentivogli et al. (2016) in their study. In this regard, both Shterionov et al. (2018) and Way (2018) note that BLEU may be under-reporting the difference in quality seen when using NMT systems, with the former attempting to measure the level of under-reporting using a set of novel metrics.

  25.

    Without further comment, we merely note here that the ‘round trip’ (or ‘back’) translation discredited by Somers (2005) – cf. footnote 12 – has been demonstrated to be very useful in NMT as a means of generating additional ‘synthetic’ parallel training material (e.g. Sennrich et al. 2016b).

  26.

    The subfield of quality estimation (see Specia and Shah in this volume) attempts to predict whether a new source string will result in a good or bad translation. This is different from MT evaluation, where we have a reference translation to compare the MT hypothesis against post hoc.

References

  • Agarwal A, Lavie A (2008) METEOR, M-BLEU and M-TER: evaluation metrics for high-correlation with human rankings of machine translation output. In: Proceedings of the third workshop on Statistical Machine Translation, Columbus, pp 115–118


  • Albrecht J, Hwa R (2007) Regression for sentence-level MT evaluation with pseudo references. In: Proceedings of the 45th annual meeting of the Association of Computational Linguistics, Prague, pp 296–303


  • Arnold D, Moffat D, Sadler L, Way A (1993) Automatic generation of test suites. Mach Transl 8:29–38


  • Arnold D, Balkan L, Meijer S, Humphreys L, Sadler L (1994) Machine translation: an introductory guide. Blackwells-NCC, London


  • Babych B, Hartley A (2004) Extending the BLEU MT evaluation method with frequency weightings. In: Proceedings of ACL 2004: 42nd annual meeting of the Association for Computational Linguistics, Barcelona, pp 621–628


  • Balkan L, Jäschke M, Humphreys L, Meijer S, Way A (1991) Declarative evaluation of an MT system: practical experiences. In: Proceedings of the evaluators’ forum, Les Rasses, Vaud, pp 85–97


  • Balkan L, Arnold D, Meijer S (1994) Test suites for natural language processing. In: Proceedings of translating and the computer 16, London, pp 51–58


  • Banerjee S, Lavie A (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In: Proceedings of ACL 2005, Proceedings of the workshop on intrinsic and extrinsic evaluation measures for MT and/or summarization at the 43rd annual meeting of the Association for Computational Linguistics, Ann Arbor, pp 65–72


  • Bellos D (2011) Is that a fish in your ear: translation and the meaning of everything. Particular Books, London


  • Bentivogli L, Bisazza A, Cettolo M, Federico M (2016) Neural versus phrase-based machine translation quality: a case study. In: Proceedings of the 2016 conference on empirical methods in natural language processing, Austin, pp 257–267


  • Biçici E, Dymetman M (2008) Dynamic translation memory: using statistical machine translation to improve translation memory fuzzy matches. In: Proceedings of the 9th international conference on computational linguistics and intelligent text processing, Haifa, pp 454–465


  • Callison-Burch C, Osborne M, Koehn P (2006) Re-evaluating the role of BLEU in machine translation research. In: Proceedings of EACL 2006, 11th conference of the European chapter of the Association for Computational Linguistics, Trento, pp 249–256


  • Callison-Burch C, Fordyce C, Koehn P, Monz C, Schroeder J (2008) Further meta-evaluation of machine translation. In: Proceedings of the third workshop on Statistical Machine Translation, Columbus, pp 70–106


  • Chatterjee R, Turchi M, Negri M (2015) The FBK participation in the WMT15 automatic post-editing shared task. In: Proceedings of the tenth workshop on Statistical Machine Translation, Lisbon, pp 210–215


  • Chung J, Cho K, Bengio Y (2016) A character-level decoder without explicit segmentation for neural machine translation. In: Proceedings of the 54th annual meeting of the Association for Computational Linguistics, vol 1: Long Papers. Berlin, pp 1693–1703


  • Coughlin D (2003) Correlating automated and human assessments of machine translation quality. In: Proceedings of MT Summit IX, New Orleans, pp 63–70


  • Cuong H, Frank S, Sima’an K (2016) ILLC-UvA adaptation system (Scorpio) at WMT’16 IT-DOMAIN Task. In: Proceedings of the first conference on Machine Translation, Berlin, pp 423–427


  • de Almeida G (2013) Translating the post-editor: an investigation of post-editing changes and correlations with professional experience across two Romance languages. Dissertation, Dublin City University


  • Doddington G (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In: Proceedings of HLT 2002: human language technology conference, San Diego, pp 138–145


  • Doyon J, White J, Taylor K (1999) Task-based evaluation for machine translation. In: Proceedings of MT Summit VII “MT in the Great Translation Era”, Singapore, pp 574–578


  • Farrús M, Costa-Jussà M, Popović M (2012) Study and correlation analysis of linguistic, perceptual and automatic machine translation evaluations. J Am Soc Inf Sci Technol 63(1):174–184


  • Font Llitjós A, Carbonell J, Lavie A (2005) A framework for interactive and automatic refinement of transfer-based machine translation. In: 10th EAMT conference “Practical applications of machine translation”, Budapest, pp 87–96


  • Ha T-L, Niehues J, Cho E, Mediani M, Waibel A (2015) The KIT translation systems for IWSLT 2015. In: Proceedings of international workshop on spoken language translation, Da Nang, pp 62–69


  • He Y, Way A (2009a) Improving the objective function in minimum error rate training. In: Proceedings of Machine Translation Summit XII, Ottawa, pp 238–245


  • He Y, Way A (2009b) Metric and reference factors in minimum error rate training. Mach Transl 24(1):27–38


  • He Y, Way A (2009c) Learning labelled dependencies in machine translation evaluation. In: Proceedings of EAMT-09, the 13th annual meeting of the European Association for Machine Translation, Barcelona, pp 44–51


  • He Y, Ma Y, van Genabith J, Way A (2010a) Bridging SMT and TM with translation recommendation. In: Proceedings of the 48th annual meeting of the Association for Computational Linguistics, Uppsala, pp 622–630


  • He Y, Ma Y, Way A, van Genabith J (2010b) Integrating n-best SMT outputs into a TM system. In: Proceedings of the 23rd international conference on computational linguistics, Beijing, pp 374–382


  • Heyn M (1998) Translation memories – insights & prospects. In: Bowker L, Cronin M, Kenny D, Pearson J (eds) Unity in diversity? Current trends in translation studies. St Jerome, Manchester, pp 123–136


  • Hofmann N (2015) MT-enhanced fuzzy matching with Transit NXT and STAR Moses. In: EAMT-2015: Proceedings of the eighteenth annual conference of the European Association for Machine Translation, Antalya, p 215


  • Hovy E, Ravichandran D (2003) Holy and unholy grails. Panel discussion at MT Summit IX, New Orleans. Available from http://www.mt-archive.info/MTS-2003-Hovy-1.pdf. Accessed 12 Nov 2017

  • Huck M, Birch A (2015) The Edinburgh machine translation systems for IWSLT 2015. In: Proceedings of the international workshop on spoken language translation, Da Nang, pp 31–38


  • Humphreys L, Jäschke M, Way A, Balkan L, Meyer S (1991) Operational evaluation of MT, draft research proposal. Working papers in language processing 22, University of Essex


  • Isozaki H, Hirao T, Duh K, Sudoh K, Tsukada H (2010) Automatic evaluation of translation quality for distant language pairs. In: Proceedings of the 2010 conference on empirical methods in natural language processing, Cambridge, pp 944–952


  • Jean S, Firat O, Cho K, Memisevic R, Bengio Y (2015) Montreal neural machine translation systems for WMT15. In: Proceedings of the tenth workshop on Statistical Machine Translation, Lisbon, pp 134–140


  • Jehl L, Simianer P, Hitschler J, Riezler S (2015) The Heidelberg University English-German translation system for IWSLT 2015. In: Proceedings of the international workshop on spoken language translation, Da Nang, pp 45–49


  • King M, Falkedal K (1990) Using test suites in evaluation of MT systems. In: Proceedings of COLING-90, Papers presented to the 13th international conference on computational linguistics, vol 2, Helsinki, pp 211–216


  • Koehn P, Senellart J (2010) Convergence of translation memory and statistical machine translation. In: Proceedings of AMTA workshop on MT Research and the Translation Industry, Denver, pp 21–31


  • Koehn P, Och F, Marcu D (2003) Statistical phrase-based translation. In: Proceedings of HLT-NAACL 2003: conference combining Human Language Technology conference series and the North American chapter of the Association for Computational Linguistics conference series, Edmonton, pp 48–54


  • Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C, Zens R, Dyer C, Bojar O, Constantin A, Herbst E (2007) Moses: open source toolkit for statistical machine translation. In: Proceedings of the 45th annual meeting of the Association of Computational Linguistics, Prague, pp 177–180


  • Levenshtein V (1966) Binary codes capable of correcting deletions, insertions, and reversals. Sov Phys Dokl 10:707–710


  • Lewis W, Quirk C (2013) Controlled ascent: imbuing statistical MT with linguistic knowledge. In: Proceedings of the second workshop on Hybrid Approaches to Translation, Sofia, pp 51–66


  • Li L, Way A, Liu Q (2014) A discriminative framework of integrating translation memory features into SMT. In: Proceedings of the 11th conference of the Association for Machine Translation in the Americas, vol 1: MT Researchers Track, Vancouver, pp 249–260


  • Liang P, Bouchard-Côté A, Klein D, Taskar B (2006) An end-to-end discriminative approach to machine translation. In: Proceedings of the 21st international conference on computational linguistics and 44th annual meeting of the Association for Computational Linguistics, Sydney, pp 761–768


  • Lin C-Y, Och F (2004) ORANGE: a Method for evaluating automatic evaluation metrics for machine translation. In: COLING 2004: Proceedings of the 20th international conference on Computational Linguistics, Geneva, pp 501–507


  • Liu D, Gildea D (2005) Syntactic features for evaluation of machine translation. In: Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, Ann Arbor, pp 25–32


  • Luong M-T, Manning C (2015) Stanford neural machine translation systems for spoken language domains. In: Proceedings of the international workshop on spoken language translation, Da Nang, pp 76–79


  • Luong M-T, Manning C (2016) Achieving open vocabulary neural machine translation with hybrid word-character models. In: Proceedings of the 54th annual meeting of the Association for Computational Linguistics, vol 1: Long Papers, Berlin, pp 1054–1063


  • Ma Y, He Y, Way A, van Genabith J (2011) Consistent translation using discriminative learning – a translation memory-inspired approach. In: Proceedings of the 49th annual meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, pp 1239–1248


  • Miller G, Beckwith R, Fellbaum C, Gross D, Miller K (1990) Introduction to WordNet: an on-line lexical database. Int J Lexicogr 3(4):235–244


  • Moorkens J, Way A (2016) Comparing translator acceptability of TM and SMT outputs. Balt J Mod Comput 4(2):141–151


  • Naskar S, Toral A, Gaspari F, Way A (2011) Framework for diagnostic evaluation of MT based on linguistic checkpoints. In: Proceedings of Machine Translation Summit XIII, Xiamen, pp 529–536


  • Och F (2003) Minimum error rate training in statistical machine translation. In: ACL 2003, 41st annual meeting of the Association for Computational Linguistics, Sapporo, pp 160–167


  • Owczarzak K, van Genabith J, Way A (2007) Labelled dependencies in machine translation evaluation. In: Proceedings of the second workshop on Statistical Machine Translation, Prague, pp 104–111


  • Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a method for automatic evaluation of machine translation. In: ACL-2002: 40th annual meeting of the Association for Computational Linguistics, Philadelphia, pp 311–318


  • Penkale S, Way A (2013) Tailor-made quality-controlled translation. In: Proceedings of translating and the computer 35, London, 7 pages


  • Pierce J, Carroll J, Hamp E, Hays D, Hockett C, Oettinger A, Perlis A (1966) Language and machines – computers in translation and linguistics. ALPAC report, National Academy of Sciences, Washington, DC


  • Popović M (2015) ChrF: character n-gram F-score for automatic MT evaluation. In: Proceedings of the tenth workshop on Statistical Machine Translation, Lisbon, pp 392–395


  • Popović M, Ney H (2011) Towards automatic error analysis of machine translation output. Comput Linguist 37(4):657–688


  • Riezler S, Maxwell J (2005) On some pitfalls in automatic evaluation and significance testing for MT. In: Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, Ann Arbor, pp 57–64


  • Sennrich R, Haddow B, Birch A (2016a) Edinburgh neural machine translation systems for WMT 16. In: Proceedings of the first conference on Machine Translation, Berlin, pp 371–376


  • Sennrich R, Haddow B, Birch A (2016b) Improving neural machine translation models with monolingual data. In: Proceedings of the 54th annual meeting of the Association for Computational Linguistics, vol 1, Berlin, pp 86–96


  • Shterionov D, Nagle P, Casanellas L, Superbo R, O’Dowd T, Way A (2018) Human vs automatic quality evaluation of NMT and PBSMT. Mach Transl 32(3–4) (in press)


  • Sikes R (2007) Fuzzy matching in theory and practice. Multilingual 18(6):39–43


  • Simard M, Isabelle P (2009) Phrase-based machine translation in a computer-assisted translation environment. In: Proceedings of the twelfth Machine Translation Summit (MT Summit XII), Ottawa, pp 120–127


  • Smith A, Hardmeier C, Tiedemann J (2016) Climbing mount BLEU: the strange world of reachable high-BLEU translations. Balt J Mod Comput 4(2):269–281


  • Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A study of translation edit rate with targeted human annotation. In: Proceedings of AMTA 2006, the 7th conference of the Association for Machine Translation in the Americas, Cambridge, pp 223–231


  • Somers H (2005) Round-trip translation: what is it good for? In: Proceedings of the Australasian Language Technology workshop 2005 (ALTW 2005), Sydney, pp 71–77


  • Thomas K (1999) Designing a task-based evaluation methodology for a spoken machine translation system. In: Proceedings of 37th annual meeting of the Association for Computational Linguistics, College Park, pp 569–572


  • Tillmann C, Vogel S, Ney H, Sawaf H, Zubiaga A (1997) Accelerated DP-based search for statistical translation. In: Proceedings of the 5th European conference on Speech Communication and Technology (EuroSpeech ’97), Rhodes, pp 2667–2670


  • Vasconcellos M (1989) MT utilization at the Pan American Health Organization. In: IFTT’89: harmonizing human beings and computers in translation. International Forum for Translation Technology, Oiso, pp 56–58


  • Vilar D, Xu J, D’Haro L, Ney H (2006) Error analysis of statistical machine translation output. In: Proceedings of the fifth international conference on Language Resources and Evaluation (LREC), Pisa, pp 697–702


  • Voss C, Tate C (2006) Task-based evaluation of machine translation (MT) engines: measuring how well people extract who, when, where-type elements in MT output. In: EAMT-2006: 11th annual conference of the European Association for Machine Translation, Proceedings, Oslo, pp 203–212


  • Wang K, Zong C, Su K-Y (2013) Integrating translation memory into phrase-based machine translation during decoding. In: Proceedings of the 51st annual meeting of the Association for Computational Linguistics, vol 1, Sofia, pp 11–21


  • Way A (2012) Is that a fish in your ear: translation and the meaning of everything – David Bellos, book review. Mach Transl 26(3):255–269


  • Way A (2013) Traditional and emerging use-cases for machine translation. In: Proceedings of translating and the computer 35, London


  • Way A (2018) Machine translation: where are we at today? In: Angelone E, Massey G, Ehrensberger-Dow M (eds) The Bloomsbury companion to language industry studies. Bloomsbury, London. (in press)


  • Ye Y, Zhou M, Lin C-Y (2007) Sentence level machine translation evaluation as a ranking. In: Proceedings of the second workshop on Statistical Machine Translation, Prague, pp 240–247


  • Zhang J, Wu X, Calixto I, Hosseinzadeh Vahid A, Zhang X, Way A, Liu Q (2014) Experiments in medical translation shared task at WMT 2014. In: Proceedings of WMT 2014: the ninth workshop on Statistical Machine Translation, Baltimore, pp 260–265


  • Zhou L, Lin C-Y, Munteanu D, Hovy E (2006) Paraeval: using paraphrases to evaluate summaries automatically. In: Proceedings of the Human Language Technology conference of the NAACL, main conference, New York City, pp 447–454



Acknowledgments

This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Author information


Correspondence to Andy Way.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Way, A. (2018). Quality Expectations of Machine Translation. In: Moorkens, J., Castilho, S., Gaspari, F., Doherty, S. (eds) Translation Quality Assessment. Machine Translation: Technologies and Applications, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-91241-7_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-91241-7_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91240-0

  • Online ISBN: 978-3-319-91241-7

  • eBook Packages: Computer Science, Computer Science (R0)
