Human and Automatic Evaluation of English to Hindi Machine Translation Systems

  • Conference paper
Advances in Computer Science, Engineering & Applications

Part of the book series: Advances in Intelligent and Soft Computing (AINSC, volume 166)

Abstract

Machine translation evaluation is among the most challenging activities in machine translation development. We present evaluation results for several English-Hindi machine translation systems available online. The systems are assessed with automatic evaluation metrics and with human subjective-judgment measures.
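
As a rough illustration of the automatic-metric side of such an evaluation, the sketch below scores hypothetical Hindi system outputs against human references with corpus-level BLEU and TER, two of the metric families the paper draws on. The sacrebleu library and the sample sentences are assumptions made for illustration only, not the authors' actual toolchain or test data.

```python
# Minimal sketch: automatic MT evaluation with corpus-level BLEU and TER.
# sacrebleu and the sentences below are illustrative assumptions, not the
# paper's actual setup or data.
import sacrebleu

# System outputs: one Hindi hypothesis per English source sentence.
hypotheses = [
    "वह स्कूल जाता है।",
    "मौसम आज अच्छा है।",
]

# Reference translations: one inner list per reference set, aligned with
# the hypotheses (sacrebleu supports multiple reference sets; we use one).
references = [[
    "वह विद्यालय जाता है।",
    "आज मौसम अच्छा है।",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)

print(f"BLEU: {bleu.score:.2f}")  # 0-100, higher is better
print(f"TER:  {ter.score:.2f}")   # edit rate, lower is better
```

Human evaluation, by contrast, typically asks judges to rate each output on scales such as adequacy and fluency, which is what scores like these are validated against.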

Author information

Corresponding author: Nisheeth Joshi.

Copyright information

© 2012 Springer-Verlag GmbH Berlin Heidelberg

About this paper

Cite this paper

Joshi, N., Darbari, H., Mathur, I. (2012). Human and Automatic Evaluation of English to Hindi Machine Translation Systems. In: Wyld, D., Zizka, J., Nagamalai, D. (eds) Advances in Computer Science, Engineering & Applications. Advances in Intelligent and Soft Computing, vol 166. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30157-5_42

  • DOI: https://doi.org/10.1007/978-3-642-30157-5_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-30156-8

  • Online ISBN: 978-3-642-30157-5

  • eBook Packages: Engineering, Engineering (R0)
