Human and Automatic Evaluation of English to Hindi Machine Translation Systems

Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 166)

Abstract

Machine translation evaluation is among the most formidable tasks in machine translation development. We present evaluation results for several English-Hindi machine translation systems available online, measuring each system with automatic evaluation metrics and human subjective judgments.
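As a brief illustration of the automatic side of such an evaluation, the sketch below scores one hypothetical system output against a single reference translation using sentence-level BLEU from NLTK. The Hindi sentences, whitespace tokenization, and smoothing choice are illustrative assumptions and do not reproduce the paper's test set or metric settings.

# Minimal sketch (not from the paper): sentence-level BLEU for one
# English-Hindi output against a single human reference.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "वह स्कूल जा रहा है".split()        # assumed human reference translation
hypothesis = "वह विद्यालय जा रहा है".split()    # assumed candidate output from an online MT system

# Smoothing prevents zero scores when a higher-order n-gram has no match,
# which is common for single short sentences.
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"Sentence-level BLEU: {score:.3f}")

In practice, corpus-level BLEU over a full test set, alongside METEOR and human subjective judgments, would be computed for each system, as the abstract indicates.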

Keywords

Machine Translation Evaluation, Subjective Evaluation, BLEU, METEOR

Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  1. Apaji Institute, Banasthali University, Tonk, India
  2. Centre for Development of Advanced Computing, Pune, India
