Analysis of the Impact of Machine Translation Evaluation Metrics for Semantic Textual Similarity

  • Simone Magnolini
  • Ngoc Phuoc An Vo
  • Octavian Popescu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10037)

Abstract

We evaluate the hypothesis that automatic evaluation metrics developed for Machine Translation (MT) systems have a significant impact on predicting semantic similarity scores in the Semantic Textual Similarity (STS) task, in light of their successful use for paraphrase identification. We show that different metrics behave differently, and carry different significance, along the [0–5] semantic scale of the STS task. In addition, we compare several classification algorithms that combine different MT metrics to build an STS system; we show that although this approach obtains remarkable results on the paraphrase identification task, it is insufficient to achieve comparable results on STS. We trace this problem to an excessive adaptation of some algorithms to the dataset domain, and we conclude with a way to mitigate or avoid this issue.
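The abstract itself contains no code, but the general approach it describes can be illustrated. The following sketch (not the authors' implementation; all function names are ours) computes two classic MT-metric families per sentence pair, a BLEU-style modified n-gram precision and a TER-style normalized edit distance, and packs them into a feature vector that a classifier or regressor could consume:

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Modified n-gram precision, the core of BLEU (Papineni et al., 2002)."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Each hypothesis n-gram is credited at most as often as it occurs in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

def ter_like(hyp, ref):
    """Word-level edit distance normalized by reference length (a TER-style score)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / n if n else 0.0

def mt_features(s1, s2):
    """Feature vector of MT-metric scores for one sentence pair."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    return [ngram_precision(t1, t2, 1),   # unigram precision
            ngram_precision(t1, t2, 2),   # bigram precision
            1.0 - ter_like(t1, t2)]       # TER turned into a similarity

feats = mt_features("a man is playing a guitar", "a man plays the guitar")
```

In the paper's setting, vectors like `feats` (built from many more metrics, via the Asiya toolkit) are fed to classification algorithms trained against gold STS scores; the identical-pair case yields `[1.0, 1.0, 1.0]`, while unrelated pairs score near zero on all components.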

Keywords

Semantic textual similarity · Machine translation evaluation metrics · Paraphrase recognition

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Simone Magnolini (1, 2)
  • Ngoc Phuoc An Vo (3)
  • Octavian Popescu (4)
  1. University of Brescia, Brescia, Italy
  2. FBK, Trento, Italy
  3. Xerox Research Centre Europe, Meylan, France
  4. IBM T.J. Watson Research, Yorktown, USA