Augmenting SMT with Generated Pseudo-parallel Corpora from Monolingual News Resources

  • Krzysztof Wołk
  • Agnieszka Wołk
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 569)

Abstract

Although many natural languages have received extensive processing attention, the problem of limited linguistic resources remains. Manual creation of parallel corpora is expensive and very time consuming. In addition, the language data required for statistical machine translation (SMT) often does not exist in sufficient quantity to supply the statistical information needed to initiate research. On the other hand, unsupervised approaches that build parallel resources from sources such as comparable or quasi-comparable corpora are complicated and produce rather noisy output. Such output must then be reprocessed, and in-domain adaptation is also required. To optimize the performance of these algorithms, a high-quality parallel corpus is essential for training the end-to-end procedure. In the present research, we developed a methodology for generating an accurate parallel corpus from monolingual resources by measuring the compatibility of the outputs of different machine translation systems. We translate large monolingual resources with multiple translation systems and strictly measure translation compatibility using rules based on the Levenshtein distance. The results produced by this approach are very favorable. All the monolingual resources we used were taken from the WMT16 conference for Czech, and the generated parallel corpus improved translation performance.
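The core idea described above (translate each monolingual sentence with several independent MT systems and keep only those whose outputs agree under a normalized Levenshtein similarity) can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the function names, the agreement threshold of 0.85, and the use of character-level distance are assumptions for the sketch.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized Levenshtein similarity in [0, 1]; 1.0 means identical."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def select_pairs(sentences, translate_a, translate_b, threshold=0.85):
    """Keep (source, translation) pairs whose two independent MT
    outputs agree closely. `threshold` is a hypothetical cutoff."""
    pairs = []
    for src in sentences:
        hyp_a, hyp_b = translate_a(src), translate_b(src)
        if similarity(hyp_a, hyp_b) >= threshold:
            pairs.append((src, hyp_a))
    return pairs
```

In practice the two translators would be calls to distinct MT systems; sentences on which the systems disagree are simply discarded, trading recall for precision of the generated pseudo-parallel corpus.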

Keywords

Parallel corpora · Corpora preparation · Generating corpora · Data mining parallel corpora

Notes

Acknowledgements

This work was financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education, and was supported by PJATK legal resources.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Polish-Japanese Academy of Information Technology, Warsaw, Poland