
Modeling Without Sharing Privacy: Federated Neural Machine Translation

  • Conference paper
  • Web Information Systems Engineering – WISE 2021 (WISE 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13080)


Abstract

Training neural machine translation models requires a large amount of diverse training corpora, which makes collecting sufficient data a challenge. Moreover, labeling a monolingual corpus demands professional knowledge of the domain in question. Building collaborations between different institutes raises further problems, such as the legality of data exchange and the risk of commercial data leakage.

In this paper, we propose FedNMT, a federated neural machine translation model that trains a robust translation system without sharing the participants' raw data. With FedNMT, neural machine translation (NMT) systems can be improved using corpora held by different contributors without directly exposing those corpora to one another. The approach preserves user privacy by combining a federated learning framework with encryption techniques. In the federated learning paradigm, a global model is distributed to client devices, and a central server aggregates the clients' locally computed updates into new global parameters. Experimental results show the effectiveness of our model in comparison with a data-centralized model.
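The paradigm sketched in the abstract is, at its core, a federated averaging loop: in each round, clients train the current global model on their private corpora, and the server combines the returned weights. Below is a minimal, self-contained Python sketch of one such round. It is an illustration under assumptions, not the paper's implementation: a toy linear model stands in for the NMT network, the names `grad`, `local_update`, and `federated_round` are hypothetical, and the encrypted transmission of updates mentioned in the abstract is omitted.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of mean-squared error for a linear model -- a toy stand-in
    # for backpropagation through the NMT network in the paper.
    return (2.0 / len(X)) * X.T @ (X @ w - y)

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # Client side: train on the private local corpus; only the updated
    # weights, never the raw data, are sent back to the server.
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * grad(w, X, y)
    return w

def federated_round(w_global, clients):
    # Server side: aggregate the client models, weighted by local data
    # size (federated averaging). The encryption of transmitted updates
    # that the abstract mentions is omitted here for brevity.
    total = sum(len(X) for X, _ in clients)
    return sum((len(X) / total) * local_update(w_global, X, y)
               for X, y in clients)

# Toy demo: three "institutes", each holding a disjoint private dataset.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches w_true although no client ever shared its data
```

In FedNMT itself, the weights would be those of a Transformer-based NMT model and each client's data a private corpus; weighting each client by its local data size lets larger corpora contribute proportionally more to the global model.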


Notes

  1. http://www.statmt.org/europarl/v9/training/europarl-v9.de-en.tsv.gz
  2. http://data.statmt.org/news-commentary/v14/news-commentary-v14.tsv.gz
  3. http://mteval.cipsc.org.cn:81/agreement/wmt
  4. https://github.com/thunlp/THULAC-Python
  5. https://github.com/Kyubyong/transformer


Acknowledgement

This paper is supported by the National Key Research and Development Program of China under grant No. 2018YFB0204403.

Author information


Corresponding author

Correspondence to Lingwei Kong.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, J., Huang, Z., Kong, L., Li, D., Xiao, J. (2021). Modeling Without Sharing Privacy: Federated Neural Machine Translation. In: Zhang, W., Zou, L., Maamar, Z., Chen, L. (eds) Web Information Systems Engineering – WISE 2021. WISE 2021. Lecture Notes in Computer Science, vol 13080. Springer, Cham. https://doi.org/10.1007/978-3-030-90888-1_18


  • DOI: https://doi.org/10.1007/978-3-030-90888-1_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90887-4

  • Online ISBN: 978-3-030-90888-1

  • eBook Packages: Computer Science, Computer Science (R0)
