Secure Transfer Learning for Machine Fault Diagnosis Under Different Operating Conditions

  • Conference paper
  • Provable and Practical Security (ProvSec 2020)

Abstract

The success of deep learning is largely due to the availability of big training data. However, data privacy can be a serious concern, especially when training or inference is performed on untrusted third-party servers. Fully Homomorphic Encryption (FHE) is a powerful cryptographic technique that enables computation on encrypted data without the decryption key, and can thus protect data privacy in an outsourced computation environment. However, due to its large performance and resource overheads, current applications of FHE to deep learning are still limited to very simple tasks. In this paper, we first propose PrivGD, a neural network training framework that operates on FHE-encrypted data. PrivGD leverages the Single-Instruction Multiple-Data (SIMD) packing feature of FHE to efficiently implement the gradient descent algorithm in the encrypted domain. In particular, PrivGD is, to the best of our knowledge, the first framework to support training a multi-class classification network with double-precision floating-point weights, using an approximated Softmax function evaluated under FHE. We then show how to apply FHE with transfer learning to more complicated real-world applications. We consider outsourced diagnosis services, following the Machine-Learning-as-a-Service paradigm, for multi-class machine fault classification on machine sensor datasets collected under different operating conditions. Since directly applying a source model trained on the source dataset (collected under the source operating condition) to the target dataset (collected under the target operating condition) leads to degraded diagnosis accuracy, we propose to transfer the source model to the target domain by retraining (fine-tuning) the classifier of the source model with data from the target domain. The target-domain data is encrypted with FHE so that its privacy is preserved throughout the transfer learning process. We implement the secure transfer learning process with our PrivGD framework. Experimental results show that by fine-tuning a source model for fewer than 10 epochs on encrypted target-domain data, the model converges with diagnosis accuracy improved by up to 20%, while the whole fine-tuning process takes approximately 3.85 hours on our commodity server.
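The abstract's two key ingredients, an FHE-friendly approximation of Softmax and fine-tuning only the classifier on target-domain data, can be sketched in plaintext NumPy. This is an illustrative sketch only, not the paper's implementation: the truncated-Taylor `approx_softmax`, the `features` and `fine_tune_classifier` helpers, and all hyperparameters are our own assumptions, and the division inside Softmax (which is not natively FHE-friendly) is performed in the clear here.

```python
import math
import numpy as np

def approx_softmax(z, degree=4):
    # FHE schemes such as CKKS support only additions and multiplications,
    # so exp must be replaced by a low-degree polynomial. A truncated Taylor
    # series is used purely for illustration; the paper's actual
    # approximation may differ. (The degree-4 truncation of exp happens to
    # be positive everywhere, so the denominator is safe.)
    e = sum(z ** k / math.factorial(k) for k in range(degree + 1))
    return e / e.sum(axis=1, keepdims=True)  # division done in the clear here

def features(x, W_feat):
    # Frozen "source" feature extractor: in the paper's setup only the
    # classifier is retrained, so the extractor's outputs can be treated
    # as fixed inputs to the fine-tuning step.
    return np.maximum(x @ W_feat, 0.0)

def fine_tune_classifier(X, y, W_feat, n_classes, epochs=10, lr=0.3):
    # Plain gradient descent on the classifier weights only, with the
    # approximated Softmax standing in for the exact one.
    H = features(X, W_feat)
    Y = np.eye(n_classes)[y]                 # one-hot labels
    W = np.zeros((H.shape[1], n_classes))
    for _ in range(epochs):
        P = approx_softmax(H @ W)
        W -= lr * H.T @ (P - Y) / len(X)     # softmax cross-entropy gradient
    return W
```

Under FHE, the matrix product `H @ W`, the polynomial evaluation, and the weight update would each be computed over CKKS-encrypted vectors using SIMD-packed slots rather than in the clear as above.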

This research/project is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Programme (Award A19E3b0099).


Notes

  1. For the CKKS implementation in the SEAL library, the first prime is consumed in the encryption process, the last prime is used to accommodate the scaled plaintext value, and all the other primes in between are consumed one by one after each multiplication.
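The accounting in this note can be made concrete with a small, library-agnostic sketch (plain Python, not SEAL code; the class and helper names are our own): two primes are reserved, and each multiplication consumes one of the remaining primes.

```python
def usable_depth(coeff_modulus_primes):
    # Per the note above: the first prime is consumed by encryption and the
    # last accommodates the scaled plaintext, so only the primes in between
    # are available for multiplications.
    return max(0, len(coeff_modulus_primes) - 2)

class LeveledCiphertext:
    """Toy bookkeeping for a CKKS ciphertext's remaining modulus chain."""

    def __init__(self, remaining):
        self.remaining = remaining

    @classmethod
    def fresh(cls, coeff_modulus_primes):
        return cls(usable_depth(coeff_modulus_primes))

    def multiply(self, other):
        depth = min(self.remaining, other.remaining)
        if depth == 0:
            raise ValueError("modulus chain exhausted: no prime left to consume")
        return LeveledCiphertext(depth - 1)  # one prime consumed per multiply
```

For example, a five-prime chain would leave a multiplicative depth of 3 under this accounting, after which any further multiplication fails.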


Author information

Corresponding author: Chao Jin


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Jin, C., Ragab, M., Aung, K.M.M. (2020). Secure Transfer Learning for Machine Fault Diagnosis Under Different Operating Conditions. In: Nguyen, K., Wu, W., Lam, K.Y., Wang, H. (eds) Provable and Practical Security. ProvSec 2020. Lecture Notes in Computer Science, vol 12505. Springer, Cham. https://doi.org/10.1007/978-3-030-62576-4_14

  • DOI: https://doi.org/10.1007/978-3-030-62576-4_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62575-7

  • Online ISBN: 978-3-030-62576-4

  • eBook Packages: Computer Science, Computer Science (R0)
