
An Efficient 3-Party Framework for Privacy-Preserving Neural Network Inference

  • Conference paper

Computer Security – ESORICS 2020 (ESORICS 2020)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12308)

Abstract

In the era of big data, users pay increasing attention to data privacy in many application fields, such as healthcare and finance. However, in current machine-learning-as-a-service scenarios, service providers require users’ private inputs to complete neural network inference tasks. Previous works have shown that cryptographic tools can be used to achieve secure neural network inference, but a performance gap still remains before those techniques become practical.

In this paper, we focus on the efficiency of privacy-preserving neural network inference and propose novel 3-party secure protocols that implement a variety of nonlinear activation functions, such as ReLU and Sigmoid. Experiments on five popular neural network models demonstrate that our protocols achieve about \(1.2\times \)–\(11.8\times \) and \(1.08\times \)–\(4.8\times \) improvement over the state-of-the-art 3-party protocols (SecureNN [28]) in terms of computation and communication overhead, respectively. Furthermore, we are the first to implement privacy-preserving inference of graph convolutional networks.


Notes

  1. https://github.com/snwagh/securenn-public.

References

  1. The Health Insurance Portability and Accountability Act of 1996 (HIPAA). https://www.hhs.gov/hipaa/index.html

  2. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR). https://gdpr-info.eu/

  3. Agrawal, N., Shahin Shamsabadi, A., Kusner, M.J., Gascón, A.: QUOTIENT: two-party secure neural network training and prediction. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 1231–1247 (2019)

  4. Angelini, E., di Tollo, G., Roli, A.: A neural network approach for credit risk evaluation. Q. Rev. Econ. Finan. 48(4), 733–755 (2008)

  5. Araki, T., Furukawa, J., Lindell, Y., Nof, A., Ohara, K.: High-throughput semi-honest secure three-party computation with an honest majority. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 805–817 (2016)

  6. Barni, M., Failla, P., Kolesnikov, V., Lazzeretti, R., Sadeghi, A.-R., Schneider, T.: Secure evaluation of private linear branching programs with medical applications. In: Backes, M., Ning, P. (eds.) ESORICS 2009. LNCS, vol. 5789, pp. 424–439. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04444-1_26

  7. Barni, M., Orlandi, C., Piva, A.: A privacy-preserving protocol for neural-network-based computation. In: Proceedings of the 8th Workshop on Multimedia and Security, pp. 146–151 (2006)

  8. Beaver, D.: Efficient multiparty protocols using circuit randomization. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 420–432. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_34

  9. Bogdanov, D., Laur, S., Willemson, J.: Sharemind: a framework for fast privacy-preserving computations. In: Jajodia, S., Lopez, J. (eds.) ESORICS 2008. LNCS, vol. 5283, pp. 192–206. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88313-5_13

  10. Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pp. 136–145. IEEE (2001)

  11. Chellapilla, K., Puri, S., Simard, P.: High performance convolutional neural networks for document processing (2006)

  12. Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. In: Advances in Neural Information Processing Systems, pp. 3844–3852 (2016)

  13. Demmler, D., Schneider, T., Zohner, M.: ABY - a framework for efficient mixed-protocol secure two-party computation. In: NDSS (2015)

  14. Fan, J., Vercauteren, F.: Somewhat practical fully homomorphic encryption. IACR Cryptology ePrint Archive 2012, 144 (2012)

  15. Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K., Naehrig, M., Wernsing, J.: CryptoNets: applying neural networks to encrypted data with high throughput and accuracy. In: International Conference on Machine Learning, pp. 201–210 (2016)

  16. Henaff, M., Bruna, J., LeCun, Y.: Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163 (2015)

  17. Ishai, Y., Kilian, J., Nissim, K., Petrank, E.: Extending oblivious transfers efficiently. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 145–161. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45146-4_9

  18. Juvekar, C., Vaikuntanathan, V., Chandrakasan, A.: GAZELLE: a low latency framework for secure neural network inference. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1651–1669 (2018)

  19. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)

  20. Kolesnikov, V., Kumaresan, R.: Improved OT extension for transferring short secrets. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8043, pp. 54–70. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_4

  21. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  22. Liu, J., Juuti, M., Lu, Y., Asokan, N.: Oblivious neural network predictions via MiniONN transformations. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 619–631 (2017)

  23. Mohassel, P., Rindal, P.: ABY3: a mixed protocol framework for machine learning. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 35–52 (2018)

  24. Mohassel, P., Zhang, Y.: SecureML: a system for scalable privacy-preserving machine learning. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38. IEEE (2017)

  25. Orlandi, C., Piva, A., Barni, M.: Oblivious neural network computing via homomorphic encryption. EURASIP J. Inf. Secur. 2007, 1–11 (2007). https://doi.org/10.1155/2007/37343

  26. Riazi, M.S., Samragh, M., Chen, H., Laine, K., Lauter, K., Koushanfar, F.: XONN: XNOR-based oblivious deep neural network inference. In: 28th USENIX Security Symposium (USENIX Security 19), pp. 1501–1518 (2019)

  27. Riazi, M.S., Weinert, C., Tkachenko, O., Songhori, E.M., Schneider, T., Koushanfar, F.: Chameleon: a hybrid secure computation framework for machine learning applications. In: Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pp. 707–721 (2018)

  28. Wagh, S., Gupta, D., Chandran, N.: SecureNN: 3-party secure computation for neural network training. Proc. Priv. Enhanc. Technol. 2019(3), 26–49 (2019)

  29. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596 (2019)

  30. Yao, A.C.C.: How to generate and exchange secrets. In: 27th Annual Symposium on Foundations of Computer Science (SFCS 1986), pp. 162–167. IEEE (1986)


Acknowledgment

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02040400) and the DongGuan Innovative Research Team Program (Grant No. 201636000100038).

Author information

Corresponding author

Correspondence to Liyan Shen.

Appendices

A Graph Convolutional Network

For spectral-based GCNs, the goal is to learn a function of signals/features on a graph \(\mathcal {G=(V,E},\textit{\textbf{A}})\), where \(\mathcal {V}\) is a finite set of \(|\mathcal {V}|=n\) vertices, \(\mathcal {E}\) is a set of edges, and \(\textit{\textbf{A}}\in \mathbb {R}^{n\times n}\) is a representative description of the graph structure, typically in the form of an adjacency matrix. It is computed by the server from the training data, and the graph structure is identical for all signals [16]. A signal \(\textit{\textbf{x}}\in \mathbb {R}^{n}\) assigns a scalar \(x_v\) to each node \(v\). An essential operator in spectral graph analysis is the graph Laplacian \(\textit{\textbf{L}}\); its normalized definition is \(\textit{\textbf{L}}=\textit{\textbf{I}}_n-\textit{\textbf{D}}^{-1/2}\textit{\textbf{A}}\textit{\textbf{D}}^{-1/2}\), where \(\textit{\textbf{I}}_n\) is the identity matrix and \(\textit{\textbf{D}}\) is the diagonal node degree matrix of \(\textit{\textbf{A}}\).
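As an illustration (not part of the paper's protocol), the normalized Laplacian above can be computed from a dense adjacency matrix in a few lines of NumPy; the function name is ours:

```python
import numpy as np

def normalized_laplacian(A):
    """L = I_n - D^{-1/2} A D^{-1/2}, with D the diagonal degree matrix of A."""
    d = A.sum(axis=1)                        # node degrees (row sums of A)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5     # guard against isolated nodes
    # Scale row i by d_i^{-1/2} and column j by d_j^{-1/2} via broadcasting.
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
```

For a graph consisting of a single undirected edge this yields \([[1,-1],[-1,1]]\), whose eigenvalues 0 and 2 are the extremes of the spectrum of any normalized Laplacian.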

Defferrard et al. [12] proposed to approximate graph filters by a truncated expansion in terms of Chebyshev polynomials, which are defined recursively as \(T_k(x)=2xT_{k-1}(x)-T_{k-2}(x)\) with \(T_0(x)=1\) and \(T_1(x)=x\). Filtering of a signal \(\textit{\textbf{x}}\) with a K-localized filter \(g_{\varvec{\theta }}\) can then be performed as \(g_{\varvec{\theta }}*\textit{\textbf{x}}=\sum _{k=0}^{K-1}\varvec{\theta }_k T_k(\tilde{\textit{\textbf{L}}})\textit{\textbf{x}}\), where \(\tilde{\textit{\textbf{L}}}=\frac{2}{\lambda _{max}}\textit{\textbf{L}}-\textit{\textbf{I}}_n\), \(\lambda _{max}\) is the largest eigenvalue of \(\textit{\textbf{L}}\), and \(\varvec{\theta }\in \mathbb {R}^K\) is a vector of Chebyshev coefficients.

Generalized to a signal matrix \(\textit{\textbf{X}}\in \mathbb {R}^{n\times c}\) with a c-dimensional feature vector per node and f feature maps, the definition becomes \(\textit{\textbf{Y}}=\sum _{k=0}^{K-1}T_k(\tilde{\textit{\textbf{L}}})\varvec{X\Theta }_k\), where \(\textit{\textbf{Y}}\in \mathbb {R}^{n\times f}\), \(\varvec{\Theta }_k\in \mathbb {R}^{c\times f}\), and the total number of trainable parameters per layer is \(c\times f\times K\) (\(\varvec{\Theta }\in \mathbb {R}^{K\times c\times f}\)). Through its graph convolution layers, a GCN captures the feature information of all neighboring nodes, preserving both the network topology and the node features.
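The layer above can be sketched in plain (non-private) NumPy using the Chebyshev recurrence; `cheb_conv` and its argument names are our own illustration, assuming the rescaled Laplacian has been precomputed:

```python
import numpy as np

def cheb_conv(X, L_tilde, Theta):
    """Y = sum_{k=0}^{K-1} T_k(L_tilde) X Theta_k.

    X: (n, c) signal matrix; L_tilde: (n, n) rescaled Laplacian;
    Theta: (K, c, f) tensor of Chebyshev coefficient matrices.
    """
    K = Theta.shape[0]
    Tx = [X]                                   # T_0(L~) X = X
    if K > 1:
        Tx.append(L_tilde @ X)                 # T_1(L~) X = L~ X
    for _ in range(2, K):
        # Recurrence: T_k(L~) X = 2 L~ T_{k-1}(L~) X - T_{k-2}(L~) X
        Tx.append(2 * L_tilde @ Tx[-1] - Tx[-2])
    return sum(Tx[k] @ Theta[k] for k in range(K))   # (n, f) output
```

With \(K=1\) the layer degenerates to a plain linear map \(\varvec{X\Theta }_0\), consistent with the \(c\times f\times K\) parameter count stated above.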

However, privacy-preserving GCN inference is not suitable for node classification tasks, in which the (unlabeled) test nodes are already included in GCN training; such a model cannot quickly generate embeddings and make predictions for unseen nodes [19].

B Maxpool Protocol

(Figures f and g: the maxpool protocol.)

C Neural Network Structure

Fig. 3. The neural network A presented in SecureML [24]

Fig. 4. The neural network B presented in Chameleon [27]

Fig. 5. The neural network C/D presented in MiniONN [22] and [21] resp.

Fig. 6. Graph convolutional neural network trained from the MNIST dataset


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Shen, L., Chen, X., Shi, J., Dong, Y., Fang, B. (2020). An Efficient 3-Party Framework for Privacy-Preserving Neural Network Inference. In: Chen, L., Li, N., Liang, K., Schneider, S. (eds) Computer Security – ESORICS 2020. ESORICS 2020. Lecture Notes in Computer Science(), vol 12308. Springer, Cham. https://doi.org/10.1007/978-3-030-58951-6_21


  • DOI: https://doi.org/10.1007/978-3-030-58951-6_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58950-9

  • Online ISBN: 978-3-030-58951-6

  • eBook Packages: Computer Science; Computer Science (R0)
