MediSC: Towards Secure and Lightweight Deep Learning as a Medical Diagnostic Service

Part of the Lecture Notes in Computer Science book series (LNCS, volume 12972)

Abstract

The striking progress of deep learning paves the way towards intelligent and high-quality medical diagnostic services. Enterprises deploy such services via neural network (NN) inference, yet are confronted with rising privacy concerns over both the medical data being diagnosed and the pre-trained NN models. We propose MediSC, a system framework that enables enterprises to offer secure medical diagnostic services to their customers by executing NN inference in the ciphertext domain. MediSC ensures the privacy of both parties with cryptographic guarantees. At its heart, we present an efficient and communication-optimized secure inference protocol that relies purely on lightweight secret sharing techniques and can well cope with the commonly used linear and non-linear NN layers. Compared to garbled circuits based solutions, the latency and communication of MediSC are 24\(\times \) lower and 868\(\times \) less for the secure ReLU, and 20\(\times \) lower and 314\(\times \) less for the secure Max-pool. We evaluate MediSC on two benchmark and four real-world medical datasets, and comprehensively compare it with prior art. The results demonstrate the promising performance of MediSC, which is much more bandwidth-efficient than prior works.

Keywords

  • Secure computation
  • Privacy-preserving medical service
  • Neural network inference
  • Secret sharing


Notes

  1. From a direct comparison with the results reported in the SOTA, which demands highly optimized implementations with GPU acceleration; our performance results are not based on such optimizations.

  2. We refer the readers to Appendix Sect. A for more details.

  3. Biases can be added to the medical service’s shares locally.

  4. Preprocessing: 243 MB in MediSC and 4915 MB in Delphi.

References

  1. Breast cancer. https://www.kaggle.com/uciml/breast-cancer-wisconsin-data/

  2. Diabetes. https://www.kaggle.com/uciml/pima-indians-diabetes-database

  3. Liver disease. https://www.kaggle.com/uciml/indian-liver-patient-records

  4. Thyroid. https://archive.ics.uci.edu/ml/datasets/Thyroid+Disease

  5. Google DeepMind Health (2020). https://deepmind.com/blog/announcements/deepmind-health-joins-google-health

  6. Microsoft Project InnerEye (2020). https://www.microsoft.com/en-us/research/project/medical-image-analysis/

  7. PathAI (2020). https://www.pathai.com/

  8. 104th United States Congress: Health Insurance Portability and Accountability Act of 1996 (HIPAA) (1996). https://www.hhs.gov/hipaa/index.html

  9. Atallah, M., Bykova, M., Li, J., Frikken, K., Topkara, M.: Private collaborative forecasting and benchmarking. In: Proceedings of WPES (2004)

  10. Barni, M., Failla, P., Lazzeretti, R., Sadeghi, A.R., Schneider, T.: Privacy-preserving ECG classification with branching programs and neural networks. IEEE Trans. Inf. Forensics Secur. 6, 452–468 (2011)

  11. Beaver, D.: Efficient multiparty protocols using circuit randomization. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 420–432. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_34

  12. Brutzkus, A., Gilad-Bachrach, R., Elisha, O.: Low latency privacy preserving inference. In: Proceedings of ICML, pp. 812–821. PMLR (2019)

  13. European Parliament and the Council: The General Data Protection Regulation (GDPR) (2016). http://data.europa.eu/eli/reg/2016/679/2016-05-04

  14. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of ACM CCS (2015)

  15. Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K., Naehrig, M., Wernsing, J.: CryptoNets: applying neural networks to encrypted data with high throughput and accuracy. In: Proceedings of ICML (2016)

  16. Goldreich, O., Micali, S., Wigderson, A.: How to play ANY mental game or a completeness theorem for protocols with honest majority. In: Proceedings of STOC (1987)

  17. Harris, D.: A taxonomy of parallel prefix networks. In: The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers 2003, vol. 2, pp. 2213–2217. IEEE (2003)

  18. Jacobi, A., Chung, M., Bernheim, A., Eber, C.: Portable chest X-ray in coronavirus disease-19 (COVID-19): a pictorial review. Clin. Imaging 64, 35–42 (2020)

  19. Juvekar, C., Vaikuntanathan, V., Chandrakasan, A.: GAZELLE: a low latency framework for secure neural network inference. In: Proceedings of 27th USENIX Security (2018)

  20. Leshno, M., Lin, V.Y., Pinkus, A., Schocken, S.: Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 6(6), 861–867 (1993)

  21. Li, S., et al.: FALCON: a Fourier transform based approach for fast and secure convolutional neural network predictions. In: Proceedings of IEEE/CVF CVPR (2020)

  22. Liu, J., Juuti, M., Lu, Y., Asokan, N.: Oblivious neural network predictions via MiniONN transformations. In: Proceedings of ACM CCS (2017)

  23. Liu, X., Wu, B., Yuan, X., Yi, X.: Leia: a lightweight cryptographic neural network inference system at the edge. IACR Cryptology ePrint Archive 2020, 463 (2020)

  24. Liu, X., Yi, X.: Privacy-preserving collaborative medical time series analysis based on dynamic time warping. In: Sako, K., Schneider, S., Ryan, P.Y.A. (eds.) ESORICS 2019. LNCS, vol. 11736, pp. 439–460. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29962-0_21

  25. Liu, X., Zheng, Y., Yi, X., Nepal, S.: Privacy-preserving collaborative analytics on medical time series data. IEEE Trans. Dependable Secur. Comput., 1 (2020). https://doi.org/10.1109/TDSC.2020.3035592

  26. Lou, Q., Jiang, L.: SHE: a fast and accurate deep neural network for encrypted data. In: Proceedings of NeurIPS, pp. 10035–10043 (2019)

  27. Lou, Q., Lu, W.j., Hong, C., Jiang, L.: FALCON: fast spectral inference on encrypted data. In: Proceedings of NeurIPS, pp. 2364–2374 (2020)

  28. Mishra, P., Lehmkuhl, R., Srinivasan, A., Zheng, W., Popa, R.A.: Delphi: a cryptographic inference service for neural networks. In: USENIX Security Symposium (2020)

  29. Mohassel, P., Zhang, Y.: SecureML: a system for scalable privacy-preserving machine learning. In: Proceedings of IEEE S&P (2017)

  30. Riazi, M.S., Samragh, M., Chen, H., Laine, K., Lauter, K., Koushanfar, F.: XONN: XNOR-based oblivious deep neural network inference. In: Proceedings of 28th USENIX Security (2019)

  31. Riazi, M.S., Weinert, C., Tkachenko, O., Songhori, E.M., Schneider, T., Koushanfar, F.: Chameleon: a hybrid secure computation framework for machine learning applications. In: Proceedings of AsiaCCS (2018)

  32. Wagh, S., Gupta, D., Chandran, N.: SecureNN: 3-party secure computation for neural network training. In: Proceedings of PETS (2019)

  33. Wang, X.: Flexsc (2018). https://github.com/wangxiao1254/FlexSC

  34. Xie, P., Wu, B., Sun, G.: BAYHENN: combining Bayesian deep learning and homomorphic encryption for secure DNN inference. In: Proceedings of IJCAI, pp. 4831–4837 (2019)

  35. Yu, L., Liu, L., Pu, C., Gursoy, M.E., Truex, S.: Differentially private model publishing for deep learning. In: Proceedings of S&P. IEEE (2019)

  36. Zhang, Q., Wang, C., Wu, H., Xin, C., Phuong, T.V.: GELU-Net: a globally encrypted, locally unencrypted deep neural network for privacy-preserved learning. In: Proceedings of IJCAI, pp. 3933–3939 (2018)

  37. Zheng, Y., Duan, H., Wang, C.: Towards secure and efficient outsourcing of machine learning classification. In: Sako, K., Schneider, S., Ryan, P.Y.A. (eds.) ESORICS 2019. LNCS, vol. 11735, pp. 22–40. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29959-0_2


Acknowledgment

This work was supported in part by Australian Research Council (ARC) Discovery Projects (No. DP200103308, No. DP180103251, and No. DP190102835), ARC Linkage Project (No. LP160101766), and HITSZ Start-up Research Grant (No. BA45001023).

Author information

Corresponding author

Correspondence to Yifeng Zheng.

A Further Implementation Details

1.1 A.1 More Details of Implementation Setting

MediSC is implemented in Java. Recall that MediSC’s secure NN inference protocol is computed in the secret sharing domain over the ring \(\mathbb {Z}_{2^\ell }\), i.e., all real-valued model weights are converted into \(\ell \)-bit signed fixed-point integers and secretly shared in \(\mathbb {Z}_{2^\ell }\). Following the state-of-the-art work [28], we choose the ring as \(\mathbb {Z}_{2^{32}}\), i.e., a 32-bit ring with modulus 4294967296. To represent signed integers, we split the ring into two halves, where the lower half \([0,2^{31}-1]\) represents the non-negative values and the upper half \([2^{31}, 2^{32}-1]\) represents the negative values. In this way, both the sign and the secret value are well protected. Besides, to convert the real-valued model weights to 32-bit fixed-point integers, we scale and quantize the weights with a scaling factor \(2^s\), where s is the bit length of the fractional part. For M1, M2, and C1, the factor is set to 1024, 128, and 64, respectively. For all medical datasets, the factor is set to 1024.
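As an illustrative sketch (not MediSC’s actual Java code; the helper names `encode`/`decode` are our own), the signed fixed-point encoding over \(\mathbb {Z}_{2^{32}}\) described above can be expressed as:

```python
RING = 1 << 32      # modulus of Z_{2^32}, i.e., 4294967296
SCALE = 1024        # scaling factor 2^s for M1 (s = 10 fractional bits)

def encode(x: float, scale: int = SCALE) -> int:
    """Quantize a real value and map it into Z_{2^32}.

    Non-negative values land in the lower half [0, 2^31 - 1];
    negative values land in the upper half [2^31, 2^32 - 1].
    """
    return int(round(x * scale)) % RING

def decode(v: int, scale: int = SCALE) -> float:
    """Invert encode(), reading the upper half of the ring as negative."""
    if v >= RING // 2:
        v -= RING
    return v / scale
```

For example, `encode(-1.5)` maps to 4294965760 in the upper half of the ring, and `decode()` recovers -1.5 exactly, since 1.5 is representable with 10 fractional bits.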

Multiplication of two fixed-point integers can overflow the capacity of the ring \(\mathbb {Z}_{2^\ell }\), since the fractional part grows to 2s bits in the resulting product. To ensure correctness, every intermediate result of multiplying two shares must be rescaled down by \(2^s\) before subsequent operations. Following prior works [28, 32], we adopt the secure local truncation scheme proposed in [29], which simply discards the last s fractional bits to adjust the product back to \(\ell \) bits.
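The local truncation of [29] can be sketched as follows (our own helper names; a fixed demo share keeps the run reproducible). Each party truncates its share purely locally; the reconstructed product is then correct up to at most one unit in the last fractional bit, except with small probability over the share randomness:

```python
RING = 1 << 32        # Z_{2^32}
S = 10                # s fractional bits; scaling factor 2^s = 1024

def trunc_p0(z0: int) -> int:
    # Party 0 simply drops the last s bits of its share.
    return z0 >> S

def trunc_p1(z1: int) -> int:
    # Party 1 truncates the additive complement of its share.
    return (RING - ((RING - z1) >> S)) % RING

# The product of two encoded values carries 2s fractional bits.
x, y = 3.25, -2.5
z = (int(x * (1 << S)) * int(y * (1 << S))) % RING

# Additive shares of z; a fixed share keeps the demo reproducible.
z0 = 123456789
z1 = (z - z0) % RING

r = (trunc_p0(z0) + trunc_p1(z1)) % RING      # reconstruct the truncated product
signed = r - RING if r >= RING // 2 else r    # map upper half to negatives
result = signed / (1 << S)                    # recovers x * y = -8.125 here
```

Note that neither party learns anything from truncating its own share; the occasional off-by-one in the least significant fractional bit is the price of avoiding an interactive truncation protocol, as analyzed in [29].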

1.2 A.2 Training Details

We provide in Table 12 the detailed settings for training over the plaintext datasets. Recall that we train the models M1 and M2 on MNIST, the model C1 on CIFAR-10, and the remaining models on four publicly available medical datasets: Breast Cancer [1], Diabetes [2], Liver Disease [3], and Thyroid Disease [4]. Training is executed on an NVIDIA Tesla V100 GPU with a PyTorch backend. We adopt the SGD optimizer for M1, M2, C1, and Breast Cancer, and the Adam optimizer for Diabetes, Liver Disease, and Thyroid, with an adaptive learning rate following cosine decay every 50 epochs. For all datasets, the image pixels and medical features are normalized to integers in [0, 255]. In this way, the hospital’s inputs do not need to be preprocessed in our secure NN inference protocol.
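Because the inputs are already small integers, the hospital can secret-share them directly. A minimal sketch of the additive sharing (our own helper names, not the paper’s code):

```python
import random

RING = 1 << 32   # Z_{2^32}, matching the protocol's ring

def share(value: int, rng: random.Random):
    """Split an integer into two uniformly random additive shares over Z_{2^32}."""
    s0 = rng.randrange(RING)
    return s0, (value - s0) % RING

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % RING

rng = random.Random(0)
pixels = [0, 17, 128, 255]                            # normalized integer inputs
shared = [share(p, rng) for p in pixels]              # one pair per feature
recovered = [reconstruct(a, b) for a, b in shared]    # equals pixels
```

Each individual share is uniformly distributed over the ring, so a single share reveals nothing about the underlying pixel or medical feature.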

Table 12. Summary of training settings.
Table 13. Model architecture of M1.
Table 14. Model architecture of M2.
Table 15. Model architecture of C1.

1.3 A.3 More Details of Model Architecture

In this section, we present the detailed model architectures used in our paper. The models M1 and M2 are trained on MNIST. M1 is a Multi-Layer Perceptron consisting of 3 fully connected (FC) layers with ReLU activation, which has been used in prior works [15, 19, 22, 30, 31]. The architecture of M1 is summarized in Table 13. As shown in Table 14, M2 comprises 3 convolutional (CONV) layers with ReLU, 2 average pooling (AP) layers, and an FC layer, and has been adopted in prior works [19, 22, 27, 30]. For CIFAR-10, the model C1 (the MiniONN network) consists of 7 CONV layers with ReLU, 2 AP layers, and an FC layer, as shown in Table 15. It has been adopted in prior works [19, 22, 27, 30, 31] for benchmarking evaluation. Table 16, Table 17, and Table 18 report the architectures of the models on Breast Cancer, Diabetes, and Liver Disease, respectively; they have been adopted in prior work [30]. Table 19 reports the model architecture evaluated on the Thyroid disease dataset.
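For concreteness, the plaintext forward pass of an M1-style MLP can be sketched as below. The layer widths 784-128-128-10 are an assumption (the MLP commonly used in the cited prior works); the authoritative widths are in Table 13, and the weights here are random placeholders:

```python
import random

DIMS = [784, 128, 128, 10]   # assumed widths; see Table 13 for the actual ones

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def fc(weights, bias, v):
    """One fully connected layer: weights is n_out rows of n_in values."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

rng = random.Random(0)
layers = [([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(DIMS, DIMS[1:])]

def forward(x):
    for i, (w, b) in enumerate(layers):
        x = fc(w, b, x)
        if i < len(layers) - 1:   # ReLU between FC layers, none after the last
            x = relu(x)
    return x

logits = forward([0.5] * 784)     # a dummy flattened 28x28 input
```

In the secure protocol, each `fc` becomes a secret-shared matrix-vector product and each `relu` is evaluated by the secure ReLU protocol, but the layer-by-layer structure is the same.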

Table 16. Model architecture of breast cancer.
Table 17. Model architecture of diabetes.
Table 18. Model architecture of liver disease.
Table 19. Model architecture of thyroid.

Rights and permissions

Reprints and Permissions

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, X., Zheng, Y., Yuan, X., Yi, X. (2021). MediSC: Towards Secure and Lightweight Deep Learning as a Medical Diagnostic Service. In: Bertino, E., Shulman, H., Waidner, M. (eds) Computer Security – ESORICS 2021. Lecture Notes in Computer Science, vol 12972. Springer, Cham. https://doi.org/10.1007/978-3-030-88418-5_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-88418-5_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88417-8

  • Online ISBN: 978-3-030-88418-5

  • eBook Packages: Computer Science, Computer Science (R0)