DataMix: Efficient Privacy-Preserving Edge-Cloud Inference

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12356)

Abstract

Deep neural networks are widely deployed on edge devices (e.g., for computer vision and speech recognition). Users either perform inference locally (i.e., edge-based) or send the data to the cloud and run inference remotely (i.e., cloud-based). However, both solutions have limitations: edge devices are heavily constrained by insufficient hardware resources and cannot afford to run large models, while cloud servers, if not trustworthy, raise serious privacy issues. In this paper, we mediate between the resource-constrained edge devices and the privacy-invasive cloud servers by introducing DataMix, a novel privacy-preserving edge-cloud inference framework. We offload the majority of the computation to the cloud and leverage a pair of mixing and de-mixing operations, inspired by mixup, to protect the privacy of the data transmitted to the cloud. Our framework has three advantages. First, it is privacy-preserving: the mixing cannot be inverted without the user's private mixing coefficients. Second, it is accuracy-preserving: it takes advantage of the space spanned by images, and we train the model in a mixing-aware manner to maintain accuracy. Third, it is efficient on the edge: the majority of the workload is delegated to the cloud, and the mixing and de-mixing processes introduce very little extra computation. Moreover, our framework incurs small communication overhead and maintains high hardware utilization on the cloud. Extensive experiments on multiple computer vision and speech recognition datasets demonstrate that our framework greatly reduces local computation on the edge (to fewer than 20% of the FLOPs) with negligible loss of accuracy and no leakage of private information.
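The mixing/de-mixing pair described in the abstract can be conveyed with a small sketch. This is an illustrative assumption, not the paper's actual protocol: the pairwise convex mixing, the linear de-mixing step, and all function names below are hypothetical. Two inputs are mixed with a coefficient `lam` that never leaves the edge device; the cloud sees only the mixtures, and the edge inverts the mixing on the returned results. The inversion is exact only when the cloud model is linear, which is why the paper trains the network in a mixing-aware manner.

```python
def mix(x1, x2, lam):
    """Mixup-style convex combinations of two flattened inputs.
    `lam` is the user's private mixing coefficient and stays on the edge;
    only the mixed vectors m1, m2 are transmitted to the cloud."""
    m1 = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    m2 = [(1 - lam) * a + lam * b for a, b in zip(x1, x2)]
    return m1, m2


def demix(y1, y2, lam):
    """Invert the 2x2 mixing matrix [[lam, 1-lam], [1-lam, lam]] on the
    cloud's outputs, recovering the per-input results on the edge."""
    det = 2 * lam - 1  # determinant of the mixing matrix (needs lam != 0.5)
    x1 = [(lam * a - (1 - lam) * b) / det for a, b in zip(y1, y2)]
    x2 = [(lam * b - (1 - lam) * a) / det for a, b in zip(y1, y2)]
    return x1, x2


# Toy round trip with an identity "cloud model": de-mixing recovers the inputs.
a, b, lam = [1.0, 2.0], [3.0, -1.0], 0.7
m1, m2 = mix(a, b, lam)
r1, r2 = demix(m1, m2, lam)  # r1 ≈ a, r2 ≈ b up to floating-point error
```

The real DataMix scheme is richer than this two-sample toy; the sketch only conveys the shape of the mix/de-mix pair and why the private coefficient is needed to undo the mixing.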

Acknowledgements

We thank MIT-IBM Watson AI Lab, MIT Quest for Intelligence, Samsung, and Facebook for supporting this research. We thank AWS Machine Learning Research Awards for providing the computational resources.

Supplementary material

Supplementary material 1 (mp4 88350 KB)

Supplementary material 2 (pdf 16944 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Massachusetts Institute of Technology, Cambridge, USA
  2. Shanghai Jiao Tong University, Shanghai, China
  3. MIT-IBM Watson AI Lab, Cambridge, USA