Privacy-Preserving Contrastive Explanations with Local Foil Trees

Part of the Lecture Notes in Computer Science book series (LNCS, volume 13301)

Abstract

We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. Specifically, we provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data and the model itself. An extended version of this paper is available at the Cryptology ePrint Archive [16].
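
For intuition, and entirely outside the secure setting of this paper, the local foil-tree idea of van der Waa et al. [15] can be sketched in plain Python: label samples around the instance of interest with the black-box model, fit a shallow surrogate decision tree on them, pick a nearby sample of the foil class B, and read off the tree splits on which the two decision paths differ. The sketch below is a simplified, non-private illustration under assumed names (a scikit-learn surrogate tree, and the Euclidean-nearest foil-labelled sample rather than the nearest foil leaf of [15]); it is not the secure protocol of this paper.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def contrastive_explanation(black_box, x, X_local, foil_class, max_depth=4):
        # Plain-text sketch of a local foil tree, not the secure protocol.
        # 1. Label the local (perturbed/neighbouring) samples with the black box.
        y_local = black_box.predict(X_local)

        # 2. Fit a shallow surrogate decision tree ("foil tree") around x.
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_local, y_local)

        # 3. Candidate foils: local samples that the black box labels as the
        #    foil class (leaf membership alone is not enough, see Note 1).
        foil_mask = y_local == foil_class
        if not foil_mask.any():
            return None  # no foil found in the local neighbourhood

        # 4. Simplification: take the Euclidean-nearest foil-labelled sample
        #    instead of searching the tree for the nearest foil leaf.
        X_foil = X_local[foil_mask]
        foil = X_foil[np.argmin(np.linalg.norm(X_foil - x, axis=1))]

        # 5. Contrast the decision paths of x and the foil through the tree:
        #    the split features on which the paths diverge form the explanation.
        fact_nodes = set(tree.decision_path(x.reshape(1, -1)).indices)
        foil_nodes = tree.decision_path(foil.reshape(1, -1)).indices
        diverging = {int(tree.tree_.feature[n]) for n in foil_nodes if n not in fact_nodes}
        diverging.discard(-2)  # -2 marks leaf nodes in scikit-learn trees
        return foil, sorted(diverging)

In such a sketch, X_local would be obtained by perturbing x or sampling its neighbourhood, as in LIME-style surrogate explanations [12]; the contribution of this paper is to perform the corresponding training and querying steps under secure multi-party computation.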

Keywords

  • Explainable AI
  • Secure multi-party computation
  • Decision tree
  • Foil tree

Notes

  1. Note that it is possible for samples in a foil leaf to have a classification different from B, so care needs to be taken in determining the foil sample.
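
For illustration, a minimal sketch of the check this note calls for, with hypothetical arrays that are not taken from the paper: a sample falling in the foil leaf of the surrogate tree only qualifies as a foil sample if the black-box model also classifies it as B.

    import numpy as np

    # Hypothetical data: black-box labels and surrogate-tree leaf ids of the
    # local samples; foil class B = 1 and foil leaf id = 5 are illustrative.
    y_local = np.array([0, 1, 1, 0, 0])   # black-box predictions of local samples
    leaf_ids = np.array([3, 5, 5, 3, 5])  # leaf assigned to each local sample
    foil_class, foil_leaf = 1, 5

    # A sample in the foil leaf is a valid foil candidate only if the black box
    # also labels it B; the last sample (leaf 5, label 0) is excluded.
    valid = (leaf_ids == foil_leaf) & (y_local == foil_class)
    print(np.flatnonzero(valid))  # -> [1 2]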

References

  1. Abspoel, M., Escudero, D., Volgushev, N.: Secure training of decision trees with continuous attributes. Priv. Enhanc. Technol. 2021(1), 167–187 (2021)

  2. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and regression trees. Wadsworth (1984)

  3. Cramer, R., Damgård, I., Nielsen, J.B.: Secure Multiparty Computation and Secret Sharing. Cambridge University Press (2015)

  4. de Hoogh, S., Schoenmakers, B., Chen, P., op den Akker, H.: Practical secure decision tree learning in a teletreatment application. In: Christin, N., Safavi-Naini, R. (eds.) FC 2014. LNCS, vol. 8437, pp. 179–194. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-45472-5_12

  5. Dua, D., Graff, C.: UCI machine learning repository (2017)

  6. Dwork, C.: Differential privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1

  7. Adams, S., et al.: Privacy-preserving training of tree ensembles over continuous data. CoRR abs/2106.02769 (2021)

  8. Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., Ristenpart, T.: Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing. In: Fu, K., Jung, J. (eds.) Proceedings of the 23rd USENIX Security Symposium, San Diego, CA, USA, 20–22 August 2014. USENIX Association, pp. 17–32 (2014)

  9. Harder, F., Bauer, M., Park, M.: Interpretable and differentially private predictions. In: The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI. AAAI Press, pp. 4083–4090 (2020)

  10. Lundberg, S.M., Lee, S.-I.: A unified approach to interpreting model predictions. In: Annual Conference on Neural Information Processing Systems. Advances in Neural Information Processing Systems, vol. 30, pp. 4765–4774 (2017)

  11. Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 223–238. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48910-X_16

  12. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. ACM, pp. 1135–1144 (2016)

  13. Schoenmakers, B.: MPyC - Secure Multiparty Computation in Python. https://github.com/lschoe/mpyc

  14. van der Waa, J., Nieuwburg, E., Cremers, A.H.M., Neerincx, M.A.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291, 103404 (2021)

  15. van der Waa, J., Robeer, M., van Diggelen, J., Brinkhuis, M., Neerincx, M.: Contrastive explanations with local foil trees. CoRR abs/1806.07470 (2018)

  16. Veugen, T., Kamphorst, B., Marcus, M.: Privacy-preserving contrastive explanations with local foil trees. IACR Cryptology ePrint Archive, no. 360, pp. 1–20 (2022)

  17. Yang, Z., Zhang, J., Chang, E.C., Liang, Z.: Neural network inversion in adversarial setting via background knowledge alignment. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, pp. 225–240, November 2019

  18. Zhang, Y., Jia, R., Pei, H., Wang, W., Li, B., Song, D.: The secret revealer: generative model-inversion attacks against deep neural networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, pp. 250–258, June 2020

Acknowledgements

The research in this paper was carried out within the FATE project, which is funded by the TNO Appl.AI program (an internal AI program). We additionally thank Jasper van der Waa for his helpful comments and suggestions.

Author information

Correspondence to Thijs Veugen.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Veugen, T., Kamphorst, B., Marcus, M. (2022). Privacy-Preserving Contrastive Explanations with Local Foil Trees. In: Dolev, S., Katz, J., Meisels, A. (eds) Cyber Security, Cryptology, and Machine Learning. CSCML 2022. Lecture Notes in Computer Science, vol 13301. Springer, Cham. https://doi.org/10.1007/978-3-031-07689-3_7

  • DOI: https://doi.org/10.1007/978-3-031-07689-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-07688-6

  • Online ISBN: 978-3-031-07689-3

  • eBook Packages: Computer Science, Computer Science (R0)