Abstract
Visual counterfactual explanations (VCEs) in image space are an important tool for understanding the decisions of image classifiers, as they show which changes to an image would alter the classifier's decision. Generating them in image space is challenging and requires robust models due to the problem of adversarial examples. Existing techniques for generating VCEs in image space suffer from spurious changes in the background. Our novel perturbation model for VCEs, together with its efficient optimization via our novel Auto-Frank-Wolfe scheme, yields sparse VCEs that produce subtle changes specific to the target class. Moreover, we show that VCEs can be used to detect undesired behavior of ImageNet classifiers caused by spurious features in the ImageNet dataset. Code is available at https://github.com/valentyn1boreiko/SVCEs_code.
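To make the underlying optimization concrete, below is a minimal sketch of generating a VCE with a plain Frank-Wolfe iteration in PyTorch. This is not the paper's Auto-Frank-Wolfe algorithm (which additionally adapts the step size and handles the box constraint more carefully); all names and hyperparameter values here (model, x0, target, eps, p, steps) are illustrative assumptions for a single image x0 with values in [0, 1].

```python
# Minimal sketch (NOT the paper's exact Auto-Frank-Wolfe scheme): maximize the
# target-class log-probability of a (robust) classifier inside an l_p-ball of
# radius eps around the original image x0.
import torch
import torch.nn.functional as F

def lp_ball_lmo(grad, eps, p):
    """Linear maximization oracle: argmax_{||s||_p <= eps} <grad, s>.
    Closed form via Hoelder duality with the dual exponent q = p/(p-1)."""
    q = p / (p - 1.0)
    w = grad.abs().pow(q - 1.0)                      # shape of the maximizer
    norm = torch.linalg.vector_norm(w, ord=p).clamp_min(1e-12)
    return eps * grad.sign() * w / norm              # satisfies ||s||_p = eps

def frank_wolfe_vce(model, x0, target, eps=12.5, p=1.5, steps=75):
    """Plain Frank-Wolfe with the classic 2/(t+2) step size. The [0,1] box
    constraint is handled by simple clipping, a simplification of the paper."""
    x = x0.clone()
    for t in range(steps):
        x = x.detach().requires_grad_(True)
        log_probs = F.log_softmax(model(x), dim=1)
        grad = torch.autograd.grad(log_probs[:, target].sum(), x)[0]
        v = x0 + lp_ball_lmo(grad, eps, p)           # best feasible point
        x = x + 2.0 / (t + 2.0) * (v - x)            # stays in the l_p ball
        x = x.clamp(0.0, 1.0)                        # keep a valid image
    return x.detach()
```

Frank-Wolfe is a natural fit here because the linear maximization over an l_p-ball has a closed form, so no projection onto the ball is ever needed; for p between 1 and 2 (e.g. p = 1.5) the resulting updates concentrate the perturbation on few pixels, which is what makes the counterfactuals sparse.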
Acknowledgement
M.H., P.B., and V.B. acknowledge support by the DFG Excellence Cluster Machine Learning - New Perspectives for Science, EXC 2064/1, project number 390727645.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Boreiko, V., Augustin, M., Croce, F., Berens, P., Hein, M. (2022). Sparse Visual Counterfactual Explanations in Image Space. In: Andres, B., Bernard, F., Cremers, D., Frintrop, S., Goldlücke, B., Ihrke, I. (eds) Pattern Recognition. DAGM GCPR 2022. Lecture Notes in Computer Science, vol 13485. Springer, Cham. https://doi.org/10.1007/978-3-031-16788-1_9
DOI: https://doi.org/10.1007/978-3-031-16788-1_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16787-4
Online ISBN: 978-3-031-16788-1
eBook Packages: Computer Science, Computer Science (R0)