Post-hoc Counterfactual Generation with Supervised Autoencoder

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

Nowadays, AI is increasingly used in many fields to automate decisions that largely affect the daily lives of humans. The inherent complexity of these systems makes them so-called black-box models. Explainable Artificial Intelligence (XAI) aims to solve this issue by providing methods to overcome this lack of transparency. Counterfactual explanation is a common and well-known class of explanations that produces actionable and understandable explanations for end users. However, generating realistic and useful counterfactuals remains a challenge. In this work, we investigate the problem of generating counterfactuals that are close to both the data distribution and the distribution of the target class, with the objective of obtaining counterfactuals with likely (i.e. realistic) values. We propose a model-agnostic method for generating realistic counterfactuals by using class prototypes. The novelty of this approach is that these class prototypes are obtained using a supervised autoencoder. We then perform an empirical evaluation across several interpretability metrics, which shows results competitive with a state-of-the-art method.


Notes

  1. http://yann.lecun.com/exdb/mnist/.

References

  1. Dhurandhar, A., et al.: Explanations based on the missing: towards contrastive explanations with pertinent negatives. In: Proceedings of the International Conference on Neural Information Processing Systems (NIPS), pp. 590–601 (2018)

  2. Kramer, M.A.: Nonlinear principal component analysis using autoassociative neural networks. AIChE J. 37(2), 233–243 (1991)

  3. Labaien, J., Zugasti, E., Carlos, X.D.: DA-DGCEx: ensuring validity of deep guided counterfactual explanations with distribution-aware autoencoder loss. arXiv preprint arXiv:2104.09062 (2021)

  4. Le, L., Patterson, A., White, M.: Supervised autoencoders: improving generalization performance with unsupervised regularizers. In: Proceedings of the International Conference on Neural Information Processing Systems (NIPS) (2018)

  5. Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions (2017)

  6. Van Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. arXiv preprint arXiv:1907.02584 (2020)

  7. Mahajan, D., Tan, C., Sharma, A.: Preserving causal constraints in counterfactual explanations for machine learning classifiers. In: Proceedings of the Microsoft Workshop at NIPS, "CausalML: Machine Learning and Causal Inference for Improved Decision Making" (2019)

  8. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. arXiv preprint arXiv:1706.07269 (2017)

  9. Nemirovsky, D., Thiebaut, N., Xu, Y., Gupta, A.: CounteRGAN: generating realistic counterfactuals with residual generative adversarial nets. arXiv preprint arXiv:2009.05199 (2020)

  10. Wachter, S., Mittelstadt, B.D., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. arXiv preprint arXiv:1711.00399 (2017)


Author information


Corresponding author

Correspondence to Victor Guyomard.


6 Supplementary Material

This section describes the architectures and hyperparameters used in our experiments.

Supervised Autoencoder: This architecture is composed of an autoencoder and two dense layers (the classification head) stacked on top of the encoder. The final output is the concatenation of the last dense layer and the decoder output. The autoencoder part is the same as the one used by Van Looveren and Klaise [6]. The encoder consists of two convolutional layers with 16 filters of size \(3\times 3\) and ReLU activations, followed by a \(2\times 2\) max-pooling layer, and finally a convolutional layer with a single filter of size \(3\times 3\) and linear activation. The decoder takes the encoded instances as input and passes them through a convolutional layer with 16 filters of size \(3\times 3\) and ReLU activation, a \(2\times 2\) upsampling layer, another convolutional layer with 16 filters of size \(3\times 3\) and ReLU activation, and finally a convolutional layer with a single filter of size \(3\times 3\) and linear activation. The classification head flattens the encoded instances and passes them through a dense layer of size 128 with ReLU activation and \(L_{1}\) regularization, followed by a softmax layer with 10 units. The loss is defined as the sum of the reconstruction loss and the classification loss, as shown in Sect. 3. To set \(\lambda \) of Eq. 3, we train the model with different values of \(\lambda \) on the training set and choose the best trade-off between accuracy and reconstruction error on the test set. These results are shown in Table 2; the best performance is obtained for \(\lambda = 10\).

Table 2. Accuracy and reconstruction error for each \(\lambda \) on test set

As the labels are one-hot encoded, the classification loss is a categorical cross-entropy, and the reconstruction loss is a mean squared error. Training is performed with the Adam optimizer, a batch size of 128, and 25 epochs.
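For concreteness, the following Keras sketch assembles the architecture described above. It is a minimal, hypothetical reconstruction rather than the authors' code: the 'same' padding, the \(L_{1}\) regularization weight, the use of two named outputs with a weighted loss (instead of a literal output concatenation), and the choice of scaling the classification term by \(\lambda \) are all assumptions.

    # Hypothetical Keras sketch of the supervised autoencoder; layer sizes
    # follow the text above, everything else is an assumption.
    import tensorflow as tf
    from tensorflow.keras import Input, Model, layers, regularizers

    def build_supervised_autoencoder(lam=10.0):
        x_in = Input(shape=(28, 28, 1))  # MNIST-shaped input

        # Encoder: two 16-filter 3x3 ReLU convolutions, one 2x2 max-pooling
        # (so the code has shape 14x14x1, matching the single upsampling in
        # the decoder), then a single-filter 3x3 linear convolution.
        e = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
        e = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(e)
        e = layers.MaxPooling2D((2, 2), padding='same')(e)
        encoded = layers.Conv2D(1, (3, 3), activation=None, padding='same')(e)

        # Decoder: 16-filter 3x3 ReLU convolution, 2x2 upsampling, another
        # 16-filter 3x3 ReLU convolution, single-filter linear convolution.
        d = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(encoded)
        d = layers.UpSampling2D((2, 2))(d)
        d = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(d)
        decoded = layers.Conv2D(1, (3, 3), activation=None, padding='same',
                                name='reconstruction')(d)

        # Classification head: flatten the code, Dense(128, ReLU, L1), softmax(10).
        c = layers.Flatten()(encoded)
        c = layers.Dense(128, activation='relu',
                         kernel_regularizer=regularizers.l1(1e-5))(c)  # L1 weight assumed
        y_out = layers.Dense(10, activation='softmax', name='classification')(c)

        model = Model(x_in, [decoded, y_out])
        # Total loss: reconstruction MSE + lambda * categorical cross-entropy,
        # with lambda = 10 as selected in Table 2 (which of the two terms
        # lambda scales is an assumption here).
        model.compile(optimizer='adam',
                      loss={'reconstruction': 'mse',
                            'classification': 'categorical_crossentropy'},
                      loss_weights={'reconstruction': 1.0, 'classification': lam})
        return model

    # Training as described: batch size 128, 25 epochs.
    # model = build_supervised_autoencoder()
    # model.fit(x_train, {'reconstruction': x_train, 'classification': y_train},
    #           batch_size=128, epochs=25)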

Baseline Autoencoder: This architecture is identical to the autoencoder part of the supervised autoencoder. The loss is a mean squared error, and training is performed with the Adam optimizer, a batch size of 128, and 18 epochs. The model trained on the training set reaches a reconstruction error of 0.0016 on the test set.

Counterfactual Search Hyperparameters: Hyperparameter values are fixed to those used by Van Looveren and Klaise [6] (\(\gamma = 100\), \(\kappa = 0\), \(c=1\), \(\beta =0.1\), \(\theta = 100 \), \(K = 5\)).
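To make these values concrete, the sketch below writes out the prototype-guided objective from [6] that they parameterize; it is a hedged reconstruction of that objective, not code from either paper. Here clf, ae and enc stand for the trained classifier, autoencoder, and encoder (in our method, the encoder of the supervised autoencoder), and proto is the target-class prototype built from the \(K = 5\) nearest encoded training instances; all names are assumptions.

    # Hedged sketch of the counterfactual objective of Van Looveren and
    # Klaise [6]; clf, ae, enc are assumed to be trained Keras models.
    import tensorflow as tf

    gamma, kappa, c, beta, theta = 100.0, 0.0, 1.0, 0.1, 100.0

    def counterfactual_loss(x0, delta, orig_class, proto, clf, ae, enc):
        """Objective minimized over the perturbation delta = x_cf - x0."""
        x_cf = x0 + delta
        probs = clf(x_cf)[0]                   # class probabilities, shape (10,)
        p_orig = probs[orig_class]
        p_other = tf.reduce_max(
            tf.concat([probs[:orig_class], probs[orig_class + 1:]], axis=0))
        # Hinge prediction loss with margin kappa: push the original class
        # below the most likely other class.
        l_pred = tf.maximum(0.0, p_orig - p_other + kappa)
        # Elastic-net penalty keeps the perturbation small and sparse.
        l_l1 = tf.reduce_sum(tf.abs(delta))
        l_l2 = tf.reduce_sum(tf.square(delta))
        # Autoencoder term keeps the counterfactual close to the data manifold.
        l_ae = tf.reduce_sum(tf.square(ae(x_cf) - x_cf))
        # Prototype term pulls the encoding towards the target-class prototype.
        l_proto = tf.reduce_sum(tf.square(enc(x_cf) - proto))
        return c * l_pred + beta * l_l1 + l_l2 + gamma * l_ae + theta * l_proto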

Autoencoder for Evaluation: We use the same autoencoder architecture as Van Looveren and Klaise [6]. The training set is the same as the one used for the supervised and baseline autoencoders.
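This evaluation autoencoder supports interpretability metrics in the spirit of the IM1 and IM2 scores introduced in [6]; the definitions below follow that paper rather than the present text, so treat them as a hedged sketch. Here ae_t and ae_o denote autoencoders trained only on the counterfactual (target) class and the original class, and ae_full one trained on the full training set; all names are assumptions.

    # Sketch of the IM1/IM2 interpretability metrics as defined in [6].
    import numpy as np

    def im1(x_cf, ae_t, ae_o, eps=1e-10):
        # Lower is better: the target-class autoencoder reconstructs the
        # counterfactual better than the original-class one.
        num = np.sum((x_cf - ae_t.predict(x_cf)) ** 2)
        den = np.sum((x_cf - ae_o.predict(x_cf)) ** 2) + eps
        return num / den

    def im2(x_cf, ae_t, ae_full, eps=1e-10):
        # Lower is better: target-class and full-data reconstructions agree,
        # i.e. the counterfactual lies close to the overall data distribution.
        num = np.sum((ae_t.predict(x_cf) - ae_full.predict(x_cf)) ** 2)
        den = np.sum(np.abs(x_cf)) + eps
        return num / den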


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Guyomard, V., Fessant, F., Bouadi, T., Guyet, T. (2021). Post-hoc Counterfactual Generation with Supervised Autoencoder. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_10


  • DOI: https://doi.org/10.1007/978-3-030-93736-2_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

