
Toward an Understanding of Adversarial Examples in Clinical Trials

  • Konstantinos Papangelou
  • Konstantinos Sechidis
  • James Weatherall
  • Gavin Brown
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11051)

Abstract

Deep learning systems can be fooled by small, worst-case perturbations of their inputs, known as adversarial examples. These have been studied almost exclusively in supervised learning, on vision tasks. Adversarial examples in counterfactual modelling, which sits outside the traditional supervised scenario, are an overlooked challenge. We introduce the concept of adversarial patients, in the context of counterfactual models for clinical trials; this turns out to introduce several new dimensions to the literature. We describe how multiple types of adversarial example exist, and demonstrate the different consequences, e.g. ethical, that arise in each case. The study of adversarial examples in this area is rich in challenges for accountability and trustworthiness in machine learning; we highlight future directions that may be of interest to the community.
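The notion of a "small, worst-case perturbation" can be made concrete with a minimal sketch in the fast-gradient-sign style of Goodfellow et al.: nudge each input feature by a bounded step in the direction that increases the model's loss. This is an illustrative example only, not the paper's method; the logistic model, covariate vector, and step size below are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x + eps * sign(grad_x loss) for a logistic model with cross-entropy loss."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w          # gradient of cross-entropy w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hypothetical patient covariates and a fixed linear model (for illustration)
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.1, -0.3])
y = 1.0                            # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.25)

p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
# The bounded worst-case step lowers the predicted probability of the true class,
# even though each feature moved by at most eps.
```

The same principle underlies adversarial patients: rather than flipping an image label, a small perturbation of a patient's covariates can flip a counterfactual model's treatment-effect estimate.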

Keywords

Adversarial examples · Counterfactual modelling · Randomised clinical trials · Subgroup identification

Notes

Acknowledgments

K.P. was supported by the EPSRC through the Centre for Doctoral Training Grant [EP/1038099/1]. K.S. was funded by the AstraZeneca Data Science Fellowship at the University of Manchester. G.B. was supported by the EPSRC LAMBDA project [EP/N035127/1].

Supplementary material

Supplementary material 1: 478880_1_En_3_MOESM1_ESM.pdf (PDF, 453 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science, University of Manchester, Manchester, UK
  2. Advanced Analytics Centre, Global Medicines Development, AstraZeneca, Cambridge, UK
