
Scientific Discovery by Generating Counterfactuals Using Image Translation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12261)

Abstract

Model explanation techniques play a critical role in understanding the source of a model’s performance and making its decisions transparent. Here we investigate if explanation techniques can also be used as a mechanism for scientific discovery. We make three contributions: first, we propose a framework to convert predictions from explanation techniques to a mechanism of discovery. Second, we show how generative models in combination with black-box predictors can be used to generate hypotheses (without human priors) that can be critically examined. Third, with these techniques we study classification models for retinal images predicting Diabetic Macular Edema (DME), where recent work [30] showed that a CNN trained on these images is likely learning novel features in the image. We demonstrate that the proposed framework is able to explain the underlying scientific mechanism, thus bridging the gap between the model’s performance and human understanding.
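
As a rough illustration of the second contribution, the sketch below (Python/PyTorch; all model and function names are hypothetical placeholders, not the paper's implementation) pairs a CycleGAN-style generator [33] with a frozen black-box DME classifier: the generator translates a fundus image across the classifier's decision boundary, and the per-pixel difference between the image and its counterfactual indicates which regions drive the prediction.

    # Minimal sketch of counterfactual generation via image translation (PyTorch).
    # Assumptions: `gen_neg_to_pos` is a pre-trained CycleGAN-style generator that
    # translates fundus images from the "no DME" domain to the "DME" domain, and
    # `dme_classifier` is a frozen black-box model returning a single logit.
    # Both are hypothetical placeholders, not the paper's actual models.
    import torch

    @torch.no_grad()
    def counterfactual_hypothesis(image: torch.Tensor,
                                  gen_neg_to_pos: torch.nn.Module,
                                  dme_classifier: torch.nn.Module):
        """Return a per-pixel difference map and the change in predicted DME probability."""
        counterfactual = gen_neg_to_pos(image)      # translated "DME-like" image
        diff_map = (counterfactual - image).abs()   # regions the generator altered
        p_before = torch.sigmoid(dme_classifier(image))
        p_after = torch.sigmoid(dme_classifier(counterfactual))
        return diff_map, p_after - p_before

Regions with large values in the difference map are the machine-generated hypotheses: candidate image features that a domain expert can then critically examine against known DME pathology.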

Supplementary material

505204_1_En_27_MOESM1_ESM.zip (3.9 mb)
Supplementary material 1 (zip 3994 KB)

References

  1. Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: quantifying interpretability of deep visual representations. In: CVPR (2017)
  2. Chang, C.H., Creager, E., Goldenberg, A., Duvenaud, D.: Explaining image classifiers by counterfactual generation. arXiv preprint arXiv:1807.08024 (2018)
  3. Chu, C., Zhmoginov, A., Sandler, M.: CycleGAN, a master of steganography. arXiv preprint arXiv:1712.02950 (2017)
  4. Dhurandhar, A., Chen, P.Y., Luss, R., Tu, C.C., Ting, P., Shanmugam, K., Das, P.: Explanations based on the missing: towards contrastive explanations with pertinent negatives. In: NeurIPS, pp. 592–603 (2018)
  5. Fong, R., Patrick, M., Vedaldi, A.: Understanding deep networks via extremal perturbations and smooth masks. In: ICCV, pp. 2950–2958 (2019)
  6. Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: ICCV, pp. 3429–3437 (2017)
  7. Goodfellow, I., et al.: Generative adversarial nets. In: NeurIPS (2014)
  8. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: NeurIPS, pp. 5767–5777 (2017)
  9. Gulshan, V., et al.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22), 2402–2410 (2016)
  10. Harding, S., Broadbent, D., Neoh, C., White, M., Vora, J.: Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool diabetic eye study. BMJ 311(7013), 1131–1135 (1995)
  11. Joshi, S., Koyejo, O., Vijitbenjaronk, W., Kim, B., Ghosh, J.: Towards realistic individual recourse and actionable explanations in black-box decision making systems. arXiv preprint arXiv:1907.09615 (2019)
  12. Kapishnikov, A., Bolukbasi, T., Viégas, F., Terry, M.: XRAI: better attributions through regions. In: ICCV, pp. 4948–4957 (2019)
  13. Krause, J., et al.: Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 125(8), 1264–1272 (2018)
  14. Lee, R., Wong, T.Y., Sabanayagam, C.: Epidemiology of diabetic retinopathy, diabetic macular edema and related vision loss. Eye Vis. 2(1), 1–25 (2015)
  15. Liu, S., Kailkhura, B., Loveland, D., Han, Y.: Generative counterfactual introspection for explainable deep learning. arXiv preprint arXiv:1907.03077 (2019)
  16. Mackenzie, S., et al.: SDOCT imaging to identify macular pathology in patients diagnosed with diabetic maculopathy by a digital photographic retinal screening programme. PLoS ONE 6(5), e14811 (2011)
  17. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: CVPR, pp. 5188–5196 (2015)
  18. Miller, A., Obermeyer, Z., Cunningham, J., Mullainathan, S.: Discriminative regularization for latent variable models with applications to electrocardiography. In: ICML, Proceedings of Machine Learning Research, PMLR (2019)
  19. Mordvintsev, A., Olah, C., Tyka, M.: DeepDream - a code example for visualizing neural networks. Google Res. 2(5) (2015)
  20. Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models (2018)
  21. Recursion Pharmaceuticals: Recursion Cellular Image Classification - Kaggle contest. www.kaggle.com/c/recursion-cellular-image-classification/data
  22. Poplin, R., et al.: Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2(3), 158 (2018)
  23. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": explaining the predictions of any classifier. In: ACM SIGKDD (2016)
  24. Samangouei, P., Saeedi, A., Nakagawa, L., Silberman, N.: ExplainGAN: model explanation via decision boundary crossing transformations. In: ECCV (2018)
  25. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV, pp. 618–626 (2017)
  26. Singla, S., Pollack, B., Chen, J., Batmanghelich, K.: Explanation by progressive exaggeration. arXiv preprint arXiv:1911.00483 (2019)
  27. Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: removing noise by adding noise (2017)
  28. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net (2014)
  29. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks (2017)
  30. Varadarajan, A.V., et al.: Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning. Nat. Commun. 11(1), 1–8 (2020)
  31. Wang, Y.T., Tadarati, M., Wolfson, Y., Bressler, S.B., Bressler, N.M.: Comparison of prevalence of diabetic macular edema based on monocular fundus photography vs optical coherence tomography. JAMA Ophthalmol. 134(2), 222–228 (2016)
  32. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53
  33. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV, pp. 2223–2232 (2017)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Google Research, Mountain View, USA
  2. Google Health, Palo Alto, USA
  3. Rajavithi Hospital, Bangkok, Thailand
  4. Google Research, Cambridge, USA
