Abstract
The interpretability of a neural network can be expressed as the identification of the patterns or features to which the network is either sensitive or indifferent. To this end, a method inspired by DeepDream is proposed, in which the activation of a neuron is maximized by performing gradient ascent on an input image. The method outputs curves showing the evolution of features during this maximization. A controlled experiment shows how it enables assessing the network's robustness to a given feature or, conversely, its sensitivity to it. The method is illustrated on the task of segmenting tumors in liver CT images.
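As a concrete illustration of the procedure the abstract describes, the sketch below performs DeepDream-style gradient ascent on an input image to maximize a chosen neuron's activation while logging feature values at each step, producing the evolution curves mentioned above. This is a minimal sketch, not the authors' implementation: `model`, `unit_fn` (which selects the scalar activation to maximize), and `feature_fns` (e.g., radiomics-style intensity or texture measures) are hypothetical placeholders.

```python
import torch

def deepdream_maximize(model, image, unit_fn, feature_fns, steps=200, lr=0.1):
    """DeepDream-style activation maximization with feature tracking (sketch)."""
    x = image.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    curves = {name: [] for name in feature_fns}   # feature-evolution curves
    for _ in range(steps):
        optimizer.zero_grad()
        activation = unit_fn(model(x))            # scalar output of the chosen neuron
        (-activation).backward()                  # gradient *ascent* on the activation
        optimizer.step()
        with torch.no_grad():                     # log each feature at this step
            for name, fn in feature_fns.items():
                curves[name].append(fn(x).item())
    return x.detach(), curves

# Hypothetical usage: track mean image intensity while maximizing one
# output unit of a segmentation network `net` on a CT slice `ct_slice`.
# features = {"mean_intensity": lambda img: img.mean()}
# dreamed, curves = deepdream_maximize(net, ct_slice,
#                                      lambda out: out[0, 1, 64, 64], features)
```

A flat curve for a feature suggests the network is robust (indifferent) to it, whereas a strongly drifting curve suggests sensitivity, which is the interpretation the abstract proposes.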
About this paper
Cite this paper
Couteaux, V., Nempont, O., Pizaine, G., Bloch, I. (2019). Towards Interpretability of Segmentation Networks by Analyzing DeepDreams. In: Suzuki, K., et al. (eds.) Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support. IMIMIC ML-CDS 2019. Lecture Notes in Computer Science, vol. 11797. Springer, Cham. https://doi.org/10.1007/978-3-030-33850-3_7
DOI: https://doi.org/10.1007/978-3-030-33850-3_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-33849-7
Online ISBN: 978-3-030-33850-3