Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. (2017). Network dissection: Quantifying interpretability of deep visual representations. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3319–3327.
Beisbart, C. (2021). Opacity thought through: On the intransparency of computer simulations. Synthese. https://doi.org/10.1007/s11229-021-03305-2
Boon, M. (2020). How scientists are brought back into science: The error of empiricism. In M. Bertolaso & F. Sterpetti (Eds.), A critical reflection on automated science (Human Perspectives in Health Sciences and Technology, Vol. 1). Springer.
Bühlmann, P. (2013). Causal statistical inference in high dimensions. Mathematical Methods in Operations Research, 77(3), 357–370.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Cichy, R. M., & Kaiser, D. (2019). Deep neural networks as scientific models. Trends in Cognitive Sciences, 23(4), 305–317.
Cichy, R. M., Khosla, A., Pantazis, D., Torralba, A., & Oliva, A. (2016). Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific Reports, 6, 27755.
Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A historical perspective of explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391.
Craver, C., & Darden, L. (2013). In search of mechanisms: Discoveries across the life sciences. University of Chicago Press.
Dattilo, A., Vanderburg, A., Shallue, C. J., Mayo, A. W., Berlind, P., Bieryla, A., Calkins, M. L., Esquerdo, G. A., Everett, M. E., Howell, S. B., Latham, D. W., Scott, N. J., & Yu, L. (2019). Identifying exoplanets with deep learning II: Two new super-Earths uncovered by a neural network in K2 data. arXiv, 1903.10507.
Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv, 1710.00794
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645–666.
Erasmus, A., Brunet, T. D. P., & Fisher, E. (2020). What is interpretability? Philosophy & Technology.
Gelfert, A. (2016). How to do science with models: A philosophical primer. Springer.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50–57.
Hohman, F. M., Kahng, M., Pienta, R., & Chau, D. H. (2018). Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics.
Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2(4), 1–28.
Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., & Müller, K.-R. (2019). Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10, 1096.
Li, X., Wu, J., Chen, E. Z., & Jiang, H. (2019). What evidence does deep learning model use to classify skin lesions? arXiv, 1811.01051v2.
Lipton, Z.C. (2016). The mythos of model interpretability. arXiv, 1606.03490v3.
Lundberg, S. M. & Lee, S. (2017). A unified approach to interpreting model predictions. arXiv, 1705.07874v2.
Ma, W., Qiu, Z., Song, J., Li, J., Cheng, Q., Zhai, J., & Ma, C. (2018). A deep convolutional neural network approach for predicting phenotypes from genotypes. Planta, 248(5), 1307–1318.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. MIT Press.
Massimi, M. (2019). Two kinds of exploratory models. Philosophy of Science, 86(5), 869–881.
Miotto, R., Li, L., Kidd, B. A., & Dudley, J. T. (2016). Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports, 6(1), 1–10.
Montavon, G., Samek, W., & Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.
Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.
Pietsch, W. (2015). Aspects of theory-ladenness in data-intensive science. Philosophy of Science, 82(5), 905–916.
Ratti, E. (2015). Big data biology: Between eliminative inferences and exploratory experiments. Philosophy of Science, 82(2), 198–218.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. arXiv, 1602.04938v3.
Ritchie, J. B., Kaplan, D. M., & Klein, C. (2019). Decoding the brain: Neural representation and the limits of multivariate pattern analysis in cognitive neuroscience. The British Journal for the Philosophy of Science, 70(2), 581–607.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.
Salmon, W. C. (1989). Four decades of scientific explanation. University of Minnesota Press.
Schmidt, J., Marques, M. R. G., Botti, S., & Marques, M. A. L. (2019). Recent advances and applications of machine learning in solid-state materials science. npj Computational Materials, 5, 83.
Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153(3), 393–416.
Steinle, F. (1997). Entering new fields: Exploratory uses of experimentation. Philosophy of Science, 64(Proceedings), S65–S74.
Sullivan, E. (2019). Understanding from machine learning models. The British Journal for the Philosophy of Science, axz035.
Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. arXiv, 1806.07552.
Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2).
Wu, M., Hughes, M. C., Parbhoo, S., Zazzi, M., Roth, V., & Doshi-Velez, F. (2018). Beyond sparsity: Tree regularization of deep models for interpretability. arXiv, 1711.06178v1.
Wu, Y., Ding, Y., Tanaka, Y., & Zhang, W. (2014). Risk factors contributing to type 2 diabetes and recent advances in the treatment and prevention. International Journal of Medical Sciences, 11(11), 1185–1200.
Zednik, C. (2018). Will machine learning yield machine intelligence? In V. Müller (Ed.), Philosophy and theory of artificial intelligence 2017 (Studies in Applied Philosophy, Epistemology and Rational Ethics, Vol. 44). Springer.
Zednik, C. (2019). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00382-7
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32(4), 661–683.
Zhavoronkov, A. (2018). Artificial intelligence for drug discovery, biomarker development, and generation of novel chemistry. Molecular Pharmaceutics, 15(10), 4311–4313.
Zilke, J. R., Mencia, E. L., & Janssen, F. (2016). DeepRED – Rule extraction from deep neural networks. In T. Calders, M. Ceci, & D. Malerba (Eds.), Discovery Science: 19th International Conference (pp. 457–473). Springer.
Zintgraf, L. M., Cohen, T. S., Adel, T., & Welling, M. (2017). Visualizing deep neural network decisions: Prediction difference analysis. arXiv, 1702.04595.