Instruments, agents, and artificial intelligence: novel epistemic categories of reliability

Abstract

Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting the reliability of DL, but rather what grounds that justification. While today’s artificial intelligence exhibits characteristics common to both scientific instruments and scientific experts, this paper argues that the familiar epistemic categories that justify belief in the reliability of instruments and experts are distinct, and that belief in the reliability of DL cannot be reduced to either. Understanding what can justify belief in AI reliability represents an occasion and opportunity for exciting new philosophy of science.

Notes

  1. Philosophers have also revived interest in what we can learn about cognition from deep learning models. For instance, Buckner (2018) argues that evaluating the behavior of deep convolutional neural networks helps us resolve questions, going back to Locke, about human abilities for abstraction. Others, however, have expressed skepticism about the legitimacy of looking to neural nets as plausible models of human cognition at all (Stinson, 2020).

  2. Recent research on the trustworthiness of experts and expert claims notwithstanding (Ioannidis, 2005; Wilholt, 2020), throughout, I take it that we are presumptively entitled to the belief that experts are following best practices and are not being dishonest in their claims.

  3. In fact, many philosophers have argued that simulation requires special philosophical attention (Galison, 1996; Humphreys, 2004, 2009; Oreskes et al., 1994; Rohrlich, 1990; Winsberg, 2001, 2003). In general, I am sympathetic to the view that computational simulation extends the philosophical literature in genuinely fruitful ways and that consideration of simulation deepens our understanding of scientific methodology. It has, nevertheless, proved difficult to articulate precisely in what ways computational simulations give rise to specific philosophical concerns that are qualitatively distinct from those already native to the more general literature on models, experiments, or computation.

  4. See https://www.nsf.gov/pubs/2022/nsf22502/nsf22502.htm.

  5. See, however, Nguyen (2020), who argues that trust is an unquestioning attitude which can be taken with respect to, among other things, ropes.

  6. Experimental techniques are used to calibrate some physically mediated instruments. DLMs, however, are not physically mediated instruments; they are mathematical functions.

References

  • Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31, 9505–9515.

  • Ashby, W. R. (1961). An introduction to cybernetics. Chapman & Hall Ltd.

  • Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.

  • Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.

  • Baird, D., & Faust, T. (1990). Scientific instruments, scientific progress and the cyclotron. The British Journal for the Philosophy of Science, 41(2), 147–175.

  • Baker, B., Lansdell, B., Kording, K. (2021). A philosophical understanding of representation for neuroscience. arXiv preprint. arXiv:2102.06592

  • Baker, J. (1987). Trust and rationality. Pacific Philosophical Quarterly, 68(1), 1–13.

  • Birch, J., Creel, K. A., Jha, A. K., & Plutynski, A. (2022). Clinical decisions using AI must consider patient values. Nature Medicine, 28(2), 229–232.

  • Boge, F. J. (2021). Two dimensions of opacity and the deep learning predicament. Minds and Machines, 32(1), 43–75.

  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S., Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L., Goel, K., Goodman, N., Grossman, S., Guha, N., Hashimoto, T., Henderson, P., Hewitt, J., Ho, D. E., Hong, J., Hsu, K., Huang, J., Icard, T., Jain, S., Jurafsky, D., Kalluri, P., Karamcheti, S., Keeling, G., Khani, F., Khattab, O., Koh, P. W., Krass, M., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, F., Lee, M., Lee, T., Leskovec, J., Levent, I., Li, X. L., Li, X., Ma, T., Malik, A., Manning, C. D., Mirchandani, S., Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J. C., Nilforoshan, H., Nyarko, J., Ogut, G., Orr, L., Papadimitriou, I., Park, J. S., Piech, C., Portelance, E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y., Ruiz, C., Ryan, J., Ré, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K., Tamkin, A., Taori, R., Thomas, A. W., Tramèr, F., Wang, R. E., Wang, W., Wu, B., Wu, J., Wu, Y., Xie, S. M., Yasunaga, M., You, J., Zaharia, M., Zhang, M., Zhang, T., Zhang, X., Zhang, Y., Zheng, L., Zhou, K., & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint. arXiv:2108.07258

  • Branch, B., Mirowski, P., & Mathewson, K. W. (2021). Collaborative storytelling with human actors and AI narrators. arXiv preprint. arXiv:2109.14728

  • Buckner, C. (2018). Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195(12), 5339–5372.

  • Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass, 14(10), e12625.

  • Charbonneau, M. (2010). Extended thing knowledge. Spontaneous Generations: A Journal for the History and Philosophy of Science, 4(1), 116–128.

  • Chen, Y., Lin, Z., Zhao, X., Wang, G., & Gu, Y. (2014). Deep learning-based classification of hyperspectral data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(6), 2094–2107.

  • Creel, K. A. (2020). Transparency in complex computational systems. Philosophy of Science, 87(4), 568–589.

  • D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., Chen, C., Deaton, J., Eisenstein, J., Hoffman, M. D., Hormozdiari, F., Houlsby, N., Hou, S., Jerfel, G., Karthikesalingam, A., Lucic, M., Ma, Y., McLean, C., Mincu, D., Mitani, A., Montanari, A., Nado, Z., Natarajan, V., Nielson, C., Osborne, T. F., Raman, R., Ramasamy, K., Sayres, R., Schrouff, J., Seneviratne, M., Sequeira, S., Suresh, H., Veitch, V., Vladymyrov, M., Wang, X., Webster, K., Yadlowsky, S., Yun, T., Zhai, X., & Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint. arXiv:2011.03395

  • Duede, E. (2022). Deep learning opacity in scientific discovery. arXiv preprint. arXiv:2206.00520 (forthcoming in Philosophy of Science).

  • Elgin, C. Z. (2017). True enough. MIT Press.

  • Engelbart, D. C. (1962). Augmenting human intellect: A conceptual framework. Stanford Research Institute, Menlo Park.

  • Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., & Danks, D. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566–571.

  • Faulkner, P. (2007). On telling and trusting. Mind, 116(464), 875–902.

  • Fricker, E. (2006). Second-hand knowledge. Philosophy and Phenomenological Research, 73(3), 592–618.

  • Frigg, R. (2010). Fiction and scientific representation. In Beyond mimesis and convention (pp. 97–138). Springer.

  • Frigg, R., & Nguyen, J. (2016). The fiction view of models reloaded. The Monist, 99(3), 225–242.

  • Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.

  • Frost-Arnold, K. (2013). Moral trust & scientific collaboration. Studies in History and Philosophy of Science Part A, 44(3), 301–310.

  • Galison, P. (1996). Computer simulations and the trading zone. In P. Galison & D. J. Stump (Eds.), The disunity of science: Boundaries, contexts, and power (pp. 118–157). Stanford University Press.

  • Galison, P. (1997). Image and logic: A material culture of microphysics. University of Chicago Press.

  • Gerken, M. (2015). The epistemic norms of intra-scientific testimony. Philosophy of the Social Sciences, 45(6), 568–595.

  • Ghorbani, A., Abid, A., & Zou, J. (2019). Interpretation of neural networks is fragile. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 3681–3688.

  • Giere, R. N. (2010). Explaining science: A cognitive approach. University of Chicago Press.

  • Goldberg, S. C. (2014). Interpersonal epistemic entitlements. Philosophical Issues, 24(1), 159–183.

  • Goldberg, S. C. (2020). Epistemically engineered environments. Synthese, 197(7), 2783–2802.

  • Goldberg, S. C. (2021). What epistemologists of testimony should learn from philosophers of science. Synthese, 199(5), 12541–12559.

  • Goldman, A. I. (1979). What is justified belief? In Justification and knowledge (pp. 1–23). Springer.

  • Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.

  • Hardin, R. (1996). Trustworthiness. Ethics, 107(1), 26–42.

  • Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82(7), 335–349.

  • Hardwig, J. (1991). The role of trust in knowledge. The Journal of Philosophy, 88(12), 693–708.

  • Harré, R. (2010). Equipment for an experiment. Spontaneous Generations: A Journal for the History and Philosophy of Science, 4(1), 30–38.

  • Hatherley, J. J. (2020). Limits of trust in medical AI. Journal of Medical Ethics, 46(7), 478–481.

  • Hieronymi, P. (2008). The reasons of trust. Australasian Journal of Philosophy, 86(2), 213–236.

  • Hinchman, E. S. (2005). Telling as inviting to trust. Philosophy and Phenomenological Research, 70(3), 562–587.

  • Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72(1), 63–76.

  • Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.

  • Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.

  • Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

  • Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25.

  • Jones, K. (2012). Trustworthiness. Ethics, 123(1), 61–85.

  • Keren, A. (2014). Trust and belief: A preemptive reasons account. Synthese, 191(12), 2593–2615.

  • Khalifa, K. (2017). Understanding, explanation, and scientific knowledge. Cambridge University Press.

  • Lackey, J. (2010). Learning from words: Testimony as a source of knowledge. Oxford University Press.

  • Leavitt, M. L., & Morcos, A. (2020). Towards falsifiable interpretability research. arXiv preprint. arXiv:2010.12016

  • Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. Available at SSRN 3403301.

  • Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31–57.

  • Meeker, K. (2004). Justification and the social nature of knowledge. Philosophy and Phenomenological Research, 69(1), 156–172.

  • Neyshabur, B., Tomioka, R., & Srebro, N. (2014). In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint. arXiv:1412.6614

  • Nguyen, C. T. (2020). Trust as an unquestioning attitude. In Oxford studies in epistemology. Oxford University Press.

  • Nickel, P. J. (2012). Trust and testimony. Pacific Philosophical Quarterly, 93(3), 301–316.

  • Nie, W., Zhang, Y., & Patel, A. (2018). A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International conference on machine learning (pp. 3809–3818). PMLR.

  • Norton, S., & Suppe, F. (2001). Why atmospheric modeling is good science. In Changing the atmosphere: Expert knowledge and environmental governance (pp. 67–105). MIT Press.

  • Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646.

  • Parker, W. S. (2008). Computer simulation through an error-statistical lens. Synthese, 163(3), 371–384.

  • Parker, W. S. (2008). Franklin, Holmes, and the epistemology of computer simulation. International Studies in the Philosophy of Science, 22(2), 165–183.

  • Parker, W. S. (2020). Model evaluation: An adequacy-for-purpose view. Philosophy of Science, 87(3), 457–477.

  • Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A. S., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486.

  • Räz, T. (2022). Understanding deep learning with statistical relevance. Philosophy of Science, 89(1), 20–41.

  • Räz, T., & Beisbart, C. (2022). The importance of understanding deep learning. Erkenntnis. https://doi.org/10.1007/s10670-022-00605-y

  • Rohrlich, F. (1990). Computer simulation in the physical sciences. In PSA: Proceedings of the biennial meeting of the philosophy of science association (Vol. 1990, pp. 507–518). Philosophy of Science Association.

  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.

  • Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767.

  • Salmon, W. C. (1971). Statistical explanation and statistical relevance (Vol. 69). University of Pittsburgh Press.

  • Senior, A. W., Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., Qin, C., Žídek, A., Nelson, A. W. R., Bridgland, A., Penedones, H., Petersen, S., Simonyan, K., Crossan, S., Kohli, P., Jones, D. T., Silver, D., Kavukcuoglu, K., & Hassabis, D. (2020). Improved protein structure prediction using potentials from deep learning. Nature, 577(7792), 706–710.

  • Shapin, S., & Schaffer, S. (2011). Leviathan and the air-pump. Princeton University Press.

  • Sines, G., & Sakellarakis, Y. A. (1987). Lenses in antiquity. American Journal of Archaeology, 91, 191–196.

  • Smith, P. J., & Hoffman, R. R. (2017). Cognitive systems engineering: The future for a changing world. CRC Press.

  • Sourati, J., & Evans, J. (2021). Accelerating science with human versus alien artificial intelligences. arXiv preprint. arXiv:2104.05188

  • Stevens, R., Taylor, V., Nichols, J., Maccabe, A. B., Yelick, K., & Brown, D. (2020). AI for science. Technical report, Argonne National Laboratory (ANL), Argonne.

  • Stinson, C. (2020). From implausible artificial neurons to idealized cognitive models: Rebooting philosophy of artificial intelligence. Philosophy of Science, 87(4), 590–611.

  • Sullivan, E. (2019). Understanding from machine learning models. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axz035

  • Wang, S., Fan, K., Luo, N., Cao, Y., Wu, F., Zhang, C., Heller, K. A., & You, L. (2019). Massive computational acceleration by using neural networks to emulate mechanism-based biological models. bioRxiv, 559559.

  • Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.

  • Wilholt, T. (2020). Epistemic trust in science. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axs007

  • Winsberg, E. (2001). Simulations, models, and theories: Complex physical systems and their representations. Philosophy of Science, 68(S3), S442–S454.

  • Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70(1), 105–125.

  • Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.

  • Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1), 1–19.

  • Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3), 107–115.

  • Zik, Y., & Hon, G. (2017). History of science and science combined: Solving a historical problem in optics—The case of Galileo and his telescope. Archive for History of Exact Sciences, 71(4), 337–344.

Acknowledgements

This manuscript benefited greatly from conversations with Kevin Davey, Tyler Millhouse, Jennifer Nagel, Wendy Parker, Tom Pashby, Anubav Vasudevan, Bill Wimsatt, participants of the Theoretical Philosophy Workshop at the University of Chicago, and the insightful feedback of two anonymous referees.

Funding

This work was supported by US National Science Foundation grant #2022023 (NRT-HDR: AI-enabled Molecular Engineering of Materials and Systems (AIMEMS) for Sustainability).

Author information

Corresponding author

Correspondence to Eamon Duede.

Ethics declarations

Conflict of interest

The author declares no conflicts of interest, including affiliation with or involvement in an organization or entity with a financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper is forthcoming in a special issue of Synthese titled “Philosophy of Science in Light of Artificial Intelligence”.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Duede, E. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability. Synthese 200, 491 (2022). https://doi.org/10.1007/s11229-022-03975-6

Keywords

  • Deep learning
  • Scientific knowledge
  • Models
  • Reliability
  • Trust and justification