Abstract
Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms ‘transparency’ and ‘opacity’ are used in reference either to the computational elements of an AI model or to the models to which those elements pertain. Second, opacity and transparency might be understood to refer either to properties of AI systems or to the epistemic situation of human agents with respect to those systems. While these diagnoses are discussed independently in the literature, juxtaposing them and exploring their possible interrelations helps to bring into view the relevant distinctions between conceptions of opacity and their empirical bearing. In pursuit of this aim, two pertinent conditions affecting computer models in general and contemporary AI in particular are outlined and discussed: opacity as a problem of computational tractability, and opacity as a problem of the universality of the computational method.
Acknowledgements
This work is dedicated to the memory of my dear and trusted PW colleague Helena Bulińska-Stangrecka, who died tragically and prematurely while I was finalising this manuscript. In terms of content, this paper owes a lot to my collaboration with Alessandro Facchini and Alberto Termine, who tried hard to rid me of my philosopher’s naiveté concerning how AI works. The same goes for Cameron Buckner, Holger Lyre, Jan Passoth and Carlos Zednik, who worked towards that goal earlier. Any remaining naiveté will not be the fault of any of those helpful minds.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Greif, H. (2022). Models, Algorithms, and the Subjects of Transparency. In: Müller, V.C. (eds) Philosophy and Theory of Artificial Intelligence 2021. PTAI 2021. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 63. Springer, Cham. https://doi.org/10.1007/978-3-031-09153-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09152-0
Online ISBN: 978-3-031-09153-7
eBook Packages: Religion and Philosophy (R0)