Abstract
In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. To establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that, unlike other kinds of technology (including other epistemic technologies), AI can be uniquely positioned as an epistemic technology in that it is primarily designed, developed and deployed to be used in epistemic contexts such as inquiry; it is specifically designed, developed and deployed to manipulate epistemic content such as data; and it is designed, developed and deployed to do so particularly through epistemic operations such as prediction and analysis. As recent work in the philosophy and ethics of AI has shown (Alvarado, AI and Ethics, 2022a), understanding AI as an epistemic technology also has significant implications for important debates regarding our relationship to AI technologies. This paper includes a brief overview of such implications, particularly those pertaining to explainability, opacity, trust and even epistemic harms related to AI technologies.
Notes
The term AI is here purposely left maximally inclusive, referring to the many distinct computational technologies that have characterized its development in the past decades: from longstanding machine learning (ML) methodology, including deep neural networks (DNNs), to more recent developments in analytic tools such as transformers, large language models (LLMs) and multimodal generative models that rely on techniques such as gradient descent.
As we will see, this is so even if the artifact can be or is used for something other than what it was originally intended for. A brick can be used as a brick, as a door stop, as a step, or as a weapon, but it can be used for those other things only in virtue of the fact that it was built as a brick, and not as a gelatin dessert, in the first place.
As we will see in the sections below, there are important developments in the philosophy of science where the epistemic component of computational methods is in fact recognized, albeit with some similar limitations to the ones discussed here. For now, however, let us focus on the accounts briefly mentioned here.
A further common thread, orthogonal to our discussion, is a broad and liberal understanding of knowledge-acquisition practices that precludes a distinction between scientific undertakings and ordinary epistemic practices. Under this view, science is simply a continuation of ordinary empirical endeavors such as tasting, touching, or looking at things to gather information. In short, crossing the street and colliding subatomic particles at CERN are seen as belonging to the same kind of epistemic enterprise. These views are in part the product of naturalized epistemology projects that understand a baby repeatedly throwing their milk bottle on the ground as a low-budget experiment in gravitational physics. While the deflationary and reductionist assumptions inherent in these views are problematic, it is important to acknowledge that there is a rich and important debate pertaining to what constitutes scientific inquiry and what does not. This issue in the philosophy of science is called the “demarcation problem.”
Notice that while this last item sounds like an exaggeration, the broadness of the criteria described so far allows any deployed artifact that “enables” the continuing acquisition of knowledge to qualify. If one accepts that culinary objects such as food and drink are artifacts and that they are used—often explicitly, as in the case of caffeinated drinks—to enable or enhance a practitioner’s stamina in the laboratory, then we would have to accept the coffee maker’s central role in inquiry. Furthermore, while one may want to debate the relevant distinctions between a culinary object such as coffee and other mechanical objects such as technical artifacts, this distinction would do little to exclude from consideration the complex devices designed, developed and deployed to make the coffee itself.
Some may reply here that it is simply true that you can build an abacus from hammers and a hammer from abacuses. However, notice that you still have to build the one from the other; the hammer in and of itself, as is and in virtue of what it is, does not and cannot do the job of an abacus unless appropriately arranged as such. This implies, at the very least, that there is something that differentiates them at a fundamental level. That something, as we will see, may be a product of function, engagement, materiality, target phenomena, etc., but it is not merely use.
As we will see, things like pharmaceuticals are designed to intervene chemically with organisms, they work through biochemical interactions and are often deployed and designed for contexts of care and not contexts of inquiry.
Here I want to thank the anonymous reviewer that invited clarification on this key term.
A detailed distinction between the terms ‘epistemic’ and ‘cognitive’ will be made in the sections below. For now, however, we can recognize the cognitive as distinct from the physical and as at least more closely related to knowledge-acquisition capacities than the latter; this suffices for the claims so far.
Radio-aided telescopes, too, expand the range of the spectrum of electromagnetic radiation available to us. Software-intensive (Symons & Horner, 2014) instruments like more modern telescopes pose an epistemologically interesting question in the context of Humphreys’ enhancement taxonomy. Unfortunately, this question is beyond the scope and aims of the current paper.
Note that, as we will see later, while in this instance Humphreys may be referring to successful cases of enhancement, the term need not be a success term. In this sense, both a flawed model and a false model count as epistemic enhancers in principle, even if in practice they are not conducive to truths. For a thorough overview of the intricacies related to idealizations in scientific representation see Pincock (2011). For a similar overview related more closely to computational methods such as computer simulations see Morrison (2015). Recently, a more focused debate emerged regarding the role of non-factive content in scientific explanations, with an emerging consensus admitting that non-factive content could indeed be a significant part of a scientific explanation (Páez, 2009, 2019). For a thorough refutation of this view, see Sullivan and Khalifa (2019).
Alvarado (2021a, 2021b) suggests this claim is somewhat misguided. As he mentions in a footnote, novel representational devices such as notations (calculus) or aggregational insights (averages), have indeed enhanced our access to areas of knowledge previously unavailable. Perhaps, hidden in the representational opacity (Alvarado & Humphreys, 2017; Alvarado, 2021a; Burrell, 2016) of neural networks and other similar computational methods in machine learning and statistical analysis we may find a similar enhancement. Furthermore, at least in the mathematical community, genuine questions seem to have arisen about the mathematical novelty of computer-assisted proofs (Hartnett, 2015).
It is worth noting here that there is an interesting question regarding Humphreys’ views on the possibility of automated methods as full, albeit artificial, epistemic agents (Humphreys, 2004, p. 6, 2009a, b). If, as specified both in the opening chapter of his book Extending Ourselves (2004) and in his paper Network Epistemology (2009b), we can think of certain artifacts as fully autonomous epistemic agents, then perhaps these artificial agents themselves do access a kind of augmentation that is simply inaccessible to humans.
I want to thank the anonymous reviewers that kindly invited clarification on the repetitive use of the term ‘epistemic’, on what was meant by epistemic and on the distinction between epistemic and cognitive, which now follows in detail.
See Record and Miller (2018) for an example of how in discussions of certain technologies both the concept of ‘epistemic technology’ and the concept of ‘mind extenders’ are used in an overlapping manner.
While Hernández-Orallo and Vold want to argue that the tools they are referring to are not “merely cognitive prosthetic” (p.508) but that they can give humans novel cognitive capacities, their examples all seem to refer to already existing human cognitive capacities, only enhanced.
The relationship between intention (intended design, more precisely) and the ontological status of an artifact is contentious. One could easily imagine an artifact whose intended function no longer figures in a future use. However, Symons’ distinction holds. All he needs from this claim is that an intention was present in the original design of the artifact to differentiate its artifactual nature from either organs or other pseudo-artifacts (Kroes, 2002)—i.e., ready-made natural objects whose properties happen to coincide with an existing human interest or need, e.g., a rock formation in the form of a bowl.
The intended function of an artifact—even if it does not endure the existence of the object, e.g., the object is no longer used for what it was intended and/or the intention is all but forgotten—also explains the continued functional identification of an artifact beyond fallibility. For example, we can refer to a broken corkscrew as a corkscrew, even when it does not fulfill its function. We can also continue to refer to a permanently-grounded airplane at a museum as an airplane.
Consider, as a contrast to a broken corkscrew, a blueprint or a non-implementable design.
Capturing the nature of error in software-based epistemic technologies may prove to be a non-trivial philosophical issue. For example, Floridi et al. (2015) posit that software as an artifact is the kind of object that cannot malfunction. Unlike a broken hammer, which may still be a hammer despite its inability to perform the function through which it is identified (as a functionally identifiable object), a word processor that fails to process words is not a word processor.
I want to thank an anonymous reviewer whose comments made clear this distinction needed to be explicitly stated.
As Alvarado (2021a, 2021b) notes, what an artifact is meant to do, the functions it carries out to achieve this purpose and the teleological context in which it is deployed are all distinct. His example is that of a carburetor: its purpose is to mix fuel and oxygen, it does so through the manipulation of valves, and it enables a combustion engine to run. All three are distinct.
It is important to emphasize that, strictly speaking, these computational artifacts do not deal with images, precisely. Rather, they manipulate or engage with a complex web of statistical patterns related to numerical values in pixel information. This is why some philosophers and technologists deem some of these image-recognition technologies to be epistemically opaque in a specific manner, namely representationally opaque (Burrell, 2016; Alvarado & Humphreys, 2017; Alvarado, 2020, 2022a, 2022b).
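To make this point concrete, consider a minimal illustrative sketch (mine, not the paper’s): the tiny “image”, its flattened vector, and the toy weights below are all hypothetical, and the linear score stands in for whatever far more complex function a real network computes. The point is only that such a system engages numerical pixel values, not pictures.

```python
# Illustrative sketch: what an "image" is to a classifier.
# A 3x3 grayscale image is just a grid of pixel intensities (0-255).
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
]

# The model never sees a picture; it operates on a flat vector of numbers.
vector = [px for row in image for px in row]

# A toy linear "classifier": hypothetical weights, one per pixel value.
weights = [0.01, -0.02, 0.03, 0.00, 0.02, -0.01, 0.01, 0.02, -0.03]
score = sum(w * px for w, px in zip(weights, vector))

print(len(vector))        # 9 numbers; no image anywhere in the computation
print(round(score, 2))    # a bare numerical score, e.g. 7.32
```

A real deep network replaces the single weighted sum with millions of such sums composed through nonlinearities, which is precisely where the representational opacity discussed above arises: the statistical patterns the system tracks need not correspond to any humanly recognizable feature of the depicted scene.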
Although calling the operations of machine learning methods ‘epistemic’ in this sense may sound to some like a bit of a stretch, there is a straightforward distinction between low-level computational operations, such as the ones carried out by a compiler, and the higher-level operations carried out by machine learning algorithms. Hence the use of the term ‘epistemic’ and not ‘cognitive’. While fleshing out the relationship and the independence between these two concepts goes beyond the scope and aim of this paper, it can be said, without much controversy, that while epistemic tasks require cognitive processes, cognitive processes are not always epistemic tasks. The processing of light by plants in photosynthesis may, for example, meet the definition of a cognitive task according to some philosophers (see Calvo, 2016), but it would be a non-trivial inferential leap to postulate that, because a plant processes information in a dynamic way as a response to its environment, such processes directly correlate to the generation of knowledge in or by the plant. The epistemic status of plants is an issue far beyond the scope and aims of this paper. The point made here is simply to signal a distinction between the concept ‘cognitive’ and the concept ‘epistemic’, with the assumption that cognitive concepts do not necessarily imply any mental states or dispositions such as beliefs, propositions, etc. Although it is true that ‘epistemic’, as used here, is more related to knowledge-creation and knowledge-acquisition practices and less related to conventionally understood mental states, it is also the case that I am using an understanding of ‘epistemic’ that implies fallibility. That is, I refer to things as epistemic in nature even when they fail to achieve the function for which they were designed, e.g., the creation, retention, or expansion of knowledge itself.
Thanks to the anonymous reviewer for inviting clarification of this point.
London makes the following analogy concerning the acceptance of both modern pharmaceuticals and opaque AI technologies: “modern clinicians prescribed aspirin as an analgesic for nearly a century without understanding the mechanism through which it works. Lithium has been used as a mood stabilizer for half a century, yet why it works remains uncertain.” (London, 2019, p. 17). This argumentative strategy works at a rhetorical level. As a reductio ad absurdum, this argument pushes us into a corner because in ordinary settings most of us would not want to condemn widespread medical practices as undesirable. However, the soundness of the premises in the argument—mainly that we accept opaque and associationist methodology from medical practitioners without any significant reservation—depends on who the interlocutors are and what the given situation is. When a medical practitioner is talking to a peer as an epistemic source, say, when a doctor is consulting with a radiologist, opaque and merely associationist reasoning will not be as acceptable as when a doctor recommends a treatment to a patient. In fact, given the modern rejection of historical medical paternalism (Millar, 2015), many informed patients would not accept opacity or arbitrary associationism from their care providers except in the context of a medical crisis. Although views that minimize the importance of epistemic opacity often operate under an assumption of crisis (e.g., by stating that these technologies could save lives) to ethically motivate and practically justify our reliance on opaque technologies, it is not immediately obvious that such a strategy is genuinely informative in the long term, or as a default position concerning the world or the human condition. Most humans, most of the time and in most contexts, will not be in such a crisis mode when we design, develop, deploy or interact with these technologies.
As Symons and Alvarado (2019) point out, the transferring of epistemic warrants from one process to another is a non-trivial issue in the epistemology of instruments deployed in inquiry. Having good reason to trust an underlying process or method (or person), does not automatically grant reason to trust a novel technology derived from it.
Note that what Alvarado is saying here is not that we should trust epistemic technologies to provide reliable epistemic content. Rather, what Alvarado is suggesting is that if we are to trust epistemic technologies, we should only try to allocate epistemic trust and no other kind of trust (e.g., interpersonal trust or trust in capacities that are not epistemic). Whether or not one can epistemically trust AI is still an open question according to him (2022a).
According to Simion, an epistemic norm is one that is closely related to an epistemic value (Simion, 2019).
While some of these authors use different examples (and Durán and Jongsma’s work is a broader take on the issues of reliability and opacity), these papers all refer to medical settings and all suggest that pragmatic considerations such as appeals to success records and accurate outputs may suffice to circumvent worries about opacity (see Durán & Jongsma, 2021, p. 332). In this sense, and in reference to the problem of opacity, they seem to imply a similar argumentative strategy and a similar solution.
By the term ‘accepting’, Symons and Alvarado seem to mean something akin to justifying reliance or trust on it, believing its results, or permitting it to count as admissible. For the sake of the argument, I do so here too.
See Symons and Alvarado (2019) for a thorough account of the distinct epistemic warrants at play in each of those cases.
References
Alvarado, R. (2021a). Explaining epistemic opacity. (Preprint here: http://philsci-archive.pitt.edu/19384/)
Alvarado, R. (2020). Opacity, big data, Artificial Intelligence and machine learning in democratic processes. In K. Macnish (Ed.), Big data and democracy. Edinburgh University Press.
Alvarado, R. (2021b). Computer simulations as scientific instruments. Foundations of Science, 27, 1–23.
Alvarado, R. (2022a). What kind of trust does AI deserve, if any? AI and Ethics. https://doi.org/10.1007/s43681-022-00224-x
Alvarado, R. (2022b). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
Alvarado, R., & Humphreys, P. (2017). Big data, thick mediation, and representational opacity. New Literary History, 48(4), 729–749.
Anthony, C. (2018). To question or accept? How status differences influence responses to new epistemic technologies in knowledge work. Academy of Management Review, 43(4), 661–679.
Barocas, S., Hardt, M., & Narayanan, A. (2017). Fairness in machine learning. Nips Tutorial, 1, 2017.
Baier, A. C. (1985). What do women want in a moral theory? Noûs, 19(1).
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Becker, P., & Clark, W. (Eds.) (2001). Little tools of knowledge: Historical essays on academic and bureaucratic practices. University of Michigan Press.
Bergstrom, C. T., & West, J. D. (2021). Calling bullshit: The art of skepticism in a data-driven world. Random House Trade Paperbacks.
Bhatt, S., Sheth, A., Shalin, V., & Zhao, J. (2020). Knowledge graph semantic enhancement of input data for improving AI. IEEE Internet Computing, 24(2), 66–72.
Bjerring, J. C., & Busch, J. (2021). Artificial Intelligence and patient-centered decision-making. Philosophy & Technology, 34(2), 349–371.
Boge, F. J. (2022). Two dimensions of opacity and the deep learning predicament. Minds and Machines, 32(1), 43–75.
Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Calvo, P. (2016). The philosophy of plant neurobiology: A manifesto. Synthese, 193(5), 1323–1343.
Carbonell, J. G., Michalski, R. S., & Mitchell, T. M. (1983). An overview of machine learning. In Machine learning (pp. 3–23). Springer.
Cho, J. H., Xu, S., Hurley, P. M., Mackay, M., Benjamin, T., & Beaumont, M. (2019). Stram: Measuring the trustworthiness of computer-based systems. ACM Computing Surveys (CSUR), 51(6), 1–47.
Chockley, K., & Emanuel, E. (2016). The end of radiology? Three threats to the future practice of radiology. Journal of the American College of Radiology, 13(12), 1415–1420.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Danks, D. (2019). The value of trustworthy AI. In Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 521–522).
Daston, L. (2012). The sciences of the archive. Osiris, 27(1), 156–187.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
Davies, T., & Frank, M. (2013). ‘There’s no such thing as raw data’: Exploring the socio-technical life of a government dataset. In Proceedings of the 5th annual ACM web science conference (pp. 75–78).
Dougherty, D., & Dunne, D. D. (2012). Digital science and knowledge boundaries in complex innovation. Organization Science, 23(5), 1467–1484.
Dretske, F. (2000). Entitlement: Epistemic rights without epistemic duties? Philosophy and Phenomenological Research, 60(3), 591–606.
Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial Intelligence for decision making in the era of Big Data–evolution, challenges and research agenda. International Journal of Information Management, 48, 63–71.
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28, 645–666.
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335.
El Naqa, I., & Murphy, M. J. (2015). What is machine learning? In I. El Naqa, R. Li. & M. J. Murphy (Eds.), Machine learning in radiation oncology (pp. 3–11). Springer.
Ferrario, A., & Loi, M. (2021). The meaning of “Explainability fosters trust in AI”. Available at SSRN 3916396.
Ferrario, A., Loi, M., & Viganò, E. (2020). In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. Philosophy & Technology, 33(3), 523–539.
Ferrario, A., Loi, M., & Viganò, E. (2021). Trust does not need to be human: It is possible to trust medical AI. Journal of Medical Ethics, 47(6), 437–438.
Floridi, L., Fresco, N., & Primiero, G. (2015). On malfunctioning software. Synthese, 192, 1199–1220.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Fricker, M. (2017). Evolving concepts of epistemic injustice. In The Routledge handbook of epistemic injustice (pp. 53–60). Routledge.
Friedrich, M. (2018). The birth of the archive: A history of knowledge. University of Michigan Press.
Girer, N., Sasu, N., Ayoola, P., & Fagan, J. M. (2011). Adderall usage among college students.
Goldman, A. I. (1986). Epistemology and cognition. Harvard University Press.
Goldman, A. I. (2018). Philosophical applications of cognitive science. Routledge.
Golinski, J. (1994). Precision instruments and the demonstrative order of proof in Lavoisier’s chemistry. Osiris, 9, 30–47.
Hakkarainen, K., Engeström, R., Paavola, S., Pohjola, P., & Honkela, T. (2009). Knowledge practices, epistemic technologies, and pragmatic web. In I-Semantics (pp. 683–694).
Hartnett, K. (2015). Will computers redefine the roots of math? Quanta Magazine, 19.
Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120.
Hernández-Orallo, J., & Vold, K. (2019). AI extenders: The ethical and societal implications of humans cognitively extended by AI. In Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 507–513).
Hinton, G. (2016). Machine learning and the market for intelligence. In Proceedings of the machine learning and marketing intelligence conference.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
Jha, S., & Topol, E. J. (2016). Adapting to Artificial Intelligence: Radiologists and pathologists as information specialists. JAMA, 316(22), 2353–2354.
Jöhnk, J., Weißert, M., & Wyrtki, K. (2021). Ready or not, AI comes—an interview study of organizational AI readiness factors. Business & Information Systems Engineering, 63(1), 5–20.
Kiernan, J., Reid, C., & Zavos, P. (2016). Pulling an all-nighter: Current trends of college students’ use of adderall. MOJ Womens Health, 3(1), 167–170.
Kim, J. (1982). Psychophysical supervenience. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 41(1), 51–70.
Knowles, B., & Richards, J. T. (2021). The sanction of authority: Promoting public trust in AI. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 262–271).
Kroes, P. A. (2003). Physics, experiments, and the concept of nature. In The philosophy of scientific experimentation (pp. 68–86). University of Pittsburgh Press.
Kroes, P. (2010). Engineering and the dual nature of technical artefacts. Cambridge Journal of Economics, 34(1), 51–62.
Kroes, P., & Meijers, A. (2002). The dual nature of technical artifacts: Presentation of a new research programme. University Library.
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 1.
Lazar, S. (forthcoming) Legitimacy, authority, and the political value of explanations. To be presented as Keynote for Oxford Studies in Political Philosophy. https://philpapers.org/archive/LAZLAA-2.pdf
Lombardo, P., Boehm, I., & Nairz, K. (2020). RadioComics–Santa Claus and the future of radiology. European Journal of Radiology, 122.
London, A. J. (2019). Artificial Intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21.
Mazurowski, M. A. (2019). Artificial Intelligence may cause a significant disruption to the radiology workforce. Journal of the American College of Radiology, 16(8), 1077–1082.
McCraw, B. W. (2015). The nature of epistemic trust. Social Epistemology, 29(4), 413–430.
Mcknight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1–25.
Millar, J. (2015). Technology as moral proxy: Autonomy and paternalism by design. IEEE Technology and Society Magazine, 34(2), 47–55.
Miller, B. (2021). Is technology value-neutral? Science, Technology, & Human Values, 46(1), 53–80.
Miller, B., & Record, I. (2013). Justified belief in a digital age: On the epistemic implications of secret Internet technologies. Episteme, 10(2), 117–134.
Miller, B., & Record, I. (2017). Responsible epistemic technologies: A social-epistemological analysis of autocompleted web search. New Media & Society, 19(12), 1945–1963.
Mitchell, M. (2019). Artificial Intelligence: A guide for thinking humans. Farrar.
Morrison, M. (2015). Reconstructing reality: Models, mathematics, and simulations. Oxford University Press.
Norman, D. A. (1991). Cognitive artifacts. Designing Interaction: Psychology at the Human-Computer Interface, 1(1), 17–38.
Páez, A. (2009). Artificial explanations: The epistemological interpretation of explanation in AI. Synthese, 170(1), 131–146.
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459.
Pincock, C. (2011). Mathematics and scientific representation. Oxford University Press.
Piredda, G. (2020). What is an affective artifact? A further development in situated affectivity. Phenomenology and the Cognitive Sciences, 19, 549–567.
Polger, T. W. (2013). Physicalism and Moorean supervenience. Analytic Philosophy, 54(1), 72–92.
Ratti, E., & Graves, M. (2022). Explainable machine learning practices: Opening another black box for reliable medical AI. AI and Ethics, 2(4), 1–14.
Ratto, M. (2012). CSE as epistemic technologies: Computer modeling and disciplinary difference in the humanities. In Wes Sharrock & J. Leng (Eds.), Handbook of research on computational science and engineering theory and practice (pp. 567–586). IGI Global.
Record, I., & Miller, B. (2018). Taking iPhone seriously: Epistemic technologies and the extended mind. In Duncan Pritchard (Ed.), Extended epistemology. Oxford University Press.
Reiner, P. B., & Nagel, S. K. (2017). Technologies of the extended mind defining the issues. In Judy Illes (Ed.), Neuroethics: Anticipating the future (pp. 108–122). Oxford University Press.
Rossi, F. (2018). Building trust in Artificial Intelligence. Journal of International Affairs, 72(1), 127–134.
Russo, F. (2022). Techno-scientific practices: An informational approach. Rowman & Littlefield.
Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767.
Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247.
Sarle, W. S. (1994). Neural networks and statistical models. In Proceedings of the nineteenth annual SAS users group international conference.
Schifano, F. (2020). Coming off prescribed psychotropic medications: Insights from their use as recreational drugs. Psychotherapy and Psychosomatics, 89(5), 274–282.
Sethumadhavan, A. (2019). Trust in Artificial Intelligence. Ergonomics in Design, 27(2), 34–34.
Simion, M. (2018). The ‘should’ in conceptual engineering. Inquiry, 61(8), 914–928.
Simion, M. (2019). Conceptual engineering for epistemic norms. Inquiry. https://doi.org/10.1080/0020174X.2018.1562373
Simon, J. (2010). The entanglement of trust and knowledge on the Web. Ethics and Information Technology, 12, 343–355.
Stolz, S. (2012). Adderall abuse: Regulating the academic steroid. Journal of Law & Education, 41, 585.
Studer, R., Ankolekar, A., Hitzler, P., & Sure, Y. (2006). A semantic future for AI. IEEE Intelligent Systems, 21(4), 8–9.
Sullivan, E., & Khalifa, K. (2019). Idealizations and understanding: Much ado about nothing? Australasian Journal of Philosophy, 97(4), 673–689.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Symons, J., & Alvarado, R. (2022). Epistemic injustice and data science technologies. Synthese, 200(2), 1–26.
Symons, J., & Horner, J. (2014). Software intensive science. Philosophy & Technology, 27, 461–477.
Van Helden, A. (1994). Telescopes and authority from Galileo to Cassini. Osiris, 9, 8–29.
Van Helden, A., & Hankins, T. L. (1994). Introduction: Instruments in the history of science. Osiris, 9, 1–6.
Varga, M. D. (2012). Adderall abuse on college campuses: A comprehensive literature review. Journal of Evidence-Based Social Work, 9(3), 293–313.
Viola, M. (2021). Three varieties of affective artifacts: Feeling, evaluative and motivational artifacts. Phenomenology and Mind, 20, 228–241.
Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.
Wilholt, T. (2013). Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2), 233–253.
Wolfram, S. (2023). What is ChatGPT doing… and why does it work? Stephen Wolfram: Writings.
Yan, Y., Zhang, J. W., Zang, G. Y., & Pu, J. (2019). The primary use of Artificial Intelligence in cardiovascular diseases: What kind of potential role does Artificial Intelligence play in future medicine? Journal of Geriatric Cardiology: JGC, 16(8), 585.
Acknowledgements
Arzu Formánek, Nico Formánek, John Symons, Gabriela Arriagada Bruneau, the Ethics of Technology Early-career Group (ETEG)
Contributions
RA is the sole author of this manuscript and carried out all the associated tasks in producing it.
Cite this article
Alvarado, R. AI as an Epistemic Technology. Sci Eng Ethics 29, 32 (2023). https://doi.org/10.1007/s11948-023-00451-3