1 Introduction

The epistemology of artificial intelligence (AI) is rapidly evolving, with current research largely concentrating on the epistemic status of AI systems (Alvarado, 2022, 2023), the issue of epistemic opacity (Durán & Jongsma, 2021), and the justification of our attitudes towards the trustworthiness of AI and its relation to trust in AI (Durán & Jongsma, 2021; Ferrario, 2023). However, there has been limited exploration into how these systems may exhibit various forms of epistemic superiority. We believe this is an important conceptual gap, particularly as AI systems are often ascribed some form of epistemic superiority in both scholarly discourse and everyday language due to their ability to manipulate information with high performance. For instance, in the medical field, different authors suggest that the predictive accuracy of these systems at tasks such as recognizing cases of skin cancer or predicting the occurrence of sepsis in the intensive care unit (Esteva et al., 2017; Georgevici & Terblanche, 2019) grants them a form of epistemic expertise and authority over human experts (Bjerring & Busch, 2021; Grote & Berens, 2020).

In particular, the number of publications showing that AI systems are as accurate as—or even more accurate than—human experts when performing a predictive task is steadily increasing. Moreover, the proficiency of large language model-based services, such as ChatGPT by OpenAI or Bard by Google, in responding accurately to a variety of prompts may suggest that we are in the presence of artificial agents showing abilities that we may ascribe to epistemic experts. In virtue of this presumed superiority, some authors have argued that there exists an epistemic obligation to rely on the predictions of AI systems (Grote & Berens, 2020). In summary, the problem of characterizing the epistemic expertise and authority of AI systems has implications that extend beyond the epistemology of AI. Their presumed superiority shapes the way we trust, depend on, and interact with AI artefacts in real-world applications.

In this work, we defend two theses. The first thesis claims that it is not possible to attribute genuine epistemic expertise to AI systems, contrary to what some scholars in the literature seem to suggest. In fact, by discussing Croce and Zagzebski's accounts of expertise and authority in virtue epistemology (Croce, 2018, 2019a; Zagzebski, 2012), we show that epistemic expertise requires a relationship with understanding and a demonstration of abilities that AI systems lack. As a result, we contend that no epistemic obligation arises from interacting with a highly accurate AI. This said, we concede that AI systems can be granted a weak form of expertise derived from their ability to store large sets of true propositions. This is what, we argue, researchers and common parlance inadvertently refer to as the epistemic expertise of AI systems. Further, we do not dismiss the possibility that, in the future, through fine-tuning and appropriate testing, large language model-based AI may simulate the ability to be sensitive to the epistemic needs of their users to a level that will justify endowing those systems with what Croce calls the epistemic authority of belief (Croce, 2019c).

The second thesis posits that human-AI interactions may give rise to a cognitive and epistemic agent, which is distinct from the human agent and the AI system and, in some cases, can be endowed with genuine forms of epistemic expertise and authority. First, as recently proposed by Alvarado, we characterize AI as an epistemic technology, namely, technology deployed in epistemic contexts to manipulate epistemic content with epistemic abilities (Alvarado, 2022, 2023). Then, by applying the theory of Distributed Cognition (Hutchins, 1995b; Salomon, 1997) to the case of human-AI interactions, we show that if an AI system is "successfully appropriated" by a human agent–i.e., it enhances the epistemic capabilities of its user to a degree that allows them to achieve their epistemic goal–a novel, hybrid agent emerges from this interaction. We argue that the quality of the appropriation depends on a number of intrinsic and relational abilities of the proper parts of the hybrid agent, i.e., the human and the AI system. Finally, adapting Croce's non-summativist argument on the expertise of collective epistemic agents (Croce, 2019b) to the case of collective agents that also comprise non-human parts (e.g., an AI), we show that hybrid agents can be epistemic experts and authorities. We elaborate on this point with two examples from real-world human-AI interactions. As a result, in line with a recent argument by Ferrario and Loi (Ferrario & Loi, 2022), the hybrid system, instead of the human and the AI system, is the appropriate subject of a discourse on trust in human-AI interactions. This discussion could start, for instance, by exploring public perceptions of the epistemic expertise and authority of the hybrid agent and examining the conditions that may foster their appropriate assessment.

The plan of this work is as follows. In Sect. 2, we discuss different accounts of epistemic authority and expertise from the relevant literature. Then, in Sect. 3, we discuss Alvarado's account of AI systems as epistemic technology (Alvarado, 2022) and two prominent accounts from the literature on medical AI that suggest some form of epistemic superiority for AI systems (Bjerring & Busch, 2021; Grote & Berens, 2020) before presenting our first thesis. In Sect. 4, we focus on our second thesis by characterizing the emergence of the hybrid agent and discussing under which conditions it can be endowed with epistemic expertise and authority. Two examples of hybrid agents support our discussion. In Sect. 5, we present a few final remarks.

2 Experts and Authorities in Virtue Epistemology

Epistemic expertise and authority are central themes in social epistemology. Different philosophical traditionsFootnote 1 have addressed the problem of defining these forms of epistemic superiority. Among these, virtue epistemology, namely, the branch of epistemology that concerns intellectual virtues, has emerged as a prominent field.Footnote 2 In general, epistemic virtues are "characteristics [of a person] that promote intellectual flourishing, or which make for an excellent cognizer" (Turri et al., 2021). For some authors, intellectual virtues are cognitive abilities, such as accurate memory, introspection, deduction and induction, or reliable perception (Greco, 2002). Others, instead, see these virtues more as personality traits, e.g., conscientiousness, epistemic humility, perseverance, intellectual courage and open-mindedness (Greco, 2002, 1993). In either case, intellectual virtues are truth-conducive, belief-forming cognitive processes that allow people to arrive at the truth and avoid errors in a given doxastic domain (Greco, 2002). Therefore, virtue epistemology underscores the notion that epistemology is a normative discipline and that humans are affected by intellectual virtues and vices—e.g., superstition and wishful thinking—when they exercise their epistemic functions (Turri et al., 2021). In particular, epistemic virtues allow us to articulate the different types of epistemic superiority that experts and authorities entail, as we will show in what follows.Footnote 3

2.1 Epistemic Experts: A Functional Characterization

Let us discuss the concept of epistemic expert by introducing a recent account of expertise by Croce (Croce, 2019a). This approach builds on Goldman's functional characterization of epistemic expertise (Goldman, 2018), which "highlights what expertise is by reference to what experts can do for laypersons by means of their special knowledge or skill" (Goldman, 2018, p. 3, emphasis in original).Footnote 4 We note that, under this characterization of expertise, reputation is neither necessary nor sufficient for being an expert (Goldman, 2018). However, as we will comment on in this section, Croce's account of the epistemic expert successfully overcomes some difficulties that emerge from previous functionalist approaches to expertise (Goldman, 2018; Jäger, 2016). Therefore, we will focus on Croce's characterization of expertise as the starting point of our discussions. It reads as follows:

Definition 1

(Epistemic expert (Croce, 2019c)) A subject X is an epistemic expert in a domain D iff X has a better understanding of D than the majority of people and X possesses research-oriented abilities.Footnote 5

Let us discuss this definition in some detail. First, according to Croce, understanding is the epistemic success characterizing experts. Here, following Elgin (Elgin, 2007), to understand something is to be "in a better position to appreciate the significance of the truth, not merely to recognize that it is a truth" (Elgin, 2007, p. 36). Understanding is an epistemic success that admits of degrees and that can be evaluated along three dimensions: breadth, depth, and significance (Elgin, 2007). These dimensions help compare the extent to which different subjects understand questions in a domain and assess their epistemic expertise in that domain (Elgin, 2007). More precisely, breadth refers to the capability of integrating a true belief within a larger set of true beliefs, while depth refers to the number of propositions and inferential connections, as well as the ability to reconstruct causal chain links between beliefs. Finally, significance states that an agent has a better understanding of a question than another if the former has a better grasp of the relevance of the question within their system of beliefs than the latter (Elgin, 2007). Promoting understanding as the epistemic success that characterizes experts differentiates Croce's account from others in the literature on expertise. For instance, Goldman defends a veritistic account of expertise in a series of works (Goldman, 2001, 2018). According to Goldman, an epistemic expert is an agent possessing abilities grounded in cognitive states that allow them to entertain more true beliefs and/or fewer false beliefs within a doxastic domain than most people (Goldman, 2018).Footnote 6 (An easier epistemic success to achieve than understanding, we note.) However, Croce notes that a veritistic approach to expertise is prone to important objections, such as in the case of indifferent, lazy and skeptical epistemic agents (Coady, 2012; Scholz, 2009). These agents would avoid increasing their false beliefs by either not engaging in research within a doxastic domain or by systematically suspending their beliefs. Despite these behaviors, according to Goldman's criteria, they would still be experts (Croce, 2019a). Croce also points out that Goldman's veritistic approach to expertise is vulnerable to criticism concerning the basis of an expert's true beliefs. In fact, according to Goldman's approach, an expert may possess many true beliefs that are unjustified (acquired perhaps by chance or through simple memorization) without understanding them (Croce, 2019a). Further, Croce's and Goldman's accounts also differ on the type of abilities that an expert should possess. In fact, Goldman's expert possesses novice-oriented abilities, i.e., virtues that allow the expert to support the epistemic needs of laypeople (Goldman, 2018). However, as noted by Croce, this would suggest that epistemic agents such as a grandmother willing to explain to her grandchild how fish breathe, despite having only a vague knowledge of zoology, may be considered experts (Croce, 2019c). Further, the reliance on novice-oriented abilities would exclude from expertise, for instance, those individuals who are extremely competent in an epistemic domain yet do not regularly communicate their findings to peers, collaborate, or mentor others–see the "Professor Ivory Tower" example in (Croce, 2018).Footnote 7

In summary, by focusing on understanding to define expertise, Croce's approach circumvents these objections while retaining what we usually expect from epistemic experts, namely, to hold a better epistemic standing than most people in a given doxastic domain. Finally, the research-oriented abilities of Croce's expert are defined as:

“[...] virtues that allow an expert [...] to exploit their fund of knowledge to find and face new problems in their field of expertise (e.g., thoroughness, intellectual perseverance, intellectual courage, self-scrutiny, intellectual creativity, open-mindedness, intellectual curiosity, and autonomy)” (Croce, 2019c, p. 17-18).

The list is not exhaustive; for instance, firmness is also mentioned in the literature (Croce, 2018, 2019a). This said, the idea is that Croce's experts contribute to the epistemic progress of a doxastic domain. They achieve this goal by exercising abilities that allow them to improve their understanding in that doxastic domain and, as a result, their epistemic standing compared to the majority of people.

2.2 Epistemic Authority After Zagzebski and Croce

We turn our attention to the concept of epistemic authority in virtue epistemology. Then, we discuss the main differences between epistemic experts and authorities. We start by introducing Zagzebski’s account of epistemic authority.

Definition 2

(Epistemic authority (Zagzebski, 2012)) A subject X is an epistemic authority for a subject Y in the doxastic domain D if X does what Y would do if Y were more conscientious or better at satisfying the aim of conscientiousness, namely, getting the truth.

Here, the virtue of epistemic conscientiousness is "the quality of using our faculties to the best of our ability in order to get the truth" (Zagzebski, 2012, p. 48). It is the disposition to act to achieve the epistemic goals of forming true beliefs and avoiding false ones (Jäger, 2016). Then, an epistemic authority for someone is an individual who displays superior conscientiousness in a doxastic domain and, as a result, is more likely to get the truth. However, although an authority is more conscientious than a layperson and is more likely to form true beliefs in D, the authority may not be close to the truth either (Jäger, 2016).Footnote 8 (Note that Definition 2 suggests that being an epistemic authority for someone is subject-, domain-, and time-relative.) Zagzebski's justification of believing an epistemic authority is based on two theses, called the Preemption Thesis and the Justification Thesis for the Authority of Belief (Zagzebski, 2012). The first thesis focuses on the justification of the update of beliefs in the presence of an epistemic authority in a given doxastic domain. It states that an epistemic authority having a belief p is a reason for an agent X to believe p as well. This reason replaces the reasons that X has to believe p (and is not added to them) (Zagzebski, 2012). In fact, an epistemic authority possesses a "normative epistemic power which gives me a reason to take a belief pre-emptively on the grounds that the other person believes it" (Zagzebski, 2012, p. 296). The second thesis states that the authority of X's belief for Y is justified by Y's conscientious judgment that they are more likely to form a true belief and avoid a false belief if they believe what X believes (than if they try to figure it out by themselves) (Zagzebski, 2012). Together, the two theses provide Y with reasons to attribute authority and normative power to X.

More recently, Croce has distinguished epistemic expertise from authority by characterizing both concepts in terms of different types of epistemic abilities. According to Croce, an epistemic agent is an expert if the agent exhibits research-oriented abilities, and an authority if the agent exhibits novice-oriented abilities (Croce, 2019a). These novice-oriented abilities are virtues that allow an expert "to properly address a layperson's epistemic dependency on them" (Croce, 2018, p. 494). Such virtues include the sensitivity to the layperson's needs, intellectual generosity and empathy, and maieutic ability. Of these, the sensitivity to the layperson's needs is the easiest virtue to display, and it is necessary for exercising authority. In fact, this virtue involves identifying the epistemic needs of an epistemically disadvantaged agent and being available to provide support to them (Croce, 2019a). The maieutic ability, instead, is the hardest novice-oriented virtue to possess (Jäger, 2016). It is the ability to "foster the subject's overall insight into the problem" (Jäger 2016, p. 178) by displaying superior methodological skills and communicating insights to provide reasons against the layperson's grounds while justifying one's own (Jäger, 2016). Discussing the different types of (direct) interactions we may have with epistemically superior agents, Croce then differentiates between two types of epistemic authority, i.e., the authority of belief and the authority of understanding (Croce, 2019a). More precisely:

Definition 3

(Authority of belief (Croce, 2018)) A subject X is an authority of belief in a domain D for a subject Y iff X is more conscientious than Y–who considers X to be an epistemic authority–in D and X possesses and makes use of sensitivity to Y’s needs.

Definition 4

(Authority of understanding (Croce, 2018)) A subject X is an authority of understanding in a domain D for a subject Y iff X is more conscientious than Y–who considers X to be an epistemic authority–in D and X possesses and makes use of novice-oriented abilities.

Croce's authorities of belief and understanding are individuals who possess novice-oriented abilities that they use in an interpersonal relation with an epistemically disadvantaged agent. Here, an agent is epistemically superior to another if they are better positioned epistemically (Croce, 2019a). Further, they both make use of the sensitivity to the layperson's needs, as this ability allows an authority to acknowledge the epistemic dependence of the layperson and to offer support to this agent in an interpersonal and direct relation (Croce, 2018). However, an authority of understanding makes a more extensive use of novice-oriented abilities than one of belief (Croce, 2018). In fact, such an authority necessarily requires a number of different abilities to promote the intellectual growth of an epistemically dependent agent in a structured and efficient way.

Our brief excursus in the literature on virtue epistemology reveals that the distinction between epistemic experts and authorities extends beyond mere terminology. These two forms of epistemic superiority are characterized by different abilities and relations with epistemically inferior agents. As a result, not all experts are authorities and not all authorities are experts.Footnote 9 Being an expert is to be in an epistemically superior position compared to the majority of subjects in a domain and to possess abilities that allow one to address different problems within that domain. In contrast, an authority is an epistemically privileged subject who displays a special attention to the epistemic needs of another individual. This is the case, for instance, of a grandparent explaining a scientific phenomenon to a curious grandchild, despite having only a basic understanding of the scientific principles involved (Croce, 2019c). Another key distinction lies in the nature of the relationship each has with other individuals. Expertise may involve a more distant or impersonal relationship, while authority requires a closer one. In particular, Zagzebski argues that "the expert is an authority only in a very weak sense, since the expert and the layperson who defer to her may have no relationship with each other" (Zagzebski, 2012, p. 5). To this end, she adds that having a personal relationship with "someone epistemically superior to us in some domain is a necessary condition for her to be able to acknowledge our epistemic dependence and needs" (Croce, 2018, p. 479).

3 The Epistemology of AI Systems

In Sect. 2, we explored some recent accounts of epistemic expertise and authority. In line with the literature on virtue epistemology, we considered the case of human agents and (direct or indirect) interpersonal relations only. Here, we propose a change of direction, turning our attention to AI systems and introducing Alvarado's perspective on AI as epistemic technology. Then, we show how different authors have tried to ascribe epistemic expertise and authority to AI systems in healthcare applications. Motivated by these discussions, we close this section by presenting our perspective on what the epistemic expertise and authority of AI systems might really entail.

3.1 AI, Epistemic Technology and Epistemic Enhancers

AI systems are computational artefacts that use machine learning (ML) methods to execute intended functions.Footnote 10 The systems can produce different outputs depending on their function, including numerical scores, labels, text snippets, images and even videos, as in the case of generative AI systems. To compute these outputs, an AI system requires a training dataset, a training engine, and a learned ML model.Footnote 11 A training dataset is a collection of digitized data that a training engine, which is essentially an optimisation algorithm,Footnote 12 uses to fit a set of parameters of the to-be-learned model. Once the ML model has been trained, it can generate outcomes when provided with new data samples. Note that, in common parlance, the terms '(trained/learned) ML model' and 'AI system' are often used interchangeably, as in a synecdoche. In fact, an AI system is an artefact comprising, in particular, a certain number of these ML models, which, in turn, constitute its logical engine. The outcomes computed by the models—which we call 'predictions' in what follows—are then shared with the system's users via dedicated interfaces, which may also include appropriate explanations of the reasons behind the generation of the prediction or other types of structured information that may support users' acquisition of knowledge and their decision-making. However, AI systems are more than prediction-generating tools. As underlined by Alvarado, AI is an epistemic technology, that is, technology designed, developed, and deployed to be used in epistemic contexts for manipulating epistemic content (e.g., data and propositions), and performing epistemic operations (e.g., computing predictions) (Alvarado, 2023, 2022) aimed at achieving a given specific epistemic purpose (e.g., inquiry).Footnote 13 Here, the term 'epistemic' "is meant to refer to the broader categories of practices that relate in a direct manner to the acquisition, retention, use and creation of knowledge" (Alvarado, 2023, p. 32, emphasis in original). Although many other technical artefacts, such as microscopes or calculators, are typically used in epistemic contexts, "when it comes to artificial intelligence, the epistemic element is still more profoundly interconnected to the technology [than in the case of other artefacts]" (Alvarado, 2023, p. 32). In this regard, according to Alvarado, AI is a paradigmatic example of epistemic technology (Alvarado, 2023). While we concur with Alvarado's view that AI holds a distinctive position among epistemic technologies, we believe this important point deserves further comment. First, we note that—once in production—technological artefacts such as an AI system using ML models to generate predictions, a calculator, and a computer simulation of the N-body problem share a fundamental similarity: they all perform computations. Namely, they return some output by applying deterministic rules to some input. An AI system computes predictions based on the computations encoded in its trained ML model. The calculator executes the implementation of arithmetic operations, while the simulation uses the implementation of the differential equations modeling the N-body system. All these artefacts are deployed in epistemic contexts for an epistemic purpose, and they manipulate epistemic content via epistemic operations. Then, we argue, the epistemically-relevant distinctions between these artefacts stem from what precedes their deployment, namely, their design process and the management of knowledge therein.
Here, the superiority of AI systems as epistemic technology descends from their essentially different ontology and, more specifically, from the nature of their constitutive training procedure. In fact, the design of calculators and simulations essentially follows a top-down approach, where the designer's knowledge dictates the rules for computations, which are intended to reflect the designer's intentions. In contrast, AI systems based on ML are not wholly predefined by the designer. Instead, the designer only defines constraints on the input–output behaviour, while the specific intended function is learned from a set of data through the training procedure. Therefore, the behaviour of a (trained) AI system is determined not only by its designers' intentions and knowledge, but also by the epistemic content within the training data used. Hence, crucially, the constitutive rules according to which the AI produces its predictions emerge as—often not directly intelligible—patterns and correlations that are mostly distilled from this content and only partially from the designers' knowledge. This relation with epistemic content and knowledge constitutes an essential epistemic divide between AI systems and other epistemically-relevant technologies, such as calculators and computer simulations.
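To make the contrast between top-down design and learned behaviour more tangible, the following minimal sketch (in Python, with invented data and purely illustrative function and variable names) juxtaposes a calculator-like artefact, whose rule is fully written by the designer, with an ML model whose decision rule is distilled from training data under designer-imposed constraints:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Top-down artefact: the rule is exhaustively specified by the designer.
def add(a, b):
    return a + b  # the computation reflects the designer's knowledge and intentions

# ML-based artefact: the designer only fixes the model class (constraints on the
# input-output behaviour) and the training engine; the rule is learned from data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))                       # digitized training data
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # ground-truth labels

model = LogisticRegression()   # the to-be-learned ML model
model.fit(X_train, y_train)    # the training engine fits the model's parameters

# The learned parameters, i.e., the 'constitutive rules' behind the predictions,
# were not written by hand but distilled from the epistemic content of the data.
print(model.coef_, model.intercept_)
print(model.predict(rng.normal(size=(1, 4))))              # a 'prediction'
```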

Given this, we understand in which sense AI systems are meant to be epistemic enhancers: they are artefacts that can enhance human agents’ capacities to acquire knowledge by extrapolating, converting, and augmenting epistemic content from data for them (Humphreys, 2004; Alvarado, 2023). (For instance, modern AI can predict the structure of proteins from sequences of amino acids, detect fraudulent transactions by analyzing statistical patterns in financial data, or convert textual input into desired images.) In this sense, the ability to enhance the epistemic capabilities of its human users is an objective property of an AI system that results from its data processing functionalities. However, the extent to which AI systems enhance the epistemic capabilities of their users and whether this enhancement is conducive to the achievement of their epistemic goal depend on the interaction at hand.Footnote 14 Then, if we consider AI systems that are particularly efficient at generating accurate predictions and enhancing the epistemic capabilities of their users over time, we may be tempted to attribute some sort of epistemic expertise or authority to them. This is what some authors have recently proposed. We discuss this point in what follows.

3.2 Expertise and Authority of Medical AI: A Recent Debate

Recently, the concepts of epistemic expertise and authority have made their way into the discourse on human-AI interactions, with special attention to medical AIs. These systems are designed to assist physicians in clinical decision-making. For instance, some systems use ML methods to classify medical images (Esteva et al., 2017) and predict the occurrence of sepsis in patients in the intensive care unit (ICU) (Georgevici & Terblanche, 2019). Fueled by an increasing amount of high-quality data and recent advances in ML methodologies (namely, deep learning), medical AI has demonstrated performance that often matches or exceeds that of human experts in various tasks (Bjerring & Busch, 2021; Esteva et al., 2017; Liu et al., 2019). The ability to accurately handle clinical cases has led different authors to discuss the epistemological and ethical problems arising from the use of medical AIs in shared decision-making. In this section, we show how two prominent works in the literature on biomedical ethics elaborate on these topics.

We start with Bjerring and Busch's (2021) work, where the authors discuss the emergence of the epistemic obligation that follows from the expertise of medical AI systems. In particular, the authors state:

“On the one hand, insofar as AI systems will eventually outperform the best practitioners in specific medical domains, practitioners will have epistemic obligation to rely on these systems in medical decision-making. After all, if a practitioner knows of an epistemic source that is more knowledgeable, more accurate, and more reliable in decision-making, she should treat it as an expert and align her verdicts with those of the source.” (Bjerring and Busch, 2021, p. 351)

Further, they add:

“Granted these assumptions about AI systems, it seems, as mentioned, clear that practitioners (will) have an epistemic obligation to align their medical verdicts with those of AI systems. Essentially, the relationship between practitioner and AI system is epistemically analogous to the relationship between practitioner and expert” (Bjerring and Busch, 2021, p. 354, emphasis in original)

Bjerring and Busch claim that from the warranted epistemic superiority of AIs in predicting medical outcomes—where the warrants certify more knowledge, higher accuracy and reliability in decision-making—an epistemic obligation descends, namely, to rely on the outputs of these systems. (Note that reliance is intended as the clinicians' action of aligning their medical verdicts with the predictions of the AI.) Then, if medical AI systems are experts and if from this expertise descends the epistemic obligation to rely on the AI's prediction, important epistemic and ethical consequences follow. In fact, the rise of AI systems in medical decision-making seems to be in contrast with patient-centered medicine, as the systematic reliance on the predictions of an AI system can undermine patients' autonomy (Bjerring & Busch, 2021). This is especially true for the case of opaque AI systems, whose technical and logical bases seem not to be explainable and understandable to the systems' users (London, 2019). As a result, the use of opaque medical AI systems in clinical decision-making can even lead to an epistemic loss in understanding and explaining clinical evidence (Bjerring & Busch, 2021).

Relatedly, Grote and Berens highlight the dynamics between medical experts and AI systems, granting the latter a form of epistemic expertise that justifies the discussion of 'peer disagreements' with the former (Grote & Berens, 2020). In fact, they state:

“Here, the underlying problem can be described as follows: both the clinician and the machine learning algorithm might be conceived as experts of sorts. Yet, they have been trained differently and they reason in very distinct ways. For the clinician, this poses a problem once we consider cases of peer-disagreement. Here, ‘peer disagreement’ describes cases of two (equally) competent peers with respect to a certain domain-related activity, whereby both parties disagree with respect to a certain proposition” (Grote and Berens, 2020, p. 207)

The authors suggest a case of ‘expert/expert problem,’ namely, a disagreement between experts applied to human-AI interactions. In fact, “both the clinician and the [AI system] might be conceived as experts of sorts.” (Grote and Berens, 2020, p. 207). They also note that “the involvement of current machine learning algorithms challenges the epistemic authority of clinicians” (Grote and Berens, 2020, p. 205). (Note the use of both terms ‘expert’ and ‘authority’ for clinicians.) However, they add that it is not always possible to assess the epistemic expertise of AI systems due to, for instance, their opacity (Grote & Berens, 2020). We will return to the relation between expertise and opacity in the forthcoming section.

3.3 Can AI Systems be Experts and Authorities?

Consider the following argument, which is adapted from the works of Bjerring and Busch (2021); Grote and Berens (2020): "if it is possible to ascribe some sort of epistemic expertise to AI systems, then it is possible to compare the expertise of human agents with that of AI, and an epistemic obligation, namely, to rely on the predictions of these systems, follows from their epistemic superiority". Despite the popularity of this kind of argument in the literature,Footnote 15 we show that an in-depth analysis of AI systems from the perspective of virtue epistemology leads us to different conclusions. Our thesis is that AI systems can be endowed—at best—with a very limited account of epistemic expertise from which no epistemic obligation follows. Further, although AI systems may eventually qualify as authorities of belief in the near future, we have reasons to claim that they will never qualify as authorities of understanding. Let us discuss these claims in some detail.

To ascertain whether an AI system can be endowed with some sort of expertise, we need to start by discussing what the AI can achieve as an epistemic technology. As discussed in Sect. 3.1, AI systems are technical artefacts designed to satisfy an intended function, namely, to compute predictions. For instance, if the problem is to quantify the risk that a patient develops sepsis while in the ICU, a medical AI can support physicians by providing a numerical quantification of the risk using patient features from electronic health records. The presumed expertise of AI systems is restricted to the type of problem (or task) addressed by their intended functions. This sets AI apart from human experts, who are instead required to be able to address a variety of different epistemic challenges.Footnote 16 Further, let us suppose that an AI is trained on a dataset \(\mathcal {D}\) such that each of its n samples, e.g., a patient admitted to the ICU, is characterized by an N-dimensional vector of features, e.g., physiological data, and a ground truth label \(y_i\). Let us suppose that the label \(y_i\) certifies an objective state of the world, e.g., that the i-th ICU patient actually developed sepsis 24 h after healthcare professionals collected their physiological data. Then, we may argue that, as the result of the training procedure, the system encodes the set of true propositions \(\{p_1,\dots ,p_n\}\), where \(p_i\)='the i-th sample in \(\mathcal {D}\) has ground truth \(y_i\)', as a weak knowledge basis. (Here, weak knowledge is true beliefs, following Goldman (Goldman, 2001).) If n is large–as is common in medical AI applications, where systems are possibly trained on millions of records–then the AI may encode a larger number of true propositions about samples similar to those in \(\mathcal {D}\) than a human agent may form beliefs about. Then, in virtue of the dataset and the procedures used during its training, and if the system's accuracy is high enough, we can concede that the AI is 'more knowledgeable' than most human agents, as remarked by Bjerring and Busch, although only relative to the task that the AI supports and given the dataset used to train it.
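As a toy illustration of this 'weak knowledge basis' (a hypothetical sketch with invented data and names, not an actual medical dataset), each record of \(\mathcal {D}\) pairs a feature vector with a ground-truth label, and the corresponding true proposition \(p_i\) simply records that pairing:

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 5, 6                          # toy sizes: n samples, N-dimensional features
features = rng.normal(size=(n, N))   # e.g., physiological data of ICU patients
labels = rng.integers(0, 2, size=n)  # ground-truth labels y_i (e.g., sepsis within 24h)

# The weak knowledge basis: one true proposition p_i per training sample.
propositions = [
    f"the {i}-th sample in D has ground truth y_{i} = {labels[i]}"
    for i in range(n)
]
for p in propositions:
    print(p)
```

In real medical applications, n may reach millions of records, which is the sense in which the trained system can encode more true propositions of this kind than a human agent could ever form beliefs about.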

This said, further characteristics of AI systems set them apart from what we expect from epistemic experts. The one we are concerned with here relates to what is called the Stable-World Principle (Katsikopoulos et al., 2021). This principle states that ML models and, a fortiori, AI systems that use their predictions to assist humans in their decision-making, may consistently outperform human decision strategies and heuristics in so-called 'stable situations.' These are contexts in which there is little or no uncertainty about the evolution over time of the data used by ML models: the past is a reliable predictor of the future (Katsikopoulos et al., 2021). By contrast, 'unstable' environments, i.e., contexts where the data distributions that are relevant to the ML models change substantially over time—a phenomenon called concept drift in the computer science and ML literature (Quinonero-Candela et al., 2008; Žliobaitė et al., 2016)—pose a serious risk to the deployment and use of ML methods.Footnote 17 Since, in most applications, AI systems operate in unstable environments,Footnote 18 their accuracy typically decreases over time. Then, as accuracy is the primary measure of the presumed expertise of AI systems, its decrease over time affects the quality of the system's knowledge basis and its ability to appropriately answer new questions in its domain of applicability. This affects their being knowledgeable in that domain and the possibility of being recognized as such by other agents.
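The effect of concept drift on accuracy can be simulated in a few lines. In the hedged sketch below (the data and the drift mechanism are invented for illustration), a model is trained in a 'stable' world and then evaluated both on data drawn from the same distribution and on data whose labelling rule has drifted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def sample(n, w):
    """Draw n labelled samples from a world whose labelling rule is given by w."""
    X = rng.normal(size=(n, 3))
    y = (X @ w > 0).astype(int)
    return X, y

w_past = np.array([1.0, 1.0, 0.0])              # the world at training time
X_train, y_train = sample(2000, w_past)
model = LogisticRegression().fit(X_train, y_train)

# Stable situation: the past is a reliable predictor of the future.
X_test, y_test = sample(1000, w_past)
print("accuracy without drift:", round(model.score(X_test, y_test), 2))

# Unstable situation: the labelling rule has drifted since training.
w_drifted = np.array([0.0, 1.0, 1.0])
X_drift, y_drift = sample(1000, w_drifted)
print("accuracy after drift:  ", round(model.score(X_drift, y_drift), 2))
```

In this toy setting the accuracy drops from nearly perfect to roughly two thirds once the environment changes, which is the kind of degradation of the system's knowledge basis discussed above.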

The above considerations suggest that AI systems are repositories of extensive, weak knowledge whose quality, i.e., the number of true propositions, typically degrades over time. This form of weak expertise is limited to the domain of all epistemic attitudes toward predicting outcomes using empirical data, and it is under the guardianship of human experts. However, it does not qualify as a genuine example of Croce's epistemic expertise for two reasons. First, AI systems do not possess the ability to understand. They are not in the position to recognize that something is true, let alone to appreciate the significance of the truth, as would an agent that understands something (Elgin, 2007). Further, they do not act on knowledge as we would expect from an agent who understands: they just predict (more or less correctly) a state of the world in virtue of the set of true propositions originally encoded in their training dataset, which, in turn, provides an estimate of the ML model parameters needed to compute each prediction. Second, it is not possible to ascribe research-oriented abilities to AI systems due to their lack of mental states. The effective simulation of research-oriented virtues, such as intellectual curiosity and creativity, as well as epistemic autonomy, seems to lie beyond the capabilities of modern AI, even considering recent technologies such as those using large language models (e.g., OpenAI's ChatGPT or Google's Gemini services). In itself, an AI system can neither think nor initiate mental states such as beliefs, emotions, desires or intentions. That is, it lacks cognitive agency. In saying this, we are not denying that AI systems do things for an epistemic community and its members. In fact, as made explicit in Sect. 3.1, they can shape the epistemic capacities of the humans interacting with them. This notwithstanding, we refrain from using the term 'agency' to describe the things AI systems do and the effects they have on humans, their practices, and the concerned domain (environment) of application.

If AI systems do not possess epistemic expertise, then issues such as the comparison of expertise (Bjerring & Busch, 2021) (including expert/expert problems) and their epistemic as well as ethical implications (Grote & Berens, 2020) should also be rethought. What we can design is a controlled comparison of the performance of a human expert and that of a system that executes a (precisely defined) predictive task with a history of high accuracy. However, no comparison of (degrees of) epistemic expertise is possible. As a result, we argue that no genuine conflict between experts can arise in human-AI interactions. Further, without expertise, it is not clear what epistemic obligations should follow from the predictions of an AI system. In our opinion, the weak account of expertise that we are willing to concede to AI systems does not imply any obligation (e.g., to rely on the prediction of the AI). In other words, it is not the accuracy on historical datasets that provides reasons to a rational agent to rely on the AI's outcomes at the time of decision-making.

Finally, we argue that, in some cases, AI systems may be ascribed the authority of belief, but not of understanding. In fact, although the epistemic superiority of an AI can be difficult to ascertain due to the time sensitivity of the system's accuracy, there are contexts in which human-AI interactions unfold in such a way that it seems reasonable to assume this superiority exists (e.g., AI applications that are robust to concept drift, such as in manufacturing).Footnote 19 Then, a historically accurate AI system provides reasons to believe that it holds an epistemically advantaged position with respect to a human agent when it comes to predicting an outcome using information encoded in certain digitized data. (This does not imply expertise, as noted above.) However, AI systems lack novice-oriented abilities, as they have no mental states. One may hope that fine-tuned and extensively tested conversational agents may simulate the sensitivity to the epistemic needs of laypeople in the (near?) future. However, more complex virtues, such as the maieutic ability, seem to lie beyond the capabilities of modern AI systems. In general, authority requires at least the ability to identify the epistemic needs of laypeople, provide them with support, and guide them to understanding in various degrees. Conversations are social interactions that depend on many contextual factors that change over time, often in an unpredictable way.Footnote 20 (Think, for instance, of how a conversation may evolve if an interlocutor suffers from a sudden mental health crisis. This is a scenario that is typically hard to manage for therapeutic conversational agents.) This variability is in tension with the aforementioned Stable-World Principle, and there might be no appropriate way to model it with the kind of mathematics underlying current research in AI–see, e.g., (Landgrebe and Smith, 2023, Chapters 10-11). In addition, the epistemic opacity of AI systems seems to further preclude the possibility of fostering understanding in a number of human-AI interactions, hence preventing the conscientious ascription of the authority of understanding to these systems. The reader should be warned: the term 'epistemic opacity'Footnote 21 does not actually have a univocal and well-defined meaning but refers to a multi-dimensional, plural and (epistemic) context-dependent problem that might therefore take different forms and have different, nonexclusive, sources.Footnote 22 Opacity may in fact concern the structural properties of AI systems, and therefore the capability of understanding the different components of an AI system; the format used by the system to store and represent the learned information, and thus its interpretation; and the use of the system to model and explain phenomena (Facchini & Termine, 2022). We refer to (Facchini & Termine, 2022) for a taxonomy of opacity and more details on its contextualization. Unsurprisingly, a solution to opacity that would encompass different types of ML models and satisfy the epistemic expectations, e.g., understanding, of different classes of stakeholders is yet to be found.Footnote 23

4 Hybrid Agents in Human-AI Interactions

In the previous section, differently from what has been proposed in recent works on medical AI, we argued that AI systems are neither epistemic experts nor authorities of understanding. They can be ascribed only a weak form of expertise and the authority of belief. In our discussions, we highlighted that, fundamentally, AI systems lack the key virtues, i.e., research (expert)- and novice-oriented abilities, that characterize the two different forms of epistemic superiority. Motivated by this negative finding, we now try to convince the reader that, given the context in which a human-AI interaction unfolds, there exists an agent that cannot be identified with either the human or the AI but can be endowed with cognitive and epistemic capabilities and, in some cases, genuine forms of epistemic expertise as well as authority. In what follows, we discuss under which conditions this agent emerges from the interaction of a human and an AI system and elaborate on its forms of epistemic superiority in two examples.

4.1 What is a Hybrid Agent?

Distributed Cognition (DC) theory (Hutchins, 1995b; Salomon, 1997) can help shed some light on our promised perspective. DC addresses the problem of characterizing the cognitive systems and their processes that emerge from the interaction between humans and technology. According to DC, in these interactions "a cognitive process is delimited by the functional relationships among the elements that participate in it, rather than by the spatial co-location of the elements" (Hollan et al., 2000, p. 175). Then, DC suggests that cognitive states and processes can sometimes be conceived as being distributed across humans and the artefacts they interact with, ascribing the possibility of cognition to systems that are not individuals (Hollan et al., 2000).Footnote 24 The distribution of these cognitive processes involves the coordination of actions between internal (e.g., the human agent and the artefact) and external (e.g., the environment in which the interaction takes place) structures over time (Hollan et al., 2000). In particular, DC promotes the view that in the cognitive systems resulting from the interaction between humans and technology, cognitive events are not limited to the manipulation of symbols in individuals' brains, but also comprise cases that involve the cognitive capabilities of different agents in the system and the "transmission as well as transformation of information between them" (Hollan et al., 2000, p. 177). In summary, according to DC, hybrid cognitive systems composed of human agents and artefacts emerge from the interaction between humans and technology. These hybrid systems are capable of performing distributed cognitive tasks and of acting upon the environment, becoming de facto genuine cognitive agents.Footnote 25

How does DC's hybrid agent perspective fit into the specific case of human-AI interactions? To address this question, let us start by characterizing these interactions. So far, with the term 'AI system,' following Alvarado, we meant an example of epistemic technology, that is, a technical artefact designed to be used in epistemic contexts and that manipulates epistemic content with its epistemic abilities (Alvarado, 2023) (see Sect. 3.1). A socio-technical perspective on AI allows us to characterize the epistemic context in which AI systems operate by suggesting that each system interacts with its stakeholders within a set of rules or norms which, in turn, are dictated by its social institution (Van de Poel, 2020; Benk et al., 2022). We note that this view is in accordance with the DC tenet stating that cognitive processes develop in cultural environments and that culture shapes these processes (Hollan et al., 2000). These rules and norms characterize the epistemic context in which humans and AI interact, governing the type of practices and behaviours they engage in.Footnote 26

The key point is that, beyond being an example of epistemic technology embedded in a socio-technical context, AI can be both a cognitive and an epistemic enhancer for its human users (Hernández-Orallo & Vold, 2019; Alvarado, 2023, 2022). 'Cognitive' since these systems carry the potential to enhance the cognitive capabilities of human agents (e.g., by improving their memory or by supporting novel perceptual processes) (Hernández-Orallo & Vold, 2019). 'Epistemic' as they substantially act as enhancers of users' capability to acquire new knowledge, as discussed in Sect. 3.1.Footnote 27 In both cases, through an appropriate interaction with a human agent, the AI can enhance cognitive and epistemic capabilities in such a way that the human agent can achieve their epistemic goal, and this positive (cognitive or epistemic) effect would be lost if the system were not present (Hernández-Orallo & Vold, 2019). Therefore, in those human-AI interactions where the human agent's cognitive and epistemic enhancement by the AI is conducive to the achievement of their epistemic goal, the system co-shapes a successful cognitive and epistemic process that is distributed across the agents participating in the interaction. The result, following DC theory and adapting it to an epistemic context, is the emergence of a novel cognitive and epistemic agent from that human-AI interaction. We define this agent as follows.

Definition 5

(Hybrid Agent) Let X be a human agent, M an AI system, and C a context where the interaction between X and M unfolds. We say that, in C, the agent emerging from the interaction of X with M is a hybrid cognitive and epistemic agent (shortly: hybrid agent), whenever X successfully appropriates M in C, that is, X achieves their epistemic goal through the interaction with the epistemic enhancer M in C.

The emergence of a hybrid agent is a success of a given human-AI interaction. The hybrid is a 'novel' agent in the given interaction that is distinct from the human and the AI, which, in turn, are its proper parts. As dictated by the complementarity principle of extended cognitive systems (Sutton, 2010), these proper parts "play different roles and have different properties while coupling in collective and complementary contributions to flexible thinking and acting" (Sutton, 2010, p.194). In particular, since it emerges as the result of a human subject successfully appropriating the epistemic technology to achieve a shared epistemic goal, the hybrid agent is an epistemic agent (Heersmink & Knight, 2018). Its agency allows it to act on the environment by communicating to third parties (e.g., patients waiting for an AI-infused diagnosis), making decisions, or taking other actions, such as storing or sending data, that support or result from the agent's epistemic goal. Definition 5 introduces the unit of analysis for distributed cognition in human-AI interactions, namely, an individual using an AI. It allows us to discuss situations where an expert, e.g., a physician, uses the system to support their decision-making. These are particularly relevant for the discussions of this work. However, more complex interactions, such as those where multiple humans use the system, can be formalized. In general, different interactions may arise for the same AI system in a given context over time. Then, the same AI may participate in the distributed cognitive and epistemic processes of different hybrid systems over time due to different interactions with multiple stakeholders.

Let us turn our attention to the characterization of the process of appropriation. In Definition 5, the 'successful' appropriation of an AI system by a human agent is conducive to the achievement of the epistemic goal of their interaction. This goal depends on the interaction and the AI that humans use to achieve it. (Examples are identifying statistical patterns that may reveal cases of discrimination in data, analysing the most important features in an ML model, and explaining an accurate prediction for a given outcome.) The idea is that, despite AI systems having data processing functionalities to enhance their users' epistemic capabilities, as discussed in Sect. 3.1, the achievement of the (shared) epistemic goal depends on how the appropriation unfolds. This process, also known as 'coupling' or 'integration' (Sutton, 2010; Heersmink, 2015), is a complex one. For instance, two users may equally successfully appropriate a given AI system through different sequences of actions. Further, a change to the interface of an AI system (e.g., the introduction of a functionality to support transparency, such as access to documentation on training data) may be key for the successful appropriation of the AI by a user, ceteris paribus. Then, appropriation may unfold in different ways, as it develops within a set of practices and behaviours that are dictated by the context of the interaction. These, in turn, can also be transformed and reshaped by how the appropriation unfolds over time. (Although, in general, inside the boundaries created by the rules and norms characterizing the socio-technical perspective of the AI system.) Then, the challenge is to articulate how the successful appropriation of an AI system by a human agent may occur and, as a result, how a hybrid agent may emerge, following Definition 5. Here, we agree with Heersmink's view on extended cognitive systems (Heersmink, 2015),Footnote 28 which states that the way in which humans and artefacts successfully integrate into wider cognitive systems is a matter of degree and is best seen as a multidimensional phenomenon in which the appropriation of the artefact by the human agent varies along several non-hierarchical dimensions, which are either intrinsic to the human and the AI or relational. We believe that this argument can also be applied to the case of epistemic technology and its successful appropriation. In fact, we argue that, although each human-AI interaction with a successful appropriation of the system gives rise to a hybrid agent, the degree of (successful) appropriation of the system by its user depends on different dimensions. For instance, abilities such as the (level of) experience with technology, willingness to use new technology, intellectual curiosity, and other research-oriented virtues are examples of appropriation-conducive dimensions that are intrinsic to the human agent. For the AI system, its accuracy, technical robustness (e.g., its robustness against perturbations of input data that may disproportionately affect its predictions), and, more generally, its objective trustworthiness (Ferrario, 2023) are properties that may support its successful appropriation by human users. Relational dimensions comprise the accessibility and user experience levels of the AI, as well as its degree of transparency, including the provision of explainability methods or the quality of the training the human may have received before interacting with the AI to support their decision-making in real-world use cases.
Altogether, these properties show potential to mitigate, in particular, the type of opacity of the AI that is relevant in the context of the interaction (Facchini & Termine, 2022) and that may hinder the successful appropriation of the system, i.e., the case in which the enhancement of the epistemic capabilities of a user by the AI is not sufficient for the human agent to achieve their epistemic goal. This could be the case, for instance, when the human agent did not receive enough training to use the (new) AI system, or when the explanation of a prediction is unrevealing, possibly due to a subpar implementation of the explainability method for the class of stakeholders the user belongs to. In general, we believe that a human agent and an AI ranking high on all these dimensions give rise to a better (successful) appropriation of the latter by the former than an agent and a system ranking low on them. This said, we refrain from modeling the relation between these dimensions, their interactions, and the degree of successful appropriation in a quantitative way. (Note that, depending on the definition of the epistemic goal, this model could also include information describing the epistemic expectations of third parties, that is, the 'public' (Ferrario & Loi, 2022).) What we can provide, however, is a 'minimum quantitative criterion' for the identification of the successful appropriation of any AI system by its users. This simple criterion complements the context-relative, qualitative, and quantitative success criteria as well as the performance measures that may be needed to certify the achievement of the epistemic goal. The criterion, which is tantamount to what in the literature on human-AI interactions is called complementarity,Footnote 29 reads as follows:

Definition 6

(Successful appropriation: minimum quantitative criterion) Let X be a human agent, M an AI system, C the context where the interaction between X and M unfolds, and S a set of propositions.Footnote 30 If, for any prediction with respect to S of X supported by M, X's accuracy is higher than or equal to the accuracy of the prediction with respect to S that X and M would achieve independently of each other if no interaction took place between them, then X has successfully appropriated M in C with respect to S.

Definition 6 states that the successful appropriation of any AI system leads to an epistemic gain, namely, a higher or equal accuracy compared to the alternative scenario where no appropriation occurs. As noted above, an (achieved) epistemic goal may have specific success criteria and performance evaluations. This said, the criterion states that the hybrid agent is nonetheless better than its proper parts at generating predictions for cases that are relevant for the interaction at hand. This is what, at its core, the epistemic success of a hybrid entails. (For instance, the hybrid is more accurate at spotting which cases in a dataset are actually cases of discrimination, or at predicting which patients have a higher risk of onset of sepsis.) In fact, by Definitions 5 and 6, it follows that the hybrid agent is at least as accurate as the most accurate agent in a given interaction. In particular, the hybrid agent is correct for each proposition with respect to which the human agent and/or the AI system are correct. The 'minimum criterion' in Definition 6 is simple enough to be used by researchers to identify the emergence of hybrid agents and characterize it statistically by implementing appropriate empirical protocols. However, it does not explain how the superior accuracy of the hybrid is achieved, justified, and possibly communicated to others. In other words, it is not potent enough to discriminate between the different types of successful appropriation, e.g., those cases where the hybrid emerges due to accidental, coincidental, or fortuitous circumstances, and those where it emerges as the result of epistemically well-grounded processes. This challenge suggests investigating whether the different dimensions that characterize the process of successful appropriation can become indicators of its reliability, similarly to the discussions on reliabilistic justifications of the epistemic stances on the outcomes of computational systems (Durán & Formanek, 2018; Ferrario, 2023). This topic, however, would lead us far from the goal of this work and, for that reason, is left for future research.
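To illustrate how such an empirical protocol might check the minimum criterion of Definition 6, consider the following hypothetical sketch, in which the ground truth and the verdicts of the human alone, the AI alone, and the AI-supported human are invented for the purpose of the example:

```python
import numpy as np

# Hypothetical study data: ground truth for a set S of propositions, plus the
# verdicts of the human alone, the AI alone, and the AI-supported human.
ground_truth  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
human_alone   = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
ai_alone      = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
human_with_ai = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])

def accuracy(verdicts):
    return float(np.mean(verdicts == ground_truth))

acc_h, acc_m, acc_hm = map(accuracy, (human_alone, ai_alone, human_with_ai))
print(f"human alone: {acc_h:.2f}, AI alone: {acc_m:.2f}, human + AI: {acc_hm:.2f}")

# Minimum quantitative criterion (Definition 6): the appropriation is successful
# (with respect to S) if the supported human is at least as accurate as either
# the human or the AI would be on their own.
print("successful appropriation:", acc_hm >= max(acc_h, acc_m))
```

As noted above, such a check identifies the emergence of a hybrid agent, but it says nothing about how, or how reliably, its superior accuracy is achieved.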

4.2 On the Epistemic Expertise and Authority of a Hybrid Agent

Finally, we show that a hybrid agent, unlike AI systems, can be endowed with genuine forms of epistemic expertise and authority. Our line of reasoning is inspired by Croce’s argument on the epistemic superiority of collective agents, such as research teams or groups of professionals (Croce, 2019b).

In fact, like Croce, we choose a non-summativist approach to the ascription of epistemic abilities to the hybrid agent. We contend that a hybrid (as a collective agent) possesses the ascribed abilities if (1) the human and the AI jointly commit to the end of that virtue, and (2) the hybrid proves to reliably achieve that end (Croce, 2019b). The point here is that a hybrid agent can possess epistemic abilities even if one of its parts, i.e., the AI system, taken individually, fails to have them. However, differently from Croce’s argument, our collective agent comprises non-human parts. This proves to be a challenge for (1). Our solution is simple: the level of commitment to the end of an epistemic virtue \(\mathcal {V}\) by an AI system is the degree to which the AI can maintain the appropriate, \(\mathcal {V}\)-conducive system functionalities that it inherits by design or that emerge through its appropriation by human users. In parallel to the account of contractual trust in AI (Jacovi et al., 2021; Hawley, 2014) (Footnote 31), where the property ‘trustworthiness’ of an AI is defined as the set of functionalities (e.g., being accurate, technically reliable, safe, and accessible) that the system is supposed to maintain throughout its use, the commitment to an epistemic virtue is seen as an objective functionality that a system is designed to support throughout its appropriation by a user. The level of commitment to a virtue by the AI is then the degree to which the corresponding functionality performs throughout the use of the system. A remark: that an AI system is committed to a virtue, e.g., by design, neither descends from nor implies manifesting that virtue. For instance, an AI can be committed to the virtues ‘intellectual curiosity’ and ‘conscientiousness’ insofar as its designers programmed an interface that allows the users of the system to access additional information when they validate a prediction (e.g., an explanation of the prediction, the description of predictions ‘close’ to the one under examination, or an analysis of its robustness under small perturbations of the input data point). We do not say, however, that the system manifests the ability of being intellectually curious or conscientious. Further, not all virtues are committable to the same degree: the complexity of the functionalities required to commit to different research- and novice-oriented abilities varies. For instance, although the commitment to intellectual curiosity could be realized by designing the aforementioned functionality, the commitment to novice-oriented virtues such as ‘empathy’ or ‘maieutic ability’ seems to require a greater investment of resources. Then, if a human agent possessing research- or novice-oriented abilities successfully appropriates an AI system that is committed to those abilities to some level, the resulting hybrid agent is an epistemic expert or authority.

Having discussed the possibility of endowing hybrid agents with epistemic expertise and authority, we conclude our exposition with two examples. In general, different types of epistemically superior hybrid agents can emerge in real-world interactions with AI systems depending, for instance, on whether the human agents in the interactions are themselves experts or not. The first, paradigmatic example is inspired by Ferrario and Loi’s discussion of the ‘human+AI’ dyad and the role of explainability in fostering trust in AI (Ferrario & Loi, 2022). The hybrid agent emerges from the interaction of a human expert and an AI. The second example deals instead with a human who lacks epistemic expertise.

The technology-savvy and trained physician (Footnote 32). A resident physician assesses the risk of sepsis onset in the ICU patients of a big hospital on a regular basis. The physician is technology-savvy and trained in using medical AI systems. Therefore, he is regularly asked by colleagues to generate AI-infused predictions and to discuss these with them in support of their clinical work. Inspired by Miller’s recent ‘Evaluative AI’ paradigm (Miller, 2023), the physician even asked ML engineers to design a system functionality that, through a simple yet effective interface, allows him to explore and assess hypotheses based on the predictions computed by the AI. However, unbeknownst to the physician, the engineers have recently retrained the AI that predicts the occurrence of sepsis. The system now shows a positive bias towards patients with high blood pressure, who are assigned a disproportionately high risk. The physician proceeds with an initial assessment of the case at hand that results in a low level of risk, while the medical AI detects a high level instead. By appropriating the AI, the physician takes some time to analyse the prediction and re-weights the importance of the feature related to blood pressure. In virtue of this analysis, he decides to update his prediction to a case of mild risk and proceeds to explain the case to his colleagues. A more thorough assessment by other physicians and methods, e.g., the APACHE II score (Knaus et al., 1985), confirms the successful appropriation of the AI and the accurate prediction of the hybrid agent. In this example, the physician is an expert showing different research-oriented abilities, such as intellectual curiosity and epistemic humility. He is an authority of belief and understanding (when there is no emergency) for his colleagues. Not only does the hybrid agent show higher accuracy than the physician and the AI, but it also displays epistemic expertise and authority. This is because the AI system has been designed to support research- and novice-oriented abilities.

Digital grandfather. A grandfather is asked by his granddaughter: “How do I find the pole star in the sky at night?” The grandfather does not know how to answer, as he is no expert in astronomy. However, wishing to support the scientific curiosity of his granddaughter and, therefore, manifesting sensitivity to her epistemic needs, he relies on ChatGPT to articulate a well-posed argument. After all, he recently participated in an online course on ‘prompting’ large language models. After a couple of tries, the AI returns a convincing, step-by-step answer that reads as follows. First, the system starts with an important remark: the pole star is not unique and changes over time. It specifies that the current pole star is Alpha Ursae Minoris in the Little Dipper (Ursa Minor). Then, it introduces a method to spot the star: one needs to find the two front stars of the Big Dipper (Ursa Major), a constellation that is easily recognizable at night. By connecting these two stars with an imaginary line and extending it for about five times the distance between them, the pole star can be spotted with ease. By appropriating the system, the grandfather can start asking targeted questions to his granddaughter and arrive at the proposed answer dialogically. The appropriation of the system by the grandfather is successful, given that the emerging hybrid agent is at least as accurate as ChatGPT in that interaction. Also note that this scenario is different from one in which the grandfather consults a book on astronomy. In fact, although both a book and ChatGPT are used in the same specific epistemic context, only the latter is specifically designed, developed, and deployed to manipulate epistemic content through epistemic operations. Using a book, the manipulation of epistemic content is done by humans, not by the artefact itself. Thus, ChatGPT is an epistemic technology, whereas the book on astronomy is not. As a result, only the successful interaction of the grandfather with ChatGPT gives rise to a hybrid (cognitive and epistemic) agent. Finally, notice that neither the grandfather nor the hybrid composed of the grandfather and ChatGPT is an expert in astronomy. In fact, although it manifests research-oriented abilities, the hybrid agent does not have an understanding of astronomy (Footnote 33). However, in virtue of the displayed novice-oriented abilities, the hybrid agent is an epistemic authority for the granddaughter.

5 Final Remarks

Despite the impressive performance shown by AI systems in an increasing number of real-world applications, the ability to generate accurate predictions is not sufficient to ascribe to them either epistemic expertise or authority. In fact, we argued that these forms of epistemic superiority require a relation with understanding and a set of abilities that AI systems do not manifest. Our argument follows from a functionalist perspective on expertise and authority that requires ascribing sets of abilities to both notions, in line with the tradition of virtue epistemology. This said, in this work, we showed that there exist cognitive and epistemic agents, emerging from the interactions between humans and AI systems, that can become epistemic experts and authorities under certain conditions. They emerge whenever the epistemic technology, i.e., the AI, supports the enhancement of the human’s epistemic capabilities to the point that they can achieve their epistemic goal. We discussed which dimensions characterize the successful appropriation of an AI system and introduced a simple, quantitative criterion to identify such an occurrence. This criterion states that the hybrid agent is better than its proper parts at generating predictions. The epistemic expertise and authority of hybrid agents descend from a non-summativist approach to the manifestation of virtues by collective agents, which we adapted to the case where some group members are technical artefacts. Finally, if epistemic obligations descend from epistemic superiority, as, for instance, preemptivist accounts of authority suggest, these hybrid agents become the right subject of a discourse on trust in AI, as highlighted–although for different reasons, namely, to provide a case for explainability fostering trust in AI–in Ferrario and Loi (2022). Future research should investigate (1) the conditions that grant ‘reliability’ to the emergence of hybrid agents and their epistemic superiority, and (2) how the perception of the expertise and authority of the hybrid agent unfolds (e.g., by identifying proxies of expertise and authority), as well as, in particular, the conditions that may foster their appropriate assessment by third parties.