1 Introduction

The world is full of error, evil, and confusion. Much of it originates with false epistemic authorities, i.e., agents who fail to be competent or trustworthy (or both), yet manage to recruit — sometimes significant numbers of — credulous devotees. Often, this has bitter consequences. For example, in 2021 an Austrian politician from the right-wing populist party FPÖ advised anti-vaccine activists to consume certain medical products containing Ivermectin as a Covid-19 antidote. Ivermectin is primarily used to deworm livestock or, in humans, to treat ectoparasitic conditions such as scabies. Nevertheless, people stormed pharmacies, the media reported cases of serious poisoning, and for several weeks, horses could not be dewormed with Ivermectin compounds since these were sold out.Footnote 1 (My hunch is that enough might have been in stock had the anti-vaxxers who consumed the products recalculated the dosage and not taken quantities recommended for horses.) People put their faith in the bizarre forms of medication that misguided politicians without medical expertise fantasize aboutFootnote 2; sect leaders manage to drive their disciples into mass suicides or cajole them into committing brutal homicidesFootnote 3; conspiracy theorists set out to liberate the world from fictitious villains, like Don Quixote, who fought (and lost) a battle against windmills. Many other examples could be mentioned where confused or evil demagogues, gurus, populists, or other manipulative agents manage to trash people’s minds with humbug, bullshit,Footnote 4 or dangerous illusions. How come?

People frequently pick out and listen to the wrong epistemic authorities. Epistemic authority is a form of authority that we ascribe to agents or sources that enjoy, relative to us, an advanced epistemic position concerning relevant epistemic goods. Adopting a distinction from Dormandy (2020), we may divide these goods into “representational” and “recognitional” ones. Representational epistemic goods include true belief, justified belief, knowledge, understanding, and maybe also goods such as engaging in the right kind of inquiry (cf. Grasswick, 2020). We can call them “representational” because they constitute or produce cognitive states that represent, or aim at representing, the world as it is (Dormandy, 2020:13). When we pursue these goods, we can profit from the fact that genuine epistemic authorities have more of them than we do. If they share them with us, this can help us make progress in our own epistemic endeavors.

Recognitional epistemic goods, in contrast, are goods that givers of information — for short, speakers — receive from their interlocutors. These include being recognized as a knower, as someone with an advanced level of understanding, and so on. These are additional epistemic goods since, obviously, the mere fact that a speaker enjoys an advanced epistemic position relative to their hearer does not entail that the latter will realize this and treat the speaker accordingly.

I call epistemic authorities with both the relevant representational and recognitional goods recognized epistemic authorities. The problem is that all too often people fail to recognize their true or genuine epistemic authorities. They misidentify those whom they should believe and mistake false authorities for genuine ones. This may have various sociological or psychological causes. However, in what follows, I will not discuss the psychology of false authority but instead develop analyses and taxonomies that illuminate the topic from a social–epistemological point of view.Footnote 5 I begin by arguing for what I call a social-role account of genuine epistemic authority, according to which epistemic authorities are in a position to help their clients accomplish their epistemic ends (section 2). I then distinguish putative authorities of various kinds who fail to meet the requirements of genuine epistemic authority and constitute what I call “false (epistemic) authorities.” Among such false authorities, I discuss unintended false authorities — people who do not pretend to be in an advanced epistemic position — and fake authorities, i.e., people who, in spite of not fulfilling the desiderata of genuine authority, wish to be treated as being in a superior epistemic position. Fake authorities subdivide into what I call epistemic quacks, i.e., agents who falsely believe themselves to be competent, and epistemic charlatans, i.e., people who do not believe themselves to be competent or even know that they are incompetent (section 3).

On closer inspection, it turns out that this initial picture is still incomplete. Consider what Jennifer Lackey has dubbed predatory experts, i.e., “experts who use their epistemic authority as a cover for predatory behavior within the domain of their expertise” (Lackey, 2021:144). Lackey’s chief example is the case of the convicted serial child molester Larry Nassar, who for many years served as a team doctor of the US women’s national gymnastics team and in this role (and elsewhere) sexually assaulted hundreds of girls and young women. Could such experts constitute genuine epistemic authorities? Lackey’s characterization suggests a positive answer. This view is supported by the fact that, concerning the relevant subject matters, people such as Nassar are in an advanced epistemic position relative to their clients. Nassar is a trained physician, after all; his patients were not. I argue, however, that predatory experts nevertheless fail to constitute genuine epistemic authorities, since they systematically deceive their clients, thereby thwarting the latter’s epistemic goals (section 4). In order to accommodate this observation, we need to supplement the competence-oriented account of epistemic authority so far developed with a performance constraint. I argue for a reliability condition to the effect that genuine epistemic authorities not only have the epistemic abilities to help their clients, but that these abilities also reliably manifest themselves in the authority’s social-epistemic behavior (section 5). A short conclusion summarizes my results and outlines questions for further research (section 6).

2 A Social-Role Account of Epistemic Authority

2.1 Epistemic Authority: First Steps

A natural way to start exploring false epistemic authority is to begin with true, or genuine, epistemic authority. This notion has sparked some controversy.Footnote 6 Some authors propose a purely epistemic or “subjective” definition. For example, in a classic study, C.S. Lewis claims that “[b]elieving things on authority only means believing them because you have been told them by someone you think trustworthy” (2009:62, my emphasis). More recently, Sofia Bokros, focusing on credences, maintains that “A is an epistemic authority for S with respect to p iff S judges A to have a higher expected accuracy with respect to p than S takes herself to have independently of following A’s authority” (2021:7, my emphasis).Footnote 7 However, that someone is thought or judged to be in an epistemically advanced position does not entail that they actually have that status. Relatedly, such definitions do not require epistemic authority to be an asymmetric relation. If it depends solely on the agent’s belief whether or not a given source is an epistemic authority for them, A could be S’s authority, while S is A’s authority (at the same time, and regarding the same topic and the same epistemic good). Arguably, this is an unwelcome result which sits ill with our pre-theoretical understanding of the notion.Footnote 8

In view of such problems with purely subjective definitions, one may opt for a purely objective account according to which epistemic authorities are people (or, more generally, sources) who actually enjoy the relevant epistemic advantages over those who seek advice from them, whether or not the latter recognize this. This would preserve asymmetry, and it would allow us to disagree with people about their epistemic authorities. If someone who drank disinfectant or consumed horse dewormer justified this by deferring to Trump or to Kickl (the Austrian FPÖ politician mentioned above) as their authorities, the reply could simply be that these people are in reality not the agent’s authorities on medical matters, for the simple reason that they fail to enjoy the required epistemic competence in these fields.

Nevertheless, it may be conceded that proponents of purely subjective accounts rightly capture an agent-relative, subjective connotation of the notion of epistemic authority. We do say things such as: “A was your authority on the matter, all right, but he shouldn’t have been!” We can account for this by adopting a proposal from Jäger (forthcoming) which combines both the subjective and objective connotations of what he calls “recognized epistemic authority”:

EA1: A given epistemic source A is a recognized epistemic authority for an epistemic agent (or group of epistemic agents) S at time t, and relative to some set of propositions or subject matter p and set of epistemic goods G, iff S correctly believes A to be in a significantly advanced epistemic position relative to S at t, concerning p and G.Footnote 9

I shall work with this definition for now, yet argue below that further fine-tuning is called for.

Calling A a “source” is liberal enough to cover non-human players, such as intellectual or cultural traditions and AI systems. Whether, and if so in which ways, the authority such systems might enjoy is parasitic upon human authority is a matter of debate. Such complexities are not immediately relevant for present purposes, however, so I shall set them aside and focus on the paradigm of human epistemic authority.

The qualification that S (correctly) believes A to be in an advanced epistemic position reflects the subjective aspects outlined above. Still, according to EA1, A fails to constitute an epistemic authority for S if A is not in fact epistemically better off than S; S’s belief to this effect must be true. Moreover, the source’s superiority is topic- and time-relative and relative to specific epistemic goods. Consequently, A may, at (or during) t, be S’s authority concerning some subject matter and epistemic good while S is, at the same time, A’s authority on another topic, or on the same topic but with respect to another epistemic good.Footnote 10 (For example, one scientist may harbor more true beliefs in a certain field of inquiry while another has deeper understanding.) Moreover, at some other time t’, things may have switched and S may have become A’s authority concerning p and G. Finally, requiring a significantly advanced epistemic position reflects that there must be some appropriate epistemic distance between the parties. For example, in cases where true belief is the central epistemic goal and A has 90% true beliefs in a given field while S is right 87% of the time, we would not normally consider A to be S’s epistemic authority.Footnote 11
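One way to make the structure of EA1 perspicuous is the following rough formalization (a minimal sketch only; the abbreviations are mine). Let Adv(A, S, t, p, G) abbreviate “A is in a significantly advanced epistemic position relative to S at t, concerning p and G,” and let B_S mark S’s belief:

\[
\mathrm{EA1}(A, S, t, p, G) \iff B_S[\mathrm{Adv}(A, S, t, p, G)] \;\wedge\; \mathrm{Adv}(A, S, t, p, G)
\]

Reading “correctly believes” as belief plus truth makes explicit that both the subjective component (S’s ascription) and the objective component (A’s actual epistemic superiority) are required, each relativized to t, p, and G.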

2.2 Expertise Versus Authority and a Social-Role Account

EA1, I maintain, captures essential aspects of the notion of (genuine) epistemic authority, but it is still silent on one important detail. In order to see this, note, first, that it represents what may be called a competence approach to epistemic authority. The core idea is clear enough. If the epistemic goal is, for example, true belief, then according to the above definition, A is an epistemic authority vis-à-vis S only if A has more true beliefs (and, perhaps, fewer false onesFootnote 12) than S concerning the relevant subject matter. These may include not only object-level, first-order beliefs but also methodological beliefs about how to collect and assess evidence, how to draw appropriate conclusions, how to correctly base the relevant beliefs on the evidence, and so forth.

Second, note that according to EA1, not all authorities are experts and not all experts are authorities. Consider for instance one of the most influential accounts of expertise, the one proposed by Alvin Goldman. He argues that:

S is an expert about domain D if and only if (i) S has more true beliefs (or high credences) in propositions concerning D than most people do, and fewer false beliefs; and (ii) the absolute number of true beliefs S has about propositions in D is very substantial (Goldman, 2018:5).Footnote 13

According to this account, an expert’s true (and false) beliefs must meet certain quantitative thresholds relative to large epistemic communities. EA1 deliberately excludes such thresholds for the notion of epistemic authority. We should be able to say, for example, that parents with an average, non-expert education in algebra typically serve as authorities in algebra for their young children. Or to borrow an example from Croce (2019): In order for a grandmother to serve as an epistemic authority for her grandson about how fish breathe, she need not be an ichthyologist. Authority is a subject-relative notion. Whereas Goldman’s conditions (i) and (ii) make sense for expertise, they should not be incorporated in a definition of epistemic authority.

In addition to this threshold account, Goldman (2018) considers another take on expertise, and the ideas he lays out there, I submit, do capture a feature that also applies to epistemic authority. He suggests that:

S is an expert in domain D if and only if S has the capacity to help others (especially laypersons) solve a variety of problems in D or execute an assortment of tasks in D which the latter would not be able to solve or execute on their own. S can provide such help by imparting to the layperson (or other client) his/her distinctive knowledge or skills (Goldman, 2018:4).

Others concur. Quast (2018) says, following Goldman, that “the main function of expertise is sharing some knowledge for the benefit of someone else” (2018:13). David Coady argues that:

[an expert] is someone laypeople can go to in order to receive accurate answers to their questions (Coady, 2012:30).

The exact nature of this social-epistemic function is controversial. Croce (2018b, 2019) argues that we should not, or not exclusively, focus on expert-layperson relations but also consider experts as epistemic agents who promote disciplinary progress in the expert community. This fits Coady’s general description well: After all, receiving accurate answers to one’s questions is also a goal of expert-expert discourse; indeed, it is a goal of testimonial exchange generally.Footnote 14

I will not go into the general debate about the nature of testimony here; rather, I wish to apply this functional account of expertise to epistemic authority. Although not all experts are epistemic authorities and not all epistemic authorities are experts, the desideratum that epistemic superiors of this stripe be able to help others with their epistemic goals also applies to authorities. This has been argued convincingly by Croce, who draws on Goldman’s (2001, 2018) account of expertise and on Jäger’s (2016) notion of Socratic authority, which makes understanding a central goal of interacting with epistemic authorities. He suggests that:

someone is an [epistemic] authority for another insofar as she responds to (some of) his epistemic needs, which may vary, depending on the circumstances, from getting knowledge to understanding a subject matter, and from receiving what he needs through the authority’s testimony to achieving epistemic goals on his own thanks to the authority’s guidance (Croce 2018a:479).

So let us incorporate a corresponding qualification:

EA2: A is a recognized epistemic authority for S at t, relative to a given set of propositions p and set of epistemic goods G, iff S correctly believes that A has the capacity to help S – in suitable circumstances and in virtue of being in a significantly advanced epistemic position at t relative to S, concerning p and G – accomplish S’s epistemic goals in G regarding p.

I mention epistemic goals concerning epistemic goods, since one may have different goals concerning the same goods. For example, one person may seek to acquire just as many true beliefs as she can get about some topic; another may strive for understanding.

Adopting terminology proposed by Joseph Raz in his theory of political authority (Raz 2006, cf. Croce 2018b), we may call this account a version of a service conception of epistemic authority. Someone is your authority on climate change, for example, iff you correctly regard that person as being able — in virtue of their advanced knowledge and understanding of climate change — to help you accomplish the epistemic goal of acquiring knowledge about, and/or advancing your understanding of, climate change.Footnote 15 Equipped with this picture of genuine epistemic authority, I now turn to false authority.

3 A Taxonomy of False Authority

The above analyses suggest that A is a false epistemic authority vis-à-vis S iff S falsely believes A to have capacities that would enable A to help S achieve S’s epistemic goals. False authorities lack some or all of the positive epistemic features ascribed to them by their clients. This characterization still covers different social-epistemic situations.

First, there are what may be called unintended false authorities, to be distinguished from fake authorities. I call agents “unintended false authorities” iff they constitute false authorities despite having no intention to be regarded as authorities, let alone actively promoting this. Fake authorities, by contrast, are false authorities who wish and intend to be regarded as authorities.

Unintended false authority may occur in various contexts. Consider people who enjoy advanced knowledge or understanding in a given field of inquiry, yet are falsely credited with special competences in other, perhaps neighboring fields where they lack these competences. Davis (2016) discusses the example of Uma Narayan, an “Indian professor in a western academic space” who was frequently consulted by students on topics such as “Indian novels in English, … Goddess-worship rituals in South India,” etc., even though none of these topics remotely fell into Narayan’s area of expertise (Davis 2016:488). In such cases, clients illegitimately extend justified authority ascriptions to areas where they are unjustified. Davis considers this an epistemic injustice that potentially harms the authority, e.g., by consuming their time and straining their intellectual resources.

Fake authorities, by contrast, are false authorities who deliberately wish to be treated as authorities. “Fake” is used here as a privative adjective. Examples that spring to mind are fake weapons, or the popular use of the term for pretending certain physical reactions that do not in fact occur (e.g., during sex). Not every use of “fake” is privative: whereas fake news is still news, a fake gun is not a real gun and, let’s say, a fake yawn is not a real yawn.

Fake authorities, in turn, subdivide into two varieties. According to the taxonomy I suggest, some fake authorities suffer from a deluded self-concept about their epistemic capacities relative to their clients.Footnote 16 They falsely believe themselves to be sufficiently competent. I call such agents epistemic quacks. Famous examples include esoteric healers (such as Rasputin), visionaries (such as Baba Wanga), or — an interesting contemporary example — “modern druids” such as the German Karl Burghard Bangert, alias “Burgos von Buchonia,” who says he was born 2500 years ago and was raised by the great wizard Merlin, and who dresses like the fictional druid Miraculix from the French comic book Asterix and Obelix. All this would be quite funny, were it not for the fact that Burgos is a member of the German sovereign citizen conspiracy movement, a Holocaust denier, an anti-Semite, and a hater of Muslims, who regularly gave hate speeches and was found hoarding weapons in preparation for taking over once the German state collapsed. “Nazi Gandalf,” as his friends also called him, managed to acquire followers of his own. (Meanwhile, following a court appearance, the authorities seem to have stopped him.)

In other cases, the fake authority does not believe herself to be sufficiently competent or even knows that she is incompetent. I call such agents epistemic charlatans. (Famous examples are confidence tricksters or impostors, such as Cagliostro or Frank Abagnale.) So, the difference between epistemic quacks and epistemic charlatans lies in their beliefs about their own competence.Footnote 17 The following table (Table 1) summarizes the distinctions drawn so far:

Table 1 A taxonomy of false epistemic authorities I (preliminary)

  Unintended false authorities: constitute false authorities without intending to be regarded as authorities.
  Fake authorities: lack the relevant competence yet wish and intend to be regarded as authorities.
    Epistemic quacks: falsely believe themselves to be sufficiently competent.
    Epistemic charlatans: do not believe themselves to be sufficiently competent (or know that they are incompetent).

I now turn to the phenomenon of predatory experts. Do they fit into our taxonomy of false authority? And if not, should they be counted, not as false, but as genuine authorities?

4 Are Predatory Experts Epistemic Authorities?

Lackey, as we heard in section 1, characterizes predatory experts as agents who misuse “their epistemic authority as a cover for predatory behavior within the domain of their expertise” (Lackey 2021:144). Consider her chief example, Larry Nassar, a sports-medicine physician who served for almost two decades as a team doctor for USA Gymnastics. In 2018, he was convicted of multiple counts of sexual assault and of possessing child pornography, whereupon he was sentenced to up to 175 years in prison.Footnote 18 In order to illuminate the kind of authority Nassar enjoyed, and the ways he abused it, it will be helpful to look at statements some of his victims delivered in court in 2018.

“As it turns out,” McKayla MaroneyFootnote 19 maintains, “... Dr Nassar was not a doctor.”

[H]e in fact is, was, and forever shall be, a child molester and a monster of a human being. … He abused my trust, he abused my body and he left scars on my psyche that may never go away (quoted from Lutz 2018).

Another victim, Alexis Moore, says:

To me, he was like a knight in shining armor. But alas, that shine blinded me from the abuse. He betrayed my trust, took advantage of my trust and sexually abused me hundreds of times (quoted from Moghe and del Valle 2018).

Kate Mahon claims that:

Larry Nassar is a master manipulator. His conniving and calculating behavior not only tricked me, but he tricked my mom, who was present for all my appointments as a minor (quoted from Moghe and del Valle 2018).

Or consider a passage from the oft-quoted statement of Rachael Denhollander:

His ability to gain my trust and the trust of my parents, his grooming and carefully calculated brazen sexual assault was the result of deliberate, premeditated, intentional and methodological patterns of abuse (Denhollander 2018).

These quotations capture various aspects of the case. For the present discussion, the crucial point is that, according to the reports, Nassar abused his patients not only physically, but also by systematically exploiting and betraying their trust in him. But what kind of trust is at issue?

This is not the place to delve deeper into the extensive philosophical debate about the nature of trust.Footnote 20 However, it will be helpful to list some key aspects concerning the particular kind of trust involved in the present case. (i) First, the trust people placed in Nassar is so-called interpersonal trust, as opposed to mere reliance. To be sure, Nassar’s patients (and others) were also relying on him in the sense of having strong-enough predictive expectations toward him, that is, expectations that he would deliver sound medical practice. However, as many philosophers have argued in recent years, when expectations constitutive of reliance are frustrated, one may be disappointed but will not respond with reactive attitudes — such as feeling betrayed or let down — toward the person who failed to come through. (An FBI agent may rely on a criminal to lead them to the scene of the crime, but does not place trust in the criminal to do so and will not feel let down if things go otherwise.)Footnote 21 Yet Nassar’s victims did have reactive attitudes toward him, which indicates that they had been engaged in an interpersonal trust relationship with him. Moreover, in virtue of Nassar’s social role as an expert and as a team doctor at the national level, his victims’ trust in him was natural and well justified.

(ii) Second, the trust under consideration is so-called three-place trust, i.e., trust that people place in someone or something for something. In this case, Nassar’s patients (and others involved) trusted him, to begin with, to do the right things. Specifically, they trusted him to apply legitimate and appropriate diagnostic and therapeutic methods. This is a form of practical trust, i.e., of trusting an agent to act in appropriate ways. In cases like the present one, acting appropriately requires, first, choosing methods that are effective. When we see a doctor and there are diagnostic and therapeutical methods available that are more effective than others, then — if we can leave incentives in medicine and the pharma industry, financial considerations, etc. to one side — we trust the doctor to opt for one of these methods; if there is one most effective method, we trust them — other things being equal — to opt for the most effective one.

In addition, using legitimate or appropriate medical methods also means employing methods that are, given the circumstances, least costly in terms of pain, collateral physical damage, or collateral psychological damage or strain. It is morally obligatory for a physician to use such methods. For example, if we assume that monetary costs played no role, then, if our ankle fracture could be investigated by taking X-rays with a 40-year-old apparatus that would expose us to a large dose of radiation, whereas taking an MRI image would deliver equally good or even better results without such costs, we would legitimately trust our physician to (a) know about this and (b) use MRI diagnostics. The doctor would, in these circumstances, be morally obligated to do so. Similarly, if a certain therapy T is as effective as another therapy T*, but T causes less physical or psychological pain or damage than T*, then we trust our physician to opt for T instead of T*, and they are morally obligated to do so. (Of course, in actual practice a typical challenge is to weigh effectiveness against cost.) The upshot is that, according to these criteria, Nassar deliberately applied inappropriate diagnostic or therapeutic methods.

(iii) Third, the situation involves epistemic trust. Epistemic trust in an epistemic authority involves, to begin with, trust to the effect that the authority possesses a sufficient amount of representational epistemic goods, as characterized in section 1. Thus, when we consult a physician, we trust them to have sufficient knowledge in their areas of expertise, enough well-grounded beliefs, understanding, and so on. In the Nassar case, his patients, and others involved, legitimately trusted him to know or at least have well justified beliefs about which diagnostic and therapeutical methods were appropriate. Was this trust in him misplaced?

As far as we can tell, Nassar probably had the relevant medical knowledge and thus did not, in this regard, fall short of fulfilling the (legitimate) normative expectations toward him. Yet, by acting as he did, Nassar deceived his victims and tricked them, with base motivations, into holding false beliefs about which methods were appropriate. Some reports suggest that he sometimes did this by explicitly lying to them and their parents; others suggest that he deployed other means of deception.Footnote 22 According to standard definitions, a speaker S lies to a hearer (or group of hearers) if S makes a statement that S believes to be false, with the intention that the hearer believe it to be true. But even in cases where Nassar did not lie in this sense (or a similar one), he still implied through his actions that the techniques he employed were appropriate.

According to more liberal definitions of lying, we may say that even in these cases, Nassar deceived by lying. According to David Smith, for example, “any form of behavior the function of which is to provide others with false information or to deprive them of true information” amounts to lying (Smith 2004:14; see also Vrij 2008:6). Since Nassar, simply by acting as he did, indirectly provided his patients with false information and deprived them of true information, he may, according to this definition, be said to have lied even when he did not state linguistically that what he did was right. In short, Nassar betrayed his clients’ epistemic trust either because he lied to them or because he led them in some other way to hold false beliefs about topics on which he was an expert.

So, did Nassar serve as a genuine or as a false epistemic authority for his patients? The core idea of the social-role account I have suggested is that epistemic authorities help their clients achieve their epistemic goals. Among the epistemic goals of Nassar’s patients there surely was, at least implicitly, the goal of not being deceived about appropriate diagnostic and therapeutical techniques but of acquiring correct beliefs about them. So, since Nassar systematically thwarted these goals, he should not be considered a genuine epistemic authority vis-à-vis the patients he deceived. In general, all this suggests that predatory experts fail to constitute genuine epistemic authorities. Although they are in an epistemically superior position relative to their clients, they fail to live up to one essential requirement of epistemic authority: that of responding to the epistemic needs of their interlocutors. I conclude that predatory experts are genuine experts but fail to constitute genuine epistemic authorities for those whose epistemic trust they betray.Footnote 23

However, on closer inspection, it turns out that the current wording of EA2 does not by itself adequately capture this point. As it stands, it only postulates that A has the capacity to help their clients, and that the latter correctly believe this. As the above discussion shows, predatory experts need not fall short of fulfilling this requirement. When Maroney, in the above quotation, claims that Nassar “was not a doctor,” presumably what she means is that he did not live up to the moral code of conduct and the self-professed norms that physicians are expected to respect (and usually do respect). That seems right. Yet Nassar did have special training and education as a sports-medicine physician and so had the capacity to help his clients get the right information about what would constitute appropriate diagnostic and therapeutical interventions. Unfortunately, then, the case cannot be subsumed under any of the three kinds of false authority so far distinguished. In all these cases, the agents — unintended false authorities, epistemic quacks, and epistemic charlatans — lack the relevant competences or abilities. What are we to say? Was the above conclusion premature? Do predatory experts constitute genuine epistemic authorities, after all?

5 A Reliability Constraint and False Authorities Who Are Systematic Trust Betrayers

The arguments discussed above suggest that we should not consider A to be a genuine epistemic authority vis-à-vis S if A merely fulfills the condition of having the capacity to help S pursue S’s epistemic goals, yet rarely or never exercises that capacity. I suggest, therefore, that we ameliorate EA2 by adding a reliability constraint to the effect that A’s competences or capacities reliably manifest themselves in A’s social-epistemic behavior. Call this a performance condition. We thus obtain:

EA3: A given epistemic source A is a recognized epistemic authority for an epistemic agent (or group of epistemic agents) S at time t, and relative to some set of propositions or subject matter p and set of epistemic goods G, iff S correctly believes that:

(i) A has the epistemic capacity to help S — in suitable circumstances and in virtue of being in a significantly advanced epistemic position — accomplish S’s epistemic goals in G concerning p (competence condition), and

(ii) A reliably exercises this ability in suitable circumstances (performance condition).
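In the same rough notation used for EA1 above (again, a minimal sketch with abbreviations of my own), write Help(A, S, t, p, G) for the competence condition (i) and Perf(A, t, p, G) for the performance condition (ii). EA3 can then be rendered as:

\[
\mathrm{EA3}(A, S, t, p, G) \iff B_S[\mathrm{Help}(A, S, t, p, G) \wedge \mathrm{Perf}(A, t, p, G)] \;\wedge\; \mathrm{Help}(A, S, t, p, G) \;\wedge\; \mathrm{Perf}(A, t, p, G)
\]

Both conditions must actually hold, and S’s (correct) belief must track both of them.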

The concept of reliability is hotly debated in epistemology. Here we need not delve into traditional controversies about the notion, however. What I mean when I say that A “reliably exercises their ability in suitable circumstances” is that A exercises this ability with sufficient frequency, in suitable circumstances, and in a sufficiently large and varied run of deployments. The locution “sufficient frequency” is admittedly vague; yet, prospects for coming up with a precise threshold seem bleak. What this amounts to will heavily depend on the context, potentially including the agents’ specific goals, their practical stakes, and so on. For example, if a pedestrian topic is at issue, the threshold will be lower than if your life depends on whether the potential authority is mostly right in their judgments. It seems reasonable to require, though, that if an ability or a capacity is reliably exercised, the rate of actualization will range above 50 percent. In many kinds of situation, it may have to be considerably higher.Footnote 24
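The intended reading of “sufficient frequency” can be indicated, very roughly, by a simple ratio (a sketch only, not a precise analysis): over a sufficiently large and varied run of suitable opportunities O in a context c, A’s rate of actualization must meet a context-dependent threshold θ_c:

\[
\mathrm{Rel}_c(A) \;=\; \frac{\lvert\{\, o \in O : A \text{ exercises the ability at } o \,\}\rvert}{\lvert O \rvert} \;\geq\; \theta_c, \qquad \theta_c > 0.5
\]

For pedestrian topics, θ_c may sit only modestly above 0.5; for high-stakes matters, it will have to be considerably higher.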

EA3 excludes predatory experts, since their capacity to support their clients does not reliably manifest itself in their social-epistemic behavior. On the contrary, they systematically deceive and misinform their clients about important questions. This may happen implicitly or indirectly, or explicitly by flat-out lying. However, if predatory experts fail to constitute genuine authorities, according to EA3, then in which category of false authority do they fall?

A moment of reflection reveals that none of the three types of false authority so far discussed covers predatory experts. Predatory experts act, as we may say, as systematic epistemic-trust abusers. But there is a difference between epistemic charlatans and predatory experts such as Nassar, namely that charlatans fail to be epistemically competent. The same holds for epistemic quacks and unintended false authorities: In all three cases, the agents fail to constitute genuine authorities because they lack the relevant epistemic capacities. Not so with predatory experts, who, as we have seen, have the relevant capacities. This suggests an ameliorated taxonomy which accounts for the performance condition of genuine authority discussed above. Predatory experts are fake authorities who fail to fulfil the performance condition.

Moreover, in the same way that we distinguished incompetent false authorities who believe themselves to be competent (epistemic quacks) from those who don’t believe themselves to be competent (epistemic charlatans), we may distinguish competent fake authorities who believe themselves to perform reliably (unwitting non-performers) from those who don’t believe this (witting non-performers).Footnote 25 Nassar, the evidence suggests, is a witting non-performer. Examples of unwitting non-performers are harder to come up with, but good candidates are people who are competent, yet fail to reliably exercise their competence and do not realize this because, e.g., they rationalize their unreliable performance. The following table (Table 2) summarizes these distinctions and lists what I believe are good examples:

Table 2 A taxonomy of false authorities II

  Unintended false authorities: lack the relevant competence; do not intend to be regarded as authorities (e.g., the Narayan case).
  Fake authorities: intend to be regarded as authorities.
    Failing the competence condition:
      Epistemic quacks: falsely believe themselves to be competent (e.g., Rasputin, Baba Wanga, “Burgos von Buchonia”).
      Epistemic charlatans: do not believe themselves to be competent (e.g., Cagliostro, Frank Abagnale).
    Fulfilling the competence condition but failing the performance condition (non-performers):
      Unwitting non-performers: believe themselves to perform reliably (e.g., agents who rationalize their unreliable performance).
      Witting non-performers: do not believe themselves to perform reliably (e.g., predatory experts such as Nassar).

False authorities, I said in section 3, lack some or all of the positive epistemic features ascribed to them by their clients. EA3 says that these features include (i) a social-epistemic competence condition and (ii) a performance condition. A false authority is someone whose clients falsely believe that they fulfil the competence condition EA3 (i) and the performance condition EA3 (ii). I introduced the notions of unintended false authority, epistemic quackery and epistemic charlatanry to characterize agents who, in their own ways, fail to meet the competence condition EA3 (i). If this is the case, then a fortiori they cannot fulfil EA3 (ii): If they lack the relevant competences, they cannot reliably exercise them. Predatory experts, by contrast, fulfil the competence condition EA3 (i), but fail to fulfil the performance condition EA3 (ii). So, in this case, S correctly believes that the purported authority has the relevant capacities and could help S with S’s epistemic goals, but falsely believes that they reliably exercise these capacities.
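The resulting classification can be summarized in a schematic decision procedure. The following is a simplified illustration rather than part of the official account: the predicate names are mine, the client’s recognition (the belief component of EA3) is abstracted away, and the yes/no questions stand in for what are in practice graded, context-sensitive judgments.

```python
from dataclasses import dataclass

@dataclass
class PurportedAuthority:
    """Schematic profile of an agent whom clients treat as an epistemic authority."""
    competent: bool                 # EA3 (i): capacity to help clients with their epistemic goals
    performs_reliably: bool         # EA3 (ii): reliably exercises that capacity
    intends_to_be_authority: bool   # wishes to be treated as an authority
    believes_self_competent: bool   # self-concept regarding competence
    believes_self_reliable: bool    # self-concept regarding reliable performance

def classify(a: PurportedAuthority) -> str:
    """Locate an agent in the taxonomy of sections 3-5 (simplified)."""
    if a.competent and a.performs_reliably:
        return "genuine epistemic authority (EA3 (i) and (ii) satisfied)"
    if not a.competent:
        if not a.intends_to_be_authority:
            return "unintended false authority"
        # Fake authorities: intend to be treated as authorities despite lacking competence.
        return "epistemic quack" if a.believes_self_competent else "epistemic charlatan"
    # Competent but unreliable: non-performers, distinguished by their self-concept.
    return "unwitting non-performer" if a.believes_self_reliable else "witting non-performer"

# A Nassar-style predatory expert: competent, but a witting non-performer.
predatory_expert = PurportedAuthority(
    competent=True, performs_reliably=False, intends_to_be_authority=True,
    believes_self_competent=True, believes_self_reliable=False,
)
print(classify(predatory_expert))  # -> witting non-performer
```

On this rendering, predatory experts fall under the witting non-performers, while the three categories of section 3 all fail already at the competence condition.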

6 Summary, Merits of the Account, and Questions for Future Research

I have argued for a social-role account of epistemic authority according to which epistemic authorities are agents who, in virtue of being in an advanced epistemic position relative to their clients, can help the latter achieve their epistemic ends. Such authorities can be, but need not be, experts in a given field of inquiry. My arguments yielded a distinction between three kinds of false authority, namely (i) unintended false authority, (ii) epistemic quackery, and (iii) epistemic charlatanry. I then discussed what Lackey calls “predatory experts” and argued that, since experts of this stripe systematically deceive their clients and undermine the latter’s epistemic goals, they do not meet the requirements of my social-role account. However, in order to bring this point home, the suggested definition of epistemic authority, EA2, needed fine-tuning. It had to be supplemented by a reliability constraint to the effect that the capacity to help others also reliably manifests itself in the authority’s social-epistemic behavior. Equipped with this constraint, I introduced the category of non-performers, i.e., false authorities who are competent yet fail to reliably exercise their competence; its most troubling members, such as predatory experts, systematically exploit the epistemic trust they receive from their clients to deceive and take advantage of them.

The account I suggest clarifies the difference between experts and authorities. Predatory experts, naturally enough, constitute experts. But they fail to be epistemic authorities because they fail to reliably exercise their capacity to help others with their epistemic goals and hence fail to reliably fulfil the function the social-role account requires of genuine authorities. Moreover, and most importantly, the account provides a taxonomy of the most important kinds of false epistemic authority.

Several questions for further research remain. Perhaps the most pressing one is which additional skills and virtues epistemic authorities must have in order to care for the needs of their clients and to effectively help them achieve their epistemic ends. I have argued that epistemic authorities can help others “in virtue of their advanced epistemic position,” while this capacity must also reliably manifest itself in their social-epistemic behavior. But this is not the end of the story. Consider, for example, understanding. In order for an epistemic authority effectively to advance interlocutors’ understanding of a given subject matter, they must be able to interact with them in specific ways. This may involve, for example, a social-epistemic virtue that Jäger and Malfatti (2020) call “epistemic empathy” — the ability to project oneself into one’s interlocutor’s system of thought, in order to see how cognitive dissonances in it can rationally be resolved. When the interlocutors are laypersons or novices, it may require specific “novice-oriented abilities” (Croce 2018b), including intellectual generosity, maieutic abilities, and so on. Further exploration of such intellectual and didactic virtues must await another occasion.

This paper does not directly solve the practical problems that false authorities create in our societies and our Lebenswelt. Often it is hard to tell whether some person or source is a genuine or a false authority, and even when we suspect a false one, we may not immediately be able to tell whether it is an epistemic quack, a charlatan, a predator, or a source of yet another kind that we should shun. Conceptual clarifications and taxonomies, however, are important pieces in the toolbox to be carried along when we wish to identify fake authorities and prevent them from contaminating our social-epistemic environment.Footnote 26