1 Intellectual autonomy: Kant, Emerson and Hume

Intellectual autonomy—roughly, a disposition to think (in some to-be-specified sense) independently—has received various influential endorsements in the Western intellectual tradition. Immanuel Kant (1784) regards independence of thought as at the very heart of the enlightenment’s motto:

Enlightenment is man’s leaving his self-caused immaturity. Immaturity is the incapacity to use one’s intelligence without the guidance of another. Such immaturity is self-caused if it is not caused by lack of intelligence, but by lack of determination and courage to use one’s intelligence without being guided by another. Sapere Aude!Footnote 1 Have the courage to use your own intelligence! is therefore the motto of the enlightenment (1784, §1).

Kant here is effectively praising intellectual self-determination, which can be contrasted with what he’s calling ‘immaturity’,Footnote 2 where the latter involves a kind of intellectual cowardice that manifests when one’s thinking is wilfully allowed to be guided by something external to oneself.

Like Kant, the American essayist and Transcendentalist Ralph Waldo Emerson, in his famous essay Self-Reliance, frames as contrast points what is easy—accepting intellectual guidance—and what is praiseworthy—guiding one’s own intellectual life. As Emerson (1841, p. 55) puts it:

It is easy in the world to live after the world’s opinion. It is easy in solitude to live after our own. But the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.

While both Kant and Emerson extol in different ways the virtues of self-trust, Emerson’s aversion to intellectual conformity of any kind is perhaps even stronger than Kant’s. Whereas Kant’s primary objective in ‘What is Enlightenment’ is a celebration of the value of self-determination in our intellectual lives, Emerson to a greater extent focuses on the disvalue of (even weak forms of) intellectual servility.Footnote 3 This is apparent in Emerson’s overarching argument for self-trust in Self-Reliance, where the opinions of others, and their potential influence, are treated as a threat to the kind of self-worth that is grounded in one’s individuality.Footnote 4

A more moderate though highly influential defence of self-trust, albeit of a more fine-grained sort, can be found in David Hume’s thinking about both induction and testimony. Hume (1772) singled out, as epistemologically problematic, inductive evidence, conceived of as evidence concerning what goes ‘beyond the present testimony of the senses, or the records of our memory’.Footnote 5 And it’s precisely the testimony of one’s own senses and of memory which are always necessarily required, for Hume, in order to ever justifiably believe the word of someone else.Footnote 6 In this respect, Hume gives an epistemic privilege to what is apparent to the self (through perception and memory) which is not afforded to any other kind of evidence, including the kind we acquire by relying on others.

Although Kant, Emerson and Hume are all clear cases where epistemic independence and self-direction are, in different ways, advanced as worthy aims, notice that there is hardly any unified account here of what, exactly, is disvaluable about epistemic dependence, and this is something any viable account of intellectual autonomy, conceived of as a virtue, should be sensitive to. Whatever is praiseworthy about intellectual autonomy should reflect both what is valuable about self-direction and what is disvaluable about epistemic dependence; an account that misfires in either direction misses the mark.

We can imagine an account of intellectual autonomy which zealously incorporates Hume’s epistemological demands with the most radical construal of Kant and Emerson. According to such an account, the virtuously autonomous agent (i) should never uncritically trust others (Hume), (ii) should never allow her intelligence or reason to be guided by another (Kant) and—perhaps most radically of all—(iii) should actively see to it that she is not shaped by the opinions of others (Emerson). While such a person would be intellectually autonomous in the sense of maximally independent and self-directive, she would hardly be virtuously so,Footnote 7 and this is because such an individual disvalues epistemic dependence to her own intellectual detriment.

On most any account of epistemic value, the acquisition of true beliefs, knowledge and understanding are epistemic goods,Footnote 8 and aspects of character are typically explained as intellectually good to have (as opposed to, say, morally good to have) in virtue of their connection to such goods.Footnote 9 The kind of cognitive isolation that results from disproportionately disvaluing epistemic dependence (as in the extreme form of autonomy sketched above) cuts one off from such cognitive goods.

The foregoing considerations can be captured pithily in slogan form: impressive intellectual self-direction and independence of thought without a suitable knowledge base—from others, technology, the internet, etc.—is effectively empty, whereas knowledge acquired in the absence of a suitable capacity for autonomous self-direction is blind.Footnote 10

I want to suggest that it’s possible to articulate a more reasonable way to incorporate some of what the aforementioned thinkers take to be valuable about independence of thought without committing to an implausibly restrictive stance towards epistemic dependence. Firstly, a concession to Hume is that blind trust of any kind of testimony—be it particular mundane items of information, or testimony aimed at guiding inquiry itself—is epistemically criticisable. This concession is, however, compatible with the epistemic appropriateness of certain kinds of trust.Footnote 11 Secondly, a concession to Kant and Emerson: cognitive outsourcing—to what is external to one’s own intelligence (Kant) and individuality (Emerson)—is epistemically criticisable in cases where such outsourcing undermines (in some relevant, non-trivial way) one’s capacity for intellectual self-direction. And this concession is compatible with the thought that sometimes—perhaps even often—we should make use of available resources (other people, technology, etc.) when forming particular beliefs and also when determining what inquiries to pursue.

Putting this all together—and without committing to any detailed account of intellectual autonomy—it’s reasonable to suppose that virtuous intellectual autonomy simply cannot mean, as Roberts and Wood (2007, pp. 259–260) put it, ‘that one never relies on the intellectual labor of another’. But nor need a virtuously autonomous agent rely only rarely on anything other than her own endowed faculties. Rather, the crucial idea is that the virtuously autonomous agent actually must rely on others, and outsource cognitive tasks as a means to gaining knowledge and other epistemic goods, up until the point that doing so would be at the expense of her own capacity for self-direction. And this makes intellectual autonomy, essentially, a virtue of self-regulationFootnote 12 in the acquisition and maintenance of our beliefs.Footnote 13

Over-reliance on the opinion of others is perhaps—as Roberts and Wood (2007, p. 259) have suggested—the most straightforward threat to an individual’s capacity for intellectual self-direction, but it is hardly the only such threat. The virtuously autonomous individual must also be sensitive to other less obvious ways in which her own agency can become disconnected from the way she acquires and maintains her beliefs.

Increasingly, in order to meet our cognitive goals, we tend to ‘offload’ tasks (traditionally performed through the use of our endowed biological faculties) to technological gadgets with which we regularly and uncritically interact. Moreover, the latest science and medicine have made it possible to improve cognitive functioning along various dimensions using such methods as nootropics or ‘smart drugs’ (e.g., Adderall, Ritalin, Provigil, Oxiracetam), implants (e.g., neuroprosthetics), direct brain-computer interfaces, and (to some extent) genetic engineering.Footnote 14

Both high-tech cognitive offloading and reliance on medicine to achieve cognitive goals represent increasingly ubiquitous ways in which an individual’s own intellectual agency can—at least potentially—become disconnected from the way she acquires and maintains her beliefs.

In what follows, I want to examine how the foregoing considerations about the connection between virtuous intellectual autonomy and maintaining self-direction interface with the potential threat posed by various forms of cognitive enhancement. The forms of enhancement explored have in common—and in a way that is relevant to intellectual autonomy—that an individual, in virtue of the enhancement, is such that the contribution of her own biologically endowed cognitive faculties to her cognitive projects is diminished.Footnote 15

The remainder of the paper proceeds as follows. Section 2 sharpens the notion of cognitive enhancement and, in the course of doing so, distinguishes it from the related notion of therapeutic cognitive improvement. Section 3 presents three kinds of cognitive enhancement cases which appear to undermine, for slightly different reasons, the intellectual autonomy of the cognitively enhanced agent, by undermining (in each case, in a different way) her capacity for intellectual self-direction. Section 4 responds to the three example cases in a way that will clarify why some enhancements are immune to the objection from self-direction while others are not. Once this point is appreciated, it will be suggested how—in the right circumstances—availing ourselves of the latest technology and medicine is not only compatible with, but can fruitfully augment, our intellectual self-direction and autonomy.

2 Cognitive enhancement

One very common reason why we rely on technology and/or medicine to assist us in our cognitive tasks is that our endowed biological cognitive faculties sometimes fail us. Take, for example, Alzheimer’s disease, which has symptoms that include short-term memory loss and confusion. Sufferers of Alzheimer’s disease increasingly rely on drugs such as acetylcholinesterase inhibitors (e.g., Donepezil) to slow the cognitive symptoms of the disease.Footnote 16 Specifically, drugs like Donepezil slow the breakdown of acetylcholine, which is a chemical (in low supply in Alzheimer’s patients) that helps to send messages between nerve cells.Footnote 17

Contrast now the use of Donepezil to slow the progression of Alzheimer’s with a superficially similar case, where a drug is likewise used to improve cognition, but which (as we’ll see) might generate a different intuitive reaction. Consider the ‘smart drug’ Modafinil (i.e., Provigil), a eugeroic drug prescribed to patients suffering from narcolepsy, but which is widely used ‘off label’ not to correct any pathology or defect, but to gain some kind of cognitive advantage.Footnote 18 A comprehensive meta-study conducted by Battleday and Brem (2015) has shown Modafinil to be consistently efficacious in enhancing, in non-sleep-deprived healthy individuals, attention, executive functions, and learning, especially in complex cognitive tasks. A potential cost of these gains—as reported in recent studies by Mohamed (2014) and Mohamed and Lewis (2014)—comes in the area of creativity. Such studies indicate that Modafinil, despite its benefits to focus in healthy individuals, has a deleterious effect on convergent and divergent creative thinking tasks, aimed at narrowing possibilities and generating novel ideas, respectively.Footnote 19

In both the case of Donepezil as well as Modafinil use, something extra-agential—viz., drugs—is a significant causal difference maker with respect to the nature and quality as well as the direction of one’s cognitive projects. Also, in both cases, the contribution of the agent’s biologically endowed cognitive faculties to her cognitive projects is diminished given the increased role drugs are playing. However, while it’s not clear that the former case in any way represents a threat to the cognitively improved agent’s intellectual autonomy (if anything, it seems the former case facilitates intellectual self-direction in Alzheimer’s patients), the second case—viz., regarding Modafinil—is quite a bit murkier. That is, intuitively, it seems as though allowing a drug like Modafinil to substantially shape the way one manages one’s cognitive life (in matters such as focus, learning and creativity) involves a certain sacrifice of self-direction that one doesn’t seem to be making in the case of Donepezil.

One might initially assume that this is because ‘smart drugs’ such as Modafinil generally have a more substantial causal influence on one’s cognitive life (relative to not taking the drug) than do drugs like Donepezil, when the latter is taken therapeutically and the former is taken ‘off label’ by healthy individuals. But this is hardly the case. A faster rate of neural degeneration would be a much more dramatic shift in one’s cognitive life than an absence of above-the-mean focus and attention. On closer consideration, it seems as though some kind of normative consideration must be what’s underlying the intuition that dependence on Modafinil poses a more credible threat to undermining intellectual autonomy than does dependence on Donepezil.Footnote 20

Here is an obvious (though ultimately, I’ll suggest, misguided) candidate normative consideration: although Donepezil is itself something extra-agential which is causally responsible in an important sense for patients’ belief retention and memory, its use is aimed at correcting a pathology or cognitive defect, and thus at bringing the agent closer to normal healthy levels of cognitive functioning. Improvements to human functioning which have this kind of goal are termed therapeutic cognitive improvements.Footnote 21 Modafinil, by contrast, when used by healthy individuals to gain a kind of cognitive advantage, constitutes a cognitive enhancement, and as such aims to go beyond mere healthy human cognitive functioning.Footnote 22

If this normative difference between the two cases is the right explanation, then it would have to be premised upon a more general claim to the effect that cognitive enhancements, as such, pose a distinctive kind of challenge to the retention of intellectual autonomy that is not posed by therapeutic cognitive improvements. Even if this more general claim were true, it would just raise yet a further, more difficult question: why should the fact that enhancements raise cognitive functioning beyond healthy levels be relevant at all to whether autonomy is at risk of being compromised?

Ultimately, what I want to suggest is that the cognitive norm that is, on closer inspection, most fundamental in explaining why some cases where cognition is partly driven by extra-agential factors really do compromise intellectual autonomy, while others don’t, is framed not in terms of enhancement, but rather, in terms of (a lack of) cognitive integration, in a sense that will be articulated in more detail in Sect. 4.

In order to appreciate why it’s not enhancement, as such, that’s the problem from the perspective of retaining one’s intellectual autonomy, it will be helpful to consider three example enhancement cases, which draw from work by Michael Lynch (2016) on neuromedia, Felicitas Kraemer (2011) on pharmacological enhancement and authenticity and Google design ethicists on framing effects and the illusion of choice, respectively. Each case features a strand of cognitive enhancement with a different proximate cause for why the agent’s capacity for intellectual self-direction is undermined. What I hope to show is that these three proximate causes all have an underlying or distal cause which can account for why intellectual autonomy is undermined in these enhancement cases. We’ll see, further, that such a cause needn’t be present in all cases of cognitive enhancement, which is why not all forms of cognitive enhancement are a threat to one’s intellectual autonomy. Moreover, the view advanced can also explain why therapeutic cognitive improvements are generally speaking (though not always) compatible with the retention of intellectual autonomy.

3 Three objections from self-direction

3.1 Learned helplessnessFootnote 23

While drugs can help improve cognitive functioning, so can cognitive scaffolding—viz., reliance on technologies that complement our endowed cognitive abilities.Footnote 24 Generally, gadgets used for cognitive scaffolding (e.g., iPhones, laptops, Google Glass) are located outside of our brains. However, this might just be temporary. As Michael Lynch (2016) has pointed out, the gadgets we rely on to store, process and acquire information have become, even just over the past decade, significantly smaller and wearable—as he puts it, trending toward seamless and ‘invisible’. One example of such ‘invisible’ scaffolding is the new Google ‘Smart Lens’ project, which is bringing to the market ‘smart contact eye lenses’ with tiny wireless chips inside along with a wireless antenna thinner than a human hair.Footnote 25 Since the launch of this project, Samsung has countered (in February 2016) by patenting its own smart contact lenses, which include an invisible camera, with a display that can ‘project images directly into the human eye’.Footnote 26

One of the most provocative kinds of seamless cognitive scaffolding however comes in the form of wireless neural implants. Neural implants, increasingly used to assist individuals with prostheses to allow their brain and nerves to control and receive feedback from movements of prostheses, had previously required the use of wires that are connected to a device outside the agent’s body.Footnote 27 A recent (2016) smart chip development can now be paired with the implants to allow for wireless transmission of brain signals.Footnote 28 While this technology is currently being developed exclusively for therapeutic purposes, it doesn’t take much imagination to envision non-therapeutic uses for wireless neural implants.

In his recent book The Internet of Us, Michael Lynch (2016) anticipates a not unrealistic future scenario where sophisticated wireless neural implants—what he calls ‘neuromedia’—are the norm in a society. However, Lynch’s story comes with a twist:

NEUROMEDIA: Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain. With a single mental command, those who have this technology—let’s call it neuromedia—can access information on any subject [...] Now imagine that an environmental disaster strikes our invented society after several generations have enjoyed the fruits of neuromedia. The electronic communication grid that allows neuromedia to function is destroyed. Suddenly no one can access the shared cloud of information by thought alone. [...] for the inhabitants of this society, losing neuromedia is an immensely unsettling experience; it’s like a normally sighted person going blind. They have lost a way of accessing information on which they’ve come to rely [...]

The moral Lynch draws, and which he develops in the book, is of course a cautionary one.Footnote 29 While the worry expressed by the neuromedia thought experiment appeals to a futuristic scenario featuring ‘extreme’ cognitive scaffolding,Footnote 30 the crux of the worry can be abstracted away from the details of his case, so as to apply more broadly to some of our currently available cognitive scaffolding, such as smartphones.

Here, the social-psychological notion of “learned helplessness” (e.g., Seligman 1972) will be useful in capturing the lesson. Learned helplessness, generally construed, occurs when one repeatedly experiences a lack of control over her environment and then resigns herself to that lack of control.Footnote 31 The former president of the Royal Institute of Navigation, Roger McKinlay (2016), offers, in a recent article in Nature, a clear everyday example of learned helplessness as it pertains to the maintenance of navigation skills. McKinlay notes that increased reliance on satellite navigation has led drivers to be less vigilant in tracking where they have previously driven compared to those drivers accustomed to relying on paper maps, and so (in simulation tests) they are more inclined to drive past the same place twice without noticing. More generally, deteriorating navigation skills have in turn only increased reliance on satellite navigation, through a process whereby individuals are increasingly ‘giving up’ attempting to orient themselves, and are accordingly failing to develop parts of the brain responsible for spatial orientation.Footnote 32

Putting this all together, a rationale emerges for at least one specific way in which cognitive enhancement can threaten intellectual autonomy—viz., by rendering individuals increasingly intellectually helpless.Footnote 33 To the extent that one is helpless, one is unable, even if one tries, to direct one’s cognitive affairs in the absence of the enhancement in question. Let’s now consider two other potential rationales, which are motivated on the basis of different kinds of considerations.

3.2 Scaffolding, framing effects and the illusion of preference-based choice

A well-known variety of cognitive bias called the framing effect occurs when the presentation of a choice influences how individuals react to it (e.g., Tversky and Kahneman 1981). Framing effects reveal that our perceived control over certain kinds of choices is often illusory, and the shape that our online inquiries take is especially susceptible to such an illusion of control.

A concrete example of such a framing effect involves online searching. We are often led to believe that our own preferences are what’s primarily responsible for determining what inquiries we in fact pursue online, in a way that overlooks, as Google design ethicist Tristan Harris (2016) puts it, the way ‘choices are manipulated upstream by menus we didn’t choose in the first place’. As Harris elaborates:

When people are given a menu of choices, they rarely ask: “what’s not on the menu?” “why am I being given these options and not others?” “do I know the menu provider’s goals?” “is this menu empowering for my original need, or are the choices actually a distraction?”

Suppose, for example, you make an online choice (e.g., between five options which turn up in a Yelp search)Footnote 34 about what activity to pursue when visiting a new town. In such a circumstance, you might believe that the choice you make, from the options on the Yelp search menu, best represents your own preferences. You then search again, to learn some more information about the specific activity you’ve tentatively chosen, and are in the process nudged by Google auto-complete, which generates another choice from a menu.Footnote 35 At the end of such a series, your curiosity is satisfied and your inquiry is complete. It is an interesting philosophical question to what extent you have just directed the particular chain of inquiries which have culminated in the new set of beliefs you’ve settled upon. This much seems plausible: the shape that online inquiry chains take is significantly influenced by undetected framing effects that are themselves the product of upstream technological design decisions.Footnote 36

The more general line of argument that emerges is the following: enhancement via intelligence augmentation, as when we outsource cognitive tasks to smartphones and other gadgets, subjects us to constant framing effects which often go unnoticed. While such gadgets obviously aid us in acquiring knowledge quickly and seamlessly, they—as this line of argument contends—undermine our intellectual self-direction by diminishing (in a manner that typically goes undetected) the contribution that our own biological cognitive faculties make towards the shape our inquiries take.Footnote 37

3.3 Authenticity and self-direction

Some virtue epistemologists such as Christopher Hookway (2003) and Catherine Elgin (1996) have argued, in different ways, for the epistemological significance of certain kinds of emotions.Footnote 38 Emotions can be influenced pharmacologically, sometimes therapeutically, though sometimes with the aim of enhancing emotional well-being in healthy individuals, as in what Peter Kramer (1994) calls ‘cosmetic pharmacology’.Footnote 39

As Felicitas Kraemer (2011, p. 52) has asked, when drugs are relied on to enhance our emotional well-being ‘Can emotional authenticity or inauthenticity be inferred from the naturalness or artificiality [...]’ of such drugs? How this question is answered is relevant to intellectual self-direction. For if those such as Hookway and Elgin are right that emotions can be epistemically significant, then to the extent such emotions are inauthentic, the self-directedness of inquiries influenced by such emotions appears prima facie called into doubt.

Kraemer’s own line, drawing from work by Kramer (1994) and Elliott (2004), is that we should take seriously that (for example) someone using Prozac might feel ‘like themselves’ for the first time ever and is not mistaken in so feeling.Footnote 40 On accounts of emotional authenticity which make the naturalness or artificiality of the enhancing agent relevant to emotional authenticity, such cases are difficult to explain.

Kraemer is led to the conclusion that emotional authenticity is to be regarded as a phenomenally felt quality. She writes:

The notion ‘emotional authenticity’ thus means the phenomenally felt quality that a person perceives with respect to his or her inner emotional state, no matter by which means (natural or artificial) it has been brought about [...] and whenever ‘the individuals experiencing it recognize their own feelings really as their own and identify with them’ (2011, p. 57).

Kraemer’s phenomenological account of emotional authenticity has a number of advantages over other accounts which will (for example) be faced with the problem of accounting for why some non-natural or artificial enhancing agents detract from authenticity while more common cases (as she writes, listening to Mahler, drinking wine) do not.

However, Kraemer’s own account invites a further epistemological worry, one that is highlighted in recent experimental work by Newman et al. (2014, 2015). In short, what these experimental studies found was that ‘people’s true self attributions appear to be influenced in a complex way by their moral judgments’Footnote 41 through what Knobe calls ‘positivity bias’. In particular, as Newman et al. (2014) put it:

people have a general tendency to conclude that the true self is fundamentally good—that is, that deep inside every individual, there is something motivating him or her to behave in ways that are virtuous (2014, p. 203).

Such studies cast doubt on the thought that (as per Kraemer) the phenomenally felt quality that a person perceives with respect to his or her inner emotional state is such that, when one experiences it, one can reliably recognize one’s own feelings as really one’s own and identify with them. At least, from these findings, it is reasonable to conclude that, given the prevalence of positivity bias, one will be ceteris paribus more inclined to regard some emotions as aspects of one’s true self if those emotions are judged positively than otherwise.

Putting this all together, an epistemological worry materialises: the more positively one regards a given emotional experience, the more likely one is to mistake that experience as characteristic of one’s true self even when it is not. Pharmacological enhancement of emotions tends to generate in one the kinds of emotional experiences that will be regarded positively, from the subject’s perspective. Enhanced emotions, as such, are thus likely to be mistaken as characteristic of one’s authentic self even when they are not. This includes enhanced epistemic emotions (e.g., attentiveness, curiosity, pride), the sort which (as per Hookway, Elgin and others) can be relevant to virtuous inquiry. To the extent that the kind of intellectual self-direction that is crucial to intellectual autonomy involves one’s authentic self, the foregoing looks problematic; enhancement of epistemically relevant emotions threatens to make one more likely to regard her inquiries as self-directed, in the sense of authentically self-directed, when they are in fact not.

4 Cognitive integration and high-tech autonomy

We’re now in a position to put several pieces together. In Sect. 1, it was shown why overly self-reliant conceptions of intellectual autonomy are self-undermining. The virtuously autonomous agent, it was argued, must rely on others, and outsource cognitive tasks as a means to gaining knowledge and other epistemic goods, up until the point that doing so would be at the expense of her own capacity for self-direction. This will happen when a subject’s intellectual agency is (in some relevant sense) caused to be disconnected from the way she acquires and maintains her beliefs. Call this kind of disconnect agential disconnect.

In Sect. 3, we saw three kinds of cases where some form of cognitive enhancement led to agential disconnect. In Sect. 3.1, cognitive enhancement via Lynch-style neuromedia appeared to cause agential disconnect by diminishing through habituation (viz., as in cases of learned helplessness) the contribution of one’s biological faculties in her cognitive projects. In Sect. 3.2, cognitive enhancement via intelligence augmentation/scaffolding appeared to cause agential disconnect via subjecting individuals to constant framing effects which often go unnoticed and which generate an illusion of choice in the course of cognitive projects. In Sect. 3.3, pharmacological enhancement of epistemic emotions appeared to cause agential disconnect by making the subject more likely to regard her inquiries as authentically self-directed, when they are in fact not.

In what follows, I want to argue that these three proximate causes of agential disconnect canvassed in Sects. 3.1–3.3 share a common distal cause.Footnote 42 Once this point is appreciated, it will be shown that cognitive enhancement, as such, is not doing any interesting work in accounting for the threat to autonomy posed in the three enhancement cases surveyed in Sect. 3. Rather, what’s important is just how the cognitive artifacts in question are being incorporated into the subject’s cognitive character.

4.1 Cognitive character and ‘extended’ agency

There is a well-developed framework in contemporary virtue epistemology which offers a way to model the comparative contribution of a subject’s own agency versus other non-agential factors in one’s intellectual endeavours. A key reference point in the history of epistemology where this framework was especially important came in the wake of meta-incoherence objectionsFootnote 43 to standard process reliabilist accounts of knowledge. According to process reliabilist (e.g., Goldman 1976) approaches to knowledge, knowledge is true belief issued by a reliable belief forming process.

Problematically for this view, some processes which lead to belief generation are themselves disintegrated from the agent’s own cognitive psychology, as was the case in Keith Lehrer’s (1990) famous case of Mr. Truetemp. In that case, an individual—Mr. Truetemp—has (unbeknownst to him) a thermometer implanted in his head, which causes him to reliably form true beliefs about the ambient temperature. Even though Mr. Truetemp’s beliefs are formed by a reliable process, they plausibly (contra the process reliabilist) do not qualify as knowledge.Footnote 44

The virtue reliabilist’s explanation for why such beliefs do not qualify as knowledge is that (in Lehrer’s case) the thermometer is not appropriately integrated within the agent’s—to use Greco’s terminology—cognitive character.Footnote 45 In support of this point, consider that when Truetemp forms a true belief, it is hardly something for which we can credit him.

But this raises a further question: under what conditions does an external artifact, such as a thermometer, count as being appropriately integrated within the psychology of the individual such that it is part of one’s character—viz., so that the product of the inquiry is something the agent can take credit for?

A promising answer to this question—one which offers possibilities for diagnosing the cases considered in Sects. 3.1–3.3—can be found in recent literature at the intersection of virtue epistemology and active externalist approaches in the philosophy of mind and cognitive science.Footnote 46 Consider, for example, the hypothesis of extended cognition (e.g., Clark and Chalmers 1998; Clark 2008), according to which cognitive processes can criss-cross the boundaries of the skin and skull and include extra-organismic parts of the world, such as notebooks, iPhones, laptops, etc. Proponents of extended cognition insist that the matter of whether to include something as part of a cognitive process should be settled on the basis of the functional role that thing plays, rather than on the basis of its location or material constitution. As Clark and Chalmers (1998, p. 8) put it, such judgments should be guided by a kind of parity principle:

if, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is part of the cognitive process.

What this principle implies is that in cases where, for example, one habituates oneself to outsourcing information storage and retrieval to a device (e.g., an online diary) as opposed to biological memory, then—given that we’re prepared to count biological memory as part of a cognitive process—so likewise we should include one’s iPhone, which is playing the same functional role as biological memory vis-a-vis information storage and retrieval, despite being located outside the head.Footnote 47

Proponents of extended cognition however are careful not to include just any kind of external artefact we interact with as part of a cognitive process. It would be problematic for example to suggest that certain devices we consult only occasionally qualify. And so it is a theoretical project for proponents of extended cognition to explain which extra-organismic artifacts we rely on are to be ruled-in as parts of our cognitive processes, and which should be excluded.Footnote 48

As Duncan Pritchard (2010) has pointed out, the question the virtue-epistemologist must answer—viz., under what conditions does an external artifact, such as a thermometer, count as being appropriately integrated within the psychology of the individual such that it is part of one’s character—bears similarities to the question the proponent of extended cognition must answer in order to avoid ruling in too much as part of a cognitive process. Pritchard (2010) has argued, in the earliest paper to explicitly connect extended cognition and virtue epistemology, that these related questions might have a similar answer. In particular, the line he has pressed is that for some extra-agential item to be included as part of an extended cognitive process, it must also be cognitively integrated in such a way as to qualify as a part of the agent’s own cognitive character.

One obvious such integration condition, one that has been advanced also by Clark (2008), involves accessibility; the external artifact must be accessible in a way that is broadly analogous to our innate faculties. In the case of cognitive scaffolding, this means that the iPhone or gadget must be (literally) available to one much like one’s own biological memory is. Cases of individuals who only occasionally carry a phone thus do not qualify; analogously, in the pharmacological case, a smart drug taken only irregularly, regardless of its efficacy, does not qualify. Another such condition involves automatic endorsement; the representational deliverances of the extra-organismic artifact should be automatically endorsed and should not normally be subject to critical scrutiny. Thus, in the scaffolding case, a gadget—say, a tactile visual substitution systemFootnote 49 (TVSS)—is not cognitively integrated if the agent does not trust in a default way the deliverances of the TVSS, much like, in normal circumstances, we trust uncritically the deliverances of our biological eyes; and likewise in the pharmacological case.

A third and important condition, one which Pritchard puts forward, is a kind of cognitive ownership condition—viz., whereby the individual in question endorses in some epistemically respectable way the extra-organismic entity which is playing a role in her belief formation and appreciates it as reliable. Consider the following case: suppose some ethically compromised cognitive scientists were to surreptitiously install a tactile visual substitution system in an individual with non-working biological eyes—and without that individual having any awareness that such a system had ever been installed. The epistemic pedigree of whatever spatial orientation beliefs such an individual forms via the tactile inputs generated by the system is in this case undermined.Footnote 50 Furthermore, notice that such an individual is in the relevantly same situation as Lehrer’s Mr. Truetemp: neither is aware of the source of the reliability of the relevant beliefs, having never originally endorsed the extra-organismic source as a reliable one. By contrast, Pritchard suggests that if, for example, Mr. Truetemp were to come to find out about the thermometer and to appreciate it as a reliable source of his beliefs about the ambient temperature, then his cognitive success could be creditable to him in a way it is not in the original version of the case, even though in both cases, the temperature beliefs are caused by the thermometer. Mutatis mutandis, in the case of the individual with the tactile visual substitution system.

To the extent that this thinking is on the right track, we have a precedent, as well as some tools, for thinking about the cognitive enhancement cases canvassed in Sects. 3.1–3.3 featuring agential disconnect, and just how in these cases the contribution of the subject’s own agency compares with the contribution of what is external to it.

4.2 Cognitive enhancement and cognitive integration

Cognitive enhancement via Lynch-style neuromedia appeared to cause agential disconnect, and thus to undermine the kind of intellectual self-direction that is crucial to intellectual autonomy, by generating through habituation a kind of ‘learned helplessness’ (in the sense articulated in Sect. 3.1). I want to now suggest that there is an even more fundamental cause of the agential disconnect we find in such cases. The more fundamental cause can be expressed in the language of cognitive integration.

Consider again Pritchard’s diagnosis of the original presentation of the Truetemp case. Mr. Truetemp fails the cognitive ownership condition on cognitive integration, specifically because Mr. Truetemp lacks any conception of the reliability of the source of his own beliefs—which happen to be issued correctly by the implanted thermometer. Lynch’s neuromedia cases are presumably not like Truetemp in the respect that, in Lynch’s thought experiment, those with neuromedia are plausibly aware that they are relying on wireless neural implants to form their beliefs. However, there is another respect in which Lynch’s case lines up closely with the Truetemp case. In neither case does the subject have an epistemically respectable conception of the source of the reliability of the beliefs. Truetemp lacks such a conception because he fails to know about the thermometer. In Lynch’s case, the neglect of epistemic hygiene has led to a profound kind of cognitive atrophy. As Lynch puts it,

Just as overreliance on one sense can weaken the others, so overdependence on neuromedia might atrophy the ability to access information in other ways, ways that are less easy and require more creative effort (2016, p. 22).

A consequence of such atrophy is that the individuals in question lack any conception, in the absence of relying on the neuromedia itself, of the neuromedia’s reliability. Just as one cannot take cognitive ownership of the legality of the laws enshrined in a legal text by reading in that very text that its laws are legal, so likewise Lynch’s neuromedia proponents are not in a position to satisfy the cognitive ownership condition by bootstrapping—viz., by simply relying on what their neuromedia tells them about the epistemic pedigree of neuromedia,Footnote 51 and endorsing the reliability of neuromedia on this basis. In sum, a lack of cognitive integration would suffice to account for the agential disconnect, that is, for why the subject’s intellectual agency is disconnected from the way she acquires and maintains her beliefs.

Consider now the second kind of scaffolding case canvassed—where cognitive enhancement via cognitive scaffolding appeared to cause agential disconnect via subjecting individuals to constant framing effects which often go unnoticed and which generate illusions of choice. The paradigmatic example is one who comes to form a set of beliefs through some form of intelligence augmentation (e.g., Google Glass) as a result of a series of menu choices. Again, in the case described, the cognitive integration conditions are not suitably met; this is in particular because the cognitive ownership condition on cognitive integration fails, though for a different reason than in Lynch’s neuromedia case. In the neuromedia thought experiment, the individual will plausibly concede her helplessness in the absence of the neuromedia (and in the presence of neuromedia, cannot satisfy the condition in an epistemically respectable way). In the cases discussed in Sect. 3.2, however, the individual is likely to claim cognitive ownership of her own inquiries. The problem is just that the individual is significantly mistaken about what she is entitled to claim.

Given that, as Harris (2016) points out, design decisions often significantly influence upstream menu choices, the salient explanation for why—in the kind of case described, at least—certain inquiries take the shapes they do, and culminate in the beliefs that they do (correct or otherwise), is technological design rather than the individual’s own preferences. For those who outsource certain kinds of inquiries entirely to menu-driven search apps (e.g., Yelp reviews, Spotify music suggestions, etc.), the beliefs which serve as the termination of these inquiries might be very different if the menu choices were different, if one searched through different apps, or if one searched by means involving fewer automated suggestions.

This is not to say that individual preferences play no explanatory role in such inquiries at all. Preference plays a guiding role even in inquiries that are almost entirely and uncritically influenced by menu-choice design. Rather, the point is that in some of these cases, the subject is led to claim a mistaken level of ownership, oblivious to the framing effect and its influence in her belief formation. This failure of cognitive integration between the individual and the technology whose influence she lacks a conception of suffices to account for the agential disconnect which undermines intellectual self-direction.

Finally, in the case of pharmacological enhancement considered in Sect. 3.3, things are more complex. In this case, the problem was one of authenticity: it was shown that pharmacological enhancement of epistemically efficacious emotions threatens to make one more likely to regard her inquiries as self-directed, in the sense of authentically self-directed, when they are in fact not. And this was due to the fact that enhanced emotions, as such, are for reasons outlined in Sect. 3.3, likely to be mistaken as characteristic of one’s authentic self even when they are not.

Here I think it will be helpful to draw a diagnostic parallel between (i) the failure of cognitive integration in intelligence augmentation cases where technological design generates an illusion of choice in the direction of one’s inquiries, and (ii) what can be a failure of cognitive integration in some (but not all) cases of pharmacological enhancement. In the former case, the subject’s lack of a conception of how the technology relied upon influenced her cognising resulted in the subject’s failing the cognitive ownership condition on cognitive integration. An analogous problem surfaces in the latter case, when the subject lacks a conception of how the pharmacological enhancement relied upon influences her cognising. To the extent that the former case constitutes a failure of cognitive integration, by parity of reasoning, so should the latter case.

4.3 High-tech autonomy

We’ve arrived at the conclusion that the three kinds of enhancement cases (e.g., as surveyed in Sects. 3.1–3.3) which seemed to pose a threat to the retention of intellectual autonomy, share a common feature: in each case, plausible conditions on cognitive integration are not satisfied,Footnote 52 and in a way that accounts for the kind of agential disconnect that underlies the agents’ (respective) defect in intellectual self-direction.

I want to now go further to suggest that, in so far as these cases are ones where intellectual autonomy is undermined, the fact that they are cases of cognitive enhancement, as opposed to therapeutic cognitive improvement, is just an accidental feature. In order to appreciate why, we can briefly run variations on all three cases. In each variation, we’ll hold fixed the cognitive enhancement element of the case but shift the cognitive integration present, so that the cognitive ownership condition on integration is satisfied. In each case, what results is that the agential disconnect characteristic of compromised intellectual self-direction is not present.

Consider first a spin on our ‘learned helplessness’ case. Suppose that the individuals in Lynch’s thought experiment rely on neuromedia in a more epistemically hygienic fashion than he has us suppose they do—viz., by maintaining ways, without relying on neuromedia itself, to monitor and calibrate against their environment the neuromedia they’re relying on, and so without allowing their other ways of forming and maintaining beliefs to atrophy. In such a circumstance, these individuals would not surrender the capacity to be sensitive to potential faults in their neuromedia, specifically because they maintain other reliable methods for assessing its reliability. In such a circumstance, even when the neural implants are in perfectly working order, the individuals in this revised version of the story are plausibly in a position to take a kind of cognitive ownership over the truths they acquire on the basis of their implants. They are, in this respect, in an analogous position with an ‘enlightened Mr. Truetemp’ who is able to monitor the deliverances of the thermometer with other faculties. In neither the enlightened Truetemp nor the enlightened neuromedia cases—where the cognitive ownership condition is satisfied—is it at all clear that we have agential disconnect of the sort that undermines intellectual self-direction.Footnote 53

Similar remarks can be made with respect to ‘enlightened’ variants of the enhancement cases noted in Sects. 3.2–3.3. In the case of intelligence augmentation and framing effects, the subject is oblivious to the framing effect and its powerful influence in her belief formation. This is why the individual is not in a position to take cognitive ownership of the results of the inquiry, as self-directed. However, the situation is different if the individual were to become aware of the framing effects particular to the gadgets she is relying on, and to attain some conception of how such effects are inclined to nudge inquiries in particular directions, and thus how to avoid such effects when they conflict with her other preferences. One mechanism through which this could be accomplished is by actively undertaking ‘framing-effect’ debiasing, which a study by Almashat et al. (2008) has shown to be effective in medical contexts. In a similar vein, we can imagine an enlightened variation of the pharmacological enhancement case from Sect. 3.3, where the relevant ‘debiasing’ would pertain to positivity bias (as reported by Knobe and co-authors) as opposed to framing-effect biases.

In each of the three enhancement cases, the more information one acquires about the mechanisms by which the extra-organismic artefact (be it a drug or a technology) contributes to the beliefs the agent holds, and the conditions that are required for these mechanisms to function reliably, the better positioned the individual is to appreciate when such mechanisms are not reliable, and thus to take cognitive ownership even when (as in the enlightened Truetemp case) the extra-organismic element seems to be doing most of the relevant work.Footnote 54

If this is right, then two final points can be gleaned. The first point is that, when cases of cognitive enhancement genuinely threaten to undermine our intellectual autonomy, it is not because they are cases of enhancement, as opposed to therapeutic improvement. Rather, the cases feature a lack of suitable cognitive integration. The corollary to this point is that cases of cognitive enhancement which feature suitable cognitive integration pose no obvious threat to intellectual autonomy, despite the fact that such cases might involve heavy epistemic dependence on extra-organismic elements of the world.

This verdict seems to conflict with our original diagnosis of the Donepezil versus Modafinil case from Sect. 2. That verdict was, recall, that cognitive enhancement in healthy agents, such as by relying on Modafinil, seems to pose a prima facie threat to intellectual autonomy that is not posed in equal measure in cases where an individual relies on drugs such as Donepezil for purely therapeutic purposes, viz., to slow the progression of Alzheimer’s. This reading suggested that in the case of Modafinil, it was the fact of enhancement (which is not present in the Donepezil case) that seemed to make a difference with respect to intellectual autonomy.

The second key point to make is that cases like this, no less than the cases from Sect. 3, can be diagnosed in the language of cognitive integration, and so involve no essential appeal to enhancement. There are two elements to establishing this point. First, consider again Donepezil as used for Alzheimer’s patients, which does not intuitively threaten intellectual autonomy. That it is used therapeutically is not an essential part of the explanation for why. Consider that, in the case of pharmacological therapeutic cognitive improvements such as Donepezil, the drug is administered with the aim of preventing change (e.g., by slowing change) in the agent’s cognitive psychology, rather than causing it. However, other drugs with the same kind of therapeutic purpose can aid in achieving this cognitively ameliorative aim while dramatically inducing new changes in the agent’s cognitive psychology in other ways. A notable example here is the class of benzodiazepines (e.g., Xanax), which can be used to treat anxiety problems. When used therapeutically, these drugs can help to improve poor concentration for anxiety sufferers. However, drugs like Xanax, while helping to quell anxiety-driven distraction and lack of focus, can have cognitively detrimental side effects, including memory lossFootnote 55 and in some extreme cases anterograde amnesia.Footnote 56 Given the prevalence of memory-loss denialFootnote 57 for those suffering memory loss, the therapeutic use of Xanax can engender cognitive disintegration in the subject despite the ameliorative cognitive effects it brings about by fulfilling its therapeutic function. Such individuals can accordingly have their intellectual autonomy compromised even when therapeutic drugs are fulfilling their intended function.Footnote 58

To the extent that therapeutic cognitive improvements, on the whole, do not typically pose a threat to autonomy, this is only because the introduction of new changes to the system is an aberrant and accidental property of drugs administered in these circumstances. In normal circumstances—as in the example of successful administering of Donepezil—the subject is prevented by the drug from further cognitive deterioration (or at least, caused to have such deterioration forestalled), without the introduction of comparatively more dramatic new changes to the way she maintains and forms beliefs.

As for the case of Modafinil—to the extent that our comparison in Sect. 2 elicited the response that Modafinil is a greater threat to intellectual autonomy than Donepezil as used therapeutically, that is because Modafinil when functioning normally as a cognitive enhancement (as when taken by healthy individuals to gain a cognitive advantage) causes new changes which must be in some epistemically respectable way appreciated by the subject in order for her to take cognitive ownership. An individual taking Modafinil for the first time is likely to be unaware of the specific effect the drug is having and how it is contributing to the individual’s cognitive success. Such an individual begins to trend closer to the unenlightened (and cognitively disintegrated) Mr. Truetemp who trusts the deliverances of the thermometer but fails to appreciate it as a reliable source of his beliefs. However—and this is a point that has been expressed by Pritchard (2010, Sect. 4)—time is a factor which can make such appreciation possible. As Pritchard notes, provided the individual (who undergoes a change to her cognitive architecture) in question is suitably epistemically vigilant, she will acquire track record evidence about the way she is forming beliefs when utilising the drug. Over time, one who is vigilant in this way can plausibly take a kind of cognitive ownership of the beliefs formed through the drug in a way that is not possible, say, the first time the drug is taken.

Putting these points together: given that enhancements, when functioning normally, induce new changes in the individual’s cognitive architecture (whereas therapeutic improvements, when functioning normally, aim to correct or slow the progression of some pathology or defect), the demands for suitable cognitive integration are stricter in all enhancement cases, and more relaxed in cases where the drug (e.g., Donepezil), when functioning normally, does not cause substantial new changes but rather functions so as to prevent or forestall them.

These observations about therapeutic cognitive improvements and cognitive enhancements generally speaking suffice to explain initial reactions to the comparison in Sect. 2. Modafinil, in short, is the sort of thing which requires more by way of cognitive integration than Donepezil. And so for any given case of Modafinil use, the likelihood of cognitive integration is lower than in the case of Donepezil, for which the standards are more relaxed.Footnote 59 And this is so even though some cases of therapeutic improvements (e.g., consider the memory-loss side effects of Xanax) bring about significant new changes to the agents’ cognitive architecture and thus require more by way of cognitive integration, and some cases of cognitive enhancements (e.g., Modafinil, as used by an epistemically vigilant subject) can become cognitively integrated over time. What goes for pharmacological enhancements goes for other forms of cognitive enhancement, such as cognitive scaffolding.Footnote 60

5 Concluding remarks

Cognitive enhancement is profoundly controversial. Bioconservatives and other critics of what they perceive as ‘techno-progressivism’ and ‘post-humanism’ have offered a range of anti-enhancement arguments, many of which are based on ethical considerations for why cognitive enhancement is dangerous or immoral. These include arguments to the effect that enhancement will undermine human dignity and preclude the possibility of meaningful achievements, by artificially removing obstacles the overcoming of which gives meaning to our lives.Footnote 61 Ethically driven arguments against cognitive enhancement are not only registered by bioconservatives or for that matter by ethical deontologists; on utilitarian grounds, Persson and Savulescu (2012)—ardent proponents of embracing moral bioenhancement—have influentially maintained that, given the ease by which available technologies have made possible the destruction of the human race (e.g., bioweapons, nuclear weapons, etc.), cognitive enhancement is currently too dangerous to pursue, at least until we can morally improve ourselves.

These ethical concerns about cognitive enhancement might be valid.Footnote 62 They are however orthogonal to the specifically epistemic question of whether availing ourselves of the latest science and medicine in order to improve ourselves cognitively (beyond healthy levels of functioning) threatens to undermine our capacity for intellectual autonomy. I’ve shown how, at least initially, it looks like cognitive enhancement (in three different kinds of cases) poses a direct threat to undermining intellectual autonomy by undermining our capacity for intellectual self-direction. Furthermore, it appeared that epistemic dependence on technology and drugs for the purpose of therapeutic cognitive improvement (with a purely restorative aim) was immune to this charge. I have attempted to provide a different diagnosis. Drawing from recent work on virtue epistemology and extended cognition, I hope to have shown that the notion of enhancement as such is theoretically unimportant for accounting for why certain kinds of high-tech epistemic dependence genuinely threaten to undermine intellectual autonomy and other such kinds of dependence don’t. If my diagnosis is correct, then just as some therapeutic uses of technology and medicine can undermine autonomy, so likewise, ‘high-tech’ intellectual autonomy is not an oxymoron, but a very natural result of combining epistemic dependence with epistemic vigilance. In short, whether embracing new cognitive enhancement technologies is ultimately a threat to maintaining virtuous intellectual autonomy is not a matter of what we’re depending on (e.g., material constitution, location), or why we’re depending on it (to correct a pathology or gain an advantage), but rather is a matter of how we’re depending on it—which, contrary to some bioconservative jeremiads, remains largely in our own hands.Footnote 63