1 Introduction

Suppose that you are an undergraduate student. Now ask yourself: “Should I use ChatGPT to write my papers?”Footnote 1 We think that the answer is “no,” at least in the case of what we will call humanities papers.Footnote 2 The papers we have in mind are those in which you are charged with giving or evaluating reasons for or against a non-trivial conclusion (e.g., that we have free will, that it is morally wrong to eat meat, etc.). We believe that the reasons for writing your own humanities paper are less obvious than you might think. Many of the immediate suggestions can be dispensed with handily. What’s more, reflecting on this question will give us a better understanding of the nature of our agency and why (and to what extent) we are bound by duties of self-improvement. Further, our conclusion has broad implications for the relationship between human agency and the use of generative artificial intelligence.Footnote 3

We will conclude that you should not use ChatGPT to write your papers, precisely because you have a duty to foster and safeguard your autonomy. This means that you have moral reasons (rather than merely prudential ones) to write your own papers, and these reasons are not contingent on other ends you happen to be pursuing. But before we develop that account in Section 3, it will be instructive to explore some plausible (and not so plausible) arguments in Section 2. Your instructor might give you a variety of reasons when explaining why she has chosen to ban ChatGPT. She may argue that it is cheating (2.1), that it robs you of an opportunity to cultivate your capacities (2.2), or that learning to write (without the help of a chatbot) is essential to thinking (2.3). In each case, she is trying to show why using a chatbot to write your essay would undermine the educational objectives of her course.

Each of these arguments picks out something important about education, but we believe that none of them go far enough in terms of identifying the ultimate ground of the duty. In the next section, we explain why we think these arguments fall short, but these shortcomings cannot be fully appreciated until we develop our positive view of education and autonomy.Footnote 4 In Section 3, we present a view of what autonomy is and why we are morally obligated to cultivate certain capacities through education. We conclude after considering some objections and providing clarifications in Section 4.

2 Inadequate Reasons

In this section, we consider various reasons why someone might tell you that you should not use ChatGPT to write your papers. We do not reject these answers categorically, as there may be contexts in which they give you compelling reasons. Our position is more modest. We simply believe that these answers suffer from limitations that prevent them from being sufficiently robust. Of course, we cannot hope to provide an exhaustive list of potential reasons here. We selected the arguments below for two reasons. First, they are frequently cited by instructors and commonly discussed in both the popular and scholarly literature on this topic. Second, each of these arguments purports to show students why the use of ChatGPT undermines the aims of education.

A professor might restrict the use of ChatGPT for other, perfectly legitimate reasons. For instance, someone might ban chatbots on the grounds that they are unreliable and prone to “hallucinations.” Paglieri (2024) cites an incident in which a graduate student submitted a ChatGPT-generated bibliography containing several publications that do not exist (55). But concerns like this could be addressed by producing more reliable chatbots. Another professor might simply dislike chatbots as a matter of personal preference. Perhaps she is averse to their dry writing style.Footnote 5 We will not consider reasons of this kind because they are not grounded in claims about the central aims of education (which is our primary concern here).

There are other putative reasons that are too trifling to deserve serious consideration. For instance, someone could argue that students should not use chatbots because it is not “natural.” This answer falls flat for at least two reasons. First, it is obviously guilty of the appeal to nature fallacy. Not all natural things are morally permissible, and not all morally permissible things are natural. The fact that something is unnatural tells us nothing about its permissibility. Second, even if unnaturalness did give us good reasons to be hesitant about newfangled technologies, it would cut just as much against word processors and spellcheckers. But surely the proponent of this argument would not ban the use of those technologies.Footnote 6 Thus, in what follows we have restricted our attention to arguments that bear some connection to claims about the aims of education.

Before moving on, we should clarify exactly what we mean by “writing” and “ChatGPT.” Although we refer to ChatGPT throughout the paper, our argument applies to all chatbots.Footnote 7 More specifically, we are concerned about students using large language models (LLMs) to write their papers. LLMs are a form of generative artificial intelligence (GenAI) capable of producing extended, fluent text. When prompted, LLMs like ChatGPT can generate complete sonnets, essays, or newspaper articles. This is precisely what makes LLMs so worrisome for educators who use writing assignments in order to evaluate student learning. By contrast, our argument does not apply to tools like spell check. The crucial difference is that the spell-check tool requires the student to be the author of her own paper. She must generate her own thoughts and express them in her own words. The spellchecker merely improves work that the student can rightly claim as her own. Most importantly, the thoughts presented in the paper can rightly be attributed to the student. This cannot be said of a paper that was written by an LLM. The student who uses an LLM to write her paper cannot plausibly claim ownership of the ideas or their mode of expression.

This also helps clarify what we mean by “writing.” Naturally, we do not mean anything like “putting words on a page.” To see why this is neither necessary nor sufficient for “writing,” consider dictation. Imagine two students, Rosa and Ray, who are taking a philosophy course together. As the paper deadline approaches, Ray tells Rosa that he did not read A Theory of Justice, but he needs to write a paper on it to pass the class. Rosa suggests that she can dictate the entire paper to Ray, and he will simply write down every word she says. A week later, the professor confronts Ray with concerns about academic integrity, so she asks him to explain the paper. Ray is unable to explain the thoughts expressed in the paper, so he confesses that Rosa dictated the paper. But he insists that he should receive credit for writing the paper because he was the one who put the words on the page. Obviously, the professor should not give credit to Ray. She can rightly tell him that he did not write the paper. Rosa wrote the paper. If “putting words on the page” is sufficient for writing, then we could say that Ray wrote the paper. But he did not. If “putting words on the page” is a necessary condition of writing, then we could not say that Rosa wrote the paper. But she did. This is no different than crediting Milton for writing Paradise Lost even though he dictated the entire poem to his daughter and friends.

Thus, when we say that a student ought to “write” her own papers, we mean that the ideas in the paper must come from the student, and she must express those ideas in her own words. When an author incorporates suggestions from a word processor’s spelling and grammar tools, this does not undermine her ability to claim ownership of the ideas and words in her work. By contrast, if Ray were to use ChatGPT to generate his entire essay, then he could not claim to have written it in this sense.Footnote 8

Finally, we should point out that we are not committed to the claim that writing assignments are the only way to promote the educational aims we have in mind. They simply happen to be one of the most popular learning assessments used in humanities classes. This means that our concern is not with writing per se. Our focus is on the underlying value that writing promotes. We are open to the idea that the educational aims that are promoted through writing assignments could be achieved by means of other assessments (oral exams, tutorials, discussions, presentations, etc.). To be precise, we could add this qualification throughout the paper by pointing out how students’ reasons to write their own papers extend to these activities as well. But we won’t be so persnickety. We will simply talk about writing papers. And if we are successful, our discussion will shed light on what value writing promotes, thereby clarifying which features a suitable surrogate would need to have. With these clarifications behind us, we can begin scrutinizing the reasons students might consider when asking themselves if they should use ChatGPT to write their papers.

2.1 No, because it would be cheating

One response might be that you should not use ChatGPT because that would be cheating, an instance of academic misconduct. Your professor might think it is tantamount to plagiarism, since you are falsely representing the chatbot’s paper as your own work. Instructors who hold this view could adopt Harvard’s “maximally restrictive draft policy” on the use of AI in courses:

We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, we expect for the assignment to list all team members who participated. We specifically forbid the use of ChatGPT or any other generative artificial intelligence (AI) tools at all stages of the work process, including preliminary ones. Violations of this policy will be considered academic misconduct. (Harvard, 2024; emphasis added)

Although we think that cheating is wrong, we do not think that this fully captures our reasons to refrain from using ChatGPT.Footnote 9 After all, codes of conduct can change. Professors could simply allow students to use chatbots. Some instructors, like Ethan Mollick, an associate professor at the Wharton School, are embracing the use of ChatGPT. He reasons:

I think everybody is cheating … I mean, it’s happening. So what I’m asking students to do is just be honest with me … [I ask them to tell] me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want … (Kelly, 2023).

Suppose that all your instructors have this policy. Indeed, suppose that every instructor at the university decides to permit unrestricted use of ChatGPT. They may come to believe, as Mollick does, that it’s going to happen anyway, so they adopt this language from Harvard’s “fully-encouraging draft policy”:

This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited (Harvard, 2024).

Under such circumstances, would there be any reason to refrain from letting ChatGPT do as much work for you as possible? Or, exploring the question from the other side, is there a good reason for the university to ensure that at least some courses require that you write your own papers? We think so, but it is not so easy to explain why students ought to write their own papers. For now, what we can say about this response is that it sets the bar too low. Surely there is more to the normative ideal of education than simply refraining from cheating.

2.2 No, because it would constitute the loss of a capacity

Even if the use of ChatGPT does not count as “cheating” (because the professor has chosen to let students use chatbots), someone could still suggest that it is wrong on the grounds that you are failing to cultivate your capacities. Educators could be concerned with the loss of an ability for any number of reasons. For instance, your professor might suggest that you won’t always have access to chatbots. So you should learn to write on your own. This reply is reminiscent of our elementary school math teachers who warned us that we won’t always have a calculator handy. Obviously, that claim turned out to be false. Since smartphones are everywhere, nearly everyone has a calculator on their person at all times. There is still value in knowing how to do simple arithmetic in your head, but that value cannot be explained by the fact that you do not always have access to a calculator. For precisely the same reason, this answer is unsatisfactory when it comes to the use of ChatGPT. Chatbots are becoming increasingly ubiquitous. They can be accessed easily, even from your phone. So lack of access does not constitute a good reason to refrain from letting ChatGPT write your undergraduate essays.

Furthermore, this response smacks of a certain paranoia. Most of us can’t grow our own food, build our own houses, and so on. We are deeply dependent on others and on technology for so much of what we need on a daily basis, but we do not (and should not) organize education around learning the skills we would need to survive some sort of apocalypse. We can see that the intellectual survivalism underpinning this first response is not held consistently. And there are good reasons to reject it. Learning to do everything from scratch would, among other things, needlessly take time away from more valuable educational activities that are made possible by the fact that we can rely on the division of labor to free us up to learn about subjects more relevant to our actual lives.

Perhaps you will almost always have a phone on you that is ChatGPT enabled. But there is still something lamentable about the fact that many students would be losing an ability, even if in most circumstances they will functionally maintain it. In the Phaedrus, Socrates famously expresses a concern about reading and writing along these lines. He worries that the widespread use of writing will worsen our memory.Footnote 10 And he has a point. There is no denying that preliterate cultures accomplished some impressive feats of memorization. Homer’s Odyssey alone comprises 12,109 lines, and it was passed down through several generations before the invention of the Greek alphabet (Foley, 2007). Socrates may be right that those who participated in oral traditions had better memories than those of us who have outsourced this capacity to the written word.

Indeed, some critics have raised concerns about technological affordances precisely because our reliance on them might undermine our capacities. Avigail Ferdman (2023) voices this concern about ChatGPT in particular. Her argument is grounded in Hurka’s “perfectionist conception of the good life,” which is committed to the claim that human flourishing is constituted by the exercise of our capacities (2023, 2). She writes:

These technologies include things like delivery robots, self-driving cars, generative models like ChatGPT, and decision-making algorithms. They promise to replace certain human tasks or activities in the name of convenience or efficiency, but in the process might reduce the propensity to exercise certain human capacities. In this way, capacities that could have been used are degraded, leading to inactivity, and to loss of flourishing. (2023, 17)

We are generally sympathetic to Ferdman’s view. We think she is right to be concerned about the possibility of students diminishing their capacities by relying on chatbots. But we believe this concern with “capacities” (as such) is too broad; not all of our capacities matter morally.Footnote 11

When a new technology causes us to lose an ability (or if it merely weakens a capacity), this should not automatically generate a moral concern. We would need a further argument to establish the value of the capacity in question. Those who were raised to write with word processors may not know how to change the ink ribbon in a typewriter, but it’s not obvious that we have lost something of great value there. What’s more, even if the capacity in question has value, we must weigh the value that was lost against what was gained. Although we may not remember quite as much as our ancestors who participated in oral traditions, reading and writing allow us to produce and pass on a far greater quantity of knowledge. We can imagine a world where the use of chatbots becomes so widespread that people lose the capacity to write on their own. But, once again, a further argument is needed to establish why the loss of this ability should count as a harm and why that harm is greater than the efficiency we gain from chatbots. After all, technology has caused us to lose countless capacities, but we do not lament the loss of each one. We might be better off without knowing how to churn butter by hand or change ink ribbons.

Changing tack, one way in which the loss of the ability might constitute a harm is how it relates to post-college employment. Surely, one aim of education is to prepare you for participation in the workforce.Footnote 12 What’s more, future employers may regard your bachelor’s degree as evidence that you have cultivated certain skills. For instance, an engineering firm might regard a student’s bachelor’s degree as evidence that she is skilled enough in mathematics to do the work they will require of her. If a future employer treats your bachelor’s degree in English or philosophy as evidence of your writing ability, your wanton use of ChatGPT makes you (and perhaps the university) guilty of false advertising. You are representing yourself as someone who has demonstrated facility with writing at the university level, but that is not the case. This is fraudulent.

Such a response is limited at best. The future employer might know that you are coming from a university that allows the use of chatbots, and, as Mollick states, using ChatGPT is an “emerging skill” (Kelly, 2023). Similar arguments could have been leveled against the adoption of word processors. Forty years ago, if someone had argued that universities should not teach students to use word processors or spellcheck because employers would regard degrees as evidence of students’ ability to use typewriters and spell, then they would have been mistaken. Employers, just like universities, adapted to changes in technology, and typewriters have become obsolete. So too will much written communication authored exclusively by human beings. Of course, there may be employers who forbid the use of ChatGPT.Footnote 13 And those employers will want to know that you are able to write on your own, without the use of a chatbot. In those instances, you would indeed have a reason to refrain from using ChatGPT to write all of your papers. But that situation is subject to change.

Employers might become increasingly accepting of chatbots over time, and they could abandon their interest in having employees who can write on their own, just as they abandoned their interest in having employees who could use typewriters. Future employers may simply want to know that you are capable of producing the outputs they request of you, and they may be indifferent as to questions of process (e.g., whether you use chatbots or not).

For instance, London cab drivers take a notoriously difficult test, which requires them to memorize 25,000 streets and every business and landmark on them. It has been called “one of the most difficult tests in the world,” and it takes aspiring cabbies years to acquire such an encyclopedic knowledge of the city (Rosen, 2014). There may well have been a time in the past when such knowledge was necessary for being an effective cab driver, but Uber and Lyft drivers get around just fine by using GPS on their phones. Over time, companies might simply allow employees to use chatbots just as rideshare companies allow drivers to use GPS. So long as you do not falsely advertise yourself as having acquired a skill that you lack, this claim (about the loss of an ability) does not give you a robust reason to write papers on your own. Yet we think something would indeed be lost in a world where a university education does not involve thinking through substantial questions for yourself. We will explain what we find plausible about this claim in Section 3.

2.3 No, because writing is thinking

The previous reasons, while instructive to consider, might not have seemed so promising on their face. Furthermore, many of those arguments (e.g., “it would be false advertising to future employers”) may seem grating for other reasons. For instance, many educators feel that preparation for employment isn’t the sole aim of education. They may see the intellectual development of the student as a valuable end in its own right.Footnote 14 So let us now take a page from the educator’s playbook to see if we can develop a better argument, inspired by the popular slogan “writing is thinking.” We can begin by asking what this slogan means. Steven Mintz offers a compelling account of it in an editorial for Inside Higher Ed:

Writing is not merely a mode of communication. It’s a process that, if we move beyond simple formulas, forces us to reflect, think, analyze and reason. The goal of a writing assignment worth its salt is not simply to describe or persuade or summarize: it’s to drive students to make sense of difficult material and develop their own distinctive take. (Mintz, 2021)

The slogan encapsulates the idea that writing is much more than information transmission; it is (among other things) information processing. And the reason you should engage in these activities is that they promote the ability to make sense of difficult material and to develop your own point of view.

These certainly sound like worthwhile abilities to advance. But can they satisfy the challenge of responding to the student who sincerely asks, “What’s the point of making sense of difficult material?” or “What’s the point of developing my own take?” We, the authors, are amenable to the point of view that Mintz expresses. After all, we chose to study philosophy and pursue it as a career. We certainly enjoy making sense of difficult material and developing our own views. Nevertheless, we do not think that this claim, as stated, is up to the challenge. We can get a clearer sense of why by scrutinizing the proposal a bit more.

Let’s begin with a closer examination of the idea that making sense of difficult material gives you a reason to write your own papers (taking for granted that writing has this effect).Footnote 15 It is fair to ask why one should master difficult material in the first place. It would be difficult to memorize all of the streets in London, but that might not be a sufficient reason to do it. If this is our justification for banning ChatGPT on assignments, we should be able to spell it out.

A satisfactory answer to this challenge would have three steps. It would, first, identify the regress stopper. This is the value that ends the chain of “why” questions. It would involve identifying some purportedly final value, something that is valuable for its own sake (Korsgaard, 1983). This is the crux of our argument in the next section. That value could be mastering difficult material, but it needn’t be.Footnote 16 Many papers touting the value of writing via its connection to critical thinking go on to ground the value of critical thinking in some other value, such as employability.Footnote 17 And, presumably, employability is not finally valuable. Its value is instrumental. But, in the end, it is instrumental for what? Call this the identity problem, which is simply the challenge of pinning down what, exactly, the terminal value in the chain of justification is even supposed to be. Once the terminal value has been identified, the account must give reasons for thinking that the terminal value is, indeed, finally valuable. There are roughly two ways to do this successfully. Either it will be self-evident that the proposed final value really is valuable or there must be some kind of argument that can substantiate the claim. So, while it might seem that making sense of difficult material is valuable, is it really self-evident that it is finally valuable?

We think not for at least two reasons. First, as noted above, it is all too common to see this value instrumentalized. Being able to master difficult material is often touted as a respectable learning outcome because it will help students on the job market or help them get better grades in other courses.Footnote 18 Second, it’s plausible that it merely seems that being able to master difficult material is finally valuable just because mastering difficult material has so much instrumental value. That is, the intuition that it’s finally valuable can be debunked. So, if mastering difficult material is the final value, more needs to be said about why we should think that it is, indeed, finally valuable. Call this the authority challenge: we need good reasons to think that the terminal value really is authoritative. We must not only identify where the regress stops; we must also explain why the regress stopper has final value.

To get a better sense of the first two steps, consider Mill’s argument for hedonism (i.e., the claim that pleasure has final value). In order to complete the first step, Mill must identify the bedrock value that grounds the goodness of everything else. He settles on happiness, which he defines as “pleasure and the absence of pain” (1957, 10). For this identification to be plausible, it must be the case that the value of everything else (health, honor, virtue, friendship, etc.) is reducible to pleasure. This is why he takes the time to explain that money and fame are not desirable as ends in themselves; in his view, they are instrumentally valuable only insofar as they contribute to the final value of happiness. After completing the first step (identifying what has final value), Mill must address the authority challenge. To do this, he must either provide an argument for the conclusion that pleasure has final value or he must explain why such an argument would be unnecessary (on the grounds that its value is self-evident). Mill goes with the second option, claiming that “questions of ultimate value do not admit of proof” and suggesting that this is “common to all first principles” (Mill 1957, 44).Footnote 19

Once we acknowledge these two requirements, we should have a clearer sense of what is often lacking in the arguments we have considered thus far. Even when they claim to identify something of value (e.g. exercising your capacities, mastering difficult material, etc.), they do not always provide a satisfactory answer to the authority challenge. To see why, consider the London cab drivers once again. There may be some who believe in the value of their extensive knowledge, but it is far from obvious that it has final value.Footnote 20 Why should we think that every difficult activity achieves this kind of value? It would be extremely difficult to count all of the blades of grass in your backyard, but surely such a skill lacks final value. If something has final value, it must be plausible to claim that it has objective value. The grass counter might enjoy the activity immensely (giving it plenty of subjective value), but most of us are likely to agree with Rawls that this activity is objectively pointless. Something has final value only if it has objective value and it has objective value if and only if it makes a valid claim on all agents.Footnote 21

Lastly, the connection between writing and final value needs to actually explain why writing is something typical students should do. It will not be enough to point out that writing promotes the final value. To take a dramatic example from Mark Schroeder, eating my car would supply me with certain nutrients (2007, 96). But that does not mean I should eat my car. The connection between writing and the valuable thing needs to be unique enough to explain why writing is actually worth doing, why not writing one’s own essays is, in some significant sense, a failure. Call this the explanatory challenge.

We hope that it goes without saying that similar problems arise for developing one’s own take. Is this meant to be the final value, or is this an instrument that ultimately promotes or partly constitutes some other value? If it is the final value, is it self-evidently so? If not, what is the argument for its being finally valuable? And if it is finally valuable, is it connected to writing in the right way such that it can explain why students must write for themselves? As with making sense of difficult material, the gaps here need to be filled, and that is not a trivial task. The idea here is not that the “writing is thinking” approach is wrongheaded. In fact, we find a lot of promise in it. The idea is that it is simply incomplete. That’s fine for Mintz’s purposes. In his essay (and other places where the slogan pops up), the idea is typically to communicate that writing isn’t mere communication and that it is connected to educational values that are typically taken for granted (promoting critical thinking, future employment, and so on). That being said, the slogan alone cannot give a satisfying answer to the challenge that we are concerned with here. It is too coarse-grained. Not all activities that involve “thinking” or “mastering difficult material” have final value. A more specific value must be identified.

3 Adequate Reasons: The Duty to Cultivate Your Autonomy

Some of the above reasons come close to capturing what we believe are adequate reasons. There is something compelling about the idea that “writing is thinking.” Similarly, the loss of certain abilities should be avoided, and we think that cheating is wrong. What the above arguments lack is an account of precisely what grounds the value of the skills in question. As we explained, not all abilities are essential to maintain. The same can be said of outsourcing certain cognitive tasks to reliable tools in your environment.

As we see it, the accounts we considered above were not only incomplete, but they were also overly broad. Some aspects of our account will resonate with the claim that “writing is thinking,” and we will also voice a concern about the danger of diminishing our capacities. But we will have a considerably narrower focus. The account we explored under the “writing is thinking” banner was in part incomplete because it was broad. By committing itself to the value of mastering difficult material, it had to take on the baggage of defending this capacity, and this was part of why it floundered.

We will argue that the reason to write your own papers is borne out of a duty to respect your own humanity, your capacity to self-govern. And, partly in the interest of tightening our focus, we will say more in the next section about what we mean by “humanity.” This will enable us to take up the authority question, the challenge of explaining why you must respect your own humanity. In the previous section, we argued that many of the arguments against ChatGPT do not go far enough in terms of providing students with reasons to refrain from using chatbots. This discussion led to three desiderata of a satisfactory account. First, we must identify what has final value. We take up this task in Section 3.1. Second, we must present an argument to defend the authority of this value. We present this argument in Section 3.2. Finally, we must explain the connection between writing papers and that which has final value, and we establish this connection in Section 3.3.

3.1 The Value of Humanity

Our answer to the question of why humanity matters is unabashedly Kantian.Footnote 22 We share Kant’s commitment to the unique value of “humanity,” a word that we use interchangeably with “autonomy”.Footnote 23 Humanity is the rational agent’s capacity to set and pursue her own ends. Rather famously, Kant argues that our possession of this capacity puts us under moral obligations to respect it in ourselves as well as in others. He believes humanity has this uniquely elevated status because it is the only thing whose value is objective, unconditional, and non-fungible. It, unlike the other values mentioned so far, can stop a regress. Each of these three features requires a brief explanation.

First, for Kant, the value of autonomy is objective, as it makes a valid claim on all rational agents. This is what distinguishes the categorical imperative, which generates obligations for all rational agents, from hypothetical imperatives, which generate obligations only for those who happen to desire some end. Kant believes that we are all necessarily committed to the value of rational agency, whereas the value of other ends is merely contingent. You may or may not care about the end of understanding Othello, but you have no choice but to care about your capacity to evaluate which ends matter to you and which ends do not.Footnote 24 Why? We begin by noting that we are self-conscious creatures. That is, when we hold a view (such as the view that humanity is not valuable), we are aware that this can be done for better or worse reasons. Suppose you are asked why we should not value humanity. What would you do? Presumably, you would give reasons to think that humanity isn’t valuable. By doing this, you betray yourself: you’ve just used your capacity to evaluate ends and have thereby endorsed it by relying on it in your own defense.Footnote 25 Given the kinds of creatures we are, it is very hard to see how we could avoid endorsing this key aspect of our own nature.Footnote 26

Second, the value of humanity is unconditional, which means that every human being has it and there is no context in which this moral status is forfeited (via, e.g., bad behavior). So long as you have the capacity to set and pursue ends, you embody the objective value of humanity. Finally, its value is non-fungible, which means that it does not admit of exchanges for things of equal value: We cannot make up for the wanton killing of a rational being by simply producing another. Further, its value is lexically prior to things like pleasure or desire satisfaction.

This final claim is particularly striking. Is it really true that we should not forfeit autonomy for any amount of pleasure? To see why this might be more plausible than it seems at first glance, consider James Griffin’s personal despot argument.Footnote 27 He asks you to imagine what you would say to someone who convincingly shows that you would be much happier if you were to hand over all of your decision-making to a (benevolent) personal despot. Griffin balks at the offer, claiming “I shall want to go on being my own master” (1986, 9). Even Mill seems to concur. He makes the point in On Liberty:

If a person possesses any tolerable amount of common-sense and experience, his own mode of laying out his existence is the best, not because it is the best in itself, but because it is his own mode (1988, 64).

Mill seems to be suggesting that choosing your own life path is preferable even if you would have been “happier” letting someone else make your decisions. And this is true despite the fact that he is often cited as a paradigm hedonist, committed to the claim that only pleasure is intrinsically valuable.Footnote 28

3.2 Respecting Humanity

Establishing the moral weight of autonomy does not, however, suffice to generate moral obligations. We need some additional principle to bridge the gap between this claim about value and the existence of moral obligations. For instance, if this were a consequentialist position, we would be tempted to think that we should create as much of this value as possible. Unsurprisingly, that is not Kant’s view.Footnote 29 Instead, Kant argues that our actions ought to express the kind of respect that is appropriate for something that has dignity. When Iago manipulates Othello by means of deceit, his action expresses disrespect for Othello’s capacity to set and pursue his own ends. If we truly have respect for rational agency, then we must refrain from undermining the abilities of other agents to make their own decisions.

Unlike many other moral philosophers, Kant believes that we also owe moral duties to ourselves.Footnote 30 His formula of humanity makes this explicit:

So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means (G 4:429, emphasis added).

But the implications of this duty of respect are somewhat less obvious. It is clear enough that you should refrain from manipulating others by means of deceit, but what does it mean to show respect for your own humanity?

Kantian ethicists have used this idea to argue for a variety of duties. Hill (1973) argues that respect for our own humanity requires us to eschew servility. Hay (2011) defends a duty to resist one’s own oppression. Whatever the particular duty might be, these duties all stem from the same source: respect for the humanity in ourselves. When it comes to the question of cultivating capacities, Kant thinks we ought to make ourselves worthy of our own humanity by preserving our ability to pursue whatever ends we happen to choose (MS 6:392).Footnote 31 If we were to let our talents rust, we would render ourselves incapable of pursuing the ends that we set for ourselves. This would not only impoverish our lives by narrowing our menu of options, but it would also express disregard for the very capacity that we are required to respect. Or, to frame it as Paul Schofield does, you are wronging your future self, and you do not presently occupy the perspective of the person who could consent to this decision (2015).Footnote 32

Once again, there is a danger of overstating the extent of this duty. It is not as if every capacity should be included within the scope of this obligation. This point is especially important in the context of emerging technologies that serve as replacements for erstwhile human capacities. If we consider the entire set of talents that humans possess (the ability to memorize thousands of London streets, to recite Homer’s Odyssey from memory, to use a typewriter, etc.), then we must acknowledge that only a proper subset of these talents is essential to your humanity. By outsourcing certain outmoded skills (whether to machines, pieces of paper, or anything else), you do not necessarily undermine your own autonomy.

For this reason, it would be instructive to turn our attention to the capacities that lie at the heart of humanity rather than those found on the periphery. At its core, humanity (rational agency) is what Kant thinks sets us apart from other animals.Footnote 33 As Kant sees it, nonhuman animals do not have a say in determining their own ends. Their ends (which are mostly restricted to survival and reproduction) are given to them by nature or instinct. But, unlike other animals, human beings can reflect on the value of purported ends. This is what sets Kant’s theory in polar opposition to Hume’s. For Kant, reason is not, as Hume thought, the slave of the passions. Desires do not compel us to act. We can step back from and reflect on our desires, decide which ones matter most and which cohere with our conception of the good, and do our best to act accordingly.Footnote 34

It is precisely this kind of reflection which is often regarded as the sine qua non of personal autonomy. The autonomous agent is not passive when it comes to deciding what ends to pursue. On the contrary, autonomy consists in the ability to critically evaluate ends. This requires us to examine the coherence of various desires with our conception of the good life and to take up an active role in the construction of that conception. And this is something we must do. We are, as Korsgaard says, condemned to choosing what to do and, therefore, who to be (2009).

There are many skills that we could outsource to a machine without compromising our autonomy. We can let them correct our spelling, find the best route to work, and remind us of our appointments. None of these abilities have immediate implications for your humanity. But imagine what it would be like to hand over your ability to critically assess your values, to forfeit the capacity to reflect on your conception of the good life and revise it. Such a forfeiture would amount to giving up authorship of your life story.Footnote 35 If Kant is right about our duty to respect humanity, then we have both positive and negative duties to ourselves when it comes to the capacities that are essential to autonomy. We have positive duties to cultivate these abilities, and we have negative duties to refrain from doing things that would undermine them.

3.3 Writing and the Cultivation of Humanity

If what we said above is right, then we can conclude that you have a duty to cultivate your humanity. This means that you owe it to yourself to foster your own autonomy. As we extend this duty to writing papers, it will be what Kant calls an imperfect duty, that is, one that comes with a degree of latitude. We can contrast imperfect duties with perfect ones, which come with no latitude. Hill (1971) lays this distinction bare by focusing on the form these duties take: while perfect duties have determinate forms, such as “always do this” or “never do that,” imperfect ones have what Kant identifies as “play-room for free choice” (MS 6:390) and what Hill renders as “sometimes—to some extent—do this” (Hill, 1971).

Heeding this distinction is crucial for meeting the explanatory challenge properly. It tells us how writing must connect with the cultivation of humanity if our argument is to come together successfully. To show that you should write your own papers, we must argue that humanities papers offer a unique opportunity to discharge the duty to cultivate your autonomy.

Before explaining why we think humanities papers offer such an opportunity, let us dig a bit deeper into the idea of uniqueness. Our heuristic for thinking about when an opportunity is unique enough that passing on it constitutes a failure is the “if not now, when” test. The idea here is that an imperfect duty is driven by an end (a goal) that you must adopt, one that comes with “play-room” for how it is to be pursued. While this play-room gives you latitude to pass on some options to pursue the goal, the latitude is not infinite. Given an opportunity that seems optional, you must ask whether passing on it is really compatible with your pursuit of the goal. To make things a bit less abstract, let’s consider an illustrative case from David Velleman:

Suppose that you stay in shape by swimming laps two mornings a week, when the pool is open to recreational swimmers. But suppose when your alarm goes off this morning, you just don’t feel like facing the sweaty locker room, the dank showers, the stink of chlorine, and the shock of diving into the chilly pool (Velleman, 2006, 21).

The question that Velleman poses about this case is whether “not feeling like it” can constitute a good reason to pass on swimming today, when you are already up, the pool is open, you have nothing better to do, and so on. He thinks not, reasoning that treating this as a reason is equivalent to having no commitment to swimming at all. If we think of commitments as akin to promises to ourselves, a commitment to exercise that treats “I don’t feel like it” as a reason to defect is like a promise such as, “I promise to pick you up from the airport, if I feel like it when the time comes.” Hardly a commitment at all!

Velleman finds Kantian backing for this thought, observing that Kant recognized that, “acting for reasons is essential to being a person, something to which you unavoidably aspire” (Velleman, 2006, 22, emphasis added). This connects deliberating about swimming to something we mentioned above. It shows the difficulty (perhaps the impossibility) in denying the value of humanity. Having and endorsing humanity involves, as Velleman notes, acting for reasons. This involves rising above momentary feelings and impulses to ask whether those feelings or impulses could constitute reasons. And now, hopefully, the force of the “if not now, when” test is clear. If you’re committed to something, are in the presence of a great opportunity to uphold that commitment, but then pass on that opportunity for no good reason, you’re failing to uphold your commitment. You demonstrate that you have not adopted the end in question.

With all of this in place, we can now ask why humanities papers offer us a sufficiently unique opportunity to cultivate our humanity. We believe that passing on the opportunity to write your own paper constitutes the same sort of failure that skipping swimming does in Velleman’s case. It shows that you are not truly committed to the end of fostering your autonomy. It is important to recall that our titular question was framed in the context of an undergraduate student. This context is crucial. Those who tout the value of a liberal arts education are right to point out how studying humanities in higher education gives students a special opportunity to critically reflect on commitments they inherited—ones they might never have subjected to scrutiny (Brighouse 2006). For most students, primary school and secondary school do not afford many opportunities to ask these kinds of questions. And Aristotle had good reason for thinking that some pupils are too young for moral philosophy (Nicomachean Ethics Book 1, 1095a). Thoughtful engagement about questions concerning practical wisdom requires maturity.

As a typical university student, you have just reached the age where you are no longer a minor. For the first time in your life, you are legally permitted to make choices that were previously dictated by your parents or legal guardians. This includes important decisions about what to do with the rest of your life. In The Ideal of the University, Robert Paul Wolff claims that universities have a special role to play in virtue of the fact that most students come to college at a unique moment in their development:

But on the threshold of adulthood, [...] [the student] is suddenly faced with a problem much greater than any his schooling has ever posed. He must decide who he is, and hence who he is going to be for the rest of his life. He must choose not only a career, a job, an occupational role, but also a life-style, a set of values which can serve as his ideal self-image, and toward which he can grow through the commitment of his emotional energies … College is the setting for this transitional experience, and undergraduate education should facilitate and enrich it, not to squelch it (1969, 38-39).

In order to make informed choices about these matters, you must reflect on your values and commitments. For instance, when contemplating career choices, you might consider how important future earnings are to you and how to weigh that against other values (e.g., leisure time, your enjoyment of the work, marriage, family, etc.). Writing a humanities paper for yourself affords you an opportunity to practice the sort of reasoning involved in grappling with these sorts of questions and to receive feedback on your thinking.

Imagine a philosophy professor gives you a writing assignment in which you are asked to critically assess an idea from an assigned reading. You now have a special opportunity to engage with an expert and present them with your reasoning about a subject that could have a substantial impact on your life. Perhaps you were assigned Singer’s Animal Liberation. Rather than reading the text and writing your own paper, you ask ChatGPT to write a response to Singer. It may do an adequate job of generating a paper for a passing grade, but you have missed the opportunity to ask questions about your values—ones that might have changed the course of your life. And if the professor gives you critical comments (as we believe she should), then you have missed the chance to receive feedback about both your thinking process and your conclusions.

These opportunities are precious, and you are not likely to encounter them again. What’s more, you are not simply missing out on a unique opportunity to learn an esoteric skill. At various points in your life, you might pass on unique opportunities to become better at chess, improve your equestrian skills, or learn to cut vegetables like a Michelin-starred chef. Admirable as it might be to pursue those ends, in declining those offers you would not be passing on an opportunity to sharpen the very skill of determining what kinds of ends are worthwhile in the first place. By writing your own humanities papers and engaging with your professor’s feedback, you are honing your ability to weigh different commitments, to assess the strength of competing arguments, and to render your worldview coherent, skills that are definitive of, or at the very least partially constitutive of, autonomy.Footnote 36

Further, you do not usually have a good reason to pass on these opportunities. College is (typically) a special time in life. Even those of us who had to work through college can recognize that this is a time when you have outsized space for your studies, compared, at least, with what comes later in a typical adult life. And in many circumstances where you don’t have that space (perhaps a family member dies or falls ill), instructors will typically (and should) make some space for you, offering extensions, incompletes, and so on.

Bringing it all together, our answer to the challenge runs as follows:

1. You have a duty to cultivate your humanity, because this capacity has final value.

2. If you have a duty to cultivate something, happen upon a good and unique opportunity to cultivate it, and do not have a good reason to pass on the opportunity, you ought to take the opportunity.

3. Writing your own humanities papers is one such opportunity.

4. So, you ought to write your own humanities papers.

We hope to have made it clear how this argument clears the three hurdles described above: the identity problem, the authority challenge, and the explanatory challenge. It identifies autonomy as the ground of the duty. It provides reasons for believing that autonomy has final value. And it explains how writing fits within the broader context of the moral duty to cultivate autonomy.

What’s more, this account captures the intuitively compelling features of the inadequate responses discussed in Section 2. While we do not think that refraining from cheating captures the full reason to avoid using chatbots, Kantian ethics (and our application of it here) can help explain why cheating is wrong (when it is). Among other things, it involves deception in ways that disrespect the humanity of your professor. But, according to the view we defend in this paper, cheaters are also failing to fulfill a duty they owe themselves. They are shirking an obligation to promote their own autonomy. This provides justification for the truism that cheaters are ultimately cheating themselves.

We also believe that our Kantian account provides a better understanding of why we are required to cultivate certain capacities. It shows why some capacities matter while others do not. We should be particularly concerned about the capacities that are essential to autonomy (e.g., our ability to weigh reasons, think through problems, etc.). Finally, our account ties neatly into the idea that writing is thinking; we, too, can affirm that slogan. But we think that the value of this kind of thinking is ultimately grounded in the cultivation of humanity.

4 Objections and Replies

The argument above can be restated very simply. It begins with the claim that you have a moral duty to foster and safeguard your own autonomy. Then we argued that learning how to write humanities papers is an important part of cultivating your autonomy. From these premises, it follows that it would be wrong to use chatbots and miss out on this opportunity. By refusing to cultivate your autonomy, you express disrespect for your possession of this precious capacity.Footnote 37 Framed in this way, the argument faces several obvious objections. First, there may be some concerns about the scope of the proposal. But this objection is not so worrisome. It can be dealt with by offering some clarifications. There is a more serious objection, however, as someone might argue that the use of various technologies (including chatbots) has the potential to enhance our autonomy rather than undermine it. In this section, we address these objections.

Let us begin by clarifying the scope of our conclusion. We do not mean to imply that the duty to write one’s own papers applies only to students at elite liberal arts universities. Liberal arts education has often been associated with the promotion of autonomy through writing, but we believe that the availability of these educational goods should not be limited to liberal arts students. Our argument could be extended to students in other disciplines, at community colleges and even pre-college settings, such as high school. Every rational agent, in virtue of their humanity, has the right to cultivate their capacities, and they should be given the opportunity to develop their skills of rational reflection through writing. Further, those agents have a duty to cultivate their autonomy, and, so, could have a duty to write their own papers in high school English class.Footnote 38

Let us now address the most difficult challenge: the claim that ChatGPT can enhance our autonomy. It is certainly true that technology often enhances our ability to set and pursue ends. By freeing us from the burden of tasks that are physically or mentally demanding, technology allows us to do things that would have been impossible without it. It also gives us unparalleled access to information. Your smartphone can retrieve far more information than Kant ever had access to at the University of Königsberg (by several orders of magnitude). Surely, chatbots offer similar advantages. For instance, they can free up time by relieving you of tedious tasks (writing trivial emails, boilerplate letters, etc.). But they might also improve your writing in various ways. You could use ChatGPT to rephrase an awkward sentence, to help you come up with objections to an argument, or to digest complex information. Someone might argue that collaborating with artificial intelligence will make human beings more capable, whereas we have been arguing that reliance on it diminishes our capacities.

There are many recent examples of humans collaborating with artificial intelligence in order to climb to new heights. Chess was forever changed when Deep Blue defeated world champion Garry Kasparov in 1997. The game of go underwent a similar revolution when AlphaGo defeated Lee Sedol in 2016. In both cases, the triumph of artificial intelligence led to substantial changes in the way people play those games. It is now routine for players to use bots to study. New openings and move sequences have emerged from these collaborations, and this lends credence to the claim that current players like Magnus Carlsen and Shin Jin-seo might be the greatest of all time.Footnote 39 If reliance on artificial intelligence has made humans better at chess, go, and data analysis, it stands to reason that it can make us better at writing as well. This would mean that chatbots could enhance our ability to set and pursue our own ends, and this seems to be an objection to our claim that chatbots undermine the autonomy of their users.

Our response to this objection begins with the reminder that this paper is aimed at a typical undergraduate student. Here it is helpful to draw a distinction between experts, who have mastered the activity in question, and students, who are in the process of learning it.Footnote 40 Carlsen and Jin-seo might have been able to use AI to take their games to higher levels, but this is only because they mastered the skill first. Had they let AI play for them when they were students, they would not have had the mastery to build upon through later partnerships with AI. There is a difference between experts using AI to improve their skills and students relying on AI in ways that prevent them from ever developing those skills.

Let us now add to this the observation that, for many students, writing a humanities paper is difficult and uncomfortable, especially at first. There is plenty of incentive to lighten the load. Chatbots can do just that on behalf of the student, and it is very easy for students to conceal the fact that they have. People (undergraduate students or otherwise) have a tendency to give in to temptation and to be dishonest with themselves when they do give in.Footnote 41 What’s more, the temptation can be incredibly powerful. As Ferdman points out, the temptation to use ChatGPT does not arise entirely out of laziness; it often comes from the immense pressure students feel to get good grades to avoid falling into “economic precarity” (2023, 19). Such students have fallen prey to what Thi Nguyen calls “value capture.” He says that value capture is what happens “when an agent’s values are rich and subtle; they enter a social environment that presents simplified—typically quantified—versions of those values; and those simplified articulations come to dominate their practical reasoning” (Nguyen, forthcoming). A student might enter the university with all kinds of goals in mind: personal growth, edification, cultivation of autonomy, learning to appreciate beautiful works of art and literature, etc. But the looming specter of graduation and employment has a tendency to flatten out their values; they end up caring only about their GPA (Nguyen, 2021, 423). Our students are tempted to use ChatGPT, at least in part, because we have not successfully shown them why their education matters.

Furthermore, we need to be aware of ChatGPT’s capacity to be what Regina Rini has called an autonomy trap. Rini means something narrow by this, i.e., “[that chatbots’] deference to our commands tempts us into venting authoritarian whims, ultimately weakening our own self-control” (Rini, MS). This narrow threat is real, but there is also a broader one. Chatbots have few boundaries, and they will enable people to cheat. Further, even if they did have boundaries, there is something to be said for practicing the hard, uncomfortable work of self-governance yourself.

For similar reasons, we think that there is good reason to ban the use of chatbots outright, at least for certain assignments. Among other things, “legitimate use” (e.g., rephrasing awkward sentences) gives cover for illegitimate use, especially when illegitimate use can masquerade as legitimate use via mechanisms such as self-deception, discussed above. This is not to say that ChatGPT has no place in writing. But we need to distinguish writing in general from writing in the context of education. In the former, chatbots may have a place. In the latter, not as much. For example, imagine being tasked with writing a summary of your recent conference travel in order to get the university to process your reimbursement. Instead of writing this up yourself, you upload your boarding passes and hotel receipts to ChatGPT. It writes up a summary of where you went and what you did. You proofread it for errors and then submit it to human resources. Have you shirked a moral obligation by using a chatbot instead of writing the report yourself? We think not. We are not suggesting that it is always wrong to use chatbots. What worries us about undergraduate students using chatbots is that they will miss out on crucial opportunities to reflect on their values, evaluate the consistency of their commitments, and consider impactful arguments. None of those important educational values are undermined when you let ChatGPT write your travel summary.

Finally, we do not want to be taken as suggesting that our account merely requires instructors to make light additions to their policies (banning ChatGPT on certain assignments), with students alone bearing the duty to write their own papers. We share Kant’s commitment to the idea that educators are obligated to promote the autonomy of their students.Footnote 42 And if we are right about the connection between paper writing and autonomy, then this confers obligations on instructors as well. Writing assignments should be crafted in such a way that they compel students to wrestle with difficult questions and to critically assess their values and commitments. The assignments should also be graded with care, as our critical feedback helps students refine their capacities.Footnote 43 Many instructors are not in a good position to provide this kind of feedback, however, and this arguably sheds light on further duties that others have. For example, administrators, legislators, and others need to put us in a position to provide the requisite level of care.

5 Conclusion

We have argued that students have moral reasons to refrain from using chatbots to write their papers. We explained why certain apparent reasons (e.g., that it is cheating) are not sufficiently robust. As we see it, the most compelling moral reasons come from a Kantian commitment to cultivate your own autonomy. This conclusion has broader implications about the relationship between human agency and reliance on artificial intelligence. We argued that certain uses of technology (including tools like ChatGPT) do not threaten autonomy in any morally significant way. On the contrary, they might enhance your ability to set and pursue your own ends. According to our view, a line should be drawn in cases where the reliance on artificial intelligence undermines capacities that are central to rational agency (humanity). New technologies always threaten to obviate the need for human beings to perform certain tasks. This is not always cause for concern. But we have argued that certain skills (such as the ability to reflect rationally on your commitments and values) are uniquely valuable. They have a moral weight that we ought to respect.

If you are an undergraduate student and you have been assigned the task of writing a humanities paper, then you should appreciate what is truly at stake. This is an opportunity to reflect on your values, to weigh arguments for different positions, and to think about the kind of person you are and the kind you want to be. These are the very capacities that constitute your autonomy. By outsourcing this task to ChatGPT, you have not merely passed on an opportunity to become a better writer. You have passed on an opportunity to become a better person. A failure of that kind is morally impermissible. There are many skills in life whose cultivation should be regarded as totally optional. You may or may not care to learn chess, go, soccer, or cricket. You may choose to learn all the streets in your hometown, or you may decide to let your phone do the navigating. But when you are writing a paper, you are doing something more. You are developing your autonomy. And, like Kant, we don’t see that as an optional enterprise. We believe it should be seen as a moral obligation.

This conclusion is especially important at a time when people (including legislators and their constituents) are becoming increasingly skeptical about the value of a liberal arts education.Footnote 44 It is incumbent upon us (for either moral or merely practical reasons) to provide a justification for what we do. If we see ourselves as being in the business of fostering the intellectual autonomy of our students, then we may have a plausible response to these skeptics (or at least a partial one).Footnote 45 If we are right about the moral weight of autonomy and the duty to promote it, then we will be better prepared to explain the value of what we do and why anyone should bother to learn these skills. In the era of chatbots, writing a philosophy paper might seem as pointless to skeptics as memorizing all the streets in London. Their challenges could come from legislators who want to slash the budgets of our universities, or from students who ask why they should refrain from using ChatGPT to write their papers. Either way, we should be ready to give them a satisfactory reply.