1 Introduction

During the nineteenth century, the Crow chieftain Plenty Coups was faced with the dilemma of how to react to the (technical) dominance of the new European American settlers. Should his tribe fight them, or should it draw back? Since resistance seemed futile, Plenty Coups decided on a strategy of ‘radical hope’ (see [25] for more information on his person). Adapting to the new situation, he struck a treaty with the American Government that enabled the Crows to survive and to shape their future after a new model that, if not the traditional one, was at least in part theirs. He thus showed both courage and prudence, following a course that might serve as a paradigm for coping with the kind of technical ‘progress’ that was unheard of in our youth and is not necessarily wished for today.

One of the newest developments that will have an impact on human behaviour and on our everyday life is ChatGPT, an AI writing tool based on a large language model created by OpenAI (https://openai.com/about). Capable of generating human-like text, it has been open to the public since November 2022, and as a tool it is shaping the human world. Using tools and applying techniques are characteristic traits of humankind. We refer to [15, 17] for various related philosophical concepts. See also [49, p. 52] for the notion of ‘tool criticism’ in the research field: “The various tools used by researchers for research require a new kind of sensitivity, a practice we term tool criticism. Using tool criticism we aim to promote a rigorous practice of inquiry concerned with the epistemic work carried out by digital tools.”

The term ‘techne’ and the nature of tools were already topics of discussion for the ancient philosophers, such as Plato and Aristotle. What are tools? And how should we relate to them? To what extent do they remodel our world? These are basic questions we might apply to any invented tool, and so we should apply them to ChatGPT as well.

Are there risks beyond being dominated by an AI tool such as ChatGPT, just as Goethe’s Zauberlehrling is dominated by his magic broom and Dürrenmatt’s Physicists are by their scientific discoveries? Even Sam Altman, CEO of OpenAI, stated on 03.05.2023: “We need a global supervisory authority (…) I would like to see something comparable to the Atomic Energy Agency for Artificial Intelligence.” [original in German, translated by Deepl, 16.06.2023] (Handelsblatt, 04.05.2023, https://www.handelsblatt.com/technik/it-internet/sam-altman-openai-ceo-kuenstliche-intelligenz-sollte-reguliert-werden-wie-atomwaffen/29130680.html, accessed 07.06.2023). So, despite OpenAI’s statement that “OpenAI is a research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.” (https://openai.com/about, accessed 07.06.2023), ChatGPT might nonetheless also lead to severe risks for humanity and as such constitutes a moral hazard: “Currently, technology manufacturers, sellers, and service providers are richly rewarded for innovations that bring security risks, while technology users bear the bulk of the costs associated with those risks, including the nearly inevitable exploitation of their data. Technology manufacturers furthermore are positioned to understand and reduce those risks in ways technology users are not. And so technology manufacturers face a moral hazard: they must decide whether to make (or later fail to support) devices having risks that would be costly or impossible to eliminate for users—when those users will likely pay the same to them regardless.” [48, p. 71].

We follow Vagle [48] in his view of the moral hazard associated with connected devices or connected technology that he defines as “any technology based around software or firmware with one or more methods of remotely accessing the device, e.g., through a network, through which some measure of control of the device can be achieved. (…) The explosive growth of connected devices plays a key role in my thesis, in that in their emerging ubiquity, the ecosystem created by these devices presents a sui generis moral hazard problem.” (pp. 72–73).

There is also a difference: where [48] focuses on the moral hazard confronting technology manufacturers producing devices in the field of cybersecurity from a legal perspective, we apply the concept of moral hazard to one company (OpenAI) using a large language model for one specific product: ChatGPT-3 as a writing tool.

We refer to [35] for a broader discussion on the risk of Artificial General Intelligence (AGI), which is potentially able to “[surpass] human abilities in almost every domain” (p. 143), and for his reference to the Center for Human-Compatible AI to deal with the existential risk from AGI. For a writing tool such as ChatGPT, see also the role “ethically responsive NLP [Natural Language Processing] technology” [3, p. 587] could play, and [16], who favour the “systematic and transparent management of language data (…) an approach to global language data governance that attempts to organize data management amongst stakeholders, values, and rights.” For a detailed discussion of chatbots’ ethical implications and solutions in education and research, we refer to [22].

The evaluations of a writing tool such as ChatGPT, both negative and positive, are similar in that both consider this technological tool to be an extension of human capabilities [24]; for more on technology as an extension of humankind, see also [24], which discusses, among others, [6, 31].

For our theoretical perspective on ChatGPT as a writing tool, we follow [24], who states: “By extension theory, I mean any theory in which technical objects are conceived of as some kind of extension of the human organism by way of replicating, amplifying, or supplementing bodily or mental faculties or capabilities. (…) All technology is, quite literally, a direct projection or morphological extension of human organs.” We also refer to [34], who is right to highlight the link between writing as an artificial activity and consciousness: “Like any other artificial creations and indeed more than any other, it is utterly invaluable and indeed essential for the realization of fuller, interior, human potentials. Technologies are not mere exterior aids but also interior transformations of consciousness, and never more than when they affect the word. (…) The writing provides for consciousness as nothing else does.” (p. 81).

At the time Ong [34] wrote these lines, in 1982, AI writing tools such as ChatGPT obviously did not yet exist, but in our opinion this insight is of the utmost importance. ChatGPT as an AI writing tool will also affect the interior transformations of consciousness; we just do not know how, to what degree, and how far-reaching the individual and societal educational implications will be.

As a technical innovation, ChatGPT will probably constitute a negative disruption in some societal contexts, whereas it will have no negative, or even positive, effects in others. Following the principles of Technology Assessment for Responsible Innovation [14], steps should now be taken (1) to individuate the contexts where unguided use of ChatGPT might cause harm, (2) to define what kind of harm this might be, and (3) to determine how to avoid it. Clearly, ChatGPT affects all contexts in which self-produced written content is a compulsory part of exams (e.g. at schools and universities), and it enhances the risk of cheating (see, for example, [7, 10, 18–20, 22, 26, 38, 39, 44–47]), implying legal consequences.

AI-assisted plagiarism is also called ‘AIgiarism’, a notion coined, according to Cristian Nedelcu (https://medium.com/@cristian.nedelcu/chatzero-is-the-best-tool-to-spot-aigiarism-737846323985, accessed 10.08.2023), by the American venture capitalist Paul Graham. For more information about ‘AIgiarism’, see also Alex Hern in https://www.theguardian.com/technology/2022/dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-has-an-answer-for-that (accessed 10.08.2023) and Maretha Jonge in https://www.leidenpedagogiekblog.nl/articles/chatbots-ben-je-als-docent-nog-wel-relevant (accessed 10.08.2023).

In general, three basic strategies to cope with an innovation such as ChatGPT can be distinguished: (a) rejection, (b) modification, and (c) acceptance and adaptation of the existing societal framework ([14], p. 27). They are embedded in so-called ‘social imaginaries’, “collectively held constellations of speculations and narratives that draw on the understandings, norms, values and experiences of the world” [29, p. 6], quoted by Kikerpill & Siibak [19]. It is not the goal of this paper to discuss these three basic strategies for coping with the social impact of ChatGPT-3 as a writing tool. We focus on the positive and negative voices heard in the media and academia after the introduction of the tool in November 2022. Hence, we will discuss the extent to which ChatGPT-3, as a writing tool, is represented as an educational assistant (in a positive or negative way) and whether this brand-new technological device is seen as a moral hazard in the media and academia. Today, ChatGPT’s opponents raise valid concerns about deep understanding, academic integrity and cognitive skills. These issues extend beyond ChatGPT to broader AI ethics, including bias, transparency, and accountability.

However, are these arguments as new as they seem? The introduction of new technological developments, such as the use of fire, the wheel, book printing, steam-powered machines, electricity, the telegraph and computers, always evokes positive and negative reactions concerning its social impact, which is often represented as paradise or hell. Debates between techno-optimists and techno-pessimists are age-old and are characterised each time by the feeling that the relevant technological development is unique. The question, however, is whether this is just history repeating itself again and again. For this reason, this writing tool will be posited as an extension of human capabilities in a historical perspective through an analysis of Plato’s critical stance on writing in Phaedrus. There are, of course, some differences. Plato’s philosophical ideas were shaped by an oral tradition of discourse: knowledge was transmitted through dialogues and debates, and recorded in writing. Plato viewed writing as “dead”. Plato’s focus on knowledge seems to contrast with ChatGPT’s goal of being persuasive to the individual user in a complex world of big data and technological innovation. And yet, it is worthwhile to consider Plato’s thoughts to put the present discussion into a broader perspective.

Before we pay attention to this historical dimension, we will first examine how ChatGPT-3 is currently represented in the media and academia, whether in a positive or negative sense. We will also ask ChatGPT-3 whether it, as a writing tool, should be considered an educational assistant or a moral hazard, and whether it can reflect on any similarity between Plato’s critical stance on writing (in Phaedrus) and ChatGPT-3 as a writing tool. We will comment on the answers ChatGPT-3 provides to us. Finally, we will compare them to Plato’s main arguments relevant to writing and draw conclusions.

2 Plato’s Phaedrus

2.1 Plato

Plato lived from 428/27 to 348 BCE, in a period when literature (in contrast to oral poetry) was common and literacy amongst the educated stratum of Athenian society was the social norm. He is one of the finest Greek prose writers and wrote philosophical works in the form of dialogues. He was a pupil of the moral philosopher Socrates (470-399 BCE), who relied exclusively on oral discussion. Socrates is the hero of most of Plato’s dialogues, which imitate his mode of teaching.

2.2 Phaedrus

Plato’s main thesis on writing and orality is found in the dialogue Phaedrus ([13, 37, 41], pp. 163–177; see https://philpapers.org/browse/plato-phaedrus for a recent bibliography). There, Socrates illustrates his philosophical standpoint to Phaedrus by first telling an Egyptian tale (mythos):

“The story goes that Thamus said many things to Theuth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, “This invention, O king,” said Theuth, “will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus replied: “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess. For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”” (Plato, Phaedr. 274E-275B, Loeb transl.)

The dialogue Phaedrus very likely dates to 365 BCE [13, p. 233]. Plato published it shortly after his main political work, the Res Publica (Politeia), in which he propounds his political ideas and describes an ideal state. His main argument, that serious philosophical thought cannot be expressed in literature, is:

  1.

    (a) Directed against authors of written political-philosophical treatises, especially against the rhetorician and influential political author Isocrates (written with an I at the beginning and not to be confused with Socrates) (436-338 BCE); (b) it may reflect an internal discussion within Plato’s own philosophical school, the Academy, on the extent to which the epistemological foundations of Plato’s thought could be published in literature, as Plato may have been urged by his pupils to do.

  2.

    Related to the position the Phaedrus takes in Plato’s work. In the Res Publica, published shortly before the Phaedrus, Plato only hints at his ‘metaphysics’ in several parables (amongst them, the famous allegory of the cave). The Phaedrus thus justifies what Plato has done. Despite the scepticism he expresses towards written texts in the Phaedrus, Plato later appears to alter his stance slightly in terms of what could be seriously expressed by writing. His later dialogues (for example, Parmenides, Sophistes, Timaeus) are increasingly concerned with his fundamental epistemology, which seems to have been part of his so-called ‘unwritten doctrine’. Thus, the later dialogues, beyond offering literary entertainment for a broader reading public, also seem to contain a higher degree of serious philosophical doctrine.

2.3 The setting and the structure of the dialogue

The dramatic date of the Phaedrus is about 420 BCE. It presents a fictional discussion between the young man Phaidros and Socrates. It can be divided into two main parts, a ‘practical’, exemplary part and a theoretical part. Phaidros comes to Socrates. Phaidros is very enthusiastic about the doctrines of the rhetorician Lysias (445-380 BCE). He carries with him a written speech of Lysias on the topic of love (very likely written by Plato himself, but so similar in style to Lysias’ original speeches that its origin is still debated [13, pp. 77–79]). Socrates makes Phaidros read this speech to him. He shows (1) that Phaidros has misunderstood its content and (2) that the content of the speech itself is ludicrous. Phaidros, a young man, has been completely led astray and blinded by the power of fine words. After this, Socrates himself gives two speeches pro and contra love. With this the ‘practical’ part ends and the theoretical part starts (259e1). The latter deals with the basis and the power of conventional and ‘true’ rhetoric and, as a corollary, with what concerns us here: the difference between oral and written communication (274b) [13, pp. 188–212].

3 Plato’s main arguments against writing

Below, we describe the consequences of writing and publishing written text for (1) the reader, (2) the written text itself, and (3) the philosophical author.

3.1 The reader

  (a)

    A written text can remind readers only of what they already know (hypomnesis); it is not part of their memory itself (mneme) (this is literally directed against contentions of Gorgias). By relying on written text, readers are induced to neglect training their own mental capacities. Or in the words of Thamus/Socrates/Plato: “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory.” (275A).

  (b)

    A text at best creates illusory, but not true, knowledge. If there is no (additional) oral instruction, pupils (like Phaidros) will believe they know something, although they know nothing. Misunderstanding and illusion will make them conceited. “For they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” (275A).

3.2 The published written text

  (a)

    The written text cannot adapt to specific audiences and situations. This creates the risk of its being misunderstood.

  (b)

    Written texts from even the most serious authors may be misunderstood by their pupils. Without oral explication, a text is ‘helpless’ and useless; its message will be weakened.

  (c)

    It may fall into the hands of malevolent critics and its message will be intentionally disparaged. “And every word, when once it is written, is bandied about, alike among those who understand and those who have no interest in it, and it knows not to whom to speak or not to speak; when ill-treated or unjustly reviled it always needs its father to help it; for it has no power to protect or help itself.” (275E).

3.3 The author

  (a)

    The author does not reach the souls of his pupils. He not only fails to achieve his aim; in the worst case, he achieves quite the opposite of what he sought to achieve. He does not convey his philosophical message.

  (b)

    The author may be disparaged by malevolent critics.

In consequence, the serious philosopher will not commit to written texts (a) his most difficult thoughts and (b) the thoughts most dear to him. He will use texts only (a) for diversion (in old age) and (b) as a tool to remind him, should his own memory fail. With his pupils, he will rely mainly on oral instruction. “The gardens of letters he will, it seems, plant for amusement, and will write, when he writes, to treasure up reminders for himself, when he comes to the forgetfulness of old age, and for others who follow the same path, and he will be pleased when he sees them putting forth tender leaves.” (276D).

In Sect. 6, we ask ChatGPT-3 to reflect on any possible similarity between Plato’s critical stance on writing (in Phaedrus) and ChatGPT-3 as a writing tool.

4 ChatGPT-3 as a writing tool: an educational assistant or a moral hazard?

4.1 Introduction

In this section we discuss the extent to which ChatGPT is portrayed in the media and academia, whether positively or negatively, as an educational assistant for writing, and whether it is viewed there as a moral hazard.

4.2 ChatGPT’s educational use as a writing assistant

Doris Weßels, a German professor at the Fachhochschule Kiel, University of Applied Sciences, on 20.12.2022 called ChatGPT “a milestone in AI development” and suggested: “Let’s rise to the challenge and score points with adaptivity, creativity and speed. In conclusion, we are left with one certainty in these uncertain times: the “new age” will challenge us! And hopefully also promote our development.” [original in German, translated by Deepl, 16.06.2023] (https://www.forschung-und-lehre.de/lehre/chatgpt-ein-meilenstein-der-ki-entwicklung-5271, accessed 16.06.2023).

Nadine Lordick, at the Zentrum für Wissenschaftsdidaktik of the Ruhr-Universität Bochum in Germany, explained on 23.02.2023 that tools such as ChatGPT will not bring down academia: “Writing in studies and in science has many reasons and will not suddenly lose importance just because ChatGPT can write text sections, and it will not throw all forms of examination overboard.” [original in German, translated by Deepl, 16.06.2023] (https://news.rub.de/wissenschaft/2023-02-23-wissenschaftsdidaktik-ki-tools-werden-den-akademischen-betrieb-nicht-zum-einsturz-bringen, accessed 07.06.2023).

Daniel Lametti, an associate professor of psychology at Acadia University in Canada, who published ‘ChatGPT won’t kill the college essay’ in Slate on 07.12.2022 [23], argues: “ChatGPT might fail at tests of reasoning, but it’s great at generating humanlike text to different lengths and in various styles. (…) I frequently used ChatGPT while working on this piece. I asked for definitions that, after a fact-check, I included. (…) Like spell check, a thesaurus, and Wikipedia, ChatGPT made the task of writing a little easier. I hope my students use it.” (https://slate.com/technology/2022/12/chatgpt-college-essay-plagiarism.html, accessed 07.06.2023).

Many scientific papers exploring ChatGPT’s implications for education have also been published in recent months. As it is not the purpose of this paper to discuss these, we only briefly refer to [1], who present a SWOT analysis of ChatGPT with implications for educational practice and research and conclude: “The strengths include using a sophisticated natural language model to generate plausible answers, self-improving capability, and providing personalised and real-time responses. As such, ChatGPT can increase access to information, facilitate personalised and complex learning, and decrease teaching workload, thereby making key processes and tasks more efficient. The weaknesses are a lack of deep understanding, difficulty in evaluating the quality of responses, a risk of bias and discrimination, and a lack of higher-order thinking skills. Threats to education include a lack of understanding of the context, threatening academic integrity, perpetuating discrimination in education, democratising plagiarism, and declining high-order cognitive skills.” (p. 1).

For more information about the opportunities and challenges ChatGPT may present, see also [7, 18–20, 22, 26, 39, 40, 45, 47] and https://www.youtube.com/watch?v=hJP5GqnTrNo (accessed 29.02.2024). We agree with [27], who emphasise the need to keep in mind that ChatGPT’s educational efficiency and effectiveness are not yet evidence-based. This is in line with [10], who also argue that “The use of ChatGPT, with its positive and negative impacts on education, is still in its infancy and that implies the need for more empirical research.” (p. 11). It is also important to note that we need empirical studies, including longitudinal studies, in different countries and with larger numbers of participants before we can confirm that chatbots such as ChatGPT are really a beneficial tool for students. See also Sect. 4.4 on whether ChatGPT’s educational use is seen as a moral hazard in the media and academia.

4.3 Public perception of ChatGPT’s role as writing assistant in education

The statements by Doris Weßels, Nadine Lordick and Daniel Lametti underline the potential of ChatGPT as a writing tool for educational assistance, but provide no insight into how the broader public perceives its role as an educational writing assistant. To gain an impression of how the population in general views the implications of ChatGPT for the educational field, we now present the results of a number of empirical studies conducted in the months after ChatGPT’s introduction.

[19] applied the critical discursive psychology (CDP) approach to analyse 143 news stories and opinion pieces that were published in international news media during the first two months after the public launch of the technology. They found the following results: “Our analysis shows that socially irresponsible technology release tends to operate with relative impunity and lack of regard for social consequences entailed by such technology introduction. Further, in the first two months since ChatGPT’s release, the educational sector found itself struggling with addressing the implications from ChatGPT, in particular whether the central issue pertains to the ‘product’ of such technologies or the face-the-facts necessity of incorporating ChatGPT into educational processes. Finally, our analysis shows that tech companies’ responsibility stops at the point of technology release, after which the responsibility of dealing with the consequences is discursively and practically transferred over to agents in impacted sectors, including the education sector.”

[12] collected Twitter data (16,830,997 English tweets from 5,541,457 users) containing the word ‘ChatGPT’ and posted in the first two months after ChatGPT’s release. Based on topic modeling and sentiment analysis, they provide an overview of global perceptions of and reactions to ChatGPT regarding education. They found that education was the most tweeted content topic, and they concluded: “(…) zooming in on education-related topics, we found that both specific topics (e.g., students’ essay writing, cheating, the capability of ChatGPT to pass exams) and broader issues (e.g., opportunities, limitations, and consequences of the use of ChatGPT) were discussed. (…) although the findings indicated that ChatGPT was generally discussed positively in the first two months after ChatGPT's release, the statements regarding education were more mixed. This aligns with previous research on perceptions of technological innovations, which showed that users face most innovations with varying expectations and emotions. Expectations and emotions are associated with attitudes ranging from absolute conviction of the new technology's usefulness and a positive attitude (“radical techno-optimists”) to complete rejection of the new technology (“techno-pessimists”; Tate et al. [45]).”
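To make the methodology behind such findings concrete, the sketch below shows a minimal pipeline of the kind these studies report: lexicon-based sentiment scoring combined with LDA topic modelling over a handful of example tweets. This is our own illustrative reconstruction, not the code of [12]; the library choices (NLTK’s VADER sentiment analyser, scikit-learn’s LDA implementation), the example tweets, and all parameters are assumptions.

```python
# Illustrative sketch of a sentiment + topic-modelling pipeline of the kind
# reported in [12]; libraries and parameters are our assumptions, not theirs.
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "ChatGPT helped me outline my essay, an amazing study aid",
    "students will just let ChatGPT write their exams, this is cheating",
    "not sure yet whether ChatGPT belongs in the classroom",
]

# Sentiment: VADER's 'compound' score ranges from -1 (negative) to +1 (positive).
sia = SentimentIntensityAnalyzer()
for tweet in tweets:
    print(f"{sia.polarity_scores(tweet)['compound']:+.2f}  {tweet}")

# Topic modelling: bag-of-words counts fed into LDA. The number of topics is a
# guess here; real studies would tune it (e.g., via coherence measures).
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```

At the scale of [12] (millions of tweets), the same two steps would of course be run over a stored corpus rather than an in-memory list, but the logic is the same.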

[26] also made use of Twitter data. For their analysis they used the academic version of Twitter Search API and selected fourteen education-related search terms along with the keyword “ChatGPT” to collect relevant tweets. Their search covered tweets posted from December 1, 2022, to March 31, 2023. They concluded: “The sentiment analysis indicates that Twitter users have an overall positive attitude towards the use of ChatGPT in education. However, we note that sentiment changes are often associated with significant events that occur within the conversation. Our topic analysis highlights five key areas of concern that emerged from negative tweets: academic integrity, impact on learning outcomes and skill development, limitation of capabilities, policy and social concerns, and workforce challenges. Our social network analysis shows that users from the fields of tech, education, and media were highly implicated in the conversation, while education and tech individual users played a crucial role in leading the diffusion of concerns to broader audiences.”

Another point to address is how users such as students and scholars perceive ChatGPT. [11] conducted an empirical study investigating this, analysing the responses of seven scholars and 14 PhD students from Turkey, Sweden, Canada and Australia. A thematic content analysis approach was used, and he concluded: “nine key themes emerge from the findings. According to their frequency of recurrence, these themes are: “Evolution of learning and education systems”, “changing role of educators”, “impact on assessment and evaluation”, “ethical and social considerations”, “future of work and employability”, “personalized learning”, “digital literacy and AI integration”, “AI as an extension of the human brain”, and “importance of human characteristics”.”

It is worth noting that the above studies by [11, 12, 26] and [5] on the public perception of ChatGPT’s role as a writing assistant in education clearly show that risks are mentioned, but that they are not framed as moral hazards. [19], though, found one aspect of a moral hazard in the sense that after having released their technology the companies transfer their responsibility over to “agents in impacted sectors, including the education sector”.

4.4 ChatGPT’s educational use as a moral hazard

4.4.1 ChatGPT represented as a moral hazard in the media?

After ChatGPT-3’s introduction in November 2022, two open letters were published. The first, by the Future of Life Institute, called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” and stated: “(…) As stated in the widely-endorsed Asilomar AI Principles (https://futureoflife.org/open-letter/ai-principles/), Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control. (…) AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.” (https://futureoflife.org/open-letter/pause-giant-ai-experiments/, published 22.03.2023, signed 33,002 times as of 10.08.2023).

The second one, published by the Center for AI Safety, made the following statement: “AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (https://www.safe.ai/statement-on-ai-risk, accessed 10.08.2023).

In his essay ‘The College Essay Is Dead: nobody is prepared for how AI will transform academia’, published 06.12.2022 in The Atlantic [33], Stephen Marche makes the following critical statement about ChatGPT-3’s educational implications for essay writing: “And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? (…) And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. (…) The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.”

The first Open Letter focused on AI developers, who “must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI” (https://futureoflife.org/open-letter/pause-giant-ai-experiments/, accessed 07.06.2023). And it urges technology manufacturers to take responsibility, as AI research and development “should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” (https://futureoflife.org/open-letter/pause-giant-ai-experiments/, accessed 07.06.2023). Though the letter refers to the responsibility of AI research and development and mentions liability for AI-caused harm, this cannot be fully characterized as a moral hazard, as it does not address the decision of “whether to make (or later fail to support) devices having risks that would be costly or impossible to eliminate for users—when those users will likely pay the same to them regardless.” [48, p. 71].

The second Open Letter frames the problem as a general risk for humankind: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (https://www.safe.ai/statement-on-ai-risk, accessed 07.06.2023). But it does not frame this problem as a moral hazard for which a technology manufacturer such as OpenAI should be held accountable, having developed a technological device that its users are going to use anyway.

OpenAI’s CEO Sam Altman does not frame the risk as a moral hazard, but just calls for “a global supervisory authority” (https://www.handelsblatt.com/technik/it-internet/sam-altman-openai-ceo-kuenstliche-intelligenz-sollte-reguliert-werden-wie-atomwaffen/29130680.html, accessed 07.06.2023).

Stephen Marche, in his essay ‘The College Essay Is Dead: nobody is prepared for how AI will transform academia’, states: “They [engineers and humanists] will be essential in determining the ethical and creative use of chatbots, to take only an obvious example. (…) The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field.”

This statement shows that Marche also does not refer explicitly to technology manufacturers.

To sum up, in the media examples provided above, OpenAI—in its role as technology manufacturer—is almost never mentioned as bearing responsibility for the risks accompanying this new technological device; and its decision of “whether to make (or later fail to support) devices having risks that would be costly or impossible to eliminate for users—when those users will likely pay the same to them regardless” [48, p. 71] is not addressed at all.

4.4.2 ChatGPT represented as a moral hazard in academia?

For the current representation of ChatGPT as a writing tool in academia and whether it should be considered a moral hazard, we refer to Professor Iris van Rooij from Radboud University in the Netherlands, who urgently calls on academia to stop feeding the hype and start resisting: “With disbelief and discontent, I have since watched academics in The Netherlands jumping on the bandwagon and enthusiastically surfing the AI hype wave, e.g., by talking enthusiastically about ChatGPT on national television or in public debates at universities, and even organising workshops on how to use this stochastic parrot in academic education. (…) Academics should be a voice of reason; uphold values such as scientific integrity, critical reflection, and public responsibility. Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that will allow them to recognise misleading claims made by tech companies and understand the limits and risks of hyped and harmful technology that is made mainstream at a dazzling speed and on a frightening scale.” (https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/, accessed 14.01.2023). See also her critical lenses on AI approach (https://irisvanrooijcogsci.com/2023/01/29/critical-lenses-on-ai/, accessed 07.06.2023).

Though van Rooij is clearly very censorious about ChatGPT’s use in education, her critical stance is oriented towards those academics who fail to critically consider the educational implications of this technological tool. She does not frame ChatGPT’s writing tool as a moral hazard.

What about all the scientific papers published in recent months (e.g., [2, 7, 9, 10, 18–20, 22, 26, 38, 39, 44–47])? Though they discuss the opportunities and threats (see Sect. 4.2) presented by ChatGPT, its strengths and weaknesses, and sketch its educational implications quite well, they, too, do not represent it as a moral hazard.

Finally, we would like to refer to Sect. 6 where we ask ChatGPT-3 itself about the extent to which the writing tool ChatGPT-3 should be considered an educational assistant or a moral hazard.

5 ChatGPT’s current representations in the media and academia compared to Plato’s ‘eudemonistic’ scepticism concerning writing

What can we learn from Plato’s Phaedrus if we critically analyse ChatGPT’s current representations in the media and academia? In his seminal work of 1982, [34, p. 78] makes the point that “Most persons are surprised, and many distressed, to learn that the same objections commonly urged today against computers were urged by Plato in the Phaedrus (274-7) and in the Seventh Letter against writing”, arguing (1) that writing is inhuman as it is “pretending to establish outside the mind what in reality can only be in the mind”, (2) that writing destroys memory (which reminds us of objections to pocket calculators), (3) that a written text is unresponsive, and (4) that a written text cannot defend itself as the natural spoken word can. In Sects. 4.2, 4.3 and 4.4 we showed that in today’s society ChatGPT’s opponents in the media and academia also use clear arguments opposing the use of this writing tool, citing such consequences as a lack of deep understanding and of higher-order thinking skills, difficulty in evaluating the quality of responses, threats to academic integrity, the perpetuation of discrimination in education, the democratisation of plagiarism, and declining high-order cognitive skills.

In this regard, there is another point that we would like to make. Though Plato’s philosophical aims and scope are obviously much larger than ours, his vision that writing leads to cheating could also be applied to ChatGPT as an educational writing assistant. The argument that written texts can be used to cheat and mislead others is excluded from the last section of Plato’s Phaedrus. However, it is a favourite topic of Plato’s, who exposes the sophists’ false claim to knowledge and their malign intentions in several other dialogues (cf. the Protagoras, Gorgias, Symposium). In the Phaedrus, the protagonist Phaidros is led astray by the brilliant but nonsensical published speech of Lysias and, in addition, mistakes Lysias’ ‘knowledge’ for his own. And that is exactly what ChatGPT’s texts might do to us.

The point here is not to go into the details of Plato’s sophisticated epistemology and its complex relation to moral behaviour. In a somewhat simplified version, Plato’s premises are as follows: like all ancient ethics, Plato’s moral philosophy is eudemonistic, i.e., everyone should seek to be as happy as possible (eudaimonia = happiness). Happiness itself is defined by means of an anthropological premise: it is reached by living in accordance with human nature (physis). According to most ancient moral philosophies (except Epicurus), what defines human beings in contrast to animals is reason, a characteristic that men (partly) share with gods. In consequence, humans must seek to exercise their minds as far as possible and try to attain as much knowledge as possible, while abstaining from anything that would compromise the mental faculties and impair knowledge. Such behaviour will lead to a happy life. The epistemic argument implies that an individual will act according to these insights and behave in a morally correct manner. There is no separate will (voluntas) to counteract reason once it has found out what is good.

6 ChatGPT-3’s self-reflection

Following [27, 28], let us now ask how ChatGPT-3 views its own writing tool. We will comment on its answers in bold.
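(Our exchange took place in the ChatGPT web interface; see the version note at the end of this section. Purely as an illustration of how such a prompt could in principle be reproduced programmatically, a minimal sketch using the OpenAI Python client of mid-2023 follows; the model name and parameters are our assumptions, and the web interface may behave differently.)

```python
# Minimal sketch only: querying an OpenAI chat model with our first prompt.
# Assumes the openai Python package (0.x API, as available in mid-2023) and an
# API key in the OPENAI_API_KEY environment variable; the model name is a
# hypothetical stand-in for the web interface's 'ChatGPT-3'.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Please answer the following question: to which extent should ChatGPT-3 "
    "as a writing tool be considered as an educational assistant or a moral hazard?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumption; the web interface may use another model
    messages=[{"role": "user", "content": prompt}],
)

# Answers are non-deterministic and will vary between runs.
print(response["choices"][0]["message"]["content"])
```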

Please answer the following question: to which extent should ChatGPT-3 as a writing tool be considered as an educational assistant or a moral hazard?

The classification of ChatGPT-3 as either an educational assistant or a moral hazard depends on how it is utilized and the context in which it is used.

Let's explore both perspectives:

Educational assistant:

  1.

    Enhancing learning: ChatGPT-3 can be a valuable tool in education by providing instant access to vast amounts of information. It can assist students in researching topics, understanding complex concepts, and generating ideas for assignments.

    The answer looks like (1) part of a promotional flyer for ChatGPT-3 and (2) the somewhat trite elaboration encountered in students’ essays or in hasty scholarly productions. It looks like academic ‘Massenware’. Its traits are the following: ChatGPT-3 relies on generalities and uses hollow-sounding phrases. What is information? Modern students are stuffed with ‘information’ all the time, and there are (still) more qualified ways of accessing it than ChatGPT-3. It also does not help to understand complex concepts if these are simply reduced to superficial reasoning. ChatGPT-3 may indeed generate ideas, but ideas of what kind? Since its algorithms are based on the calculation of the plausible, it will create ideas that are dull and look like science, but only perpetuate and repeat what is already known, without the least creativity. Flooding the world with such texts might even hinder progress: there is little hope of finding an ‘Einstein’ in a heap of mediocre rubbish, and even should one be found, he won’t be believed because he does not look normal enough.

  2.

    Personalized support: ChatGPT-3 can offer individualized guidance and support to students, helping them overcome challenges and providing explanations tailored to their needs.

    This again reads like a promotional text. The “individualized guidance” offered by this answer tends towards zero. It seems like a standardized answer that might be applied to all sorts of contexts, in which the word ‘students’ is the only individualization. “Tailoring to their needs” is also done in various internet blogs and helps create closed worlds like the Flat Earth Society.

  3.

    Language Skills Development: By interacting with ChatGPT-3, students can improve their writing skills, grammar, and vocabulary. It can provide feedback and suggestions for improvement, facilitating language learning.

    This is the only point where ChatGPT-3 might help students.

    To sum up, as a ‘metareflection’, ChatGPT-3 might be said to rely on quite a simple notion of what education is about, without taking the dynamic between teachers and their students into account at all. It also did not reflect on different groups of learners.

Moral Hazard:

  1.

    Plagiarism and academic integrity: the ease of accessing information through ChatGPT-3 raises concerns about plagiarism. Students might be tempted to submit generated content as their own, undermining academic integrity.

    We agree, but ChatGPT-3 should have included scholars as well.

  2.

    Critical thinking skills: relying heavily on ChatGPT-3 could hinder the development of critical thinking abilities. Students may become dependent on the tool, limiting their ability to analyse, evaluate, and synthesize information independently.

  3.

    Ethical issues: ChatGPT-3 might generate biased or inaccurate information depending on the input data it was trained on. This can perpetuate misinformation or reinforce existing biases, potentially leading to ethical concerns.

    It is interesting to see that ChatGPT-3 uses terms here like “might”, “may” and “could”, whereas in its answers on its use as an educational assistant it sounds much more affirmative, using the term “can”, for example.

Considering the above points, it is essential to use ChatGPT-3 as an educational assistant in a balanced and responsible manner. Encouraging students to utilize it as a supportive tool, while also emphasizing the importance of critical thinking, creativity, and independent learning, can mitigate some of the potential moral hazards. Educators and institutions play a crucial role in guiding students to use such tools ethically and responsibly, while also teaching them the necessary skills for effective knowledge utilization.

Our conclusion is a different one. We would not encourage our students to use a chatbot (they will use it anyway without our encouragement), but we would stimulate them to use other means of help, such as drawing on different sources and evaluating these in a critical way. It is also interesting to see how responsibility is shifted to the educators, who must waste their time on reducing the risks connected with the tool.

As ChatGPT did not use the defining characteristics of “moral hazard” (see Sects. 1 and 4.3), we asked it a follow-up question in this regard.

Thanks for your answer, Chat-GPT but could you go more in detail for moral hazard, taking into consideration Vagle [48]: “Currently, technology manufacturers, sellers, and service providers are richly rewarded for innovations that bring security risks, while technology users bear the bulk of the costs associated with those risks, including the nearly inevitable exploitation of their data. Technology manufacturers furthermore are positioned to understand and reduce those risks in ways technology users are not. And so technology manufacturers face a moral hazard: they must decide whether to make (or later fail to support) devices having risks that would be costly or impossible to eliminate for users—when those users will likely pay the same to them regardless.” (p. 71).

Certainly! Taking into account Vagle [48] regarding moral hazard in the context of technology, we can apply them to the use of ChatGPT-3 as a writing tool. Here's a more detailed exploration of the moral hazards:

  1.

    Exploitation of data: when utilizing ChatGPT-3, users input data and information that can be stored and potentially exploited. Technology manufacturers and service providers, in this case, OpenAI, have access to this data, and there may be concerns about how it is used, shared, or monetized. Users bear the potential risks associated with the exploitation of their data while technology manufacturers benefit from the innovation without directly addressing the risks.

  2.

    Lack of user control: users of ChatGPT-3 have limited control over how their data is utilized and secured. They rely on the technology manufacturers to handle and protect their data, making them vulnerable to potential breaches or misuse. This power imbalance between technology users and manufacturers creates a moral hazard where users are left exposed to risks beyond their control.

  3.

    Responsibility for risk mitigation: technology manufacturers are in a position to understand and reduce risks associated with their products. In the case of ChatGPT-3, OpenAI is responsible for addressing issues such as biased or inaccurate outputs, plagiarism facilitation, and the potential spread of misinformation. Failure to adequately address these risks can create moral hazards, as manufacturers have the power to mitigate risks but may choose not to do so if the costs outweigh the benefits.

  4.

    Limited accountability: technology users, including students and educators using ChatGPT-3 as a writing tool, bear the burden of the risks associated with its usage. This includes potential consequences such as plagiarism accusations, ethical concerns arising from biased outputs, or the lack of critical thinking development. Meanwhile, technology manufacturers may not be held directly accountable for these risks and their implications.

In summary, the use of ChatGPT-3 as a writing tool presents moral hazards in terms of data exploitation, limited user control, responsibility for risk mitigation, and the potential lack of accountability. These hazards arise from the power dynamics between technology users and manufacturers, where users bear the costs and risks associated with the technology, while manufacturers reap the rewards without necessarily addressing or mitigating those risks adequately.

ChatGPT-3 tried to use Vagle [48] to discuss moral hazard in more detail. At the end, it introduced the role of the manufacturer—in this case OpenAI, although ChatGPT conspicuously refrained from mentioning its creator by name. But it failed to elaborate on the key point made by Vagle [48, p. 71], namely that such a manufacturer must “decide whether to make (or later fail to support) devices having risks that would be costly or impossible to eliminate for users—when those users will likely pay the same to them regardless.”

Thanks for the clarification, ChatGPT. One final point: Could you also reflect on the eventual similarity between Plato’s critical stance on writing (in Phaedrus) and ChatGPT-3 as a writing tool?

Certainly! The critical stance on writing presented by Plato in Phaedrus does share some conceptual similarities with the use of ChatGPT-3 as a writing tool. In Phaedrus, Plato raises concerns about the impact of writing on human memory, knowledge acquisition, and the development of critical thinking skills. Here's how these concerns can be related to ChatGPT-3:

  1.

    External memory: Plato argues that writing externalizes memory, allowing individuals to rely on written texts instead of actively engaging with knowledge through dialogue and oral transmission. Similarly, ChatGPT-3, as a writing tool, provides instant access to information and can act as an external memory that students can rely on instead of engaging in deeper learning processes.

  2.

    Lack of dialogue and interaction: Plato emphasizes the importance of live dialogue and questioning in the pursuit of knowledge. He suggests that writing lacks the ability to respond, clarify, and adapt to the specific needs of the interlocutor. Similarly, ChatGPT-3, while capable of generating responses, lacks the ability to engage in genuine dialogue and adapt its answers based on real-time feedback and context, potentially limiting the depth of interaction and understanding.

  3.

    Dependency and critical thinking: Plato expresses concerns that relying on written texts may lead to a passive and uncritical acceptance of information. Similarly, the use of ChatGPT-3 as a writing tool can potentially create a dependency on the generated content, hindering the development of critical thinking skills and encouraging a more passive approach to learning and writing.

ChatGPT-3 does not consider some points that are important to Plato. It does not reflect on his arguments (1) that texts may be intentionally ‘misunderstood’, (2) that published texts can manipulate readers (like propaganda), and (3) that ‘uncritical reading of texts’ can spoil the souls of young readers and lead to conceited behaviour, which Plato considers the worst form of stupidity. ChatGPT-3 also does not consider that Plato criticizes writing only insofar as it concerns philosophical topics and overestimates its own value. In contrast, writing for fun, and in old age for use as a memory prop, is perfectly legitimate. All in all, ChatGPT-3 offers a quite reduced version of what Plato says, and it seems to cater—as Plato himself would say—for those who are ignorant. It completely neglects the public aspect of writing and the societal hazards Plato attributes to it.

It's important to note that while there are similarities between Plato's critique of writing and the use of ChatGPT-3, there are also significant differences. ChatGPT-3 is a sophisticated AI language model that can provide valuable information and assistance, whereas writing, in Plato's view, was a more static medium unable to adapt and respond. Additionally, the concerns raised by Plato were specific to the context of his time and the nature of oral dialogue, while the use of AI in education today presents a unique set of considerations.

Again, this reads as a promotional text; moreover, one based on a misunderstanding of Plato. Every text written by ChatGPT-3 that is published is as static as all the texts Plato is concerned with. The fact that every utterance (written or non-written) is bound to its historical circumstances is a truism that is not only applicable to Plato (who by the way would completely agree with it), but also to ChatGPT-3.

Ultimately, the similarity lies in the potential impact on critical thinking, knowledge acquisition, and the risk of creating dependency. However, it's worth noting that the technology landscape has evolved significantly since Plato's time, and the impact and potential benefits of AI writing tools like ChatGPT-3 need to be evaluated in the context of our contemporary educational practices.

Indeed, the technology landscape has evolved since Antiquity! Moreover, such tools should also be evaluated in the context of our general human practices.

Thanks for your answer, ChatGPT.

You're welcome! I'm glad I could provide you with the information you were looking for. If you have any more questions, feel free to ask. I'm here to help!

16.06.2023. Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 24 Version

Finally, we as authors would like to make one concluding remark about writing as a form of human creativity. In his dialogue Symposium (Dinner Party), an impressive literary feat, Plato poses the question as to what Eros (= human creativity) is. In a final speech (Plat. Symp. 203b1-e5), he makes Socrates relate a tale (mythos), told to him by a sage woman called Diotima, about the origin of Eros, a mighty demon [40, 42]. According to Diotima, Eros is a child of Penia (poverty) and Poros (abundance), conceived at the feast celebrating Aphrodite’s birth. In non-metaphorical language, Plato thus seems to say that human creativity is a combination of some felt need that urges us to create something and an abundant reservoir of thoughts (or even physical things) that offers the material for our creation. Aphrodite, the Greek goddess of love, symbolizes both beauty and perfection, and this is also what creativity is about. We just have to sort out what is useful and what is not, what is good and what is not. According to Plato, this process can and will never stop, because mankind perpetuates itself by its own physical and mental creativity. If we turn back to ChatGPT-3 and other AI writing tools, they are manifestations of this human creativity. They should not only be “evaluated in the context of our contemporary educational practices”, as stated above by ChatGPT-3, but also in the context of our general human practices. Will they enhance or will they reduce our creativity? So far, there is no definite answer, but there is also no choice. We have to face them as Plenty Coups and his tribe faced the European settlers: with radical hope.

7 Conclusion

ChatGPT has been portrayed in both positive and negative ways in the media and academia since its introduction in November 2022. Examples of positive social imaginaries of this new technological tool, as an extension of human capabilities or even the human brain, are “increase[d] access to information, facilitate[d] personalised and complex learning, and decrease[d] teaching workload, thereby making key processes and tasks more efficient” [1, p. 1].

But there were also negative social imaginaries in the media and academia. A first negative aspect of ChatGPT-3 is that it facilitates cheating and rule-breaking, especially in educational and academic contexts. Obviously, cheating was already relatively common in education and academia (e.g., plagiarism). But engaging others to write one’s articles is more expensive than using ChatGPT-3 as a writing tool. Another point is that traditional plagiarism is easier to detect than plagiarism in the form of a text generated by ChatGPT—a practice also known as ‘AIgiarism’. A second negative aspect is that ChatGPT-3 may even impair human mental capacities. Although such misgivings are by no means a new phenomenon (think of the impact of earlier technological innovations, such as the television, the calculator, the smartphone and navigation systems), ChatGPT’s consequences for human cognitive development may be even more far-reaching. What will happen if training in formulating and summarising is no longer provided? And what if, one day, ChatGPT should become the only information source to be used? The ability to use different sources to check the reliability of texts (currently still a key capability in media literacy; see [4, 21, 30, 32]) would be lost [1]. Writing texts would no longer be necessary, as the AI system could do it (better); how would we then be able to motivate learners to make the effort (in this case, differentiated information gathering and knowledge work), something that is already sometimes tremendously difficult today [8]?

Although ChatGPT as a writing tool was not portrayed as a moral hazard either in the media or in academia, concerns about risks in a general sense were obviously voiced. There was one study [19], however, in which one aspect of moral hazard could be distinguished, in the sense that “after having released their technology companies transfer their responsibility over to agents in impacted sectors, including the education sector”.

To put ChatGPT as a writing tool into a historical perspective, we concluded that the scope and framework of Plato’s thought are much broader and more differentiated, and his discussion of writing more far-reaching, than the current discussion of ChatGPT. It is clear that Plato’s argument as to the nature of published texts (Sect. 3.2) does not pertain to ChatGPT specifically, and it is hence rightly excluded from modern representation. We do, however, find that much of what ChatGPT does to the user is, mutatis mutandis, equivalent to Plato’s criticism in this field (Sect. 3.1). The general anthropological background is missing. The argument is rather that users of the technical tool ChatGPT might lose some of their mental faculties and, in consequence, will not be able to fulfil some social ‘functions’. Whereas Plato focuses on cheating oneself, the modern discussion is more about cheating others. There is also much fuss in the current age about the loss of the ability to amass and digest ‘information’ (whatever that may be).

In contrast, Plato’s arguments about authorship (Sect. 3.3), which form the backbone of his thoughts, are, from where we stand, only touched upon in an oblique way. According to Plato, it is the intention of serious authors to teach or to entertain their readers. A written text is only a means to reach that aim. In this sense, ChatGPT is—in contrast to human beings—no author (see also [9, 27, 36, 43]). It lacks ‘intentionality’; its object is not to teach its users anything, nor does it wish to enter into discussion. Its sole aim is to generate a response that aligns with the formulated prompts, using bits and pieces of other texts it finds on the internet. The lack of authorship leads to the connected question of ‘responsibility’, which has also arisen in other fields of technology (for example, regarding automated cars). With ChatGPT, it is difficult to track the responsibility for any falsehoods and disinformation it produces. Who is responsible for a text written by ChatGPT? There are, it would seem, three possible answers to this question: (a) the company developing the algorithm, (b) the ChatGPT user who prompted the tool to compile a specific text, or (c) the human author(s) from whom the ‘information’ compiled by ChatGPT originated. This is not the place to pursue this interesting question further. Plato, however, might teach us that it could be the core issue to discuss in future.

As we conclude this exploration, it is fitting to reflect on the intersection of ancient wisdom and cutting-edge technology. Plato, the eminent philosopher of ancient Greece, grappled with questions of truth, knowledge, and morality, and his dialogues continue to inspire intellectual discourse. Today, we encounter a different kind of dialogue from the one Plato knew: the interaction between humans and chatbots. These digital entities, like ChatGPT, engage users in conversation, aiming to persuade, inform, or entertain, exactly as did the ancient authors (sophists, poets) whom Plato opposed in his writings. Chatbots operate within an intricate moral landscape, where transparency, bias, and accountability matter. Plato’s world was perhaps smaller, but, as traditional ethos was rapidly dissolving, no less intricate than ours. In juxtaposing Plato’s philosophical legacy with the realm of AI, we find both continuity and contrast. While Plato’s Socratic dialogues shaped minds through dialectical exchange, chatbots navigate textual data, algorithms, and neural networks. As we ponder the implications of AI ethics, let us honour the ancient sage while embracing the challenges of our technologically driven era.

In essence, the comparison between Plato and modern chatbots invites us to consider not only the evolution of knowledge transmission but also the ethical responsibilities we bear as creators and users of artificial intelligence.