Abstract
This paper proposes a reevaluation of assessment methods in philosophy higher education, advocating for a shift away from traditional written assessments towards oral evaluation. Drawing attention to the rising ethical concerns surrounding large language models (LLMs), we argue that a renewed focus on oral skills within philosophical pedagogy is both imperative and underexplored. We make a case for redirecting attention to the neglected realm of oral evaluation, asserting that it holds significant promise for fostering in students the traditional academic values we wish to maintain. We identify implications of this shift in emphasis that position our discipline to contribute positively to solving some of the most pressing socio-political issues. Additionally, our proposal aims to demonstrate how philosophy can solidify its relevance to the twenty-first-century student and to the academy more broadly.
Notes
Critics have pointed out that writing produced by LLMs is often shallow, and that they struggle to produce sustained chains of reasoning without repeating or contradicting themselves; as a result, one might worry that educators' concerns overestimate the models' abilities. However, LLMs can be a crutch for students, particularly in introductory courses, without producing particularly "good" writing or argumentation; it is sufficient that the outputs of LLMs are often difficult to distinguish from B- or even C-level writing. This much is suggested by the fact that 43% of college students say they have used LLMs like ChatGPT, and 22% admit to having used them on assignments [35].
While our primary emphasis is on philosophy, we assert that our arguments are relevant to a wider range of humanities instruction.
See Plato’s Protagoras. In this dialogue, the methodological differences between a philosopher and a sophist are apparent.
At least, for certain types of thinking. Other modalities include thinking via images, which, as Alshanetsky points out, may be more predominant for neurodivergent people. The point is that much of our thought depends on language in this way.
For our purposes, it’s not necessary that this be an occurrent conscious state, but if one understands an object, she should be capable of becoming consciously aware of the object and its connection to other objects of understanding. One can understand the causes of the American Revolution even when she’s not thinking about it, if the information is somewhere “in her head.” However, as we argue below, it’s less plausible that she understands the American Revolution if all the relevant information is stored in a scaffold, even if she can easily access it. In this sense, understanding resists offloading onto scaffolds.
Fyfe [14] presents the results of an experiment in which he asked students to write with GPT-2. Some of his students reported that the LLM helped them to articulate thoughts that they already had but were struggling to express. As Fyfe points out, this suggests a question: "But how can a student recognize the untrained outputs from a language model as their own ideas?" (6). While Fyfe argues that writing with LLMs may have other forms of pedagogical value, he doesn't directly answer this question. Given the constitutive role that language plays in (some types of) thought, as pointed out by Alshanetsky, we are skeptical that ideas articulated by LLMs can generally be called the user's own.
Insofar as conversation is an essentially cooperative activity, it follows that one cannot have a genuine conversation with an LLM, since (we assume) it lacks agency. An LLM can simulate a conversation, but a proper conversation requires agency on both sides.
This is currently the standard view. Some may disagree, but we argue that such dissent is susceptible to the Eliza effect.
Other examples can be found in Carr (2015). Carr elucidates the implications of automation, focusing on computers and software as well as the human ramifications stemming from these technologies. Put simply, Carr shows why we should be worried about overreliance on technology.
We are greatly indebted to Damian Fisher for helping us with the content in the next few paragraphs.
An additional problem concerns "hallucinations": false information generated by LLMs.
Julia Staffel [31] claims that LLMs are currently bad at counterfactual reasoning and at justifying their outputs, among other things.
We doubt that LLMs will completely replace writing. At the extreme, writing could become comparable to music: even if an AI can play the guitar better than a human, people will still find merit in playing the guitar, because it is the act of playing in and of itself that makes the activity worthwhile, not necessarily the outcome. So even if an LLM can produce better writing, there will still be people who write for the sake of the act itself.
This is not always the case; for example, if one’s speech is recorded, then it may be heard by an indeterminate audience. We ignore such cases for the present paper.
For an informative paper on AI and practical wisdom, see Ruth Groff and John Symons' "Is AI Capable of Aristotelian Full Moral Virtue? The Rational Power of phronesis, Machine Learning and Regularity" in Artificial Dispositions: Investigating Ethical and Metaphysical Issues, William A. Bauer and Anna Marmodoro (Eds.), Bloomsbury Press. See also John P. Sullins [32], "Artificial Phronesis," in Science, Technology, and Virtues: Contemporary Perspectives for an alternative interpretation.
The primary goal of outreach in philosophy, from the perspective of the discipline's relevance, should include acquiring more students. However, it's important to note that this isn't the sole aim. The goals of outreach may vary from the perspectives of both professors and students, leaving room for additional objectives beyond student acquisition.
While we touch on certain avenues for philosophical outreach, the matter is multifaceted and exceeds the scope of this paper. Explicitly delineating this territory and integrating it into pedagogy holds merit. The familiar concerns about the dangers of seeking "relevance" obtain here: one may question whether a focus on practicality risks sacrificing essential philosophical depth. Given the sustained threat of philosophy departments being cut in higher education, however, the benefits likely outweigh the risks. (We thank an anonymous reviewer for drawing our attention to this latter point.)
See the Association for Practical and Professional Ethics' description of the Intercollegiate Ethics Bowl (APPE IEB®) at appe-ethics.org.
See the following three websites for a more elaborate description of the fishbowl technique: "Fishbowl" (Facing History & Ourselves); "How to Implement the Fishbowl Teaching Strategy in Your Classroom" (The Edvocate, theedadvocate.org); and "Fishbowl Discussion (Classroom)" (kb.wisc.edu).
See [14]. Although the primary focus of this article is on the ways that LLMs trouble our concepts of 'cheating' and 'plagiarism', the article addresses many of the themes in our paper, including authenticity and creativity.
References
Alshanetsky, E. (2019). Articulating a thought. Oxford University Press. https://global.oup.com/academic/product/articulating-a-thought-9780198785880?cc=us&lang=en
Alshanetsky, E. (2020). What comes first: ideas or words? The paradox of articulation. Aeon. https://aeon.co/essays/what-comes-first-ideas-or-words-the-paradox-of-articulation
Aristotle. Nicomachean ethics (2nd ed.) (T. Irwin, Trans.). Hackett Publishing Company.
Carr, N. (2015). The glass cage: How our computers are changing us. W. W. Norton & Company.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Coeckelbergh, M., & Gunkel, D. (2023). ChatGPT: deconstructing the debate and moving it forward. AI & Society.
Cooper, Z. [@ZaneGTCooper]. (2023, January 18). Yes, this is very historically accurate and useful and should definitely be used in classrooms. This is my convo with [Image attached] [Tweet]. Twitter. https://twitter.com/ZaneGTCooper/status/1615577714836275200?s=20
Timmo, D. Fishbowl discussion (classroom). Accessed October 4, 2023. https://kb.wisc.edu/instructional-resources/page.php?id=104085
Finley, C. (2023). Incorporating ChatGPT in Philosophy Classes. PLATO. https://www.plato-philosophy.org/incorporating-chatgpt-in-philosophy-classes/
“Fishbowl.” Facing History & Ourselves. Accessed October 4, 2023. https://www.facinghistory.org/resource-library/fishbowl.
Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. Philosophy & Technology. https://doi.org/10.2139/ssrn.4358789
Forster, E. M. (2023). Aspects of the Novel. Project Gutenberg. (Original work published 1927). https://www.gutenberg.org/cache/epub/70492/pg70492-images.html
Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society.
Grayling, A.C. (2019). A history of philosophy. Penguin Press.
Grice, H. P. (1975). Logic and conversation. In D. Davidson & G. Harman (Eds.), The logic of grammar (pp. 64–75). Dickenson.
Grimm, S. (2021). Understanding. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/understanding/
Griswold, C. (2020). Plato on rhetoric and poetry. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/Entries/plato-rhetoric/
Groff, R. & Symons, J. (Forthcoming). “Is AI Capable of Aristotelian Full Moral Virtue? The Rational Power of phronesis, Machine Learning and Regularity” in Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer and Anna Marmodoro (Editors). Bloomsbury.
Guerrero, A. (2023b, June 27). The Fourth Branch (guest post) - Daily Nous. Daily Nous - news for & about the philosophy profession. https://dailynous.com/2023/06/27/the-fourth-branch-guest-post/
Heersmink, R. (2016). The internet, cognitive enhancement, and the values of cognition. Minds and Machines, 26, 389–407.
Kasirzadeh, A., & Gabriel, I. (2023). In conversation with Artificial Intelligence: Aligning language models with human values. Philosophy & Technology, 36(2), 1–24.
Lynch, B. (Host). (2023). ChatGPT didn’t write this podcast (No. 44) [Audio podcast episode]. In When Experts Attack. Kansas Public Radio. https://kansaspublicradio.org/podcast/when-experts-attack/2023-04-05/when-experts-attack-44-chatgpt-didnt-write-this-podcast
Lynch, M. “How to Implement the Fishbowl Teaching Strategy in Your Classroom.” The Edvocate, January 4, 2022. https://www.theedadvocate.org/how-to-implement-the-fishbowl-teaching-strategy-in-your-classroom/.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.
Nyholm, S. (2023). This is technology ethics: An introduction. Wiley-Blackwell.
Plato. (1986). Phaedrus (C. J. Rowe, Trans.). Aris and Phillips.
Plato. (2013). Protagoras (B. Jowett, Trans.). Project Gutenberg.
Rapp, B., Fischer-Baum, S., & Miozzo, M. (2015). Modality and morphology: What we write may not be what we say. Psychological Science, 26(6), 892–902. https://doi.org/10.1177/0956797615573520
Saul, J. (2018). Dogwhistles, political manipulation, and philosophy of language. In New work on speech acts (pp. 360–84). Oxford University Press.
Staffel, J. (2023). ChatGPT and its impact on teaching philosophy and other subjects. [Video]. YouTube. https://www.youtube.com/watch?v=bkjVkfU9Gro.
Sullins, J. P. (2021). Artificial phronesis: What it is and what it is not. In E. Ratti & T. A. Stapleford (Eds.), Science, technology, and virtues: Contemporary perspectives. Oxford University Press.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
Welding, L. (2023, March 17). Half of college students say using AI on schoolwork is cheating or plagiarism. BestColleges. https://www.bestcolleges.com/research/college-students-ai-tools-survey/
Winner, L. (1983). Technologies as forms of life. In Cohen, R. S., Wartofsky, M. W. (Eds.), Epistemology, methodology, and the social sciences. Springer. https://doi.org/10.1007/978-94-017-1458-7_10
Zagzebski, L. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue (pp. 235–256). Oxford University Press.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Lemasters, R., Hurshman, C. A shift towards oration: teaching philosophy in the age of large language models. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00455-0