
A shift towards oration: teaching philosophy in the age of large language models

  • Original Research | Published in AI and Ethics

Abstract

This paper proposes a reevaluation of assessment methods in philosophy higher education, advocating a shift away from traditional written assessments towards oral evaluation. Drawing attention to the rising ethical concerns surrounding large language models (LLMs), we argue that a renewed focus on oral skills within philosophical pedagogy is both imperative and underexplored. We make a case for redirecting attention to the neglected realm of oral evaluation, asserting that it holds significant promise for cultivating in students some of the traditional academic values we want to maintain. We identify implications of this shift in emphasis that situate our discipline to contribute positively to some of the most pressing socio-political issues. Additionally, our proposal aims to demonstrate how philosophy can solidify its relevance to the twenty-first-century student and the academy more broadly.


Notes

  1. Critics have pointed out that writing produced by LLMs is often shallow, and that they struggle to produce sustained chains of reasoning without repeating or contradicting themselves; as a result, one might worry that educators' concerns overestimate their abilities. However, LLMs can serve as a crutch for students, particularly in introductory courses, without producing particularly "good" writing or argumentation; it is sufficient that the outputs of LLMs are often difficult to distinguish from B- or even C-level writing. This much is demonstrated by the fact that 43% of college students say they have used LLMs like ChatGPT, and 22% admit to having used them on assignments [35].

  2. For example, Julia Staffel [31] suggests that there may be some value to incorporating LLMs (such as ChatGPT) into the classroom; see her talk "Teaching Philosophy in a World with ChatGPT" (Daily Nous). Additionally, see Cassie Finley [10], "Incorporating ChatGPT in Philosophy Classes."

  3. While our primary emphasis is on philosophy, we assert that our arguments are relevant to a wider range of humanities instruction.

  4. See Plato’s Protagoras. In this dialogue, the methodological differences between a philosopher and a sophist are apparent.

  5. At least, for certain types of thinking. Other modalities include thinking via images, which, as Alshanetsky points out, may be more predominant for neurodivergent people. The point is that much of our thought depends on language in this way.

  6. For our purposes, it’s not necessary that this be an occurrent conscious state, but if one understands an object, she should be capable of becoming consciously aware of the object and its connection to other objects of understanding. One can understand the causes of the American Revolution even when she’s not thinking about it, if the information is somewhere “in her head.” However, as we argue below, it’s less plausible that she understands the American Revolution if all the relevant information is stored in a scaffold, even if she can easily access it. In this sense, understanding resists offloading onto scaffolds.

  7. Fyfe [14] presents the results of an experiment in which he asked students to write with GPT-2. Some of his students reported that the LLM helped them to articulate thoughts that they already had but were struggling to express. As Fyfe points out, this suggests a question: "But how can a student recognize the untrained outputs from a language model as their own ideas?" (6). While Fyfe argues that writing with LLMs may have other forms of pedagogical value, he doesn't directly answer this question. Given the constitutive role that language plays in (some types of) thought, as pointed out by Alshanetsky, we are skeptical that ideas articulated by LLMs can generally be called the user's own.

  8. Insofar as conversation is an essentially cooperative activity, it follows that one cannot have a genuine conversation with an LLM, since (we assume) it lacks agency. An LLM can simulate a conversation, but a proper conversation requires agency on both sides.

  9. This is currently the standard view. Some may disagree, but we argue that this stance is susceptible to the Eliza effect.

  10. Other examples can be found in Carr [4]. Carr elucidates the implications of automation, focusing on computers and software, as well as the human ramifications of these technologies. Put simply, Carr shows why we should be worried about overusing technology.

  11. We are greatly indebted to Damian Fisher for helping us with the content in the next few paragraphs.

  12. An additional problem concerns "hallucinations," the false information generated by LLMs.

  13. Julia Staffel [31] claims that LLMs are currently bad at counterfactual reasoning and at justifying their outputs, among other things.

  14. We doubt that LLMs will completely replace writing. At the most extreme, writing could become comparable to music. Even if an AI can play the guitar better than a human, people still find merit in playing the guitar because the act of playing is worthwhile in and of itself, not merely for its outcome. Likewise, if an LLM can produce better writing, there will still be people who write for the sake of the act itself.

  15. This is not always the case; for example, if one’s speech is recorded, then it may be heard by an indeterminate audience. We ignore such cases for the present paper.

  16. For an informative paper on AI and practical wisdom, see Ruth Groff and John Symons, "Is AI Capable of Aristotelian Full Moral Virtue? The Rational Power of Phronesis, Machine Learning and Regularity," in Artificial Dispositions: Investigating Ethical and Metaphysical Issues, William A. Bauer and Anna Marmodoro (eds.), Bloomsbury. See also John P. Sullins [32], "Artificial Phronesis," in Science, Technology, and Virtues: Contemporary Perspectives, for an alternative interpretation.

  17. The primary goal of outreach in philosophy, from the perspective of the discipline's relevance, should include acquiring more students. However, it's important to note that this isn't the sole aim. The goals of outreach may vary from the perspectives of both professors and students, leaving room for additional objectives beyond student acquisition.

  18. While we touch on certain avenues for philosophical outreach, the matter is multifaceted and exceeds the scope of this paper. Explicitly delineating and integrating this territory into pedagogy holds merit. However, one may question whether a focus on practicality risks sacrificing essential philosophical depth. Given the sustained threat of philosophy departments being cut in higher education, the benefits likely outweigh the risks. The familiar concerns about the dangers of seeking "relevance" obtain here. (We thank an anonymous reviewer for drawing our attention to this latter point).

  19. See the Association for Practical and Professional Ethics: About Ethics Bowl/APPE IEB®—Association for Practical and Professional Ethics (appe-ethics.org).

  20. See the following three websites for a more elaborate description of the fishbowl technique: Fishbowl Discussion Teaching Strategy | Facing History & Ourselves; How to Implement the Fishbowl Teaching Strategy in Your Classroom—The Edvocate (theedadvocate.org); Fishbowl Discussion (classroom) (wisc.edu).

  21. See [14]. Although the primary focus of that article is on the ways that LLMs trouble our concepts of 'cheating' and 'plagiarism', it addresses many of the themes in our paper, including authenticity and creativity.

References

  1. Alshanetsky, E. (2019). Articulating a thought. Oxford University Press. https://global.oup.com/academic/product/articulating-a-thought-9780198785880?cc=us&lang=en

  2. Alshanetsky, E. (2020). What comes first, ideas or words? The paradox of articulation. Aeon. https://aeon.co/essays/what-comes-first-ideas-or-words-the-paradox-of-articulation

  3. Aristotle. Nicomachean ethics (2nd ed.) (Irwin, T., trans.). Hackett Publishing Company.

  4. Carr, N. (2014). The glass cage: How our computers are changing us. W. W. Norton and Company.

  5. Clark, A., Chalmers, D.: The extended mind. Analysis 58(1), 7–19 (1998)


  6. Coeckelbergh, M., & Gunkel, D. (2023). ChatGPT: deconstructing the debate and moving it forward. AI & Society.

  7. Cooper, Z. [@ZaneGTCooper]. (2023, January 18). Yes, this is very historically accurate and useful and should definitely be used in classrooms. This is my convo with [Image attached] [Tweet]. Twitter. https://twitter.com/ZaneGTCooper/status/1615577714836275200?s=20

  8. Timmo, D. "Fishbowl Discussion (Classroom)." Accessed October 4, 2023. https://kb.wisc.edu/instructional-resources/page.php?id=104085

  10. Finley, C. (2023). Incorporating ChatGPT in Philosophy Classes. PLATO. https://www.plato-philosophy.org/incorporating-chatgpt-in-philosophy-classes/

  11. “Fishbowl.” Facing History & Ourselves. Accessed October 4, 2023. https://www.facinghistory.org/resource-library/fishbowl.

  12. Floridi, L.: AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models. Philosophy & Technology (2023). https://doi.org/10.2139/ssrn.4358789


  13. Forster, E. M. (2023). Aspects of the Novel. Project Gutenberg. (Original work published 1927). https://www.gutenberg.org/cache/epub/70492/pg70492-images.html

  14. Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society.

  15. Grayling, A.C. (2019). A history of philosophy. Penguin Press.

  16. Grice, H.P.: Logic and conversation. In: Davidson, D., Harman, G. (eds.) The Logic of Grammar, pp. 64–75. Dickenson (1975)


  17. Grimm, S. (2021). Understanding. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/understanding/

  18. Griswold, C. (2020). Plato on rhetoric and poetry. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/Entries/plato-rhetoric/

  19. Groff, R. & Symons, J. (Forthcoming). “Is AI Capable of Aristotelian Full Moral Virtue? The Rational Power of phronesis, Machine Learning and Regularity” in Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer and Anna Marmodoro (Editors). Bloomsbury.

  20. Guerrero, A. (2023, June 27). The Fourth Branch (guest post). Daily Nous - news for & about the philosophy profession. https://dailynous.com/2023/06/27/the-fourth-branch-guest-post/

  21. Heersmink, R. (2016). The internet, cognitive enhancement, and the values of cognition. Minds and Machines, 26, 389–407.

      Kasirzadeh, A., & Gabriel, I. (2023). In conversation with Artificial Intelligence: aligning language models with human values. Philosophy & Technology, 36(2), 1–24.

  22. Lynch, B. (Host). (2023). ChatGPT didn’t write this podcast (No. 44) [Audio podcast episode]. In When Experts Attack. Kansas Public Radio. https://kansaspublicradio.org/podcast/when-experts-attack/2023-04-05/when-experts-attack-44-chatgpt-didnt-write-this-podcast

  23. Lynch, M. “How to Implement the Fishbowl Teaching Strategy in Your Classroom.” The Edvocate, January 4, 2022. https://www.theedadvocate.org/how-to-implement-the-fishbowl-teaching-strategy-in-your-classroom/.

  24. Mercier, H., Sperber, D.: Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34(2), 57–74 (2011)


  25. Mercier, H., Sperber, D.: The enigma of reason. Harvard University Press (2017)


  26. Nyholm, S. (2023). This is technology ethics: An introduction. Wiley-Blackwell.

  27. Plato. (1986). Phaedrus (Rowe, C. J., trans). Aris and Phillips.

  28. Plato. (2013). Protagoras (Jowett, B. trans). Retrieved from Project Gutenberg.

  29. Rapp, B., Fischer-Baum, S., Miozzo, M.: Modality and Morphology: What We Write May Not Be What We Say. Psychol. Sci. 26(6), 892–902 (2015). https://doi.org/10.1177/0956797615573520


  30. Saul, J. (2018). Dogwhistles, political manipulation, and philosophy of language. In New Work on Speech Acts. Oxford University Press.


  31. Staffel, J. (2023). ChatGPT and its impact on teaching philosophy and other subjects. [Video]. YouTube. https://www.youtube.com/watch?v=bkjVkfU9Gro.

  32. Sullins, J. P. (2021). Artificial phronesis: What it is and what it is not. In Ratti, E., & Stapleford, T. A. (Eds.), Science, Technology, and Virtues: Contemporary Perspectives. Oxford University Press.

  33. Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

  34. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.

  35. Welding, L. (2023, March 17). Half of college students say using AI on schoolwork is cheating or plagiarism. BestColleges. https://www.bestcolleges.com/research/college-students-ai-tools-survey/

  36. Winner, L. (1983). Technologies as forms of life. In Cohen, R. S., Wartofsky, M. W. (Eds.), Epistemology, methodology, and the social sciences. Springer. https://doi.org/10.1007/978-94-017-1458-7_10

  37. Zagzebski, L.: Recovering understanding. In: Steup, M. (ed.) Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue, pp. 235–256. Oxford University Press (2001)



Author information

Corresponding author

Correspondence to Ryan Lemasters.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lemasters, R., Hurshman, C. A shift towards oration: teaching philosophy in the age of large language models. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00455-0

