
How to cheat on your final paper: Assigning AI for student writing

  • Open Forum
  • AI & SOCIETY

Abstract

This paper shares results from a pedagogical experiment that assigns undergraduates to “cheat” on a final class essay by requiring their use of text-generating AI software. For this assignment, students harvested content from an installation of GPT-2, then wove that content into their final essay. At the end, students offered a “revealed” version of the essay as well as their own reflections on the experiment. In this assignment, students were specifically asked to confront the oncoming availability of AI as a writing tool. What are the ethics of using AI this way? What counts as plagiarism? What are the conditions, if any, we should place on AI assistance for student writing? And how might working with AI change the way we think about writing, authenticity, and creativity? While students (and sometimes GPT-2) offered thoughtful reflections on these initial questions, actually composing with GPT-2 opened their perspectives more broadly on the ethics and practice of writing with AI. In this paper, I share how students experienced those issues, connect their insights to broader conversations in the humanities about writing and communication, and explain their relevance for the ethical use and evaluation of language models.


Availability of data and material

The course syllabus and module assignments are available open access in the MLA CORE repository: https://doi.org/10.17613/0h18-5p41.

Code availability

Not applicable.

Notes

  1. Plagiarism appears as its own section of “Academic Misconduct” (8.4) in NC State’s student conduct policy (“POL 11.35.01 – Code of Student Conduct” 2020).

  2. See Elkins and Chun for a report on an assignment oriented more toward literary writing (2020). See also the reflections by Lang et al. (2021).

  3. As defined by Long and Magerko, AI literacy includes “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (2020, 2). See also Ng et al. for a review of recent approaches to defining AI literacy (2021).

  4. Long and Magerko recommend that “Researchers seeking to foster AI literacy may want to avoid misleading tactics like Turing deceptions and black-box algorithms” (2020, 8)—and I could not disagree more. Inviting students into this experience instead resembles how Meredith Broussard works with AI “to commit acts of investigative journalism” into its consequences (2018, 6).

  5. Such cross-disciplinary approaches might especially be warranted given the landscape of ethics training in CS programs (Raji et al. 2021).

  6. The course syllabus and module assignments are available open access in the MLA CORE repository: https://doi.org/10.17613/0h18-5p41.

  7. For an extensive introduction to this class of machine learning software as well as its emerging uses and risks, see Bommasani et al. (2021).

  8. Scott has since moved to a position at the software company Sourcegraph.

  9. A quick demo of GPT-2 can be done online with the site “Write With Transformer” created by the company Hugging Face: https://transformer.huggingface.co/doc/distil-gpt2. In a subsequent version of this assignment, students used the open-source language model GPT-J, which also makes a web-based demo available in a browser: https://6b.eleuther.ai/.

  10. Similar discussions can be found in the use of AI or automated systems to grade student writing (Anson 2006; Anson and Perelman 2017).

  11. We considered training the model on students’ own writing for better results, but realized that would require a prohibitive and potentially invasive amount of training data from students: 50+ MB of their writing in plain text. We agreed a simpler approach would still achieve the assignment’s goals.

  12. This may especially apply in the context of ESL and language diversity, as students and/or instructors disqualify their own expression when measured against standard written English.

  13. See also Reid on the “possibility space” of nonhuman rhetoric (2020).

  14. Personal conversation with the author. Rieder’s own experiments in physical computing and new media composition are similarly interested in hybridity. He describes this as “everting” new media, or “infusing the real with aspects of the virtual.”

  15. For an extensive introduction to assemblage in writing studies, as well as current examples in pedagogical practice, see Yancey and McElroy (2017).

  16. For specific examples from those frameworks, see Long and Magerko (2020), Ng et al. (2021).

  17. As Lauren Goodlad complains, “while there is increasing talk of making AI ‘ethical,’ ‘democratic,’ and ‘human-centered,’ scholars in the humanities seldom shape these discussions” (Goodlad and Dimock 2021, 317).


Acknowledgements

My thanks to the students in the HON 202 seminar “Data and the Human” for their enthusiastic participation in this experiment. Many thanks also to NC State University colleagues including Scott Bailey, Zachary Beare, Chris Anson, Helen Burgess, and David Rieder for sharing their perspectives and recommendations. Thanks also to peer reviewers for constructive suggestions that improved this article.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

Not applicable.

Corresponding author

Correspondence to Paul Fyfe.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional declarations for articles in life science journals that report the results of studies involving humans and/or animals

Not applicable.

Ethics approval

Federal regulations allow specific categories of human subjects research to be exempt from continuing IRB review [45 CFR 46.104(d)]. Exemption Category 1 applies to research conducted in established or commonly accepted educational settings involving normal educational practices, including research on classroom instructional strategies.

Consent to participate

All enrolled students completed the major assignments as course requirements.

Consent for publication

All students gave their emailed consent to have their essays included and quoted in my report. Quotes from student papers have been anonymized in this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Fyfe, P. How to cheat on your final paper: Assigning AI for student writing. AI & Soc 38, 1395–1405 (2023). https://doi.org/10.1007/s00146-022-01397-z

